I don't do web stuff at all, but I really enjoyed this article. I am convinced that software engineers (not to mention others) have thrown the baby out with the bathwater in our brave new world of 32GB memories and fibre-optics. By all means the generous hardware capabilities let us do amazing things, like have a video library, or run massive climate computations, but mostly those resources are piddled away in giant libraries that provide little or no actual functional value.
I don't really pine for the days of the PDP-8, when programmers had to make sure that almost every routine took fewer than 128 words, or the days of System/360, when you had to decide whether the fastest way to clear a register was to subtract it from itself or exclusive-or it with itself. We wasted a lot of time trying to get around stringent limitations of the technology just to do anything at all.
I just looked at the Activity Monitor on my Macbook. Emacs is using 115MB, Thunderbird is at 900MB, Chrome is at something like 2GB (I lost track of all the Renderer processes), and a Freecell game is using 164MB. Freecell, which ran just fine on Windows 95 in 8MB!
I'm quite happy with a video game taking a few gigabytes of memory, with all the art and sound assets it wants to keep loaded. But I really wonder whether we've lost something by not making more of an effort to use resources more frugally.
An addendum...Back in the 1960s, IBM didn't grok time-sharing. When MIT/Bell Labs looked for a machine with address translation, IBM wasn't interested, so GE got the contract. IBM suddenly realized that they had lost an opportunity, and developed their address translation, which ended up in the IBM 360/67. They also announced an operating system, TSS/360, for this machine. IBM practice was to define memory constraints for their software. So Assembler F would run on a 64K machine, Fortran G on a 128K machine, and so on. The TSS engineers asked how much memory their components were given. They were told “It's virtual memory, use as much as you need.” When the first beta of TSS/360 appeared, an attempt to log in produced the message LOGON IN PROGRESS...for 20 minutes. Eventually, IBM made TSS/360 usable, but by then it was too late. 360/67s ended up running VM/CMS, or 3rd party systems: I had many happy years using the Michigan Terminal System.
Remember, there's a gigabit pathway between server and browser, so use as much of the bandwidth as you need.
At my deathbed, I'm not sure I'll be able to forgive our industry for that. I grew up in the third world, where resources were extremely expensive, so my early career was all about doing the most with the resources I had. It was a skill I had honed so well, and now it feels useless and unappreciated. With higher interest rates we see a small degree of it again, but I'm doubtful that hiring managers without that experience will be able to spot it in the wild and pick me for it.
> I really wonder whether we've lost something by not making more of an effort to use resources more frugally
I'll bite. What do you think we've lost? What would the benefit be of using resources more frugally?
Disclosure: I'm an embedded systems programmer. I frequently find myself in the position where I have to be very careful with my usage of CPU cycles and memory resources. I still think we'd all be better off with infinitely fast, infinitely resourced computers. IMO, austerity offers no benefit.
Quick answer: is our software any more usable, any more reliable, than it was 50 years ago? The more code you write and the more dependencies you require, the more opportunity for bugs and design errors to creep in. I get the impression that many software projects nowadays have enough fixes and kludges slathered on just to keep them working.
(Remember Bill Atkinson's famous response, quoted here, to the question of how much code he'd written that week: -3000. He had reworked QuickDraw so that it was faster and better, with a net loss of 3000 lines of code.) Of course the classic Mac had its own constraints.
Yes, by several orders of magnitude. I couldn't enter or display Japanese on my Atari 800, nor my Apple 2, nor my C64 (sorry, only 45 years ago). I couldn't display 200+ large 24-bit images with ease (here's 100: https://www.flickr.com/groups/worldofarchitecture/pool/). Or try this: https://scrolldit.com/

I couldn't play 16 videos simultaneously while downloading stuff in the background and playing a game. I could go on and on, but my computer today is vastly more usable than any of my computers 40 years ago, which could only effectively run one app at a time, and where I had to run QEMM and edit my config.sys and autoexec.bat to try to optimize my EMS and XMS memory.

I love that I can display a video with something as simple as:
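(A minimal sketch of the kind of markup meant here; the src URL is just a placeholder.)

    <video src="https://example.com/clip.mp4" controls width="640"></video>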
It's much more capable, that's the main thing. Reliability and usability tend not to be valued in the market much, but being able to do more things is.
In my embedded career I always assumed the value of a SW engineer was to cut costs, even if only by a few cents (and of course to add quality and reliability). Embedded seems to be one of the only SW fields left where software can still affect the cost of the system: by choosing a processor in a family with the least amount of memory and speed, you can achieve a lower system cost. In this world, the ability to write fast code in a minimal footprint matters. You still have to wrestle with development time/cost versus HW cost tradeoffs versus sales, but I think it remains true that the coding practices we saw in the 60s-80s are still valued in this space. My impression is that these skills are being lost/ignored for the most part these days. When a HW limit is reached, the only way forward to a quicker SW solution is optimization.
I'm a scientist, working adjacent to a team of engineers.
My sense is that there's a "conceptual" austerity, due to the limited ability of the human mind to understand complex systems. The programmers are satisfied that the product is documented, because it passes tests, and "the source code expresses what the program does." But nobody can explain it to me, in situations where I have to be involved because something has gone wrong.
The system has surpassed some threshold of conceptual austerity when the majority of the devs have concluded that the only hope is to scrap it and start over, but they can't, because they don't know what it's supposed to do, and can't find out.
On the other hand, the infinite computer would take care of this for us too. We're faced with semi-infinite computers at the present time, that can be filled to the brim with stuff that they can't themselves understand or manage. But all real things are finite.
> I still think we'd all be better off with infinitely fast, infinitely resourced computers. IMO, austerity offers no benefit.
Well, sure. But these computers don't exist, so that doesn't really matter.
The main reason I bought a new laptop last year is that the frontend build process needed about 5GB of RAM and ~4 minutes, which was a real productivity killer. I'm not a frontend dev, I inherited all of this, and it wasn't easy to fix for both technical and organisational reasons.
Excessive austerity offers no benefit, I agree, but some projects are out of control in their resource usage and impose a real burden with it.

Data centers use a lot of electricity. Even a 10% reduction would have a huge impact worldwide.
Partial answer: Understanding of and peace with ourselves as humans. Human skill and discipline are long-lasting challenges that satisfy. Those who have not experienced the process of improving the self over years of practice are prone to unease and depression.
> But I really wonder whether we've lost something by not making more of an effort to use resources more frugally.
On the desktop we definitely lost responsiveness. Many webpages, even on the speediest, fastest computer of them all, are dog slow compared to what they should be.
Now some pages are fine but the amount of pigs out there is just plain insane.
I like my desktop to be lean and ultra low latency: I'll have tens of windows open (including several browsers) and it's super snappy. Switching from one virtual workspace to another is too quick to see it happening (it takes milliseconds, and I do it with a keyboard shortcut: reaching for the mouse is a waste of time).
I know what it means to have a system that feels like it's responding instantly as I do have such a system... Except when I browse the web!
And it's really only some (well, a lot of) sites: people who know what they're doing still come up with amazingly fast websites. But it's the turds: those shipping every package under the sun and calling thousands of micro-services, wasting all the memory available because they know jack shit about computer science, that make the Web a painful experience.
And although I use LLMs daily, I see a big overlap between those having the mindset required to produce such turds and those thinking LLMs are already perfectly fine today to replace devs, so I'm not exactly thrilled about the immediate future of the web.
P.S.: before playing apologist for such turd-sites, remember you're commenting on a very lean website. So there's that.
The main thing I remember from a usability book by Jakob Nielsen is that web pages should fit in 50kb, including all elements. Managing to stay within only 2x that size today, considering that his book was from 1999, may be considered a merit.
To put this into another context, today there was a post about Slack's 404 page weighing 50MB.
> There used to be a contest to fit a good web page into 5kB ...

Evidently, the entire concept of size & communications efficiency has been abandoned.

[0] https://www.the5k.org/about.php
In accordance with this philosophy, I worked on a project a couple of years ago where the uncompressed size of the HTML home page was intentionally limited to 4k. The idea being that a client on a slow network connection (such as a mobile device on a limited cellular plan) could render the page quickly, with the remaining content loading asynchronously.
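Roughly the shape it takes (a simplified sketch, not the actual project; the element IDs and the /fragments/ URL are made up):

    <!doctype html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <title>Home</title>
      <style>/* only the critical CSS, inlined to stay under the 4k cap */</style>
    </head>
    <body>
      <header>site name, nav links</header>
      <main id="content"><!-- above-the-fold text ships in the HTML itself --></main>
      <script>
        // After first paint, pull the rest of the page in asynchronously.
        fetch('/fragments/below-the-fold.html')
          .then(r => r.text())
          .then(html => document.getElementById('content')
            .insertAdjacentHTML('beforeend', html));
      </script>
    </body>
    </html>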
How about a page consisting of pure HTML, less than 0.5kb, with the server providing only the content itself (a single mp4) and none of the HTML?
So it even works when there is no page on the web at all :0
In that case your browser only needs to load 0kb of html from the web in order to successfully reach the page.
That can be pretty fast. Either way. 0.5kb is real small but 0kb is a bit smaller ;)
Fairly elementary, all you're wanting to do is access some media which is found on the web itself at a known address.
But there's a catch, you would have to supply the html "landing page" file yourself from your own PC if none of the html's going to be coming in from the web to your browser.
Copy and paste the code into a text editor and save it as fuzzplayer.txt in your favorite folder, then save it again as fuzzplayer.html in the same folder. To edit it in the future, double-click the TXT version, then save it as both file types again when done. Double-click the HTML version (identical to the TXT except for the file extension) to launch your browser and go to "my" page, where you can play the media, not much differently than the Mozilla example does. (A rough sketch of such a file appears below.)
"Header" (not formally), three "paragraphs" (without paragraph marks), EOF.
Backward indenting for me, old (without) school I guess.
Pretty straightforward
Almost everything indented except things that you might want to change more easily in the future. Then you can just go down the left margin and kill 'em all or take pot shots.
Page formatting is associated with a block of displayed text in preference to other objects. This is the indented html formatting which is quite likely to need some future editing, very often in concert with the text itself, so it's good to have it right there. Almost seems like some html belongs without indentation also but that kind of defeats the purpose of the "clean" look. You get used to it.
Editing 5kb or 10kb of this kind of stuff manually, covering all the bases in a couple minutes, and having it work the first time can make you smile more than a lot of things :)
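For what it's worth, a hypothetical file along those lines (the filename and mp4 URL are placeholders, not the original) comes in well under 0.5kb:

    <!doctype html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>fuzzplayer</title>
    </head>
    <body>
      <!-- the only thing fetched from the web is the media itself, at a known address -->
      <video src="https://example.com/some-clip.mp4" controls autoplay></video>
    </body>
    </html>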
Making a website is what made me interested in programming as a 10 year old during the dot-com bubble. Even back then I realized very quickly that webdev is a cargo cult, and I switched to C and assembly to learn how to program "real" programs. Even now, almost 30 years later, I can make high quality software based on technology from back then (compared to the constant dependency drift in webdev nowadays). Webdev is just under constant assault by young programmers who don't know better.
Some years ago I made a website again. Screw best practices; I used my systems engineering skills and the browser's debugger. I had written game engines with soft realtime physics simulations and global illumination over the network. I knew what computers could do. This website would render within 1 frame at 60 FPS, without any layout recalculation, garbage collection events, or chains of web requests that couldn't be parallelized.
I showed it to friends. They complained it doesn’t work. They didn’t realize that once they clicked, the site displayed the next content instantly (without any weird js tricks). This was a site with a fully responsive and complex looking design. The fact that users are SO used to terrible UX made me realize that I was right about this industry all along as a child.
I'm about halfway through the read so far, but wanted to come back and say that these were/are some of the most interesting challenges to overcome and constraints to work within, when I was coming up as a web person. Inspired by agencies like Clear Left, I'd seek out old af devices with comically bad browsers, tiny screens, and obtuse input methods. Unfortunately I never really found a financially rewarding enough path to continue pursuing that; the constraint on most projects that don't have these super tight requirements baked in is money, time, and looking good, which meant that I could either accept that as a survival mechanism and throw JS at CSS problems, or I could lose my job. Although it's neat that over the years of my incredibly shaky career I've moved from web designer/developer to f̵a̵k̵e̵ software engineer, I've never found UI programming to be quite as rewarding as making an incredibly fast and responsive and pretty and robust website.

But, damn, that was some fun stuff. Really challenging to get the graphical results we wanted and keep it under budget (15 KB in the early days).

It's really satisfying.
Curious why, paywall? Genuinely asking, as I have a blog on there; I guess it's lazy not to host it myself. It is funny when your mostly un-read blog suddenly gets graced by Medium and out of nowhere picks up thousands of hits.

For the tech-inclined: Codeberg/GitLab/GitHub Pages or Cloudflare Pages.
I really enjoyed the article. I have to say, though: sorry, not sorry, but application size is a poor measure of performance. A 128KB size limit doesn't account for pictures, videos, tracking, ads, fonts, and interactivity, and "just avoid them" is not a real-world strategy.
Suggesting that an application should stay within a 128KB limit is akin to saying I enjoy playing games in polygon mode. Battlezone was impressive in the 90s, but today, it wouldn't meet user expectations.
In my opinion, initial load time is a better measure of performance. It combines both the initial application size and the time to interactivity.
Achieving this is much more complex. There are many strategies to reduce initial load size and improve time to interactivity, such as lazy loading, using a second browser process to run code, or minimizing requests altogether. However, due to this complexity, it's also much easier to make mistakes.
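For example, lazy loading can be as small as deferring a heavy module until it's actually needed (a generic sketch; the #chart element and ./chart.js module are made-up names):

    // Load the chart code only when the chart scrolls into view.
    const target = document.querySelector('#chart');
    new IntersectionObserver(async (entries, observer) => {
      if (entries.some(e => e.isIntersecting)) {
        observer.disconnect();
        const { renderChart } = await import('./chart.js'); // hypothetical module
        renderChart(target);
      }
    }).observe(target);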
Another reason this is often not done well is that it requires cross-team collaboration and cross-domain knowledge. It necessitates both frontend and backend adjustments, as well as optimisation at the request and response levels. And, like accessibility, it is a non-functional requirement that a lot of teams find hard to track.
It's not about performance, it's about load time and the restrictions of your client apps.
Also, you're thinking way too much in terms of a SPA architecture. Using plain server-side rendering with just a tiny bit of JavaScript, as the article describes, removes most of the problems you mention, such as initial load time and cross-team collaboration. The load time of the described websites would be instant, and no frontend team is needed.
Maybe I'm dumb, but I really don't understand the point of this post.
Why even make it "reactive"? Just make your site static server-rendered pages? Or just static pages. Is it because additional-content-loading is something users expect?
"Write your site in plain javascript and html. Don't use a framework. Write some minimal css. Bamo. Well under 128kb." ???
at least in this case one of the ideas seemed to be that if they did an ajax load of the middle section of the page, they could skip sending the fixed elements (header and footer) over the network repeatedly
To be fair, an iframe would accomplish that too, but the loaded content would have its own html header that adds to the amount of kilobytes used. So maybe that's the reason.
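In its simplest form that's just a fetch of the inner fragment plus a history update, something like this (a generic sketch; the URLs, the data-partial attribute, and the #main ID are placeholders):

    // Swap only the middle of the page; the header and footer never re-download.
    async function loadSection(url) {
      const html = await (await fetch(url + '?fragment=1')).text();
      document.getElementById('main').innerHTML = html;
      history.pushState(null, '', url); // keep the address bar in sync
    }
    document.addEventListener('click', (e) => {
      const link = e.target.closest('a[data-partial]');
      if (link) {
        e.preventDefault();
        loadSection(link.getAttribute('href'));
      }
    });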
> As I often point out to teams I’m working with, the original 1993 release of DOOM weighed in at under 3MB, while today we routinely ship tens of megabytes of JavaScript just to render a login form. Perhaps we can rediscover the power of constraints, not because we have to, but because the results are better when we do.
Emphasis mine, and it ties back to how the article opened, with the story about the designer who believed accessibility and "good design" are at odds (I'm screaming inside).
Firefox reader mode works OK for me. Chromium tells me there was 2.5MB of downstream traffic to load the page, rising to 4.3MB if I scroll to the Recommended from Medium spam at the bottom of the page. That would be appalling if the bar weren't so low.
I'm reminded of The Website Obesity Crisis, [0] where the author mentions reading an article about web bloat, then noticing that page was not exactly a shining example of lightweight design. He even calls out Medium specifically.

[0] https://idlewords.com/talks/website_obesity.htm (discussed: https://news.ycombinator.com/item?id=34466910)
On an Android chrome based browser when you open a webpage that's able to be viewed in the accessibility simplified view, it pops up a dialog asking if you want to use simplified view. On non-accessible pages (like this one) that dialog box doesn't appear
https://noscript.net/
uBlock Origin filter lists: uBlock filters, EasyList, EasyPrivacy, Online Malicious URL Blocklist, Peter Lowe's Ad and tracking server list, EasyList - Annoyances.