> Traditional frameworks hydrate entire pages with JavaScript. Even if you've got a simple blog post with one interactive widget, the whole page gets the JavaScript treatment. Astro flips this on its head. Your pages are static HTML by default, and only the bits that need interactivity become JavaScript "islands."
Back in my day we called this "progressive enhancement" (or even just "web pages"), and it was basically the only way we built websites with a bit of dynamic behavior. Then SPAs were "invented", and progressive enhancement became something fewer and fewer people did.
Now it seems that is called JavaScript islands, but it's actually just good ol' web pages :) What is old is new again.
You're making the opposite mistake: you're seeing someone's description of a tool's feature and, because it kinda sounds similar, conflating it with the way we've always done things, without even checking whether the tool is transformative.
Astro's main value prop is that it integrates with JS frameworks, lets them handle subtrees of the HTML, renders their initial state as a string, and then hydrates them on the client with preloaded data from the server.
TFA is trying to explain that value to someone who wants to use React/Svelte/Solid/Vue but only on a subset of their page while also preloading the data on the server.
It's not necessarily progressive enhancement because the HTML that loads before JS hydration doesn't need to work at all. It just matches the initial state of the JS once it hydrates. e.g. The <form> probably doesn't work at all without the JS that takes it over.
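For readers who haven't seen it, this is roughly what that looks like in Astro. The `Counter` component and its prop are hypothetical, but `client:load` is a real Astro directive:

```astro
---
// src/pages/index.astro — everything here renders to static HTML on the server
import Counter from '../components/Counter.jsx'; // a hypothetical React island
---
<html>
  <body>
    <h1>My blog post</h1>               <!-- plain HTML, ships no JS -->
    <Counter client:load initial={0} /> <!-- only this subtree gets hydrated -->
  </body>
</html>
```

The server renders `Counter`'s initial state into the HTML, and the matching JS is only shipped for that subtree.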
These are the kind of details you miss when you're chomping at the bit to be cynical instead of curious.
What is the value in first sending dysfunctional HTML and then fixing it with later-executed JS? If you do that, you might as well do 100% JS. It would probably even simplify things in the framework.
Sending functional HTML, and then only doing dynamic things dynamically, that's where the value is for web _apps_. So if what you point out is the value proposition for Astro, then I am not getting it, and don't see its value.
Dysfunctional HTML in the sense that interactivity is disabled, but visually it is rendered and therefore you can see the proper webpage immediately.
Just compare the two cases, assuming 100ms for the initial HTML loading and 200ms for JS loading and processing.
With full JS, you don't see anything for 300ms; the form does not exist (300ms is a noticeable delay for the human eye).
With frameworks such as Astro, after 100ms you already see the form. By the time you move the mouse and/or start interacting with it, the JS will probably be ready (because 200ms is almost instant in the context of starting an interaction).
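A back-of-the-envelope sketch of that comparison, using the same made-up 100ms/200ms figures:

```javascript
// Rough model of perceived latency under the two strategies.
const htmlMs = 100; // time to load the initial HTML
const jsMs = 200;   // time to load and execute the JS bundle

// Full client-side rendering: nothing meaningful paints until both are done.
const fullJsFirstPaint = htmlMs + jsMs;

// Astro-style: server-rendered HTML paints first; JS hydrates afterwards.
const islandFirstPaint = htmlMs;
const islandInteractive = htmlMs + jsMs; // same total, but content visible at 100ms

console.log({ fullJsFirstPaint, islandFirstPaint, islandInteractive });
```

Same total time to interactivity, but the user stares at a blank page for 3x longer in the first case.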
This is not new at all, old school server side processing always did this. The advantage is writing the component only once, in one framework (React/vue/whatev). The server-client passage is transparent for the developer, and that wasn't the case at all in old school frameworks.
Note that I'm not saying this is good! But this is the value proposition of Astro and similar frameworks: transparent server-client context switching, with better perceived performance for the user.
Usually I write my HTML and CSS also in a way that I write it once and can then reuse it elsewhere. I do that by employing traditional template engines like Jinja2 (in Python) or using SXML in Lisps.
>e.g. The <form> probably doesn't work at all without the JS that takes it over.
What a value!
I guess I may be chomping at the bit to be cynical, but I have quite a bit of experience in these fields, and I don't think Astro sounds especially transformative.
I didn't read their comment as particularly cynical, and at a high level they're still correct, but so are you.
I think your comment gets at a very specific and subtle nuance that is worth mentioning, namely that typically, if you were a progressive-enhancement purist, you'd have a fallback that did work: a form that submitted normally, a raw table that could be turned into an interactive graph, etc.
I don't think these details are mutually exclusive though, and it was certainly valid in those days to add something that didn't have a non-JS default rendering mode; it was just discouraged from being in the critical path. Early fancy "engineered" webapps like Flipboard got roasted for poorly re-implementing their core text content on top of canvas so they could reach 60fps scrolling, but if JS didn't work their content wasn't there, and they threw out a bunch of accessibility stuff that you'd otherwise get for free.
Now that I'm thinking back, it's hard to recall situations *at that time* where there would be both something you couldn't do without JavaScript and that couldn't also have a more raw/primitive version, but one example that comes to mind and would still be current are long-form explanations of concepts that can be visualized, such as those that make HN front page periodically. You would not tightly couple the raw text content to the visualization, you would enhance the content with an interactive visual, but not necessarily have fallback for it, and this would still be progressive enhancement.
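A sketch of the purist version described above (endpoint and markup are made up): the form submits normally without JS, and the script merely upgrades it to an in-place update.

```html
<!-- Works with JS disabled: a normal POST and full page reload. -->
<form action="/comments" method="post" data-enhance>
  <textarea name="body"></textarea>
  <button>Post comment</button>
</form>

<script>
  // Enhancement layer: if JS runs, take over submission and patch the DOM.
  document.querySelectorAll('form[data-enhance]').forEach((form) => {
    form.addEventListener('submit', async (event) => {
      event.preventDefault();
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      // Assumes the server returns an HTML fragment for the new comment.
      form.insertAdjacentHTML('beforebegin', await response.text());
      form.reset();
    });
  });
</script>
```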
Here's another good example from that time, which is actually only somewhat forward compatible (doesn't render on my Android), but the explanation still renders https://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
Second edit: "Streaming server rendering", "Progressive rehydration", "Partial rehydration", "Trisomorphic rendering"... Seems I woke up in a different universe today.
the concept is not new in any sense (duh). I mean: send the HTML first, send the important CSS first so the browser can show something (with the correct dimensions, hopefully, to avoid reflows, flashes of white, and so on), and JS interaction "always" got enabled on some kind of trigger (onload, DOMContentLoaded, or simply the script tag being added after the closing body tag, which as far as I know meant all of the event handlers registered by the time the DOM tree was there)
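In other words, the classic trick was just ordering; a minimal illustration:

```html
<body>
  <button id="menu">Menu</button>

  <!-- Placed at the end of <body>, so the elements above already exist
       when this runs; no onload/DOMContentLoaded dance needed. -->
  <script>
    document.getElementById('menu').addEventListener('click', () => {
      /* toggle the menu */
    });
  </script>
</body>
```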
then chunking was the next step, and basically the logical endpoint is this mix-and-match strategy that NextJS is "leading" (?), by allowing things to be streamed in while sending and caching as much of the static parts up front as possible.
Oh no, go read about AJAX. You may be too young; there's no revolution here. I celebrate your curiosity, but it's still a wheel. Changing the name doesn't make it any less of a reinvention.
Agreed! Astro is fantastic, but the biggest barrier I had in learning it was getting my head around the range of terms that developers who entered the workplace after 2010 have invented to describe "how the web works".
I really appreciate that innovation can sometimes reinvent things that exist out of pure ignorance and sometimes hubris. I’ve seen people turn mountains out of molehills that take on a lore of their own and then along comes someone new who doesn’t know or is a little too sure of themselves and suddenly the problem is solved with something seemingly obvious.
I’m not sure javascript islands is that but I appreciate a new approach to an old pattern.
This field (software) in general, and especially web stuff, has no memory. It’s a cascade of teens and twentysomethings rediscovering the same paradigms over and over again with different names.
I think we are overdue for a rediscovery of object oriented programming and OOP design patterns but it will be called something else. We just got through an era of rediscovering the mainframe and calling it “cloud native.”
Every now and then you do get a new thing like LLMs and diffusion models.
Wasted effort reinventing things and rediscovering their limits and failure modes is a major downside of the ageism that is rampant in the industry. There is nobody around to say “actually this has been tried three times by three previous generations of devs and this is what they learned.”
Another one: WASM is a good VM spec and the VM implementations are good, but the ecosystem with its DSLs and component model is getting an over engineered complexity smell. Remind anyone of… say… J2EE? Good VM, good foundation, massive excess complexity higher up the stack.
Even if you are 10 years younger than me and you've been doing webdev since your late teens, you've seen like three spins of the concept wheel coming back again and again.
Htmx and, even more so, Datastar have brought us back on track. Hypermedia + Ajax with whatever backend language(s)/framework(s) you want. Whereas Astro is Astro.
I agree with that it’s not a new concept by itself, but the way it’s being done is much more elegant in my opinion.
I originally started as a web developer at a time when PHP+jQuery was the state of the art for interactive webpages, shortly before React with SPAs became a thing.
Looking back at it now, architecturally, the original approach was nicer; however, the DX used to be horrible at the time. Remember trying to debug with PHP on the frontend? I wouldn’t want to go back to that. SPAs have their place, especially in customer dashboards or heavily interactive applications, but Astro finds a very nice balance: having your server and client code in one codebase, being able to define which is which, and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
> Remember trying to debug with PHP on the frontend? I wouldn’t want to go back to that.
I do remember that, all too well. Countless hours spent with templates in Symfony, or dealing with Zend Framework and all that jazz...
But as far as I remember, the issue was debuggability and testing of the templates themselves, which was easily managed by moving functionality out of the templates (lots of people put lots of logic into templates...) and then putting that behavior under unit tests. Basically being better at enforcing proper MVC split was the way to solve that.
The DX wasn't horrible outside of that, even early 2010s which was when I was dealing with that for the first time.
The main difference is as simple as modern web pages having on average far more interactivity.
More and more logic moved to the client (and JS) to handle the additional interactivity, creating new frameworks to solve the increasing problems.
At some point, the bottleneck became the context switching and data passing between the server and the client.
SPAs and tools like Astro propose themselves as a way to improve DX in this context, either by creating complete separation between the two worlds (SPAs) or by making the boundary transparent (Astro).
Well, that's a way to manage server-side logic, but your progressively-enhanced client-side logic (i.e. JS) still wasn't necessarily easy to debug, let alone write unit tests for.
> but your progressively-enhanced client-side logic (i.e. JS) still wasn't necessarily easy to debug, let alone write unit tests for
True, don't remember doing much unit testing of JavaScript at that point. Even with BackboneJS and RequireJS, manual testing was pretty much the approach, trying to make it easy to repeat things with temporary dev UIs and such, that were commented out before deploying (FTPing). Probably not until AngularJS v1 came around, with Miško spreading the gospel of testing for frontend applications, together with Karma and PhantomJS, did it feel like the ecosystem started to pick up on testing.
There wasn't as much JS to test. I built a progressively-enhanced SQLite GUI not too long ago to refresh my memory on the methodology, and I wound up with 50-ish lines of JS not counting Turbo. Fifty. It was a simple app, but it had the same style of partial DOM updates and feel that you would see from a SPA when doing form submissions and navigation.
> Astro finds a very nice balance of having your server and client code in one codebase, being able to define which is which and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
the point is pretty much that you can do more JS for rich client-side interactions in a much more elegant way without throwing away the benefits of "back in the days" where that's not needed.
Modern PHP development with Laravel is wildly effective and efficient.
Facebook brought React forth with influences from PHPs immediate context switching and Laravel’s Blade templates have brought a lot of React and Vue influences back to templating in a very useful way.
> because it implies something better would have replaced it
Hah, if only... Time and time again the ecosystem moves not to something that is better, but "same but different" or also commonly "kind of same but worse".
Back in your day, there wasn’t a developer experience by which you could build a website or web app (or both) as a single, cohesive unit, covering both front-end and back-end, while avoiding the hydration performance hit. Now we have Astro, Next.js with RSC, and probably at least a dozen more strong contenders.
That is a perfect description of it, released 1996. So far ahead of its time it’s not even funny. Still one of the best programming environments I’ve ever used almost <checks calendar> 30 years later.
The syntax is horrible, and seeing it today almost gives me the yuckies, but seems like the same idea to me, just different syntax more or less. I'm not saying it was better back then, just seems like very similar ideas (which isn't necessarily bad either)
I find that syntax more palatable than what is going on in JSX to be honest. I actually like having a separate declarative syntax for describing the tree structure of a page. Some languages manage to actually seamlessly incorporate this, for example SXML libraries in Schemes, but I have not seen it being done in a mainstream language yet.
Yeah, not a huge fan of either, to be honest, but I'd probably put JSX slightly before Twig, only because it's easier for someone who has written lots of HTML.
Best of them all has to be Hiccup, I think: the smallest and most elegant way of describing HTML. Same template, but as a Clojure function returning Hiccup:
And as you say, part of the language itself, which means no need to learn something different, and no need to learn a pseudo-HTML or lookalike like with JSX, which then needs to be actually parsed by the framework (or its dependencies), unlike SXML, which is already structured data, already understood perfectly in the same language and only needs to be rendered into HTML.
> there wasn’t a developer experience by which you could build a website or web app (or both) as a single, cohesive unit, covering both front-end and back-end
How much of the frontend and how much of the backend are we talking about? Contemporary JavaScript frameworks only cover a narrow band of the problem, and still require you to bootstrap the rest of the infrastructure on either side of the stack to have something substantial (e.g., more than just a personal blog with all of the content living inside of the repo as .md files).
> while avoiding the hydration performance hit
How are we solving that today with Islands or RSCs?
It can cover as much of the back-end that your front-end uses directly as you’d care to deploy as one service. Obviously if you’re going to have a microservices architecture, you’re not going to put all those services in the one Next.js app. But you can certainly build a hell of a lot more in a monolith than a personal blog serving a handful of .md files.
In terms of the front-end, there’s really no limit imposed by Next.js and it’s not limited to a narrow band of the problem (whatever that gibberish means), so I don’t know what you’re even talking about.
> How are we solving that today with Islands or RSCs?
Next.js/RSC solves it by loading JavaScript only for the parts of the page that are dynamic. The static parts of the page are never client-side rendered, whereas before RSC they were.
There are a lot of moving parts when building an application, and the abstractions that Next.js provides don't cover the same ground as frameworks like Ruby on Rails, Django, and Laravel. The narrow band of problems Next.js solves for you covers things like data fetching, routing, bundling assets, and rendering interactive UI. It leaves things like auth, interacting with a database, logging, mailing, crons, queues, etc. up to you to build yourself or integrate with 3rd-party services. When you work with one of those frameworks, they pretty much solve all of those problems for you, and you don't often have to leave the framework to get things done.
This is fine generally, because you have the choice to pick the right tool for the job, but in the context of "a single, cohesive unit" you can only get that with Next.js if all that you care about are those specific abstractions and you want your backend and frontend to be in the same language. Even then you run into this awkwardness where you have to really think about where your JavaScript is running, because it all looks the same. This might be a personal shortcoming, but that definitely broke the illusion of cohesion for me.
> The static parts of the page are never client-side rendered, whereas before RSC they were.
Didn't the hydration performance issues start when we started doing the contemporary SSR method of isomorphic JavaScript? I think Islands are great, and a huge improvement over how we started doing SSR with things like the Next.js Pages Router. But that's not truly revolutionary industry-wide, because we've been able to do progressive enhancement since long before contemporary frameworks caught up. The thing I'm clarifying here is that "before RSC" only refers to what was once possible with frameworks like Next.js, not what was possible in general; you could always template some HTML on the server and progressively enhance it with JavaScript.
Why do you think that? You could absolutely "build a website covering FE and BE without the hydration performance hit" using standard practices of the time.
You'd render templates in Jade/Handlebars/EJS, break them down into page components, apply progressive enhancement via JS. Eventually we got DOM diffing libraries so you could render templates on the client and move to declarative logic. DX was arguably better than today as you could easily understand and inspect your entire stack, though tools weren't as flashy.
In the 2010-2015 era it was not uncommon to build entire interactive websites from scratch in under a day, as you wasted almost no time fighting your tools.
I can only agree. To me Astro started as "it's just html and css but with includes."
I used it for my personal website, and recently used it when reimplementing the Matrix Conference website. It's really a no-fuss framework that is a joy to use.
Among the things I love about Astro:
- It's still html and css centric
- Once built, it doesn't require js by default
- You can still opt-into adding js for interactivity here and there
- Content collections are neat and tidy
- Astro massively optimizes for speed, and the maintainers know how to do it
- It has a very helpful devbar to help you visually figure out which easy fixes can make your website snappier (like lazily loading images if it detects them below the fold)
For the "optimize for speed" bit, an example is that the css minifier cleverly inlines some CSS to avoid additional queries. The Image component they provide will set the width and height attribute of an image to avoid content layout shifts. It will also generate responsive images for you.
> - It's still html and css centric
> - Once built, it doesn't require js by default
> - You can still opt-into adding js for interactivity here and there
I've never used Astro so forgive my ignorance, but isn't that just creating a .html file, a .css file and then optionally provide a .js file? What does Astro give you in this case? You'd get the same experience with a directory of files + Notepad basically. It's also even more optimized for speed, since there is no overhead/bloat at all, including at dev-time, just pure files, sent over HTTP.
> an example is that the css minifier cleverly inlines some CSS to avoid additional queries
Is that a common performance issue in the web pages you've built? I think across hundreds of websites, and for 20 years, not once have "CSS queries" been a bottleneck in even highly interactive webpages with thousands of elements, it's almost always something else (usually network).
For the first one, the main benefits of Astro over static html and css (for my use cases) are the ability to include components and enforce the properties that must be passed. A typical example would be [here][0] where I define a layout for the whole website, and then [on each page that uses it](https://github.com/matrix-org/matrix-conf-website/blob/main/...) I have to pass the right properties. Doable by hand, but it's great to have tooling that can yell at me if I forgot to do it.
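A sketch of what that enforcement looks like (prop names hypothetical; the `Props` interface and `Astro.props` are standard Astro):

```astro
---
// src/layouts/Layout.astro
// Astro type-checks that every page using this layout passes these props.
interface Props {
  title: string;
  description: string;
}
const { title, description } = Astro.props;
---
<html lang="en">
  <head>
    <title>{title}</title>
    <meta name="description" content={description} />
  </head>
  <body>
    <slot /> <!-- page content goes here -->
  </body>
</html>
```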
Content Collections also let me grab content from e.g. markdown or json and build pages automatically from it. The [Content Collections docs][1] are fairly straightforward.
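For reference, a content collection is declared with a schema, so frontmatter gets validated at build time; a minimal sketch (collection and field names made up):

```typescript
// src/content/config.ts
import { defineCollection, z } from 'astro:content';

const posts = defineCollection({
  type: 'content', // markdown files in src/content/posts/
  schema: z.object({
    title: z.string(),
    pubDate: z.date(),
    draft: z.boolean().default(false),
  }),
});

export const collections = { posts };
```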
As for performance issues, I've spent quite a bit of time on the countryside where connectivity was an issue and every extra request was definitely noticeable, hence the value of inlining it (you load one html file that has the css embedded, instead of loading an html file that then tells your browser to load an extra css file). The same can be true in some malls where I live.
Embedded CSS circumvents proper caching of the CSS. Also, with HTTP/2 your client can download several resources over a single connection, so it shouldn’t make much of a difference whether the CSS is embedded or separate. It's just that embedded CSS has to be loaded over and over again, whereas a separate file can be cached and reused from the local cache.
Caching is only relevant if you think your site is going to be visited by the same person often enough that the cache is worthwhile. If the assets are small that HTTP overhead is non-negligible, and if your CSS would get few enough cache hits, then you're often better off just inlining stuff.
HTTP/2 does not change this equation much. Server Push is dead, and bypasses caching anyway. Early Hints can help if configured correctly, but still require the client to make the request roundtrip to fetch that asset.
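The trade-off in concrete terms (file names made up):

```html
<!-- Option A: inline the critical CSS; first paint needs only one request,
     but the bytes are re-sent with every page view. -->
<style>
  body { margin: 0; font-family: sans-serif; }
</style>

<!-- Option B: external stylesheet; one extra round trip on a cold cache,
     free on a warm one. -->
<link rel="stylesheet" href="/main.css">
```

Early Hints amounts to the server sending a `103` response with `Link: </main.css>; rel=preload; as=style` before the real response, which starts the fetch sooner but doesn't eliminate it.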
> I've never used Astro so forgive my ignorance, but isn't that just creating a .html file, a .css file and then optionally provide a .js file? What does Astro give you in this case? You'd get the same experience with a directory of files + Notepad basically. It's also even more optimized for speed, since there is no overhead/bloat at all, including at dev-time, just pure files, sent over HTTP.
Astro is super for completely static stuff too. Sometimes static stuff can be complex, and that's where a modern framework like Astro shines.
I will share a couple of files to explain.
The site is almost completely static. It serves minimal JS for:
(1) Prefetching (you can block that and nothing will break)
(2) Mobile menu (you cannot make an accessible mobile menu without JS)
The site is for the docs and demos of a JS library. I want many demos on it, to be able to see what patterns the lib can handle and where things break down. I want to be able to add/remove demos quickly to try ideas. Straight HTML written in index.html files would not allow me to do that (but it is fine for the site where I have my CV, so I just use that there).
This is the Astro component I made that makes it super easy for me to try whatever idea I come up with:
So Astro is basically a (what we used to call) "static site generator" in that case? Something that existed for decades, and is basically just "compiling" templates, which could be in various syntaxes and languages, but in this case it's for JavaScript specifically. I guess the "what's old is new" point continues to stand tall :)
Again, I'm failing to see exactly what Astro is "innovating" (as you and others claim it is). There's nothing wrong with taking a workflow and making it really stable/solid, or making it really fast, or similar. But for the claim of being "innovative" to be true, they actually have to do something new, or at least put existing stuff together in a new way :)
I am not saying Astro is innovative and I don’t think I implied that in my reply. I don’t view Astro as innovative and I don’t understand people who view it like that because of, say, its islands. (We knew about islands before Astro.)
As you said, in the example I shared Astro is an SSG. It happens to use server-side JS, but this is irrelevant.
But it is more than that. Astro is an SSG and it is also a *very well made* SSG.
I have used all the usual suspects: Ruby ones, Go ones, Python ones, JS ones. The closest I came to having fun was 11ty, but 11ty is a bit too chaotic for me. Astro is the one that clicked. And the one that was fun to use right from day 1.
I am not a JavaScript person, mind you. JavaScript is not my strongest FE skill. The JS conventions, tricks, and syntaxes of modern FE frameworks, even less so.
So Astro did not click for me because of that. It clicked because of how well it is made and because of how fun it is to use.
Oh! It does this!
Oh! It does that!
Oh! It gives you type safety for your Markdown meta! (What?!)
Oh! It gives you out of the box this optimization I was putting together manually! You just have to say thisOptim: true in the configuration file!
Astro is a very well made tool that improves continually and that aligns with my vision of the platform and of how we should make stuff for the platform.
The best way to do "just html and css with includes" is to run any common webserver like nginx and turn on server side includes. It is literally just html and css with includes then. And zero javascript, anywhere, unless you want it.
SSI hasn't changed in 20+ years and it's extremely stable in all webservers. A very tiny attack surface with no maintenance problems. It just does includes of HTML fragments. The perfect amount of templating power to avoid redundancy but also avoid exploitable backends.
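For the curious, that's roughly one nginx directive plus a comment-style include in the HTML (paths are made up):

```nginx
# nginx: enable Server Side Includes for served pages
location / {
    ssi on;
}
```

```html
<body>
  <!--#include virtual="/fragments/header.html" -->
  <p>Actual page content…</p>
  <!--#include virtual="/fragments/footer.html" -->
</body>
```

The server stitches the fragments in before the response goes out, so the browser only ever sees plain HTML.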
I love Astro so much. After 20 years in data and backend, I got back into frontend for a big project. After banging my head against React, I took a leap of faith and chose Astro with Svelte. That includes an initial try with SvelteKit.
It's worked out so wonderfully. By being HTML/CSS centric, it forces a certain predictable organization with your front end code. I handed the frontend to another developer, with a React background, and because it's so consistently and plainly laid out, the transition happened almost overnight.
My one criticism, and the reason I ditched it for now, is that complex routing gets confusing and abstract quickly.
I don't know that there's a serious solution to it because complexity can't come with zero friction but just my gut feeling was to back out and go with something else for now.
I am feeling old reading the phrase "traditional frameworks" as a reference to SPA/Virtual DOM frameworks all while the actual traditional frameworks like Backbone, jQuery, etc. actually worked the way described in the blogpost.
"Traditional" has always been a measure that depends on when we were born. "Traditional" internet for me is 56kbit modems, vbulletin forums, GTA:VC modding and IRC, while for older people "traditional" internet is probably BBS and such, and for the younger crowd things like Discord are part of the "traditional" internet.
You see the same thing in political conservative/traditional circles, where basically things were good when they were young, and things today are bad, but it all differs on when the person was born.
>You see the same thing in political conservative/traditional circles, where basically things were good when they were young, and things today are bad, but it all differs on when the person was born
when things decline, that's still an accurate representation, not just an artifact of subjectivity
It's baffling to me why more SSR frameworks, namely Astro and NextJS, can't adopt static pages with dynamic paths like SvelteKit. For example, if you have a page /todos/[todoId], you can't serve those from your static bundle, and NextJS straight-out refuses to build your app statically.
Whereas SvelteKit builds happily and does this beautiful catch-all mechanism where a default response page, say 404.html in Cloudflare, fetches the correct page and, from the user's perspective, works flawlessly. Even though behind the scenes the response was a 404 (since that dynamic page was never really compiled). Really nice, especially when bundling your app as a webview for mobile.
I partly agree with you, but it is a design decision that comes with drawbacks. A URL like /todos/123 cannot be resolved in a SPA on a hard reload: if a user were to bookmark /todos/123 or press reload in the browser, the browser would ultimately ask the underlying HTTP server for that file. As you mentioned, you would need a 404 page configured to fetch that, but that requires configuration in the HTTP server (nginx etc.). So you are no longer just a static html+js+css+images deploy; you will always need server support.

Another issue is that 4xx errors are treated differently from 2xx in the HTTP spec: most notably, browsers generally won't cache those 404 responses, which ultimately means those /todos/123 bookmarks/hard reloads will tend to trigger a full download of the page, even though it could have been served from the cache. And again, you would always need support in the web server to overwrite 404 pages, while the current NextJS output can be deployed straight to something like github-pages or other webspace solutions.
Now, these are just the limitations I can think of, but there are probably more. And to be fair, why "break" the web this way, if you can just use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
> if a user were to bookmark /todos/123 or press reload in the browser, the browser would ultimately ask the underlying HTTP server for that file. As you mentioned, you would need a 404 page configured to fetch that - but that requires a configuration in the HTTP server (nginx etc.). So you are not just a static html+js+css+images deploy, you always will need server support.
> use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
We've had PATH_INFO in virtually every HTTP server since CGI/1.0, and we were using it for embedding parameters in URLs since SEO was a thing, if not earlier. Using PATH_INFO in a PHP script to access an ID was pretty common, even if it wasn't the default.
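e.g., a sketch of the classic pattern (file name made up); `$_SERVER['PATH_INFO']` holds whatever path trails the script name:

```php
<?php
// todo.php — requested as /todo.php/123, so PATH_INFO is "/123"
$id = (int) trim($_SERVER['PATH_INFO'] ?? '', '/');
echo "todo #" . $id;
```

With a rewrite rule in front, the script name disappears from the URL entirely, which is how those "pretty" PHP URLs were usually built.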
By way of example, here's a sample URL from vBulletin, a classic PHP application: <https://forum.vbulletin.com/forum/vbulletin-sales-and-feedback/vbulletin-pre-sales-questions/4387853-vbulletin-system-requirements>, where the section, subsection, topic id, and topic are embedded in the URL path, not the query string.
Interesting. You can set up the server to respond with 200.html as the catch-all so the requests would return 200. There was some issue with it—can't remember what—which is why I switched to 404.html. After the initial load though the subsequent navigations would go through pushState so I think they'd be cached.
But I don't see this is as big of a problem. With this I can switch and choose—SSR dynamic pages or use hacky catch-all mechanism. For any reasonably large site you probably would SSR for SEO and other purposes. But for completely offline apps I have to do zero extra work to render them as is.
Personally, I much prefer route paths to query parameters, not just because query strings look ugly but because they lose the hierarchy. Also, you can't then just decide to SSR the pages individually, as they're now permanently fixed to the same path.
I tend to agree that you could come up with a solution using server-side catch-all and custom 200/404 routes - and actually I do, as I use nginx with a single line of try_files customization. But this is optional. It shouldn't be required to mess with the server config, if you want a static deployment.
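For reference, the single-line try_files customization mentioned here typically looks something like this in nginx (the exact fallback file is whatever your generator emits, e.g. 200.html or index.html):

```nginx
location / {
    # serve the file if it exists, otherwise fall back to the SPA shell
    try_files $uri $uri/ /200.html;
}
```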
Besides, if you catch-all to a 200.html page, how would you serve 404s? Yes, you can integrate a piece of JS in the 200.html file and have it "display" 404, but the original HTTP response would have been 200 (not 404). A lot of bending web standards and technology, and I can see how framework authors probably decide against that. Especially given how much shit JS frameworks get for "reinventing the wheel" :)
It doesn't really matter from user's point of view if the response is 200 or 404 if the end result is the same. This is just a rendered web page after all. But yeah, you can get stuck in the semantics of it but I personally just use what works and move along.
Maybe I misunderstood you, but I did dynamic routes/pages for Next and Astro static builds, using Contentful or Storyblok as a CMS, where the editor defines the routes and the components/bloks per page. Basically, the projects had one slug like [...slug].
Routes and components per page are created dynamically while exporting Next or building Astro static pages. In both frameworks you create the pages/slugs via getStaticPaths. And if ISR is enabled, even new pages (that are not known at build time) are pre-rendered while the server is running.
In Next it is called dynamic routes[1] and in Astro dynamic pages[2].
Catch-all slugs in Next and Astro are written as [...slug], for example.
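As a rough sketch of how that build-time route generation works (framework-agnostic; fetchStories is a stand-in for a Contentful/Storyblok API call, and the story data is made up):

```javascript
// Sketch of a getStaticPaths() for a [...slug] catch-all route.
// fetchStories stands in for a CMS call (Contentful, Storyblok, ...).
async function fetchStories() {
  return [
    { full_slug: "about", title: "About us" },
    { full_slug: "blog/hello-world", title: "Hello, world" },
  ];
}

// The framework calls this at build time to learn every route to pre-render.
async function getStaticPaths() {
  const stories = await fetchStories();
  return stories.map((story) => ({
    params: { slug: story.full_slug }, // becomes /about, /blog/hello-world, ...
    props: { story },                  // passed to the rendered page
  }));
}
```

Any slug not returned here simply doesn't exist in the static output, which is exactly the limitation discussed below.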
I think since they used [todoId] in the example they mean a static page which does not exist at build time. Which both can do, it’s called ISG (or on-demand in the Astro docs), but it requires a server to work, or you can create a static route that accepts any parameters and pass those to JavaScript to run on the client.
It doesn't. Those are executed build-time and you can't just set a wildcard so anything outside the given set results in 404.
As background, I wanted to make a PoC with NextJS bundled into a static CapacitorJS app somewhat recently and had to give up because of this.
You can try tricking NextJS by transforming the pages into "normal" ones with eg query parameters instead of path, but then you need complicated logic changing the pages as well as rewriting links. As you of course want the normal routes in web app. Just a huge PITA.
Not sure why you gave up. All you need to do is use query params: /todo?id=123 and read `useSearchParams().get('id')` in your code. Yes, the URLs won't be that pretty, but I don't know if this is a roadblock. I have a NextJS webapp up and running that is 100% SPA (no SSR, full static export) and uses a C#/.NET REST backend for data [0]. Works (almost) flawlessly.
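Framework aside, the query-param approach boils down to reading the id out of the URL; a framework-free sketch using the standard URL/URLSearchParams APIs (in a Next.js client component you'd get the same value from useSearchParams().get("id")):

```javascript
// Framework-free equivalent of reading the id from /todo?id=123.
function todoIdFromUrl(url) {
  return new URL(url).searchParams.get("id"); // null when the param is absent
}

console.log(todoIdFromUrl("https://example.com/todo?id=123")); // "123"
```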
No. I disagree, you have to refactor all the pages from using [pathId] folder pattern and change all the links including switching to useSearchParams. It's just a huge change, especially if I want to keep the old routes in my web app.
Yes it would, and I can see with an existing app that it is work. But we are probably talking less than a days work? Not even using AI, but this is straightforward string replacements.
Right. So a custom build system which changes the built pages from dynamic path folders to named folders in a day. No bugs. Sure. Also I serve rich-text content with links so those gotta be rewritten as well.
It's clear to me that the frontend conversation space is broken. Not even just the ecosystem being a mess.
Boiling down the conversation I see in the article, it just seems to be: the browser as an HMI vs. the browser as an application runtime. Depending on what you want to do, one might be a better fit than the other. But the points it puts forward are fluff arguments like "it's a breath of fresh air" or "it loads faster".
It's difficult to articulate just how broken the discussion space is, and I haven't made a particularly strong argument myself. But I think it's important to keep pushing back on conversations that position frameworks as if they were brands winning hearts and minds, à la the fashion industry.
The fashion industry is the best analogy I've seen so far for frontend frameworks. It's obvious that the amount of technical rigor involved with declaring something "content-driven" and "server-first" is approximately zero.
Astro is trying to position itself in opposition to things like Next.js or Nuxt which are specifically marketed as application frameworks?
And the architecture is more suited to something like a content site, because of the content collections, built-in MDX support, SSR, image handling, and server routing?
They are referring to static non-interactive content (basically images and text) in sites like blogs, marketing site, docs, etc. as opposed to highly interactive, and dynamic components, think Facebook, Figma, etc.
> But the points it puts forward are fluff arguments like "it's a breath of fresh air" or "it loads faster".
Fluff arguments do exist, but you can also measure. The site is static with minimal JS on the one page, and a bit more JS on the other page, so nothing surprising in the numbers, and nothing you can say was achieved thanks to the magic of Astro, but I wanted to share them:
Write some straight HTML pages and serve it from bog standard Apache. Heck, get really fancy and do some server-side includes for your CSS or something.
It's really fast, you can edit it with Notepad, and you can probably saturate your bandwidth with a consumer level PC.
It's fluff because, well, our expectations are so unbelievably low. By the time you've bolted on every whizbang dingus leveraging four different languages (two of which are some flavor of Javascript), your twelve page site takes a couple of minutes to compile (what?), and it chokes your three load-balanced AWS compute nodes.
Web applications are hard. I get that. Web sites? They were, by design, incredibly simple. We make them complicated for often unclear reasons.
I appreciate what the Astro folks are trying to do, and it's very clever. But your basic Web site need not require freaking npm in order to "return to the fundamentals of the Web".
Astro will generate those HTML pages you can serve 'from bog standard Apache'.
You can then use all of those npm packages to do whatever processing on your data that you want to do to generate the content and the pages and then just serve it as HTML.
I'm a backend dev, but Astro is the first time a front end framework has made sense to me for years. It fits my mental model of how the web works - serving up HTML pages with some JS just like we did 20 years ago. It's just that I can have it connect to a DB or an API to pull the data at build time so that I can have it generate all of the pages.
But astro literally generates straight html which can be cached wherever you want...
As for build time, I don't have a clue - I haven't used astro (and don't plan to. Datastar + whatever backend framework you want is better). But I'm generally in favour of the direction they're bringing JS frameworks.
I think the main benefit is that you are not forced to use any other library or framework like React or Vue on top of it. You can simply use HTML or Web Components. However, Astro can perform similar tasks to Next or Nuxt, such as SSR, ISR, static site generation, and middleware.
Another difference and benefit of Astro, compared to other frameworks, is the islands architecture. This means you can implement micro frontends.
Island architecture and micro frontends are features that companies or projects may want if they have multiple teams. For example, one team could be working on the checkout process, another on the shopping basket, and another on product listings.
Now, you can use Astro to combine these components on a single route or page. And you control how these components are rendered. Astro also allows you to share global state between these islands.
This approach is beneficial because teams can develop and ship a single feature while having full responsibility for it. However, it also has downsides, and similar outcomes can be achieved with non-island architectures.
For instance, if all teams use React, it is common for each team to use a different version of React, forcing the browser to load all these versions. The same issue arises if one team uses Vue, another uses Angular, and another uses React or any other framework.
I'm not fully convinced that it will change the web. It is basically a Next or Nuxt without the library/framework lock-in. And it offers the islands architecture, which is usually only beneficial for very large projects.
But you should try it. I have worked with Astro since its first release, now for several years, and I can recommend giving it a try.
It is also a nice tool if you want to get rid of React or Vue and move to web components, or if you want to replace Next or Nuxt. You can do this with Astro, step by step.
To me no, it's not. It works well for some of the use cases, but if all you needed was offline rendering of your js in a build step to generate static html then you really didn't need all that js to begin with. islands work until they don't, and a lot of stuff gets inlined too. I guess it's fine if you stop caring about the final build.
I feel a lot of the hype around Astro has more to do with vite than anything else. And there yes, without doubt, vite is amazing.
Not the op, but I’d guess islands become a PITA when there is user-visible state that must be synchronized and rendered coherently across multiple islands.
Setting `client:load` and `client:visible` for (svelte) islands you want to run on the client ends up inlining script type modules all around the html. It looked like a big hack to me.
On the positive side their use of web components is a nice bet.
Please stop recommending Next.js as the de facto React framework, we need some critical thinking back into front-end. Remix (React Router v7) or TanStack are much better alternatives.
Second this. Next.js had potential, but it feels like it's gone downhill majorly since Vercel got involved.
Been on the Next.js journey since v10, lived through the v13 debacle and even now on v15, I've very much cooled on it.
I find both React and Next.js move way too fast and make incredibly radical changes sub-annually. It's impossible to keep up with. Maybe it could be justified if things improved from time to time, but often it just feels like changes for changes' sake.
Remix/React Router v7 was/is on a right path. I hope whatever they are planning with Remix with preact and using web standards will bring back the robust way of building websites.
I did not like how Remix to RR7 transition was made though, my project built using Remix was not an easy upgrade and I am rewriting a lot of it on RR7 now.
We're still on RR5 since they decided to change how nested routes work, a nested router doesn't get all params from parent routes, and after advocating absolute paths they switch to advocating relative paths, which are a mess when you want to use a component in multiple places and harder to debug / verify (all our routes are fully typed safe)
Please stop recommending React as the de facto framework, we need some critical thinking back into front-end. HTML, CSS and JS are much better alternatives.
Fundamentals of the Web haven't gone away, anyone still coding across PHP, Spring, Quarkus, ASP.NET MVC hasn't noticed that much how bad things have become with JS frameworks.
Unfortunately in fashion driven industry, it isn't always easy to keep to the basics.
I spent a small amount of time looking into Astro and I didn't get the difference from the Fresh framework created by the Deno team. Fresh does this islands architecture already, and the benchmarks on the Astro website don't include Deno+Fresh to compare. So I'm still wondering what the benefit of using Deno+Astro vs. Deno+Fresh is.
Astro and Fresh were both inspired by the islands idea which was iirc coined by an Etsy frontend architect and further elaborated on by the creator of Preact.
My understanding is that Astro is able to more-or-less take a component from any combo of popular frameworks and render it, whereas Fresh is currently limited to just Preact via Deno. I think the limitation is to optimize for not needing a build step, and not having to tweak the frameworks themselves like Astro does (did?).
I'm not affiliated; I've just looked at both tools before.
It doesn't, which is why all these solutions break down long-term compared to things like WP for small biz brochure stuff. 5 to 10 years from now when you're no longer talking to your client who has absolutely no technical experience, they're not gonna know that their website code is in some random GitHub repository that needs to be compiled with vite and then you need to magically wait for Netlify/etc to pull in your changes. They'll probably be fuming they have to find a developer that knows how to edit and manage that compared to something like WordPress which is used for the majority of those websites.
For a small business that simply displays an informational site, but occasionally needs to add content, a post on a blog or a change in the text or similar, it is sufficient to build a simple infrastructure of reading some markdown files and let them SFTP some markdown files and have periodic backups. One could also go as far as automatically committing those user uploaded files to a repo. None of which is difficult to do.
It does get more complicated, if the small business wants users to be able to message them through the website, not just via e-mail. Or if the business suddenly wants to have a shop functionality on the site. Then CMSs start to slowly become an option.
Many medium businesses don't even need that btw. In many instances marketing people just want to have control over websites, that they should not be given control over, since they usually are incapable of knowing the impact of what they are doing, when they add something like Google tagmanager to their site. They also tend to often break things, when using Wordpress, because they do not understand how it works under the hood, so that side of things is also not without hassle. And then the devs are called to fix what marketing broke. Even with Wordpress. At that point it would often be easier to let the devs build a non-Wordpress site, and any ideas about things that are not just content in markdown files need to be requests for the dev team to evaluate, and possibly work on, when deemed safe and lawful.
Sadly the power dynamics in businesses are often stacked against conscientious developers.
Have you ever worked with any SMBs before? This is at least 5 technical levels above their head. Would make as much sense as telling them, "just use this CLI tool".
We're talking about people who will email you from their phone that the website is down, but it turns out it's just their home internet that is down.
Or think that the website disappeared from the internet. When in reality it's now the #2 result in google and they never knew you could type a URL directly into the browser.
I just did a search for "wordpress hosting" and picked one of the first results without shopping around, and the (clearly overpriced) plan I saw was $9.99/mo. In ten years that'd be $1200.
A WP deployment on a simple shared hosting plan like that could run itself without needing a dev or sysadmin.
And then come the WP plugin license fees, which easily quadruple that cost, if not more. Just the other day I have seen a WP instance that has >50 plugins installed, some of which cost 800 Euro per year. They are lock-in traps for hapless marketing and other non-technical people, who are convinced that they definitely need this super-duper plugin.
And then come the legal fees for making the site actually conforming with the law, such as GDPR. Those fees are increased, because of people wanting to do stuff they need to declare to visitors of the site, for which they want reassurance, that all is well.
And then come the costs for paying a dev anyway, to fix things that they break or that become broken over time.
So no, $9.99/month is very, very far from a realistic price these businesses pay.
Well we have hosted 10 small business WP sites per $10 DigitalOcean droplet for the last decade. There are not additional plugin costs on any of them. And there has been no real maintenance needed.
I'm not saying WP is great. Taking over a WP project from someone else can be daunting in tech debt and weird choices. But in terms of having a simple brochure website for businesses that get < 10k weekly visitors, it's pretty quick, cheap, and easy.
Reality check: So you have no forms any user needs to input anything into, because if you had any, then you would need spam protection, lest your customer gets tons of spam. Cheapest one is probably Akismet, but that costs money for a business. So either you are talking about little blogs, or not businesses. In any case, probably real small sites of purely informational character without user input, otherwise it would cost something. Or you are letting your customers get spammed.
No real maintenance? So either you let your PHP version and plugins become outdated, or you sooner or later have to fix things breaking. Maybe you simply did not notice any breakage, because you don't do maintenance for customers?
A brochure website? Does that mean people enter their e-mail to be sent a brochure? (Then paragraph 1 applies again) Or brochure meaning, that you merely display information on pages and that's it?
I think for small info sites what you describe can be true, but for anything slightly larger not, especially not for small businesses.
If they need to bring in a CMS to edit content and juggle assets they could have used Wordpress in headless mode, which the customer was already used to…
Per default, Astro generates static pages. So it makes sense to compare it to an approach that doesn't.
Using a framework has upsides over writing static pages manually. Most notably, you can decompose your website into reusable components which makes your implementation more DRY. Also, you can fluently upgrade to a very interaction-heavy website without ever changing tech or architecture. But that's just what I value. I whole-heartedly recommend trying it out.
What about DX in terms of template execution speed? There are many implementations of template fragments on various platforms like Go and Rust [1], which might arguably perform better than their JavaScript counterparts. Wouldn't quicker execution give you a faster feedback loop when developing, and also give you a faster UX?
> Wouldn't quicker execution give you a faster feedback loop when developing, and also give you a faster UX?
Yes, I've used stuff like Templ for Go or Razor Pages for .NET.
Even if the raw HTML rendering performance is significantly better, there are other factors to consider in terms of dx.
1) Most backend languages will not hot reload modules in the client which is what Vite gives you.
Very often the whole backend application needs to be recompiled and restarted. Even with something like the .NET CLI which does have a hot reload feature (and it's absolute garbage btw) the whole page needs to be reloaded.
PHP has an advantage here since every request typically "runs the whole application".
But even with PHP, JS and CSS assets do not have hot reload unless you're also running Vite in parallel (which is what Laravel does).
With Astro you can run a single Vite dev server which takes care of everything with reliable and instant hot reload.
2) With Astro you will get islands which are simply not feasible with any non-JS backend. Islands are so much more powerful than old school progressive enhancement techniques. When we were using eg jQuery 15+ years ago it was a massive pain to coordinate between backend dynamic HTML, frontend JS code, and CSS. Now with islands you can encapsulate all that in a single file.
3) You also get CSS co-location. Meaning you can write an Astro server component with its own CSS scoped to the particular piece of markup. Again, CSS colocation is a huge win for dx. These days I write vanilla CSS with PostCSS but with Astro it's trivial to integrate any other CSS workflow: Tailwind, SCSS, etc.
4) Finally, you have to consider bundling of frontend assets. I don't think it's an exaggeration to say that solutions like Vite are really the best you can get in this space. Internally it uses Go and Rust but it's all abstracted for you.
If you have a use case where you really need exceptional HTML rendering performance in a monolithic application, Astro (or really anything in JS) is definitely a bad fit. But you can easily run an Astro server app on e.g. Cloudflare Workers, which would work in many of those use cases too and reduce latency and adapt dynamically to load.
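To illustrate points 2 and 3, a single-file Astro page mixing a static shell, one hydrated island, and scoped CSS might look like this (Counter and its path are hypothetical; the component could be Svelte, React, etc.):

```astro
---
import Counter from "../components/Counter.svelte"; // hypothetical island
---
<h1>Mostly static page</h1>

<!-- Only this subtree ships JS, and only once it scrolls into view -->
<Counter client:visible />

<style>
  /* Scoped: compiled to a selector that only matches this file's markup */
  h1 { font-variant: small-caps; }
</style>
```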
My personal website is written with Astro. I love it! Admittedly I'm still a little trigger-happy with my JS usage, but overall the ability to write templated pages and utilize islands that don't require big dependencies is the most friendly web development experience I've had.
From what I understand static web apps have a 'cold start' where if they haven't been touched in so long they get killed. I need to just add a function app that pings it periodically so it never goes down.
Thank you! Appreciate you sticking around and trying it again :) I am fairly proud of it, even in its simplicity.
I tried Astro but it's not for me. I was upgrading my personal website from Jekyll to Astro a couple of years back. It was a steep learning curve with lots of folders. Things used to break all the time. Then I switched to raw HTML and CSS.
The title change led to a bit of an unexpected jolt, I assumed I'd clicked on the wrong link. I'm not sure where that falls on the guidelines though given the circumstances.
Could someone compare it like I'm 5 to static site generators like Hugo, Jekyll? Does it make it easier to throw in necessary JS (e.g. for comments)? Thanks.
Astro is, and should be treated, as a static site generator.
> Does it make it easier to throw in necessary JS (e.g. for comments)?
With astro you can combine html, css and js in a single file (.astro). You write plain JS (TypeScript) within <script> tag. There, you can, e.g. import your comment library, point to separate .js/*.ts file or write whatever logic you want for client-side JS.
See the docs for example JS usage in astro components:
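A minimal sketch of what that single-file combination can look like (initComments and its module path are made-up placeholders for whatever comment library you use):

```astro
---
// Runs at build time / on the server; never shipped to the browser.
const title = "My blog post";
---
<h1>{title}</h1>
<div id="comments"></div>

<script>
  // Plain client-side JS/TS, bundled by Astro.
  // "initComments" and its path are hypothetical.
  import { initComments } from "../scripts/comments.js";
  initComments(document.getElementById("comments"));
</script>
```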
Coming from Go, I don't enjoy working with Go's template engine (or others). I have compared various third-party Go template libraries, and settled on JSX-like syntax, which is just way easier.
I've become pretty convinced everyone working in software should have something like a static site generator somewhere in their back pocket. It could be Astro, Hugo (my choice), or even pandoc running in server mode, which is unhackable because you would have to first understand the Haskell type system to hack it. But it should be there for all the little times you just want to spin up something fast, cheap, and content-driven.
Next.js hydrates only client components - so effectively it's doing island architecture. And it's react end to end. How's that different from Astro? Stating things like "Components without the complexity" doesn't really mean anything unless you do some comparisons?
I ported my personal website from Jekyll to Astro a few weeks back and I really liked it. Astro is much easier to build and extend for me (a personal preference point, and by "I" I mostly mean Claude), and it's cool to add React components to create more interactive points (but I haven't deployed that element yet).
Speed is probably the same as jekyll - but relative to my react vite and nextjs apps it's about 10 times faster.
I would definitely use Astro for more complicated websites than content driven - but would probably return to nextjs or more hefty full stack solutions for complicated web apps.
Potentially the heuristics would be about the level of user state management - e.g. if you're needing to do various workflows vs just presenting content.
I recently migrated my blog from WordPress to Astro (https://aligundogdu.com). The development process was genuinely enjoyable, and the performance boost especially in SEO and speed scores was a fantastic bonus.
What I understand from these conversations is the main selling proposition for Astro: if you have a content-heavy website, you should ship zero JavaScript. And I agree with that; most content-heavy websites will at most have a couple of pages consumed in a single session.
But if my "website" is an application, Javascript makes the whole user experience better, if implemented well. It doesn't matter that the user will wait for 1 more second if they will have to spend the entire day working on it.
The selling point is that you can ship JS, but only when necessary, and only scoped to the components that need it. It's not at all about removing the framework entirely.
Astro is great and I had a good experience using it. However, I still don't like the complexity needed. I don't mind a build step per se, but I'd just like to know that today's version of my site can still be built in ~5 years.
But if you don't know the underlying platform, how can you be sure that your framework is high level enough or that it can do everything that's possible on the platform? You either have a framework with escape hatches (that'll require you to drop down to the bad stuff) or you have to wait until the framework catches up to the platform. Actually now that I think about it, React would fall in both camps.
I've only heard of Astro before, but I got interested today and it seems like an intriguing framework.
That said, Astro also seems to be developed under a venture-backed company.
Is it still less likely to end up like Next.js and React under Vercel's influence?
>> See that code fence at the top? That runs at build time, not in the browser. Your data fetching, your logic - it all happens before the user even loads the page. You get brilliant TypeScript support without any of the complexity of hooks, state management, or lifecycle methods.
This is satire, right?
If only there was any other server side language that could do the same and produce static compliant super-light HTML-first pages!
With a PHP project, you have to make all the decisions yourself, which framework (Symfony, Laravel, etc.), how to structure things, whether you can use JSX-like syntax (you can’t, really), how to handle HTML escaping safely, and how to optimise images during build time. You can use Unpic in Laravel (The Unpic creator, also former Netlify employee is now on Astro team).
I'm aware there's a new PHP web framework that's somewhat similar to Astro, but I can't recall the name.
Astro gives you sensible defaults out of the box. It’s designed for modern web development, so things like partial hydration, automatic image optimisation, and using components from different frameworks just work.
The difference, it seems to me, is that astro is a framework and, specifically, meant for server side generated static pages with a bit of dynamic elements in them. I don't think it is all that capable of doing highly dynamic SSR apps (though their server Islands might make that possible? https://docs.astro.build/en/guides/server-islands/)
And, also, "php" in your question could be ruby, go, C or anything else that runs on the server.
I prefer htmx or, better yet, Datastar which are both small, backend agnostic, js libraries for making ssr html interactive to varying degrees. You could, in theory, use astro with them but probably better to just use something else.
> Astro needs to run on a server that can run node etc
It needs to run on your computer to generate the HTML, but you can just run npm run build then copy the contents of the dist folder to your Apache server, or wherever you want to host it.
At least, thats how I do it.
I haven't used PHP for about 20 years so I'm sure its changed a lot.
I'm not a frontend developer so I'm ignorant about this stuff so that's the first I've heard of it.
Do you know how you can do this in spring? Let's say I used Thymeleaf, is there a maven target I can use to walk over a database and generate every iteration of a html website?
I don't know anything about Spring, Thymeleaf, etc. I assume Google or LLMs could point you in the right direction. At worst, you could just literally crawl your site with HTTP requests and cache the result. You could even do it from Cloudflare Workers and, with the right caching headers, cache it all in the Cloudflare CDN.
I mean those are decision-makers, like managers or directors, small business owners, who will outsource or delegate the tech decisions to their developers or designers since they don’t have the skillset to design their site and branding themselves.
> Traditional frameworks hydrate entire pages with JavaScript. Even if you've got a simple blog post with one interactive widget, the whole page gets the JavaScript treatment. Astro flips this on its head. Your pages are static HTML by default, and only the bits that need interactivity become JavaScript "islands."
I guess I'd argue "Traditional Frameworks" were the ones that never stopped doing this. Laravel, Django, Rails etc. Then the SPA frameworks came along and broke everything.
Also - what on earth is "f*"? I originally assumed it was shorthand for "fuck" but is "fuck dream" a common expression? And wouldn't you normally write it as "f***"?
> Also - what on earth is "f*"? I originally assumed it was shorthand for "fuck" but is "fuck dream" a common expression? And wouldn't you normally write it as "f***"?
I would think "f**ing", to delve deeper into the meta discussion.
Although Astro is great and all, it collects telemetry by default and requires an opt-out. Zola and 11ty, on the other hand, don't do such nonsense and have a way smaller dependency footprint.
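For reference, disabling it is a one-time, machine-wide CLI command (documented in Astro's telemetry docs):

```shell
npx astro telemetry disable
```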
Oh okay. While Htmx-the-lib is JS, the standard practice is to make the site progressively enhanced by Htmx such that it still works without JS and improves accessibility for screen readers. While this could be possible with Datastar, I've read that it's generally more work to enable progressive enhancement, but maybe that's not a goal.
I misread the title and was quite confused about when the F* programming language connection will take a stage in the article. Spoiler alert: it never does, because the title doesn't make a reference to the F* programming language :D
I find it sad that Astro advertises itself this way, because I think that it is perfectly capable of building web projects of any complexity, simply by means of the component libraries you can plug in.
What makes it so great is not that it serves a particular niche (like "content-driven websites") but that it provides a developer experience that makes it incredibly easy to scale from a static website to something very complex and interaction-heavy without compromising UX.
That's the point.
It's a war on the recent-ish industry standard of developing all projects using the same tools made by/for huge organizations working on huge projects.
Same thing happened with microservice architecture.
Yeah, this is my gut-feeling too! Because the alternative for "complex" being discussed is Next.js, but that doesn't really help you with "complex" applications, and you still have to bootstrap a lot of infrastructure yourself (with dependencies, or by yourself).
>See that code fence at the top? That runs at build time, not in the browser. Your data fetching, your logic - it all happens before the user even loads the page.
I can't with these goddamn LLM blog posts; they just drown everything out.
It kind of surprises me that they never confuse "it's" and "its" and common mistakes like that, when it seems like most human writers today swap them randomly. I suppose that's thanks to a lot of the text in the training data predating the collapse of English education.
I'm not sure why em dashes are so popular, though. I don't think I've ever seen human writing that had as many em dashes as LLMs use.
I doubt it, to be honest. It's so distinctive, yet nearly nonexistent in human writing.
>With Astro you're not locked into a single way of doing things. Need React for a complex form? Chuck it in. Prefer Vue for data visualisation? Go for it. Want to keep most things as simple Astro components? Perfect.
>What struck me most after migrating several projects is how Astro makes the right thing the easy thing. Want a fast site? That's the default. Want to add interactivity? Easy, but only where you need it. Want to use your favourite framework? Go ahead, Astro won't judge.
>Developer experience that actually delivers
I am downvoted so I guess I'm wrong. It's just bland and formulaic in the way ChatGPT usually outputs. Sorry to the author if I'm wrong.
You're not just venting about me being uncomfortable with dashes — you are making a philosophical claim about what comfortability really means. And that? That is a true super-power.
The quote you posted from the article isn't even the right size of dash though. It's a hyphen, not an em dash. And an LLM would have it properly touching the surrounding text.
Back in my days we called this "progressive enhancements" (or even just "web pages"), and was basically the only way we built websites with a bit of dynamic behavior. Then SPAs were "invented", and "progressive enhancements" movement became something less and less people did.
Now it seems that is called JavaScript islands, but it's actually just good ol' web pages :) What is old is new again.
Bit of history for the new webdevs: https://en.wikipedia.org/wiki/Progressive_enhancement
Astro's main value prop is that it integrates with JS frameworks, lets them handle subtrees of the HTML, renders their initial state as a string, and then hydrates them on the client with preloaded data from the server.
TFA is trying to explain that value to someone who wants to use React/Svelte/Solid/Vue but only on a subset of their page while also preloading the data on the server.
It's not necessarily progressive enhancement because the HTML that loads before JS hydration doesn't need to work at all. It just matches the initial state of the JS once it hydrates. e.g. The <form> probably doesn't work at all without the JS that takes it over.
These are the kind of details you miss when you're chomping at the bit to be cynical instead of curious.
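The render-then-hydrate pattern described above can be sketched, very roughly, in plain TypeScript (all names here are hypothetical; real frameworks do this far more elaborately):

```typescript
// Rough sketch of the island pattern: the server renders the component's
// initial state as HTML and embeds the same state as JSON, so the client
// can hydrate without refetching.
type Todo = { id: number; text: string };

// "Server" side: render initial state to a string
function renderIsland(todos: Todo[]): string {
  const items = todos.map((t) => `<li>${t.text}</li>`).join("");
  return (
    `<ul id="todos">${items}</ul>` +
    `<script type="application/json" id="todos-data">${JSON.stringify(todos)}</script>`
  );
}

// "Client" side: read the preloaded data back out before handing it to a framework
function preloadedState(html: string): Todo[] {
  const match = html.match(/<script[^>]*>(.*)<\/script>/);
  return match ? (JSON.parse(match[1]) as Todo[]) : [];
}

const html = renderIsland([{ id: 1, text: "write docs" }]);
console.log(preloadedState(html)[0].text); // "write docs"
```

The point is that the shipped HTML matches the framework's initial state; whether that HTML is also functional without JS is a separate (progressive enhancement) concern.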
Sending functional HTML, and then only doing dynamic things dynamically, that's where the value is for web _apps_. So if what you point out is the value proposition for Astro, then I am not getting it, and don't see its value.
Just compare the two cases, assuming 100ms for the initial HTML loading and 200ms for JS loading and processing.
With full JS, you don't see anything for 300ms; the form does not exist (300ms is a noticeable delay for the human eye).
With frameworks such as Astro, after 100ms you already see the form. By the time you move the mouse and/or start interacting with it, the JS will probably be ready (because 200ms is almost instant in the context of starting an interaction).
This is not new at all, old school server side processing always did this. The advantage is writing the component only once, in one framework (React/vue/whatev). The server-client passage is transparent for the developer, and that wasn't the case at all in old school frameworks.
Note that I'm not saying this is good! But this is the value proposition of Astro and similar frameworks: transparent server-client context switching, with better performance perceived by the user.
What a value!
I guess I may be chomping at the bit to be cynical, but I have quite a bit of experience in these fields, and I don't think Astro sounds especially transformative.
I think your comment gets at a very specific and subtle nuance that is worth mentioning, namely that typically, if you were a progressive-enhancement purist, you'd have a fallback that did work: a form that submitted normally, a raw table that could be turned into an interactive graph, etc.
I don't think these details are mutually exclusive though, and it was certainly valid in those days to add something that didn't have a non-JS default rendering mode; it was just discouraged from being in the critical path. Early fancy "engineered" webapps like Flipboard got roasted for poorly re-implementing their core text content on top of canvas so they could reach 60fps scrolling, but if JS didn't work their content wasn't there, and they threw out a bunch of accessibility stuff that you'd otherwise get for free.
Now that I'm thinking back, it's hard to recall situations *at that time* where there would be both something you couldn't do without JavaScript and that couldn't also have a more raw/primitive version, but one example that comes to mind and would still be current are long-form explanations of concepts that can be visualized, such as those that make HN front page periodically. You would not tightly couple the raw text content to the visualization, you would enhance the content with an interactive visual, but not necessarily have fallback for it, and this would still be progressive enhancement.
Here's another good example from that time, which is actually only somewhat forward compatible (doesn't render on my Android), but the explanation still renders https://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
Edit: according to WP history, around December 2020
https://en.wikipedia.org/w/index.php?title=Hydration_(web_de...
Second edit: "Streaming server rendering", "Progressive rehydration", "Partial rehydration", "Trisomorphic rendering"... Seems I woke up in a different universe today.
Then chunking was the next step, and basically the logical endpoint is this mix-and-match strategy that Next.js is "leading" (?), by allowing things to be streamed in while sending and caching as much of the static parts up front as possible.
Nicely sums up a lot of interactions these days
I’m not sure JavaScript islands are that, but I appreciate a new approach to an old pattern.
I think we are overdue for a rediscovery of object oriented programming and OOP design patterns but it will be called something else. We just got through an era of rediscovering the mainframe and calling it “cloud native.”
Every now and then you do get a new thing like LLMs and diffusion models.
Another one: WASM is a good VM spec and the VM implementations are good, but the ecosystem with its DSLs and component model is getting an over engineered complexity smell. Remind anyone of… say… J2EE? Good VM, good foundation, massive excess complexity higher up the stack.
https://en.wikipedia.org/wiki/Eternal_September
I originally started as a web developer during the time where PHP+jQuery was the state of the art for interactive webpages, shortly before React with SPAs became a thing.
Looking back at it now, architecturally, the original approach was nicer; however, DX used to be horrible at the time. Remember trying to debug with PHP on the frontend? I wouldn’t want to go back to that. SPAs have their place, especially in customer dashboards or heavily interactive applications, but Astro finds a very nice balance: having your server and client code in one codebase, being able to define which is which, and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
I do remember that, all too well. Countless hours spent with templates in Symfony, or dealing with Zend Framework and all that jazz...
But as far as I remember, the issue was debuggability and testing of the templates themselves, which was easily managed by moving functionality out of the templates (lots of people put lots of logic into templates...) and then putting that behavior under unit tests. Basically being better at enforcing proper MVC split was the way to solve that.
The DX wasn't horrible outside of that, even early 2010s which was when I was dealing with that for the first time.
More and more logic moved to the client (and JS) to handle the additional interactivity, creating new frameworks to solve the increasing problems.
At some point, the bottleneck became the context switching and data passing between the server and the client.
SPAs and tools like Astro propose themselves as a way to improve DX in this context, either by creating complete separation between the two worlds (SPAs) or by making it transparent (Astro).
True, don't remember doing much unit testing of JavaScript at that point. Even with BackboneJS and RequireJS, manual testing was pretty much the approach, trying to make it easy to repeat things with temporary dev UIs and such, that were commented out before deploying (FTPing). Probably not until AngularJS v1 came around, with Miško spreading the gospel of testing for frontend applications, together with Karma and PhantomJS, did it feel like the ecosystem started to pick up on testing.
> Astro find a very nice balance of having your server and client code in one codebase, being able to define which is which and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
the point is pretty much that you can do more JS for rich client-side interactions in a much more elegant way without throwing away the benefits of "back in the days" where that's not needed.
Modern PHP development with Laravel is wildly effective and efficient.
Facebook brought React forth with influences from PHP's immediate context switching, and Laravel’s Blade templates have brought a lot of React and Vue influences back to templating in a very useful way.
Dear God. In 20 years people will hire HTML experts as if they are COBOL experts today.
Hah, if only... Time and time again the ecosystem moves not to something that is better, but "same but different" or also commonly "kind of same but worse".
There are so many cases where the "worse" solution "won", and there is a reason "worse is better" is such a popular mantra: https://en.wikipedia.org/wiki/Worse_is_better
That is a perfect description of it, released 1996. So far ahead of its time it’s not even funny. Still one of the best programming environments I’ve ever used almost <checks calendar> 30 years later.
Isn't that basically just what Symfony + Twig does? Server-side rendering, and you can put JS in there if you want to. Example template:
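Something along these lines (a hypothetical Twig template, reconstructed as a sketch since the original snippet is missing):

```twig
{# page.html.twig (hypothetical): server-rendered, with optional JS sprinkled in #}
<h1>{{ title }}</h1>
<ul>
  {% for item in items %}
    <li>{{ item.name }}</li>
  {% endfor %}
</ul>
<script>
  /* progressive enhancement hook, only if you want it */
  document.querySelector('h1').classList.add('enhanced');
</script>
```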
The syntax is horrible, and seeing it today almost gives me the yuckies, but it seems like the same idea to me, just different syntax more or less. I'm not saying it was better back then, just that the ideas are very similar (which isn't necessarily bad either). Best of them all has to be hiccup, I think: the smallest and most elegant way of describing HTML. Same template, but as a Clojure function returning hiccup:
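A hiccup version of the same idea might look like this (a sketch; the function and keys are hypothetical):

```clojure
;; Hiccup: HTML as plain Clojure data (vectors and keywords), returned
;; from an ordinary function.
(defn page [title items]
  [:div
   [:h1 title]
   [:ul
    (for [item items]
      [:li (:name item)])]])
```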
Basically, just lists/vectors with built-in data structures in them, all part of the programming language itself. Have a look at: https://www.gnu.org/software/guile/manual/html_node/SXML.htm... or https://docs.racket-lang.org/sxml/SXML.html
And as you say, part of the language itself, which means no need to learn something different, and no need to learn a pseudo-HTML or lookalike like with JSX, which then needs to be actually parsed by the framework (or its dependencies), unlike SXML, which is already structured data, already understood perfectly in the same language and only needs to be rendered into HTML.
How much of the frontend and how much of the backend are we talking about? Contemporary JavaScript frameworks only cover a narrow band of the problem, and still require you to bootstrap the rest of the infrastructure on either side of the stack to have something substantial (e.g., more than just a personal blog with all of the content living inside of the repo as .md files).
> while avoiding the hydration performance hit
How are we solving that today with Islands or RSCs?
In terms of the front-end, there’s really no limit imposed by Next.js and it’s not limited to a narrow band of the problem (whatever that gibberish means), so I don’t know what you’re even talking about.
> How are we solving that today with Islands or RSCs?
Next.js/RSC solves it by loading JavaScript only for the parts of the page that are dynamic. The static parts of the page are never client-side rendered, whereas before RSC they were.
This is fine generally because you have the choice to pick the right tool for the job, but in the context of "a single, cohesive unit" you can only get that with Next.js if all that you care about are those specific abstractions and you want your backend and frontend to be in the same language. Even then you run into this awkwardness where you have to really think about where your JavaScript is running, because it all looks the same. This might be a personal shortcoming, but that definitely broke the illusion of cohesion for me.
> The static parts of the page are never client-side rendered, whereas before RSC they were.
Didn't the hydration performance issues start when we started doing the contemporary SSR method of isomorphic JavaScript? I think islands are great, and a huge improvement over how we started doing SSR with things like the Next.js Pages Router. But that's not truly revolutionary industry-wide, because we've been able to do progressive enhancement since long before contemporary frameworks caught up. The thing I'm clarifying here is that "before RSC" only refers to what was once possible with frameworks like Next.js, not what was possible in general; you could always template some HTML on the server and progressively enhance it with JavaScript.
You'd render templates in Jade/Handlebars/EJS, break them down into page components, apply progressive enhancement via JS. Eventually we got DOM diffing libraries so you could render templates on the client and move to declarative logic. DX was arguably better than today as you could easily understand and inspect your entire stack, though tools weren't as flashy.
In the 2010-2015 era it was not uncommon to build entire interactive websites from scratch in under a day, as you wasted almost no time fighting your tools.
I used it for my personal website, and recently used it when reimplementing the Matrix Conference website. It's really a no-fuss framework that is a joy to use.
Among the things I love about Astro:
- It's still HTML and CSS centric
- Once built, it doesn't require JS by default
- You can still opt into adding JS for interactivity here and there
- Content collections are neat and tidy
- Astro massively optimizes for speed, and the maintainers know how to do it
- It has a very helpful dev bar to help you visually figure out what easy fix can make your website snappier (like lazily loading images if it detects them below the fold)
For the "optimize for speed" bit, an example is that the css minifier cleverly inlines some CSS to avoid additional queries. The Image component they provide will set the width and height attribute of an image to avoid content layout shifts. It will also generate responsive images for you.
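The Image behaviour mentioned above looks roughly like this in use (the asset path is hypothetical; `astro:assets` is Astro's built-in image module):

```astro
---
import { Image } from "astro:assets";
import hero from "../assets/hero.png"; // hypothetical local asset
---
<!-- Astro reads the image's dimensions at build time and emits width/height
     attributes, so the page doesn't shift when the image loads -->
<Image src={hero} alt="Hero illustration" />
```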
I've never used Astro so forgive my ignorance, but isn't that just creating a .html file, a .css file and then optionally provide a .js file? What does Astro give you in this case? You'd get the same experience with a directory of files + Notepad basically. It's also even more optimized for speed, since there is no overhead/bloat at all, including at dev-time, just pure files, sent over HTTP.
> an example is that the css minifier cleverly inlines some CSS to avoid additional queries
Is that a common performance issue in the web pages you've built? Across hundreds of websites, and over 20 years, not once have "CSS queries" been a bottleneck for me, even in highly interactive webpages with thousands of elements; it's almost always something else (usually network).
For the first one, the main benefits of Astro over static html and css (for my use cases) are the ability to include components and enforce the properties that must be passed. A typical example would be [here][0] where I define a layout for the whole website, and then [on each page that uses it](https://github.com/matrix-org/matrix-conf-website/blob/main/...) I have to pass the right properties. Doable by hand, but it's great to have tooling that can yell at me if I forgot to do it.
Content Collections also let me grab content from e.g. markdown or json and build pages automatically from it. The [Content Collections docs][1] are fairly straightforward.
As for performance issues, I've spent quite a bit of time on the countryside where connectivity was an issue and every extra request was definitely noticeable, hence the value of inlining it (you load one html file that has the css embedded, instead of loading an html file that then tells your browser to load an extra css file). The same can be true in some malls where I live.
[0]: https://github.com/matrix-org/matrix-conf-website/blob/main/... [1]: https://docs.astro.build/en/guides/content-collections/
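The props enforcement mentioned above can be sketched like this (a hypothetical `Layout.astro`; Astro's TypeScript support checks the `Props` interface at every call site):

```astro
---
// Layout.astro (sketch): the Props interface makes the tooling yell
// if a page forgets to pass title or description
interface Props {
  title: string;
  description: string;
}
const { title, description } = Astro.props;
---
<html lang="en">
  <head>
    <title>{title}</title>
    <meta name="description" content={description} />
  </head>
  <body>
    <slot />
  </body>
</html>
```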
HTTP/2 does not change this equation much. Server Push is dead, and bypasses caching anyway. Early Hints can help if configured correctly, but still require the client to make the request roundtrip to fetch that asset.
Astro is super for completely static stuff too. Sometimes static stuff can be complex, and that's where a modern framework like Astro shines.
I will share a couple of files to explain.
The site is almost completely static. It serves minimal JS for:
(1) Prefetching (you can block that and nothing will break)
(2) Mobile menu (you cannot make an accessible mobile menu without JS)
The site is for the docs and demos of a JS library. I want many demos on it, to be able to see what patterns the lib can handle and where things break down. I want to be able to add/remove demos quickly to try ideas. Straight HTML written in index.html files would not allow me to do that (but it is fine for the site where I have my CV, so I just use that there).
This is the Astro component I made that makes it super easy for me to try whatever idea I come up with:
https://github.com/demetris/omni-carousel/blob/main/site/com...
Here is one page with demos that use the component:
https://github.com/demetris/omni-carousel/blob/main/site/pag...
Basically, without a setup like this, I would publish the site with 3 or 4 demos, and I would maybe add 1 or 2 more after a few months.
Cheers!
Again I'm failing to see exactly what Astro is "innovating" (as you and others claim they're doing). There's nothing wrong with taking a workflow and making it really stable/solid, or making it really fast, or similar. But for the claim of being "innovative" to be true, they actually have to do something new, or at least put together stuff in a new way :)
As you said, in the example I shared Astro is an SSG. It happens to use server-side JS but this is irrelevant.
But it is more than that. Astro is an SSG and it is also a *very well made* SSG.
I have used all the usual suspects: Ruby ones, Go ones, Python ones, JS ones. The closest I came to having fun was 11ty, but 11ty is a bit too chaotic for me. Astro is the one that clicked. And the one that was fun to use right from day 1.
I am not a JavaScript person, mind you. JavaScript is not my strongest FE skill. The JS conventions, tricks, and syntaxes of modern FE frameworks, even less so.
So Astro did not click for me because of that. It clicked because of how well it is made and because of how fun it is to use.
Oh! It does this!
Oh! It does that!
Oh! It gives you type safety for your Markdown meta! (What?!)
Oh! It gives you out of the box this optimization I was putting together manually! You just have to say thisOptim: true in the configuration file!
Astro is a very well made tool that improves continually and that aligns with my vision of the platform and of how we should make stuff for the platform.
SSI hasn't changed in 20+ years and it's extremely stable in all webservers. A very tiny attack surface with no maintainence problems. It just does includes of html fragments. The perfect amount of templating power to avoid redundancy but also avoid expoitable backends.
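For reference, an SSI include is just a comment-style directive that the web server expands before serving the page (paths here are hypothetical):

```html
<!--#include virtual="/fragments/header.html" -->
<main>Page content here</main>
<!--#include virtual="/fragments/footer.html" -->
```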
It's worked out so wonderfully. By being HTML/CSS centric, it forces a certain predictable organization with your front end code. I handed the frontend to another developer, with a React background, and because it's so consistently and plainly laid out, the transition happened almost overnight.
I don't know that there's a serious solution to it because complexity can't come with zero friction but just my gut feeling was to back out and go with something else for now.
You see the same thing in political conservative/traditional circles, where basically things were good when they were young, and things today are bad, but it all differs on when the person was born.
When things decline, that's still an accurate representation, not just an artifact of subjectivity.
People frequently conflate the two.
Whereas with SvelteKit, it builds happily and does this beautiful catch-all mechanism where a default response page, say 404.html in Cloudflare, fetches the correct page and from user-perspective works flawlessly. Even though behind the scenes the response was 404 (since that dynamic page was never really compiled). Really nice especially when bundling your app as a webview for mobile.
Now, these are just the limitations I can think of, but there are probably more. And to be fair, why "break" the web this way, if you can just use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
> use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
We had PATH_INFO in virtually every http server since CGI/1.0 and were using it for embedding parameters in urls since SEO was a thing, if not earlier. Using PATH_INFO in a PHP script to access an ID was pretty common, even if it wasn't the default.
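A minimal sketch of that classic pattern (the filename is hypothetical, and this requires a CGI-style server environment that populates `PATH_INFO`):

```php
<?php
// Requested as /todo.php/123 → $_SERVER['PATH_INFO'] is "/123"
$id = ltrim($_SERVER['PATH_INFO'] ?? '', '/');
if ($id === '' || !ctype_digit($id)) {
    http_response_code(404);
    exit('Not found');
}
echo 'Todo #' . htmlspecialchars($id);
```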
By way of example, here's a sample url from vBulletin, a classic PHP application <https://forum.vbulletin.com/forum/vbulletin-sales-and-feedback/vbulletin-pre-sales-questions/4387853-vbulletin-system-requirements>[0] where the section, subsection, topic id, and topic are embedded into the URL path, not the query string.
[0] https://forum.vbulletin.com/forum/vbulletin-sales-and-feedba...
But I don't see this as that big of a problem. With this I can switch and choose: SSR dynamic pages, or use the hacky catch-all mechanism. For any reasonably large site you would probably SSR for SEO and other purposes. But for completely offline apps I have to do zero extra work to render them as is.
Personally, I much prefer route paths to query parameters, not just because query strings look ugly but because they lose hierarchy. Also, you can't then just decide to SSR the pages individually, as they're now permanently fixed to the same path.
Besides, if you catch-all to a 200.html page, how would you serve 404s? Yes, you can integrate a piece of JS in the 200.html file and have it "display" 404, but the original HTTP response would have been 200 (not 404). A lot of bending web standards and technology, and I can see how framework authors probably decide against that. Especially given how much shit JS frameworks get for "reinventing the wheel" :)
Routes and components per page are dynamically created when you export Next or build Astro static pages. In both frameworks you create the pages/slugs via getStaticPaths. And if ISR is enabled, even new pages (that are not known at build time) are pre-rendered while the server is running.
In Next it is called dynamic routes[1] and in Astro dynamic pages[2]. Catch all slugs in Next and Astro are [...slug] e.g..
[1] https://nextjs.org/docs/pages/building-your-application/rout...
[2] https://docs.astro.build/en/guides/routing/#example-dynamic-...
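An Astro dynamic page built with getStaticPaths might look like this (a sketch; the route and data are hypothetical):

```astro
---
// pages/todos/[id].astro: each returned id becomes a static page at build time
export function getStaticPaths() {
  const todos = [{ id: "123" }, { id: "456" }]; // hypothetical data source
  return todos.map((t) => ({ params: { id: t.id } }));
}
const { id } = Astro.params;
---
<h1>Todo #{id}</h1>
```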
[0]: https://docs.astro.build/en/guides/routing/#static-ssg-mode
As background, I wanted to make a PoC with NextJS bundled into a static CapacitorJS app somewhat recently and had to give up because of this.
You can try tricking NextJS by transforming the pages into "normal" ones with e.g. query parameters instead of paths, but then you need complicated logic changing the pages as well as rewriting links, as you of course want the normal routes in the web app. Just a huge PITA.
[0]: lockmeout.online
https://docs.astro.build/en/guides/server-islands/
Boiling down the conversation I see in the article, it just seems to be: the browser as an HMI vs. the browser as an application runtime. Depending on what you want to do, one might be a better fit than the other. But the points it puts forward are fluff arguments like "it's a breath of fresh air" or "it loads faster".
It's difficult to articulate the source of just how broken the discussion space is; nor have I made a particularly strong argument myself. But I think it's important to keep pushing back on conversations that position frameworks like they are brands winning hearts and minds, à la the fashion industry.
The fashion industry is the best analogy I've seen so far for frontend frameworks. It's obvious that the amount of technical rigor involved with declaring something "content-driven" and "server-first" is approximately zero.
Astro is trying to position itself in opposition to things like Next.js or Nuxt, which are specifically marketed as application frameworks?
And the architecture is more suited to something like a content site, because of the content collections, built-in MDX support, SSR, image handling, and server routing?
What do you mean when you say "a content site"?
To me, "content" == "literally anything that resides in the DOM".
But, clearly we aren't talking about that (I hope).
Fluff arguments do exist, but you can also measure. The site is static with minimal JS on the one page, and a bit more JS on the other page, so nothing surprising in the numbers, and nothing you can say was achieved thanks to the magic of Astro, but I wanted to share them:
HOME PAGE
TTFB: .024s
SR: .200s
FCP: .231s
SI: .200s
LCP: .231s
CLS: 0
TBT: .000s
PW: 108KB
DEMOS PAGE
TTFB: .033s
SR: .300s
FCP: .281s
SI: .200s
LCP: .231s
CLS: 0
TBT: .000s
PW: 174KB
It's really fast, you can edit it with Notepad, and you can probably saturate your bandwidth with a consumer level PC.
It's fluff because, well, our expectations are so unbelievably low. By the time you've bolted on every whizbang dingus leveraging four different languages (two of which are some flavor of Javascript), your twelve page site takes a couple of minutes to compile (what?), and it chokes your three load-balanced AWS compute nodes.
Web applications are hard. I get that. Web sites? They were, by design, incredibly simple. We make them complicated for often unclear reasons.
I appreciate what the Astro folks are trying to do, and it's very clever. But your basic Web site need not require freaking npm in order to "return to the fundamentals of the Web".
You can then use all of those npm packages to do whatever processing you want on your data to generate the content and the pages, and then just serve it as HTML.
I'm a backend dev, but Astro is the first time a front end framework has made sense to me for years. It fits my mental model of how the web works - serving up HTML pages with some JS just like we did 20 years ago. Its just that I can have it connect to a DB or an API to pull the data at build time so that I can have it generate all of the pages.
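That build-time data pattern, sketched (the endpoint is hypothetical; the code fence at the top of an Astro component runs on the server/build step, not in the browser):

```astro
---
// Runs at build time, not in the browser
const res = await fetch("https://api.example.com/posts");
const posts = await res.json();
---
<ul>
  {posts.map((post) => <li>{post.title}</li>)}
</ul>
```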
As for build time, I don't have a clue - I haven't used astro (and don't plan to. Datastar + whatever backend framework you want is better). But I'm generally in favour of the direction they're bringing JS frameworks.
I was amazed by how easy it was compared to my experience with Wordpress for this several years ago.
And I can host it for free on something like Netlify and I don’t need to worry about the site being hacked, like with WP.
I even built a very simple git-based CMS so that the client can update the content themselves.
Web dev has really come a long way, despite what a lot of people say.
But at least in Germany there are some agencies doing nothing else.
$550/TB for those who want to save a search.
Another difference, and a benefit of Astro compared to other frameworks, is the island architecture. This means you can implement micro frontends. Island architecture and micro frontends are features that companies or projects may want if they have multiple teams. For example, one team could be working on the checkout process, another on the shopping basket, and another on product listings.
Now, you can use Astro to combine these components on a single route or page, and you control how these components are rendered. Astro also allows you to share global state between these islands.
This approach is beneficial because teams can develop and ship a single feature while having full responsibility for it. However, it also has downsides, and similar outcomes can be achieved with non-island architectures.
For instance, if all teams use React, it is common for each team to use a different version of React, forcing the browser to load all these versions. The same issue arises if one team uses Vue, another uses Angular, and another uses React or any other framework.
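Combining islands from different teams or frameworks on one Astro page might look like this (component names are hypothetical; `client:visible` and `client:idle` are Astro's hydration directives):

```astro
---
// One page, islands owned by different teams (sketch)
import Listing from "../components/Listing.vue";
import Basket from "../components/Basket.jsx";
---
<Listing client:visible />  <!-- hydrate when scrolled into view -->
<Basket client:idle />      <!-- hydrate when the main thread is idle -->
```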
I'm not fully convinced that it will change the web. It is basically a Next or Nuxt without the library/framework lock-in. And it offers the island architecture, which is usually only beneficial for very large projects.
But you should try it. I have worked with Astro since its first release, now for several years, and I can recommend giving it a try.
It is also a nice tool if you want to get rid of React or Vue and move to web components, or if you want to replace Next or Nuxt. You can do this with Astro, step by step.
I feel a lot of the hype around Astro has more to do with vite than anything else. And there yes, without doubt, vite is amazing.
Like when?
On the positive side their use of web components is a nice bet.
Web Components are a JavaScript/ECMAscript standard.
So this is like saying: _Their use of Arrays is a nice bet_
Been on the Next.js journey since v10, lived through the v13 debacle and even now on v15, I've very much cooled on it.
I find both React and Next.js move way too fast and make incredibly radical changes sub-annually. It's impossible to keep up with. Maybe it could be justified if things improved from time to time, but often it just feels like changes for changes' sake.
I did not like how Remix to RR7 transition was made though, my project built using Remix was not an easy upgrade and I am rewriting a lot of it on RR7 now.
Unfortunately in fashion driven industry, it isn't always easy to keep to the basics.
My understanding is that Astro is able to more-or-less take a component from any combo of popular frameworks and render it, whereas Fresh is currently limited to just Preact via Deno. I think the limitation is to optimize for not needing a build step, and not having to tweak the frameworks themselves like Astro does (did?).
I'm not affiliated; I've just looked at both tools before.
Astro brings a friendly UI to maintain and update the sites? Like the WordPress panel and editor.
Many medium businesses don't even need that btw. In many instances marketing people just want to have control over websites, that they should not be given control over, since they usually are incapable of knowing the impact of what they are doing, when they add something like Google tagmanager to their site. They also tend to often break things, when using Wordpress, because they do not understand how it works under the hood, so that side of things is also not without hassle. And then the devs are called to fix what marketing broke. Even with Wordpress. At that point it would often be easier to let the devs build a non-Wordpress site, and any ideas about things that are not just content in markdown files need to be requests for the dev team to evaluate, and possibly work on, when deemed safe and lawful.
Sadly the power dynamics in businesses are often stacked against conscientious developers.
Have you ever worked with any SMBs before? This is at least 5 technical levels above their head. Would make as much sense as telling them, "just use this CLI tool".
We're talking about people who will email you from their phone that the website is down, but it turns out it's just their home internet that is down.
Or they think the website disappeared from the internet, when in reality it's now the #2 result in Google and they never knew you could type a URL directly into the browser.
A WP deployment on a simple shared hosting plan like that could run itself without needing a dev or sysadmin.
Maybe in some cases but that hasn't been my experience at all (or the experience of all the devs I know IRL).
Just a couple of weeks ago one of my clients installed a plugin which stopped users from being able to log in.
And then come the legal fees for making the site actually conform with the law, such as GDPR. Those fees increase because people want to do things they are required to declare to the site's visitors, and they want reassurance that everything is compliant.
And then come the costs for paying a dev anyway, to fix things that they break or that become broken over time.
So no, $9.99/month is very, very far from the realistic price these businesses pay.
I'm not saying WP is great. Taking over a WP project from someone else can be daunting in tech debt and weird choices. But in terms of having a simple brochure website for businesses that get < 10k weekly visitors, it's pretty quick, cheap, and easy.
No real maintenance? So either you let your PHP version and plugins become outdated, or you sooner or later have to fix things breaking. Maybe you simply did not notice any breakage, because you don't do maintenance for customers?
A brochure website? Does that mean people enter their e-mail to be sent a brochure? (Then paragraph 1 applies again.) Or does brochure mean that you merely display information on pages and that's it?
I think for small info sites what you describe can be true, but for anything slightly larger not, especially not for small businesses.
Eg. https://www.gatsbyjs.com/docs/glossary/headless-wordpress/
That's a really low bar. Why not static pages? Why even use a framework at all if you're thinking of using Astro?
Using a framework has upsides over writing static pages manually. Most notably, you can decompose your website into reusable components which makes your implementation more DRY. Also, you can fluently upgrade to a very interaction-heavy website without ever changing tech or architecture. But that's just what I value. I whole-heartedly recommend trying it out.
If you use static pages, how do you make sure that shared UI like navbars all update if you decide to make a change?
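That's the gap a component-based framework closes. As a sketch, assuming a hypothetical src/components/Navbar.astro: every page imports the one component, so a single edit propagates to all pages at build time.

```astro
---
// src/components/Navbar.astro (hypothetical path)
// Edit this once; every page that imports it is rebuilt with the change.
const { current } = Astro.props;
---
<nav>
  <a href="/" class={current === 'home' ? 'active' : ''}>Home</a>
  <a href="/blog" class={current === 'blog' ? 'active' : ''}>Blog</a>
</nav>
```

A page then pulls it in with `import Navbar from '../components/Navbar.astro';` in its frontmatter and `<Navbar current="home" />` in its template.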
[1] https://htmx.org/essays/template-fragments/#known-template-f...
Yes, I've used stuff like Templ for Go or Razor Pages for .NET.
Even if the raw HTML rendering performance is significantly better, there are other factors to consider in terms of DX.
1) Most backend languages will not hot reload modules in the client which is what Vite gives you.
Very often the whole backend application needs to be recompiled and restarted. Even with something like the .NET CLI which does have a hot reload feature (and it's absolute garbage btw) the whole page needs to be reloaded.
PHP has an advantage here since every request typically "runs the whole application".
But even with PHP, JS and CSS assets do not have hot reload unless you're also running Vite in parallel (which is what Laravel does).
With Astro you can run a single Vite dev server which takes care of everything with reliable and instant hot reload.
2) With Astro you will get islands which are simply not feasible with any non-JS backend. Islands are so much more powerful than old school progressive enhancement techniques. When we were using eg jQuery 15+ years ago it was a massive pain to coordinate between backend dynamic HTML, frontend JS code, and CSS. Now with islands you can encapsulate all that in a single file.
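A rough sketch of what that single-file coordination looks like (the paths and the Counter component are hypothetical): the page ships as static HTML, and only the marked subtree is hydrated.

```astro
---
// src/pages/index.astro (hypothetical)
import Counter from '../components/Counter.jsx';
---
<h1>Mostly static page</h1>
<!-- Only this island gets JavaScript; client:visible defers
     hydration until it scrolls into view. -->
<Counter client:visible />
```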
3) You also get CSS co-location. Meaning you can write an Astro server component with its own CSS scoped to the particular piece of markup. Again, CSS colocation is a huge win for dx. These days I write vanilla CSS with PostCSS but with Astro it's trivial to integrate any other CSS workflow: Tailwind, SCSS, etc.
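A minimal sketch of that co-location (the component name is made up): the <style> block is scoped by default, so the selectors only apply to this component's markup.

```astro
---
// A hypothetical Card.astro: markup, logic, and CSS in one file.
const { title } = Astro.props;
---
<article class="card">
  <h2>{title}</h2>
  <slot />
</article>

<style>
  /* Scoped: Astro rewrites these selectors at build time so they
     cannot leak into the rest of the page. */
  .card { border: 1px solid #ddd; padding: 1rem; }
</style>
```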
4) Finally, you have to consider bundling of frontend assets. I don't think it's an exaggeration to say that solutions like Vite are really the best you can get in this space. Internally it uses Go and Rust but it's all abstracted for you.
If you have a use case where you really need exceptional HTML rendering performance in a monolithic application, Astro (or really anything in JS) is definitely a bad fit. But you can easily run an Astro server app on e.g. Cloudflare Workers, which would work in many of those use cases too, reducing latency and adapting dynamically to load.
Still, yes, I prefer other tooling on the backend. But Astro is a good thing for JS devs.
https://evklein.com
Edit: Ah, finally, it loaded after about 30 seconds.
Edit 2: Fairly neat.
Thank you! Appreciate you sticking around and trying it again :) I am fairly proud of it, even in its simplicity.
> Does it make it easier to throw in necessary JS (e.g. for comments)?
With Astro you can combine HTML, CSS, and JS in a single file (.astro). You write plain JS (or TypeScript) within a <script> tag. There you can, e.g., import your comment library, point to a separate .js/.ts file, or write whatever client-side logic you want.
See the docs for example JS usage in astro components:
https://docs.astro.build/en/guides/client-side-scripts/#web-...
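For a concrete feel, a minimal sketch (the button and handler are invented for illustration): the <script> tag is bundled and shipped to the browser.

```astro
---
// Hypothetical page fragment with a plain client-side script.
---
<button id="greet">Say hi</button>

<script>
  // Runs in the browser; TypeScript is fine here too.
  document.getElementById('greet')?.addEventListener('click', () => {
    alert('hi');
  });
</script>
```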
You should try it out rather than just comparing.
Speed is probably the same as Jekyll's, but relative to my React + Vite and Next.js apps it's about 10 times faster.
I would definitely use Astro for websites more complicated than purely content-driven ones, but would probably return to Next.js or heftier full-stack solutions for complicated web apps.
Potentially the heuristic would be the level of user state management, e.g. whether you need various workflows vs. just presenting content.
But if my "website" is an application, Javascript makes the whole user experience better, if implemented well. It doesn't matter that the user will wait for 1 more second if they will have to spend the entire day working on it.
How else can you fully grasp what's possible on that platform and the costs of different abstractions?
Me? I’m using Html + css + xslt.
That said, Astro also seems to be developed under a venture-backed company. Is it still less likely to end up like Next.js and React under Vercel's influence?
This is satire, right? If only there was any other server side language that could do the same and produce static compliant super-light HTML-first pages!
https://unpic.pics/
I'm aware there's a new PHP web framework that's somewhat similar to Astro, but I can't recall the name.
Astro gives you sensible defaults out of the box. It’s designed for modern web development, so things like partial hydration, automatic image optimisation, and using components from different frameworks just work.
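The framework-mixing plus partial hydration comes down to per-component client directives. A sketch, assuming hypothetical Chart.vue and Search.jsx components:

```astro
---
// Hypothetical imports; the client:* directives are Astro's real API.
import Chart from '../components/Chart.vue';
import Search from '../components/Search.jsx';
---
<Chart client:visible />  <!-- hydrate when scrolled into view -->
<Search client:idle />    <!-- hydrate once the browser is idle -->
```

Anything without a directive renders to plain HTML and ships no JavaScript at all.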
And, also, "php" in your question could be ruby, go, C or anything else that runs on the server.
I prefer htmx or, better yet, Datastar, which are both small, backend-agnostic JS libraries for making SSR HTML interactive to varying degrees. You could, in theory, use Astro with them, but it's probably better to just use something else.
It’s php for javascript devs?
Astro needs to run on a server that can run node etc
And php can equally have its html cached.
It needs to run on your computer to generate the HTML, but you can just run npm run build and then copy the contents of the dist folder to your Apache server, or wherever you want to host it.
At least, thats how I do it.
I haven't used PHP for about 20 years so I'm sure its changed a lot.
Do you know how you can do this in Spring? Let's say I use Thymeleaf; is there a Maven target I can use to walk over a database and generate every iteration of an HTML website?
Can it be reliable for production use? Yes.
Can a non-techie make it reliable for production use? Who knows.
E-commerce and marketing sites are at opposite ends of the complexity spectrum.
Astro would be perfect for a marketing page (a non-techie could approach that) and doable for e-commerce (for an experienced dev).
Whether it SHOULD be used for e-commerce would be another question.
I guess I'd argue "Traditional Frameworks" were the ones that never stopped doing this. Laravel, Django, Rails etc. Then the SPA frameworks came along and broke everything.
Also - what on earth is "f*"? I originally assumed it was shorthand for "fuck" but is "fuck dream" a common expression? And wouldn't you normally write it as "f***"?
I would think F**ing, to delve deeper into the meta discussion.
npm run build -> static html and css
I prefer htmx and, better yet, datastar as they're backend-agnostic.
Datastar does everything htmx does and much more. And, iirc, is also smaller. Just explore their site, docs, essays etc
Seriously. This is how things are done in most non-JS frameworks.
Basically, not suitable for anything complex.
What makes it so great is not that it serves a particular niche (like "content-driven websites") but that it provides a developer experience that makes it incredibly easy to scale from a static website to something very complex and interaction-heavy without compromising UX.
Same thing happened with microservice architecture.
I can't with these goddamn LLM blog posts; they just drown everything out.
Sucks when everything you write sounds like a bot because you're autistic.
The fact that LLMs write like that is proof that people write like this too, since LLMs produce statistical averages of their input writing.
I'm not sure why em dashes are so popular, though. I don't think I've ever seen human writing that had as many em dashes as LLMs use.
Feeling less-than-human isn't great.
>With Astro you're not locked into a single way of doing things. Need React for a complex form? Chuck it in. Prefer Vue for data visualisation? Go for it. Want to keep most things as simple Astro components? Perfect.
>What struck me most after migrating several projects is how Astro makes the right thing the easy thing. Want a fast site? That's the default. Want to add interactivity? Easy, but only where you need it. Want to use your favourite framework? Go ahead, Astro won't judge.
>Developer experience that actually delivers
I am downvoted, so I guess I'm wrong. It's just bland and formulaic in the way ChatGPT usually outputs. Sorry to the author if I'm wrong.
Speak your truth, poster!