All these improvements in a single year, 2025. While this may seem obvious to those who follow the AI / LLM news, it may be worth pointing out again that ChatGPT was only introduced to us in November 2022.
I still don't believe AGI, ASI, or whatever AI will overtake humans in a short period of time, say 10 - 20 years. But it is hard to argue against the value of current AI, as many of the vocal critics on HN seem to do. People are willing to pay $200 per month, and it already has a $1B runway.
Being more of a hardware person, the most interesting part to me is the funding of all the latest hardware development. I know this is another topic HN hates because of the DRAM and NAND pricing issue, but it is exciting to see from a long-term view, where the pricing is short-term pain. Right now the industry is asking: we collectively have over a trillion dollars to spend on capex over the next few years, and will borrow more if need be, so when can you ship us 16A / 14A / 10A and 8A or 5A, LPDDR6, higher-capacity DRAM at lower power usage, better packaging, higher-speed PCIe, or a jump to optical interconnect? Every single part of the hardware stack is being infused with money and demand. The last time we had this was the post-PC / smartphone era, which drove the hardware industry forward for 10 - 15 years. The current AI wave can push hardware for at least another 5 - 6 years while pulling forward tech that was initially 8 - 10 years away.
I so wish I had bought some Nvidia stock. Then again, I guess no one knew AI would be as big as it is today, and it has only just started.
2025 was the year of AI agents as development tools. I think attention will shift to AI agents outside development tools. Most business users are still stuck using ChatGPT as some kind of grand oracle that will write their email or PowerPoint slides. There are bits and pieces of mostly technology-demo-level solutions, but nothing that is widely used the way AI coding tools are so far. I don't think this is bottlenecked on model quality.
I don't need an AGI. I do need a secretary-type agent that deals with all the simple yet laborious non-technical tasks that keep encroaching on my quality engineering time. I'm CTO of a small startup, and the amount of non-technical bullshit I need to deal with is enormous. Some examples of the random crap I handle: figuring out contracts, their meaning/implications for a situation, and deciding on a course of action; customer offers; price calculations; scraping invoices from emails and online SaaS accounts; formulating detailed replies to customer requests; HR legal work; corporate bureaucracy; financial planning; etc.
A lot of this stuff can be AI-assisted (and we get a lot of value out of AI tools for it), but context engineering takes up a non-trivial amount of my time. Also, most tools are completely useless at modifying structured documents. Refactoring a big code base: no problem. Adding structured text to an existing structured document: hardest thing ever. The state of the art here is an ff-ing sidebar that suggests markdown-formatted text you might copy/paste. Tool quality is very primitive. And then you find yourself stripping all formatting and reformatting it manually, because the tools really suck at this.
> People are willing to pay $200 per month, and it already has a $1B runway.
Those are three different things. There can be a LOT of fast and significant improvement while still remaining extremely far from the actual goal; so far it actually looks like little progress.
People pay for a lot of things, including snake oil, so convincing a lot of people to pay a bit is not in itself proof of value, especially when some people are basically coerced into it: see how many companies changed their "strategy" to mandating AI usage internally, or bundled an integration for a captive audience, e.g. Copilot.
Finally, yes, $1B is a LOT of money for you and me... but for the largest corporations it's actually not a lot. For reference, Google earned that in revenue... per day in 2023. Anyway, that's still a big number, BUT it has to be compared with how much OpenAI burns. I don't have any public number on that, but I believe the consensus is that it's a lot. So until we know that number, we can't talk about an actual runway.
Is the AI progress in 2025 an outstanding breakthrough? Not really. It's impressive but incremental.
Still, the gap between the capabilities of a cutting-edge LLM and those of a human is only so wide. There are only so many increments it takes to cross it.
Yes. If you're bottlenecked on silicon and secondaries like memory, why would you want to put more of those resources into lower margin consumer products if you could use those very resources to make and sell more high margin AI accelerators instead?
From a business standpoint, it makes some sense to throttle the gaming supply some. Not to the point of surrendering the market to someone else probably, but to a measurable degree.
Literally the only thing I've encountered regarding LLMs and AGI is morons stating that LLMs will never become AGI. I have no idea where the AGI arguments are coming from. No one I've ever worked with who uses LLMs is talking about AGI. It's just a fucking distraction from actually usable tools right now. Is there anything except a strawman for LLM AGI?
Honestly, I wouldn't be surprised if a system that's an LLM at its core can attain AGI. With nothing but incremental advances in architecture, scaffolding, training and raw scale.
Mostly the training. I put less and less weight on "LLMs are fundamentally flawed" and more and more of it on "you're training them wrong". Too many "fundamental limitations" of LLMs are ones you can move the needle on with better training alone.
The LLM foundation is flexible and capable, and the list of "capabilities that are exclusive to the human mind" is ever shrinking.
I use a qemu VM for running codex cli in yolo mode and simple ssh-based git operations for getting code in and out of there. Works great. You can also do fun things like let it loose on multiple git projects in one prompt. The VM can run docker as well, which helps with containerized tests and other more complicated things. One thing I've started to observe is that you spend more time waiting for tool execution than for model inference, so having a fast local VM is better than a slower remote one.
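In case it helps anyone replicate the setup, the loop looks roughly like the sketch below. The VM hostname, repo paths, and the exact codex invocation are illustrative assumptions (flags vary by version); the real idea is just that git-over-ssh is the only channel in and out of the sandbox.

```python
# sandbox_loop.py -- sketch of a git-over-ssh loop into a qemu VM sandbox.
# Assumes an ssh host "codexvm" with git and the codex CLI installed,
# plus a bare repo at ~/repos/project.git inside the VM (all hypothetical).
import subprocess

def sh(cmd: str) -> None:
    """Run a shell command, echoing it and failing loudly on error."""
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

# 1. Push the current branch into the VM over plain ssh-based git.
sh("git push ssh://codexvm/~/repos/project.git HEAD:agent-work")

# 2. Let the agent loose on a checkout inside the VM (invocation illustrative).
sh('ssh codexvm "cd ~/checkout && git fetch origin agent-work '
   "&& git checkout agent-work && codex exec 'fix the failing tests'\"")

# 3. Pull whatever the agent committed back out for review before merging.
sh("git fetch ssh://codexvm/~/repos/project.git agent-work")
sh("git diff HEAD...FETCH_HEAD")
```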
This is a good tooling survey of the past year. I have been watching it as a developer re-entering the job market. The job descriptions closely parallel the timeline used in the post. That's bizarre to me because these approaches are changing so fast. I see jobs for "Skill and Langchain experts with production-grade 0>1 experience. Former founders preferred". That is an expertise that is just a few months old and startups are trying to build whole teams overnight with it. I'm sure January and February will have job postings for whatever gets released that week. It's all so many sand castles.
That must have been a long time back. Having lived through the time when web pages were served through CGI and mobile phones only existed in movies, when SVMs were the new hotness in ML and people would write about how weird NNs were, I feel like I've seen a lot more concrete progress in the last few decades than this year.
This year honestly feels quite stagnant. LLMs are literally technology that can only reproduce the past. They're cool, but they were way cooler 4 years ago. We've taken big ideas like "agents" and "reinforcement learning" and basically stripped them of all meaning in order to claim progress.
I mean, do you remember Geoffrey Hinton's RBM talk at Google in 2010? [0] That was absolutely insane for anyone keeping up with the field. By the mid-2010s RBMs were already outdated. I remember when everyone was implementing flavors of RNNs and LSTMs. Karpathy's 2015 character-level RNN project was insane [1].
This comment makes me wonder if part of the hype around LLMs is just that a lot of software people simply weren't paying attention to the absolutely mind-blowing progress we've seen in this field over the last 20 years. But even ignoring ML, the worlds of web development and mobile application development have gone through incredible progress over the last decade and a half. I remember a time when JavaScript books would have a section warning that you should never use JS for anything critical to the application. Then there's the work on theorem provers over the last decade... If you remember when syntactic sugar was progress, either you remember way further back than I do, or you weren't paying attention to what was happening in the larger computing world.
> LLMs are literally technology that can only reproduce the past.
Funny, I've used them to create my own personalized text editor, perfectly tailored to what I actually want. I'm pretty sure that didn't exist before.
It's wild to me how many people who talk about LLMs apparently haven't learned how to use them for even very basic tasks like this! No wonder you think they're not that powerful if you don't know basic stuff like this. You really owe it to yourself to try them out.
I've worked at multiple AI startups in lead AI engineering roles, both deploying user-facing LLM products and working on the research end of LLMs. I've done collaborative projects and demos with a pretty wide range of big names in this space (but don't want to doxx myself too aggressively), have had my LLM work cited on HN multiple times, have LLM-based GitHub projects with hundreds of stars, appeared on a few podcasts talking about AI, etc.
This gets to the point I was making. I'm starting to realize that part of the disconnect between my opinions on the state of the field and others is that many people haven't really been paying much attention.
I can see that if recent LLMs are your first intro to the state of the field, it must feel incredible.
Over half of HN still thinks it’s a stochastic parrot and that it’s just a glorified google search.
The change hit us so fast a huge number of people don’t understand how capable it is yet.
Also, it certainly doesn't help that it still hallucinates. One mistake is enough to set someone against LLMs. You really need to push through, and accept that hallucinations are just the weak part of the process, in order to see the value.
That's all very impressive, to be sure. But are you sure you're getting the point? As of 2025, LLMs are now very good at writing new code, creating new imagery, and writing original text. They continue to improve at a remarkable rate. They are helping their users create things that didn't exist before. Additionally, they are now very good at searching and utilizing web resources that didn't exist at training time.
So it is absurdly incorrect to say "they can only reproduce the past." Only someone who hasn't been paying attention (as you put it) would say such a thing.
> They are helping their users create things that didn't exist before.
That is derived output. It isn't new as in novel. It may be unique, but it is derived from training data. LLMs legitimately cannot think, and thus they cannot create in that way.
Why is that kind of thinking required to create novel works?
Randomness can create novelty.
Mistakes can be novel.
There are many ways to create novelty.
Also, I think you might not know how LLMs are trained to code. Pre-training gives them some idea of the syntax etc., but that only gets you to fancy autocomplete.
Modern LLMs are heavily trained on reinforcement data: custom tasks the labs pay people to do (or by distilling another LLM that has had the process performed on it).
That is a pedantic distinction. You can create something that didn't exist by combining two things that did exist, using a way of combining things that itself already existed. For example, you could use a blender to combine almond butter and sawdust. While this may not be "novel", and it may be derived from existing materials and methods, you may still lay claim to having created something that didn't exist before.
For a more practical example, creating bindings from dynamic-language-A for a library in compiled-language-B is a genuinely useful task, allowing you to create things that didn't exist before. Those things are likely to unlock great happiness and/or productivity, even if they are derived from training data.
Yeah, you've lost me here, I'm sorry. In the real world, humans work with AI tools to create new things. What you're saying is the equivalent of "when a human writes a book in English, because they use words and letters that already exist and that they already know, they aren't creating anything new".
Could you give us an idea of what you’re hoping for that is not possible to derive from training data of the entire internet and many (most?) published books?
This is the problem, the entire internet is a really bad set of training data because it’s extremely polluted.
Also, the derived argument doesn't really hold: just because you know about two things doesn't mean you'd be able to come up with the third. It's actually very hard most of the time, and requires you to not just do next-token prediction.
The emergent phenomenon is that the LLM can separate truth from fiction when you give it a massive amount of data. It can figure the world out, just as we can when we are similarly inundated with bullshit data. The pathways exist in the LLM, but it won't necessarily reveal that to you unless you tune it with RL.
> So it is absurdly incorrect to say "they can only reproduce the past."
Also, a shitton of what we do economically is reproducing the past with slight tweaks and improvements. We all do very repetitive things, and these tools cut the time/personnel needed by a significant factor.
I think the confusion is people's misunderstanding of what 'new code' and 'new imagery' mean. Yes, LLMs can generate a specific CRUD webapp that hasn't existed before but only based on interpolating between the history of existing CRUD webapps. I mean traditional Markov Chains can also produce 'new' text in the sense that "this exact text" hasn't been seen before, but nobody would argue that traditional Markov Chains aren't constrained by "only producing the past".
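To make the Markov chain comparison concrete, here is a minimal sketch (the corpus and the first-order design are my own toy choices): the chain happily emits word sequences that never appeared verbatim in its training text, yet every individual transition it makes was seen before.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; any text works.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Build a first-order transition table: word -> possible next words.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Walk the chain, always reusing transitions seen in training."""
    out = [start]
    for _ in range(length - 1):
        candidates = transitions.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- "new" text, old parts
```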
This is even more clear in the case of diffusion models (which I personally love using, and have spent a lot of time researching). All of the "new" images created by even the most advanced diffusion models are fundamentally remixing past information. This is really obvious to anyone who has played around with these extensively because they really can't produce truly novel concepts. New concepts can be added by things like fine-tuning or use of LoRAs, but fundamentally you're still just remixing the past.
LLMs are always doing some form of interpolation between different points in the past. Yes they can create a "new" SQL query, but it's just remixing from the SQL queries that have existed prior. This still makes them very useful because a lot of engineering work, including writing a custom text editor, involve remixing existing engineering work. If you could have stack-overflowed your way to an answer in the past, an LLM will be much superior. In fact, the phrase "CRUD" largely exists to point out that most webapps are fundamentally the same.
A great example of this limitation in practice is the work that Terry Tao is doing with LLMs. One of the largest challenges in automated theorem proving is translating human proofs into the language of a theorem prover (often Lean these days). The challenge is that there is not very much Lean code currently available to LLMs (especially with the necessary context of the accompanying NL proof), so they struggle to correctly translate. Most of the research in this area is around improving LLM's representation of the mapping from human proofs to Lean proofs (btw, I personally feel like LLMs do have a reasonably good chance of providing major improvements in the space of formal theorem proving, in conjunction with languages like Lean, because the translation process is the biggest blocker to progress).
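For a flavor of what that translation target looks like, here is a toy Lean 4 example (mine, and far simpler than anything in Tao's work): the informal statement "addition of natural numbers is commutative" becomes a typed statement plus a machine-checked term inhabiting it.

```lean
-- Informal proof obligation: "for all naturals a and b, a + b = b + a."
-- In Lean 4 this becomes a theorem statement and a term proving it:
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```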
When you say:
> So it is absurdly incorrect to say "they can only reproduce the past."
It's pretty clear you don't have a solid background in generative models, because this is fundamentally what they do: model an existing probability distribution and draw samples from that. LLMs are doing this for a massive amount of human text, which is why they do produce some impressive and useful results, but this is also a fundamental limitation.
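In symbols, for the standard autoregressive case (a textbook formulation, not specific to any one model): the model is fit to the data distribution, and sampling draws from that fit, which is the precise sense in which generation "reproduces the past".

```latex
% Autoregressive factorization and maximum-likelihood training:
\[
p_\theta(x_1,\dots,x_T) \;=\; \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),
\qquad
\theta^{\ast} \;=\; \arg\max_\theta \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log p_\theta(x)\bigr]
\]
```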
But a world where we used LLMs for the majority of work would be a world with no fundamental breakthroughs. If you've read The Three-Body Problem, it's very much like living in the world where scientific progress is impeded by sophons. In that world there is still some progress (especially with abundant energy), but it remains fundamentally and deeply limited.
Just an innocent bystander here, so forgive me, but I think the flak you are getting is because you appear to be responding to claims that these tools will reinvent everything and usher in a new halcyon age of creation - when, at least on Hacker News, and definitely in this thread, no one is really making such claims.
Put another way, and I hate to throw in the now over-used phrase, but I feel you may be responding to a strawman that doesn't much appear in the article or the discussion here: "Because these tools don't achieve a god-like level of novel perfection that no one is really promising here, I dismiss all this sorta crap."
Especially when I think you are also admitting that the technology is a fairly useful tool on its own merits - a stance which I believe represents the bulk of the feelings that supporters of the tech here on HN are describing.
I apologize if you feel I am putting unrepresentative words in your mouth, but this is the reading I am taking away from your comments.
Lots of impressive points. They are also irrelevant. The majority of people also only extrapolate from knowledge they acquired in the past. That's why there is the concept of an inventor: someone who comes up with new ideas. Many new inventions are also based on existing ideas. Is that a reason to dismiss those achievements?
Do you only take LLMs seriously if they can be another Einstein?
> But a world where we used LLMs for the majority of work would be a world with no fundamental breakthroughs.
What do you consider recent fundamental breakthroughs?
Even if you are right, humans can continue to work on hard problems while letting LLMs handle the majority of derivative work.
From what you've described, an LLM has not invented anything. LLMs that can reason have a bit more sleight of hand, but they're not coming up with new ideas outside the bounds of what a lot of words have already encompassed, in both fiction and non-fiction.
Good for you that you've got a fun token of code that's what you've always wanted, I guess. But this type of fantasy take on LLMs seems to be more and more prevalent as of late. A lot of people defending LLMs as if they're owed something because they've built something or maybe people are getting more and more attached to them from the conversational angle. I'm not sure, but I've run across more people in 2025 that are way too far in the deep end of personifying their relationships with LLMs.
Hang on, you're now saying that if something has ever been described in fiction it doesn't count as invention? So if somebody literally developed a working photon torpedo, that isn't new because "Star Trek Did It"?
Well, they can use tools, and tools includes physics simulations, so if it is possible (and FWIW the tool-free "intuition" of ChatGPT is "there will never be an age of antimatter"), then why couldn't LLMs grind those tools to get a solution?
FWIW, your "evidence" is a text editor. I'm glad you made a tool that works for you, but the parent's point stands; this is a 200-level course-curriculum homework assignment. Tens of thousands of homemade editors exist, in various states of disrepair and vain overengineering.
The difference between those is the person is actually using this text editor that they built with the help of LLMs. There's plenty of people creating novel scripts and programs that can accommodate their own unique specifications.
If a programmer creating their own software (or contracting it out to a developer) would be a bespoke suit and using software someone or some company created without your input is an off the rack suit, I'd liken these sorts of programs as semi-bespoke, or made to measure.
"LLMs are literally technology that can only reproduce the past" feels like an odd statement. I think the point they're going for is that it's not thinking and so it's not going to produce new ideas like a human would? But literally no technology does that. That is all derived from some human beings being particularly clever.
LLMs are tools. They can enable a human to create new things because they are interfacing with a human to facilitate it. It's merging the functional knowledge and vision of a person and translating it into something else.
What is the point that you believe would be demonstrated by a new text editor running at the limit of hardware in a compiled editor? Would that point apply to every other text editor that exists already?
I'm being hyperbolic of course, but I'm a little dismissive of the progress that happened since the days of BBSs and car-based cell phones - we just got more connectivity, more capacity, more content, bigger/faster. Likewise, my attitude toward machine learning before 2023 was a smug "heh, these computer scientists are doing undisciplined statistics at scale, how nice for them." Then all of a sudden the machines woke up and started arguing with me, coherently, even about niche topics I have a PhD in. I can appreciate in retrospect how much of the machine-learning progress ultimately went into that, but, like fusion, the magic payoff was supposed to be decades away and always remain decades away. This wasn't supposed to happen in my lifetime. 2025's progress isn't the 2023 shock, but this was the year LLMs-as-programmers (and LLMs-as-mathematicians, and...) went from "isn't that cute, the machine is trying" to "an expert with enough time would make better choices than the machine did," and that makes for a different world. More so than going from a Commodore VIC-20 with 4K of RAM and a modem to the latest MacBook.
Crypto bros in hindsight were so much less dangerous than AI bros. At least they weren't trying to construct data centers in rural America or prop up artificial stocks like $NVDA.
Crazy how the two most hyped and funded technologies of the decade were: energy wasting fake money for criminals and energy wasting plagiarism machines.
Speaking of which, we never found out the details (strike price/expiration) of Michael Burry's puts, did we? It seems he could have made bank if he'd waited one more month...
It's funny how people complain about the rust belt dying and factories leaving rural communities and so on, then when someone wants to build something that can provide jobs and tax revenue, everyone complains.
How many people are employed at the average data center? A few dozen? Versus a steel mill, that’s nothing. A chicken plant in Nebraska closed down this last month. 3200 people lost their jobs. You think Meta will fill it with GPUs and the whole town will have jobs again?
Many more are employed while building it. And they will never stop building. It's the modern version of rail, except instead of spanning distances it will cover the whole area.
As if any taxes will be paid to the affected areas. And add to that the billions in tax money used to subsidize everything before a single cent of it is net positive.
The negatives outweigh the positives, if only because the positives are so small. A bunch of coders making their lives easier doesn't really matter, but pupils and students skipping education does. As a meme said: you had better start eating healthy, because your future doctor vibed his way through med school.
> I don't understand why Hacker News is so dismissive about the coming of LLMs
I find LLMs incredibly useful, but if you were following along the last few years, the promise was "exponential progress" with a teaser of world-destroying superintelligence.
We objectively are not on that path. There is no “coming of LLMs”. We might get some incremental improvement, but we’re very clearly seeing sigmoid progress.
I can’t speak for everyone, but I’m tired of hyperbolic rants that are unquestionably not justified (the nice thing about exponential progress is you don’t need to argue about it)
Language model capability at generating text output.
The model progress this year has been a lot of:
- “We added multimodal”
- “We added a lot of non AI tooling” (ie agents)
- “We put more compute into inference” (ie thinking mode)
So yes, there is still rapid progress, but these ^ make it clear, at least to me, that next gen models are significantly harder to build.
Simultaneously we see a distinct narrowing between players (openai, deepseek, mistral, google, anthropic) in their offerings.
That's usually a signal that the rate of progress is slowing.
Remind me what was so great about gpt 5? How about gpt 4 from gpt 3?
Do you even remember the releases? Yeah. I don't. I had to look it up.
Just another model with more or less the same capabilities.
“Mixed reception”
That is not what exponential progress looks like, by any measure.
The progress this year has been in the tooling around the models, and smaller, faster models with similar capabilities. Multimodal add-ons that no one asked for, because it's easier to add image and audio processing than to improve text handling.
That may still be on a path to AGI, but it is not an exponential path to it.
> Language model capability at generating text output.
That's not a metric; that's a vague, non-operationalized concept that could be operationalized into an infinite number of different metrics. An improvement that was linear in one of those possible metrics would be exponential in another (and one that was linear in one would also be linear in an infinite number of others, as well as exponential in an infinite number of others).
That’s why you have to define an actual metric, not simply describe a vague concept of a kind of capacity of interest, before you can meaningfully discuss whether improvement is exponential. Because the answer is necessarily entirely dependent on the specific construction of the metric.
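A toy illustration of how much the metric's construction matters (my example): take a score that grows linearly in time and simply re-express it, and the same underlying progress looks linear, exponential, or logarithmic depending on which metric you report.

```latex
% If m(t) = a t + b grows linearly, then the rescaled metrics
%   M_1(t) = e^{m(t)}  and  M_2(t) = \log m(t)
% grow exponentially and logarithmically on identical progress:
\[
m(t) = a t + b
\;\Longrightarrow\;
e^{m(t)} = e^{b}\,\bigl(e^{a}\bigr)^{t}
\]
```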
> Language model capability at generating text output.
That's not a quantifiable sentence. Unless you put it in numbers, anyone can argue exponential/not.
> next gen models are significantly harder to build.
That's not how we judge capability progress though.
> Remind me what was so great about gpt 5? How about gpt 4 from gpt 3?
> Do you even remember the releases?
At gpt 3 level we could generate some reasonable code blocks / tiny features. (An example shown around at the time was "explain what this function does" for a "fib(n)") At gpt 4, we could build features and tiny apps. At gpt 5, you can often one-shot build whole apps from a vague description. The difference between them is massive for coding capabilities. Sorry, but if you can't remember that massive change... why are you making claims about the progress in capabilities?
> Multimodal add-ons that no one asked for
Not only does multimodal input training improve the model overall, it's useful for (for example) feeding back screenshots during development.
Yeah, probably. But no chart actually shows it yet. For now we are firmly in the exponential zone of the sigmoid curve and can't really tell if it's going to end in a year, a decade, or a century.
Doesn't even matter if the goal is extremely high. Talking about exponential growth when we can clearly see the matching energy needs proves there is no way to maintain that pace without radical (and thus unpredictable) improvements.
My own "feeling" is that it's definitely not exponential but again, doesn't matter if it's unsustainable.
We're very clearly seeing exponential progress - even above trend, on METR, whose slope keeps getting revised to a higher and higher estimate each time. Explain your perspective on the objective evidence against exponential progress?
Because that requires adoption. Devs on hackernews are already the most up to date folks in the industry and even here adoption of LLMs is incredibly slow. And a lot of the adoption that does happen is still with older tech like ChatGPT or Cursor.
Writing the code itself was never the main bottleneck. Designing the bigger solution, figuring out tradeoffs, talking to affected teams, etc. takes as much time as it used to. But still, there's definitely a significant improvement in the code-production part in many areas.
I think this is still an open question, and a very interesting one. Ilya discussed this on the Dwarkesh podcast. But the capabilities of LLMs are clearly exponential, perhaps super-exponential. We went from something that could string together incoherent text in 2022 to general models helping people like Terence Tao and Scott Aaronson write new research papers. LLMs also beat the IMO and the ICPC. We have entered the John Henry era for intellectual tasks...
Very spurious claims, given that there was no effort made to check whether the IMO or ICPC problems were in the training set or not, or to quantify how far problems in the training set were from the contest problems. IMO problems are supposed to be unique, but since it's not at the frontier of math research, there is no guarantee that the same problem, or something very similar, was not solved in some obscure manual.
How long did it take for the introduction of computers to lead to increases in average productivity? How long for the internet? Business is just slow to figure out how to use anything for its benefit, but it eventually gets there.
The best example is that even ATMs didn't reduce bank teller jobs.
Why? Because even the bank teller is doing more than taking and depositing money.
IMO there is an ontological bias that pervades our modern society that confuses the map for the territory and has a highly distorted view of human existence through the lens of engineering.
We don't see anything in this time series, because this time series itself is meaningless nonsense that reflects exactly this special kind of ontological stupidity:
As if the sum of human interaction in an economy is some kind of machine that we just need to engineer better parts for and then sum the outputs.
Any non-careerist, thinking person who studies economics would conclude we don't, and probably won't, have the tools to properly study this subject in our lifetimes: the high-dimensional interaction of biology, entropy, and time. We have nothing. The career economist is essentially forced to sing for their supper in a type of time-series theater.
Then there is the method acting of pretending to be surprised when some meaningless reductionist aspect of human interaction isn't reflected in the fake time series.
LLMs from late 2024 were nearly worthless as coding agents, so given they have quadrupled in capability since then (exponential growth, btw), it's not surprising to see a modestly positive impact on SWE work.
Also, I'm noticing you're not explaining yourself :)
I think this is happening by raising the floor for job roles that are largely boilerplate work. If you are on the more skilled side or work in more original/niche areas, AI doesn't really help too much. I've only been able to use AI effectively for scaling refactors, not really much in feature development. It often just slows me down when I try to use it. I don't see this changing any time soon.
Hey, I'm not the OG commentator, why do I have to explain myself! :)
When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
> Hey, I'm not the OG commentator, why do I have to explain myself! :)
The issue is that you're not acknowledging or replying to people's explanations for _why_ they see this as exponential growth. It's almost as if you skimmed through the meat of the comment and then just re-phrased your original idea.
> When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
This comparison doesn't make sense because we know the limits of cars but we don't yet know the limits of LLMs. It's an open question. Whether or not an F1 engine can make it the speed of light in 20 seconds is not an open question.
It's not in me to somehow disprove claims of exponential growth when there isn't even evidence provided of it.
My point with the F1 comparison is to say that a short period of rapid improvement doesn't imply exponential growth and it's about as weird to expect that as it is for an f1 car to reach the speed of light. It's possible you know, the regulations are changing for next season - if Leclerc sets a new lap record in Australia by .1 ms we can just assume exponential improvements and surely Ferrari will be lapping the rest of the field by the summer right?
Based on quite a few comments recently, it also looks like many have tried LLMs in the past, but haven't seriously revisited either the modern or more expensive models. And I get it. Not everyone wants to keep up to date every month, or burn cash on experiments. But at the same time, people seem to have opinions formed in 2024. (Especially if they talk about just hallucinations and broken code - tell the agent to search for docs and fix stuff) I'd really like to give them Opus 4.5 as an agent to refresh their views. There's lots to complain about, but the world has moved on significantly.
This has been the argument since day one: you just have to try the latest model, that's where you went wrong. For the record, I use Claude Code quite a bit and I can't see much meaningful improvement from the last few models. It is a useful tool, but its shortcomings are very obvious.
Just last week Opus 4.5 decided that the way to fix a test was to change the code so that everything else but the test broke.
When people say ”fix stuff” I always wonder if it actually means fix, or just make it look like it works (which is extremely common in software, LLM or not).
Nope, I did get a lot of fancy markdown with emojis though so I guess that was a nice tradeoff.
In general, even with access to the entire code base (which is very small), I find the models' inherent need to satisfy the prompter to be their biggest flaw, since it tends to constantly lead down this path. I often have to correct overly convoluted SQL too, because my problems are simple and the training data seems to favor extremely advanced operations.
Sure, I get an occasional bad result from Opus - then I revert and try again, or ask it for a fix. Even with a couple of restarts, it's going to be faster than me on average. (And that's ignoring the situations where I have to restart myself)
Basically, you're saying it's not perfect. I don't think anyone is claiming otherwise.
The problem is it’s imperfect in very unpredictable ways. Meaning you always need to keep it on a short leash for anything serious, which puts a limit on the productivity boost. And that’s fine, but does this match the level of investment and expectations?
The idea of HN being dismissive of impactful technology is as old as HN. And indeed, the crowd often appears stuck in the past with hindsight. That said, HN discussions aren't homogeneous, and as demonstrated by Karpathy in his recent blogpost "Auto-grading decade-old Hacker News", at least some commenters have impressive foresight: https://karpathy.bearblog.dev/auto-grade-hn/
So exactly 10 years ago a lot of people believed that the game Go would not be “conquered” by AI, but after just a few months it was. People will always be skeptical of new things, even people who are in tech, because many hyped things indeed go nowhere… while it may look obvious in hindsight, it’s really hard to predict what will and what won’t be successful. On the LLM front I personally think it’s extremely foolish to still consider LLMs as going nowhere. There’s a lot more evidence today of the usefulness of LLMs than there was of DeepMind being able to beat top human players in Go 10 years ago.
Many people feel threatened by the rapid advancements in LLMs, fearing that their skills may become obsolete, and in turn act irrationally. To navigate this change effectively, we must keep open minds, stay adaptable, and embrace continuous learning.
Many comments discussing LLMs involve emotions, sure. :) Including, obviously, comments in favour of LLMs.
But most discussion I see is vague and without specificity and without nuance.
Recognising the shortcomings of LLMs makes comments praising LLMs that much more believable; and recognising the benefits of LLMs makes comments criticising LLMs more believable.
I'd completely believe anyone who says they've found the LLM very helpful at greenfield frontend tasks, and I'd believe someone who found the LLM unable to carry out subtle refactors on an old codebase in a language that's not Python or JavaScript.
It isn't irrational to act in self-interest. If LLMs threaten someone's livelihood, it matters not one bit that they help humanity overall: that person will oppose them. I don't blame them. But I also hope they cannot succeed in opposing it.
It's irrational to genuinely hold false beliefs about capabilities of LLMs. But at this point I assume around half of the skeptics are emotionally motivated anyway.
It’s not the technology I’m dismissive about. It’s the economics.
25 years ago I was optimistic about the internet, web sites, video streaming, online social systems. All of that. Look at what we have now. It was a fun ride until it all ended up “enshitified”. And it will happen to LLMs, too. Fool me once.
Some developer tools might survive in a useful state on subscriptions. But soon enough the whole A.I. economy will centralise into 2 or 3 major players extracting more and more revenue over time until everyone is sick of them. In fact, this process seems to be happening at a pretty high speed.
Once the users are captured, they’ll orient the ad-spend market around themselves. And then they’ll start taking advantage of the advertisers.
I really hope it doesn’t turn out this way. But it’s hard to be optimistic.
Contrary to the case for the internet, there is a way out, however - if local, open-source LLMs get good. I really hope they do, because enshittification does seem unavoidable if we depend on commercial offerings.
It is an over-correction because of all the empty promises around LLMs. I use Claude and ChatGPT daily at work and am amazed at what they can do and how far they have come.
BUT when I hear my executive team talk and see demos of "Agentforce" and every saas company becoming an AI company promising the world, I have to roll my eyes.
The challenge I have with LLMs is they are great at creating first draft shiny objects and the LLMs themselves over promise. I am handed half baked work created by non technical people that now I have to clean up. And they don't realize how much work it is to take something from a 60% solution to a 100% solution because it was so easy for them to get to the 60%.
Amazing, game changing tools in the right hands but also give people false confidence.
Not that they aren't also useful for non-technical people, but I have had to spend a ton of time explaining to copywriters on the marketing team that they shouldn't paste their credentials into the chat even if it tells them to, and that their vibe-coded app is a security nightmare.
This seems like the right take. The claims of the imminence of AGI are exhausting and to me appear dissonant with reality. I've tried gemini-cli and Claude Code and while they're both genuinely quite impressive, they absolutely suffer from a kind of prototype syndrome. While I could learn to use these tools effectively for large-scale projects, I still at present feel more comfortable writing such things by hand.
The NVIDIA CEO says people should stop learning to code. Now if LLMs will really end up as reliable as compilers, such that they can write code that's better and faster than I can 99% of the time, then he might be right. As things stand now, that reality seems far-fetched. To claim that they're useless because this reality has not yet been achieved would be silly, but not more silly than claiming programming is a dead art.
Speaking for myself: because if the hype were to be believed we should have no relational databases when there's MongoDB, no need for dollars when there's cryptocoins, all virtual goods would be exclusively sold as NFTs, and we would be all driving self-driving cars by now.
LLMs are being driven mostly by grifters trying to achieve a monopoly before they run out of cash. Under those conditions I find their promises hard to believe. I'll wait until they either go broke or stop losing money left and right, and whatever is left is probably actually useful.
The way I've been handling the deafening hype is to focus exclusively on what the models that we have right now can do.
You'll note I don't mention AGI or future model releases in my annual roundup at all. The closest I get to that is expressing doubt that the METR chart will continue at the same rate.
If you focus exclusively on what actually works the LLM space is a whole lot more interesting and less frustrating.
> focus exclusively on what the models that we have right now can do
I'm just a casual user, but I've been doing the same and have noticed the sharp improvements of the models we have now vs a year ago. I have OpenAI Business subscription through work, I signed up for Gemini at home after Gemini 3, and I run local models on my GPU.
I just ask them various questions where I know the answer well, or I can easily verify. Rewrite some code, factual stuff etc. I compare and contrast by asking the same question to different models.
AGI? Hell no. Very useful for some things? Hell yes.
> I don't understand why Hacker News is so dismissive about the coming of LLMs.
Eh. I wouldn’t be so quick to speak for the entirety of HN. Several articles related to LLMs easily hit the front page every single day, so clearly there are plenty of HN users upvoting them.
I think you're just reading too much into what is more likely classic HN cynicism and/or fatigue.
When an "AI skeptic" sees a very positive AI comment, they try to argue that it is indeed interesting but nowhere near close to AI/AGI/ASI or whatever the hype at the moment uses.
When an "AI optimistic" sees a very negative AI comment, they try to list all the amazing things they have done that they were convinced was until then impossible.
Exactly. There was a stretch of 6 months or so right after ChatGPT was released where approximately 50% of front page posts at any given time were related to LLMs. And these days every other Show HN is some kind of agentic dev tool and Anthropic/OpenAI announcements routinely get 500+ comments in a matter of hours.
LLMs hold some real utility. But that real utility is buried under a mountain of fake hype and over-promises to keep shareholder value high.
LLMs have real limitations that aren't going away any time soon - not until we move to a new technology fundamentally different and separate from them - sharing almost nothing in common. There's a lot of 'progress-washing' going on where people claim that these shortfalls will magically disappear if we throw enough data and compute at it when they clearly will not.
I find opus 4.5 and gpt 5.2 mind blowing more often than I find them dumb as rocks. I don’t listen to or read any marketing material, I just use the tools. I couldn’t care less about what the promises are, what I have now available to me is fundamentally different from what I had in August and it changed completely how I work.
I think the missing ingredient is not something the LLMs lack, but something we as developers don't do - we need to constrain, channel, and guide agents by creating reactive test environments around them. Not vibes, but hard tests, they are the missing ingredient to coding agents. You can even use AI to write most of these tests but the end result depends on how well you structured your code to be testable.
If you inherit 9000 tests from an existing project you can vibe code a replacement on your phone in a holiday, like Simon Willison's JustHTML port. We are moving from agents semi-randomly flailing around to constraint satisfaction.
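As a minimal sketch of what "hard tests as constraints" can mean in practice (the slugify function and its spec are my own illustrative stand-ins): the agent is free to rewrite the implementation however it likes, but any change must keep these properties green.

```python
# test_slugify.py -- property-style constraints a coding agent must keep green.
# slugify is inlined here so the sketch is self-contained; in practice it
# lives in the codebase the agent is rewriting. Run with: pytest test_slugify.py
import re

def slugify(text: str) -> str:
    """Reference implementation the agent is allowed to replace."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"

def test_idempotent():
    s = slugify("Some Title Here")
    assert slugify(s) == s

def test_never_emits_spaces_or_uppercase():
    for raw in ["A B", "  trim me  ", "MiXeD_case"]:
        out = slugify(raw)
        assert " " not in out and out == out.lower()
```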
Markets never deliver. That isn't new; I do think LLMs are not far off from Google in terms of impact.
Search, as of today, is inferior to frontier models as a product. However, the best case still misses expected returns by miles, which is where the grousing comes from.
Generative art/AI is still up in the air for staying power, but I'd predict it isn't going away.
The internet and smartphones were immediately useful in a million different ways for almost every person. AI is not even close to that level. Very to somewhat useful in some fields (like programming) but the average person will easily be able to go through their day without using AI.
The most wide-appeal possibility is people loving 100%-AI-slop entertainment like that AI Instagram Reels product. Maybe I'm just too disconnected from normies, but I don't see this taking off. Fun as a novelty, like those Ring cam vids, but I would never spend all day watching AI-generated media.
The early internet and smartphones (the Japanese ones, not the iPhone) were definitely not "immediately" adopted by the masses, unlike LLMs.
If "immediate" usefulness is the metric we measure, then the internet and smartphones are pretty insignificant inventions compared to LLMs.
(Of course it's not a meaningful metric, as there is no clear line between a dumb phone and a smartphone, or between a moderately sized language model and an LLM.)
How many pay? And of those, how many are willing to pay enough to at least cover the inference costs (not loss-leading)?
Outside the verifiable domains I think the impact is more assistance/augmentation than outright disruption (i.e. a novelty which is still nice). A little tiny bit of value sprinkled over a very large user base but each person deriving little value overall.
Even as they use it for search, it is at best an incremental improvement on what they used to do - not life-changing.
Kagi’s Research Assistant is pretty damn useful, particularly when I can have it poll different models. I remember when the first iPhone lacked copy-paste. This feels similar.
… the internet was not immediately useful in a million different ways for almost every person.
Even if you skip ARPAnet, you’re forgetting the Gopher days and even if you jump straight to WWW+email==the internet, you’re forgetting the mosaic days.
The applications that became useful to the masses emerged a decade+ after the public internet and even then, it took 2+ decades to reach anything approaching saturation.
Your dismissal is not likely to age well, for similar reasons.
the "usefulness" excuse is irrelevant, and the claim that phones/internet is "immediately useful" is just a post hoc rationalization. It's basically trying to find a reasonable reason why opposition to AI is valid, and is not in self-interest.
The opposition to AI is from people who feel threatened by it, because it either threatens their livelihood (or family/friends'), and that they feel they are unable to benefit from AI in the same way as they had internet/mobile phones.
The usefulness of mobile phones was identifiable immediately and it is absolutely not 'post hoc rationalization'. The issue was the cost - once low cost mobile telephones were produced they almost immediately became ubiquitous (see nokia share price from the release of the nokia 6110 onwards for example).
This barrier does not exist for current AI technologies which are being given away free. Minor thought experiment - just how radical would the uptake of mobile phones have been if they were given away free?
> The internet and smartphones were immediately useful in a million different ways for almost every person. AI is not even close to that level.
Those are some very rosy glasses you've got on there. The nascent Internet took forever to catch on. It was for weird nerds at universities and it'll never catch on, but here we are.
A year after the iPhone came out… it didn’t have an App Store, barely was able to play video, barely had enough power to last a day. You just don’t remember or were not around for it.
A year after llms came out… are you kidding me?
Two years?
10 years?
Today, adding an MCP server to wrap the same API that's been around forever for some system makes the users of that system prefer the NLI over the GUI almost immediately.
> Very to somewhat useful in some fields (like programming) but the average person will easily be able to go through their day without using AI.
I know a lot of "normal" people who have completely replaced their search engine with AI. It's increasingly a staple for people.
Smartphones were absolutely NOT immediately useful in a million different ways for almost every person, that's total revisionist history. I remember when the iPhone came out, it was AT&T only, it did almost nothing useful. Smartphones were a novelty for quite a while.
I agree with most points but as a tech enthusiast, I was using a smart phone years before the iPhone, and I could already use the internet, make video calls, email etc around 2005. It was a small flip phone but it was not uncommon for phones to do that already at that time, at least in Australia and parts of Asia (a Singaporean friend told me about the phone).
I think the split between vibe coding and AI-assisted coding will only widen over time. If you ask LLMs to do something complex, they will fail and you waste your time. If you work with them as a peer, and you delegate tasks to them, they will succeed and you save your time.
I'm not against AI/LLMs (in fact, I'm quite supportive of them). But one of my biggest fears is overusing AI. We may introduce tools that only an AI/LLM can reasonably operate (tools with weird, convoluted UI/UX or syntax), and nobody will object because the AI/LLM can use them.
Then there's genAI: it's becoming more and more difficult to tell what is AI and what is not, and AI is everywhere. I don't know what to think about it. "If you can't tell, does it matter?"
It’s still around, and tends to be adopted by big enterprises. It’s generally a decent product, but is facing a lot of equally powerful competition and is very expensive.
Thank you for your warning about the normalization of deviance. Do you think there will be an AI agent software worm like NotPetya which will cause a lot of economic damage?
I'm expecting something like a malicious prompt injection which steals API keys and crypto wallets and uses additional tricks to spread itself further.
Or targeted prompt injections - like spear phishing attacks - against people with elevated privileges (think root sysadmins) who are known to be using coding agents.
MCP isn't going anywhere. Some developers can't seem to see past their terminal or dev environment when it comes to MCP. Skills etc. do not replace MCP, and MCP is far more than just documentation searching.
MCP is a great way for an LLM to connect to an external system in a standardized way and immediately understand what tools it has available, when and how to use them, what their inputs and outputs are, etc.
For example, we built a custom MCP server for our CRM. Now our voice and chat agents that run on elevenlabs infrastructure can connect to our system with one endpoint, understand what actions it can take, and what information it needs to collect from the user to perform those actions.
I guess this could maybe be done with webhooks, or an API spec with a well-crafted prompt? Or if ElevenLabs provided an execution environment with tool calling? But at some point you're just reinventing a lot of the functionality you get for free from MCP, and all major LLMs already seem to know how to use MCP.
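For anyone who hasn't seen one, a minimal MCP server along those lines might look like the sketch below, using the official MCP Python SDK's FastMCP helper; the CRM tool itself is a made-up placeholder, not our actual system.

```python
# crm_mcp.py -- a toy MCP server exposing one hypothetical CRM action.
# Requires the official MCP Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm")

@mcp.tool()
def create_follow_up(customer_email: str, note: str) -> str:
    """Schedule a follow-up task for a customer.

    The connected agent reads this docstring and the typed signature
    to learn when to call the tool and what inputs it must collect.
    """
    # A real server would call the CRM's API here; this sketch just echoes.
    return f"Follow-up created for {customer_email}: {note}"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```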
Yeah, I don't think I was particularly clear in that section.
I don't think MCP is going to go away, but I do think it's unlikely to ever achieve the level of excitement it had in early 2025 again.
If you're not building inside a code execution environment it's a very good option for plugging tools into LLMs, especially across different systems that support the same standard.
But code execution environments are so much more powerful and flexible!
I expect that once we come up with a robust, inexpensive way to run a little Bash environment - I'm still hoping WebAssembly gets us there - there will be much less reason to use MCP even outside of coding agent setups.
I disagree. MCP will remain the best way to do most things for the same reason REST APIs are the main way to access non local services: they provide a way to secure and audit access to systems in a way that a coding environment cannot. And you can authorize actions depending on the well defined inputs and outputs. You can’t do that using just a bash script unless said script actually does SSO and calls REST APIs but then you just have a worse MCP client without any interoperability.
For connecting agents to third-party systems I prefer CLI tools, less context bloat and faster. You can define the CLI usage in your agent instructions. If the MCP you're using doesn't exist as a CLI, build one with your agent.
I never quite got what was so "hot" about it. There seems to be an entire parallel ecosystem of corporates that are just begging to turn AI into PowerPoint slides so that they can mould it into a shape that's familiar.
One reason may be that it makes it a lot easier to open up a product to AI. Instead of adding a bad ChatGPT UI clone to your app, you invert control and let external AI tools interact with your application and its data, giving your customers immediate benefits while simultaneously sating your investors'/founders'/managers' desire to somehow add AI.
MCP or skills? Can a skill negate the need for MCP? In addition, there was a YC startup looking at doc search for LLMs or something similar. I think MCP may be less needed once you have skills, OpenAPI specs, and other things that LLMs can call directly.
I can't get over the range of sentiment on LLMs. HN leans snake oil; X leans "we're all cooked". Can it possibly be both? How do other folks make sense of this? I'm not asking you to pick a side, rather to help me understand the range. Does the range lead you to believe X over Y?
I believe the spikiness in response is because AI itself is spiky - it’s incredibly good at some classes of tasks, and remarkably poor at others. People who use it on the spikes are genuinely amazed because of how good it is. This does nothing but annoy the people who use it in the troughs, who become increasingly annoyed that everyone seems to be losing their mind over something that can’t even do (whatever).
Well, this is the internet. Arguing about everything is its favorite pastime.
But generally yes, I think back to Mongo/Node/metaverse/blockchain/IDEs/tablets and pretty much everything has had its boosters and skeptics, this is just more... intense.
Anyway I've decided to believe my own eyes. The crowds say a lot of things. You can try most of it yourself and see what it can and can't do. I make a point to compare notes with competent people who also spent the time trying things. What's interesting is most of their findings are compatible with mine, including for folks who don't work in tech.
Oh, and one thing is for sure: shoving this technology into every single application imaginable is a good way to lose friends and alienate users.
I think it may be all summed up by Roy Amara's observation that "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
I think this is the most-fitting one-liner right now.
The arguments going back and forth in these threads are truly a sight to behold. I don’t want to lean to any one side, but in 2025 I‘ve begun to respond to everyone who still argues that LLMs are only plagiarism machines, or are only better autocompletes, or are only good at remixing the past: Yes, correct!
And CPUs can only move zeros and ones.
This is likewise a very true statement. But look where having 0s and 1s shuffled around has brought us.
The ripple effects of a machine doing something very simple and near-meaningless, but doing it at high speed, again and again, without getting tired, should not be underestimated.
At the same time, here is Nobel Laureate Robert Solow, who famously, and at the time correctly, stated that "You can see the computer age everywhere but in the productivity statistics."
It took a while, but eventually, his statement became false.
The effects might be drastically different from what you would expect, though. We’ve seen this again and again with machine learning/AI: what looks likely to work doesn’t work out, and unexpected things do.
The problem with X is that so many people who have no verifiable expertise are super loud in shouting "$INDUSTRY is cooked!!" every time a new model releases. It's exhausting and untrue. The kind of video generation we see might nail realism but if you want to use it to create something meaningful which involves solving a ton of problems and making difficult choices in order to express an idea, you run into the walls of easy work pretty quickly. It's insulting then for professionals to see manga PFPs on X put some slop together and say "movie industry is cooked!". It betrays a lack of understanding of what it takes to make something good and it gives off a vibe of "the loud ones are just trying to force this objectively meh-by-default thing to happen".
The other day there was that dude loudly arguing about some code they wrote/converted even after a woman with significant expertise in the topic pointed out their errors.
Gen AI has its promise. But when you look at the lack of ethics from the industry, the cacophony of voices of non experts screaming "this time it's really doom", and the weariness/wariness that set in during the crypto cycle, it's a natural tendency that people are going to call snake oil.
That said, I think the more accurate representation here is that HN as a whole is calling the hype snake oil. There's very little question anymore about the tools being capable of advanced things. But there is annoyance at proclamations of it being beyond what it really is at the moment which is that it's still at the stage of being an expertise+motivation multiplier for deterministic areas of work. It's not replacing that facet any time soon on its current trend (which could change wildly in 2026). Not until it starts training itself I think. Could be famous last words
My take (no more informed than anyone else's) is that the range indicates this is a complex phenomenon that people are still making sense of. My suspicion is that something like the following is going on:
1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.
2. Other engineers see that in real production systems, LLMs lack sufficient background / domain knowledge to effectively iterate. They also still produce output, but it's verbose and essentially missing the point of a desired change.
3. LLMs also can be used by people who are not knowledgeable to "fake it," and produce huge amounts of output that is basically beside-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used, and so it gets tarnished by association.
4. There is a ridiculous amount of complexity in some of these tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves people are skeptical of the people trying to become thought leaders in this space and the sort of wild hacks they're coming up with.
5. There are real macro questions about whether these tools can be made economical to justify whatever value they do produce, and broader questions about their net impact on society.
6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.
That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.
I'm not really convinced that anywhere leans heavily towards anything; it depends which thread you're in etc.
It's polarizing because it represents a more radical shift in expected workflows. Seeing that range of opinions doesn't really give me a reason to update, no. I'm evaluating based on what makes sense when I hear it.
From my perspective, both show HN and Twitter's normal biases. I view HN as generally leaning toward "new things suck, nothing ever changes", and I view Twitter generally as "Things suck, and everything is getting worse". Both of those align with snake oil and we're all cooked.
Speaking of asynchronous agents, what do people use? Claude Code for web is extremely limited, because you have no custom tools. Claude Code in GitHub Actions is vastly more useful, due to the custom environment, but awkward to use interactively. Are there any good alternatives?
I use Claude Code for web with an environment allowing full internet access, which means it can install extra tools as and when it needs them. I don't run into limits with it very often.
> The reason I think MCP may be a one-year wonder is the stratospheric growth of coding agents. It appears that the best possible tool for any situation is Bash—if your agent can run arbitrary shell commands, it can do anything that can be done by typing commands into a terminal.
I push back strongly against this. In the case of the solo, one-machine coder this is likely true - but if you're exposing workflows or fixed tools to customers / colleagues / the web at large via API or similar, then MCP is still the best way to expose them, IMO.
Think about a GitHub or Jira MCP server - driven from the command line alone, LLMs are sure to make mistakes with REST requests, API schemas, etc. With MCP the proper known commands are already baked in. Remember, always, that LLMs will be better with natural language than code.
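To make "baked in" concrete, here is a minimal sketch of an MCP tool using the official Python SDK's FastMCP helper (the server name and ticket fields are illustrative, not from any real Jira server):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jira-lite")  # illustrative server name

@mcp.tool()
def create_ticket(project: str, title: str, priority: str = "medium") -> str:
    """Create a ticket. Because inputs are typed and declared up front,
    the server can validate, authorize, and audit this one action."""
    # A real server would call the tracker's REST API here, with proper auth.
    return f"Created ticket in {project}: {title} ({priority})"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The agent never hand-assembles a REST request; it can only invoke the declared tool with the declared schema, which is exactly the auditability argument made upthread.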
Planning depends on a deterministic view of the future. I used to plan (especially annual plans) until about 5 years ago. Now I scan for trends and prepare myself for different scenarios that could come to pass. Even if you get it approximately right, you stand apart.
For tech trends, I read Simon, Benedict Evans, Mary Meeker, etc. Simon is in a better position to make these predictions than anyone else, having closely analyzed these trends over the last few years.
I'm curious how all of the progress will be seen if it does indeed result in mass unemployment (but not eradication) of professional software engineers.
My prediction: If we can successfully get rid of most software engineers, we can get rid of most knowledge work. Given the state of robotics, manual labor is likely to outlive intellectual labor.
"Given the state of robotics" reminds me a lot of what was said about llms and image/video models over the past 3 years. Considering how much llms improved, how long can robotics be in this state?
I have to think 3 years from now we will be having the same conversation about robots doing real physical labor.
"This is the worst they will ever be" feels more apt.
but robotics already has the means to do the majority of physical labour - it's just not worth the money to replace humans, as human labour is cheap (and more flexible than robots).
With knowledge work paying less, the supply of physical labour should increase as well, which drops its price. This means it's actually less likely that the advent of LLMs will make physical labour more automated.
The question is how rapid the adoption is. The price of failure in the real world is much higher ($$$, environmental, physical risks) vs just "rebuild/regenerate" in the digital realm.
Curious if you have any links about the rapid progression of robotics (as someone who is not educated on the topic).
It was my feeling with robotics that the more challenging aspect will be making them economically viable rather than simply the challenge of the task itself.
That’s the deep irony of technology IMHO, that innovation follows Conway's law on a meta layer: White collar workers inevitably shaped high technology after themselves, and instead of finally ridding humanity of hard physical labour—as was the promise of the Industrial Revolution—we imitate artists, scientists, and knowledge workers.
We can now use natural language to instruct computers to generate stock photos and illustrations that would have required a professional artist a few years ago, discover new molecule shapes, beat the best Go players, build the code for entire applications, or write documents of various shapes and lengths—but painting a wall? An insurmountable task that requires a human to execute reliably, to say nothing of the economics.
I would have agreed with this a few months ago, but something I've learned is that the ability to verify an LLM's output is paramount to its value. In software, you can review its output and add tests, on top of other adversarial techniques, to verify the output immediately after generation.
With most other knowledge work, I don't think that is the case. Maybe actuarial or accounting work, but most knowledge work exists at a cross-section of function and taste, and the latter isn't an automatically verifiable output.
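To illustrate the verification asymmetry: in code you can gate the model's output behind mechanical checks and reject failures automatically. A toy sketch (`slugify` is a made-up stand-in for something an LLM generated, not from the thread):

```python
# Pretend this function body came back from an LLM.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Mechanical verification: pass/fail, no taste required.
assert slugify("Hello World") == "hello-world"
assert slugify("  AI   in 2025 ") == "ai-in-2025"
print("output accepted")
```

There is no equivalent assert for "this paragraph strikes the right tone with the client", which is the taste half of that cross-section.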
I also believe this - I think it will probably just disrupt software engineering and any other digital medium with mass internet publication (i.e. things RLVR can use). For the short-term future it seems to need a lot of data to train on, and no other profession has posted the same amount of verifiable material. Open-source altruism has disrupted the profession in the end, just not in the way people first predicted. I don't think it will disrupt most knowledge work, for a number of reasons. Most knowledge professions have "credentials" (i.e. gatekeeping), and they can see what is happening to SWEs and are acting accordingly. I'm hearing it firsthand, at least locally, in fields like law and even accounting. Society will ironically respect these professions more for doing so.
Any data, verifiability, rules of thumb, tests, etc are being kept secret. You pay for the result, but don't know the means.
I mean law and accounting usually have a “right” answer that you can verify against. I can see a test data set being built for most professions. I’m sure open source helps with programming data but I doubt that’s even the majority of their training. If you have a company like Google you could collect data on decades of software work in all its dimensions from your workforce
It's not about invalidating your conclusion, but I'm not so sure about law having a right answer. At a very basic level - like the hypothetical conduct used in basic legal training materials or MCQs, or in criminal/civil code based situations in well-abstracting Roman-law-based jurisdictions - definitely.
But the actual work, at least for most lawyers, is to build on many layers of such abstractions to support your client's viewpoint.
And that level is already about persuading other people, not about having the "right" legal argument or applying the most on-point case. This part is not documented well, and approaches change a lot, even if the law remains the same.
Think of family law or law of succession - does not change much over centuries but every day, worldwide, millions of people spend huge amounts of money and energy on finding novel ways to turn those same paragraphs to their advantage and put their "loved" ones and relatives in a worse position.
Not really. I used to think more generally with the first generation of LLMs, but given that all progress since o1 is RL-based, I'm thinking most disruption will happen in open, productive domains and not closed domains. Speaking to people in these professions, they don't think SWEs have any self-respect. So, in your example of law:
* Context is debatable/result isn't always clear: The way to interpret that/argue your case is different (i.e. you are paying for a service, not a product)
* Access to vast training data: It's very unlikely that they will train you and give you data on their practice, especially as they are already in a union-like structure/accreditation. It's like paying for a binary (a non-decompilable one) - you get the result - without the source code and the validation the practitioner used to get there.
* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.
* Velocity vs ability to make judgement: As a lawyer I prefer to be paid more for less velocity, since it means less judgement/less liability/less risk overall for myself and the industry. Why would I change that, even at an individual level? There's less of a tragedy of the commons here.
* Tolerance to failure is low: You can't iterate, get feedback, and try again until "the tests pass" in a courtroom, unlike with "code in a text file". You need to have the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. you can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc.).
* Self employment: There is no tension between say Google shareholders and its employees as per your example - especially for professions where you must trade in your own name. Why would I disrupt myself? The cost I charge is my profit.
TL;DR: Gatekeeping, changing context, and arms-race behavior between participants/clients. Unfortunately I do think software, art, videos, translation, etc. are unique in that there are numerous examples online and they have the property "if I don't like it, just re-roll" -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software, sadly for us SWEs, is the perfect domain for this; and we as practitioners made it that way through things like open source and TDD, giving our work away free on public platforms in huge quantities.
I nearly added a section about that. I wanted to contrast the thing where many companies are reducing junior engineering hires with the thing where Cloudflare and Shopify are hiring 1,000+ interns. I ran out of time and hadn't figured out a good way to frame it though so I dropped it.
Even if it makes software engineering drastically more productive, it's questionable whether this will lead to unemployment. Efficiency gains translate to lower prices. Sometimes this leads to very little additional demand, as can be seen with the masses of typesetters who lost their jobs. Sometimes it leads to dramatically higher demand, as in the classic Jevons paradox examples of coal and light bulbs. I strongly suspect software falls into the latter category.
Software demand is philosophically limited by the question of "What can your computer do for you?"
You can describe that somewhat formally as:
{What your computer can do} intersect {What you want done (consciously or otherwise)}
Well, a computer can technically calculate any computable task that fits in bounded memory. That is an enormous set, so its real limitations are its interfaces - through which it can send packets, make noises, and display images.
How many human desires are things that can be satisfied by making noises, displaying images, and sending packets? Turns out quite a few, but it's not everything.
Basically I'm saying we should hope more sorts of physical interfaces come around (like VR and robotics) so we cover more human desires. Robotics is a really general physical interface (like how IP packets are an extremely general interface), so it's pretty promising if it pans out.
Personally, I find it very hard to even articulate what desires I have. I have this feeling that I might be substantially happier if I was just sitting around a campfire eating food and chatting with people instead of enjoying whatever infinite stuff a super intelligent computer and robots could do for me. At least some of the time.
The ability to accurately describe what you want with all constraints managed and with proactive design is the actual skill. Not programming. The day PMs can do that and have LLMs that can code to that, is the day software engineers en masse will disappear. But that day is likely never.
The non-technical people I've worked for were hopelessly terrible at attention to detail. They're hiring me primarily for that anyway.
This much-discussed thesis is already laughable - decent LLMs have been out for 3 years now and unemployment (using the US as an example) is up around 1% over the same time frame - and attributing even that small percentage change completely to AI is also laughable.
Speaking of new year and AI: my phone just suggested "Happy Birthday!" as the quick-reply to every "Happy New Year!" notification I got in the last few hours.
This year I had a Spotify and a YouTube thing to "recall my year", and it was absolute garbage (30% truth, to be exact). I think they're doing it more as an exercise to build up systems, infra, processes, people, etc. - it's already clear they don't actually care about users.
It won't help to point out the worst examples. You're not competing with an outdated Apple LLM running on a phone. You're competing with Anthropic frontier models running on a multimillion dollar rack of servers.
AI slop videos will no doubt get longer and "more realistic" in 2026.
I really hope social media companies plaster a prominent banner over them which screams, "Likely/Made by AI" and give us the option to automatically mute these videos from our timeline. That would be the responsible thing to do. But I can't see Alphabet doing that on YT, xAI doing that on X or Meta doing that on FB/Insta as they all have skin in the video gen game.
>I really hope social media companies plaster a prominent banner over them which screams, "Likely/Made by AI" and give us the option to automatically mute these videos from our timeline.
They should just be deleted. They will not be, because they clearly generate ad revenue.
Let's talk about the societal cost these models have had on us including their high energy cost and the proliferation of auto-generated slop media used to milk ad revenue, scam people, SEO farm, do propaganda or automate trolling. What about these big corporations collecting an astronomical amount of debt to hoard DRAM and NAND in a way that has crippled the PC market within weeks? And what are they going to do next, put a few dollars in Trump's pocket so that they can rob/loot the US population through bailouts? Who gets to keep all the hardware I wonder?
Nvidia, Samsung, SK Hynix and some other vultures I forgot to mention are making serious bank right now.
The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm and AI psychosis and suchlike. See the GPT-4o retirement (and resulting backlash) for an example of that.
But the number of users is exploding too. If they make things 5x less likely to happen but sign up 10x more people it won't be good on that front.
How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition, leading to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.
Same thing with “psychosis”, which is a manufactured moral panic crisis.
If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.
Sure -- but that's fair game in engineering. I work on cars. If we kill people with safety faults I expect it to make more headlines than all the fun roadtrips.
What I find interesting with chat bots is that they're "web apps" so to speak, but with safety engineering aspects that type of developer is typically not exposed to or familiar with.
One of the tough problems here is privacy. AI labs really don't want to be in the habit of actively monitoring people's conversations with their bots, but they also need to prevent bad situations from arising and getting worse.
Until AI labs have the equivalent of an SLA for giving accurate and helpful responses, it won't get better. They're not even able to measure whether their agents work correctly and consistently.
I hope 2026 will be the year when software engineers and recruiters will stop the obsession with leetcode and all other forms of competitive programming bullshit
> The year of YOLO and the Normalization of Deviance
On this topic - including AI agents deleting home folders - I was able to run agents in Firejail by isolating VS Code (most of my agents are VS Code based ones, like Kilo Code).
It took a bit of tweaking, with VS Code crashing a bunch of times when it couldn't read its config files, but I got there in the end. Now it can only write to my projects folder. All of my projects are backed up in git.
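For anyone who wants to try the same setup, a rough sketch of the kind of invocation involved (the flags are standard Firejail options, but as noted above, the exact set of writable paths VS Code needs will take some trial and error):

```sh
# Make the home directory read-only, then punch writable holes for the
# projects folder and for VS Code's own config/state directories.
firejail --noprofile \
  --read-only="${HOME}" \
  --read-write="${HOME}/projects" \
  --read-write="${HOME}/.config/Code" \
  --read-write="${HOME}/.vscode" \
  code "${HOME}/projects"
```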
I have a bunch of tabs open on this exact topic, so thank you for sharing. So far I've been using devcontainers w/ vscode, and mostly having a blast with it. It is a bit awkward since some extensions need to be installed in the remote env, but they seem to play nicely after you have it set up, and the keys and stuff get populated, so things like Kilo Code, Cline, and Roo work fine.
My experience with AI so far: It's still far from "butler" level assistance for anything beyond simple tasks.
I posted about my failures trying to get them to review my bank statements [0] and generally got gaslit about how I was doing it wrong - that if I trusted them with full access to my disk and terminal, they could do it better.
But I mean, at that point, it's still more "manual intelligence" than just telling someone what I want. A human could easily understand it, but AI still takes a lot of wrangling and you still need to think from the "AI's PoV" to get the good results.
But enough whining. I want AI to get better so I can be lazier. After trying them for a while, one feature that I think all natural-language AIs need to have is the ability to mark certain sentences as "Do what I say" (aka Monkey's Paw) and "Do what I mean", like how you wrap phrases in quotes on Google etc. to indicate a verbatim search.
So for example I could say "[[I was in Japan from the 5th to 10th]], identify foreign currency transactions on my statement with "POS" etc in the description" then the part in the [[]] (or whatever other marker) would be literal, exactly as written, but the rest of the text would be up to the AI's interpretation/inference so it would also search for ATM withdrawals etc.
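That marker idea is easy to prototype as a thin preprocessing layer in front of the model. A toy sketch - the tag format here is invented; the point is only to separate literal spans from interpretable ones:

```python
import re

VERBATIM = re.compile(r"\[\[(.*?)\]\]")

def annotate(prompt: str) -> str:
    """Rewrite [[...]] spans as explicit do-what-I-say tags, leaving the
    rest of the prompt open to the model's interpretation."""
    return VERBATIM.sub(r'<literal do_not_reinterpret="true">\1</literal>', prompt)

print(annotate(
    "[[I was in Japan from the 5th to 10th]], identify foreign currency "
    "transactions on my statement"
))
```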
Ideally, eventually we should be able to have multiple different AI "personas" akin to different members of household staff: your "chef" would know about your dietary preferences, your "maid" would operate your Roomba, take care of your laundry, your "accountant" would do accounty stuff.. and each of them would only learn about that specific domain of your life: the chef would pick up the times when you get hungry, but it won't know about your finances, and so on. The current "Projects" paradigm is not quite that yet.
>I’m still holding hope that slop won’t end up as bad a problem as many people fear.
That's the pure, uncut copium. Meanwhile, in the real world, search on major platforms is so slanted towards slop that people need to specify that they want actual human music:
You’re absolutely right! You astutely observed that 2025 was a year with many LLMs and this was a selection of waypoints, summarized in a helpful timeline.
That’s what most non-tech-person’s year in LLMs looked like.
Hopefully 2026 will be the year where companies realize that implementing intrusive chatbots can’t make better ::waving hands:: ya know… UX or whatever.
For some reason, they think it's helpful to distractingly pop up chat windows on their site, because their customers apparently need textual kindergarten handholding to… I don't know… find the ideal pocket comb for their unique pocket/hair situation, or answer that unlikely question about aerosol pan-release spray that a chatbot could actually field. Well, my dog also thinks she's helping me by attacking the vacuum when I'm trying to clean. Both ideas are equally valid.
And spending a bazillion dollars implementing it doesn’t mean your customers won’t hate it. And forcing your customers into pathways they hate because of your sunk costs mindset means it will never stop costing you more money than it makes.
I just hope companies start being honest with themselves about whether or not these things are good, bad, or absolutely abysmal for the customer experience and cut their losses when it makes sense.
I took the good with the bad: the AI-assisted coding tools are a multiplier; Google AI Overviews in search results are half-baked (at best) and often just factually wrong; AI was put in the Instagram search bar for no practical purpose; etc.
There was also point-source pollution before the Industrial Revolution. Useless, forced, irritating chat was 'nowhere close' to as aggressive or pervasive as it is now. It used to be a niche feature of some CRMs and now it's everywhere.
I’m on LinkedIn Learning digging into something really technical and practical, and it’s constantly pushing the chat flyout with useless pre-populated prompts like “what are the main takeaways from this video.” And they moved their main page search to a little icon in the title bar, so what was the obvious, primary, central search field for years now sneakily sends a prompt to their fucking chatbot.
Nothing about the severe impact on the environment, and the hand-waviness about water usage hurt to read. The referenced post misses every single point about the issue by framing it as global instead of local. And as if data center buildouts were properly planned and dimensioned for existing infrastructure…
Add to this that all the hardware is already old and the amount of waste we’re producing right now is mind boggling, and for what, fun tools for the use of one?
I don’t live in the US, but the amount of tax money being siphoned to a few tech bros should have heads rolling and I really don’t want to see it happening in Europe.
But I guess we got a new version number on a few models and some blown-up benchmarks, so that’s good - oh, and of course the SVG images we will never use for anything.
His comment is far better than the rampant astroturfing from stakeholders going on everywhere on this website, which is not being mitigated at all. There is a wealth of information present suggesting these things are so bad for everyone in so many ways.
These people love generated content (like, they'll actually read generated blog post word-for-word and not even be angry; they'll skip a personal email for its machine summary) and they can generate all the content they'd ever want. If they want to take over HN this isn't a battle we're going to win except with aggressive moderation, and we know who feeds the mods.
HN isn't a place for thinking people any more (a long time coming, but you could squint and pretend until recently). Happy new year and adios, thanks for the 100s of accounts dang. Double pinky swear I won't make another.
This is extremely dismissive. Claude Code helps me make a majority of changes to our codebase now, particularly small ones, and is an insane efficiency boost. You may not have the same experience for one reason or another, but plenty of devs do, so "nothing happened" is absolutely wrong.
2024 was a lot of talk, a lot of "AI could hypothetically do this and that". 2025 was the year where it genuinely started to enter people's workflows. Not everything we've been told would happen has happened (I still make my own presentations and write my own emails) but coding agents certainly have!
And this is another of those vague "AI helped me do more" claims.
This is me touting Emacs:
Emacs was a great plus for me over the last year. The integration with various tooling via comint (REPL integration), compile (build and report tools), and TUIs (through eat or ansi-term) gave me a unified experience through Emacs's buffer paradigm. Using the same set of commands boosted my editing process, and the ease of adding new commands makes it easy to fit the editor to my development workflow.
This is how easy it is to write a non-vague "tool X helped me" and I'm not even an English native speaker.
That paragraph could be the truth, or it could be a lie. Maybe Emacs really did make you more efficient, or you made it all up, I don't know. Best I can do is trust you.
If you don't trust me, I can't conclusively convince you that AI makes me more efficient, but if you want I'm happy to hop on a screen-share and elaborate in what ways it has boosted my workflow. I'm offering this because I'm also curious what your work looks like where AI cannot help at all.
You did not make a negative critique. You completely dismissed the value of coding agents on the basis that the results are not predictable, which is both obvious and doesn’t matter in practice. Anyone who has given these tools a chance will quickly realise that 1) they are actually quite predictable in doing what you ask them to, and 2) them being non-deterministic does not at all negate their value. This is why people can immediately tell you haven’t used these tools, because your argument as to why they’re useless is so elementary.
It’s also possible that people more experienced, knowledgable and skilled than you can see fundamental flaws in using LLMs for software engineering that you cannot. I am not including myself in that category.
I’m personally honestly undecided. I’ve been coding for over 30 years and know something like 25 languages. I’ve taught programming to postgrad level and built prototype AI systems that foreshadowed LLMs. I’ve written everything from embedded systems to enterprise, web, mainframes, real-time, physics simulation, and research software. I would consider myself a 7/10 or 8/10 coder.
A lot of folks I know are better coders. To put my experience into context: one guy in my year at uni wrote one of the world’s most famous crypto systems; another wrote large portions of some of the most successful games of the last few decades. So I’ve grown up surrounded by geniuses, basically, and whilst I’ve been lectured by true greats I’m humble enough to recognise I don’t bleed code like they do. I’m just a dabbler. But it irks me that a lot of folks using AI profess it’s the future but don’t really know anything about coding compared to these folks. Not to be a Luddite - they are the first people to adopt new languages and techniques, but they also are super sceptical about anything that smells remotely like bullshit.
One of the wisest insights in coding is the aphorism “beware the enthusiasm of the recently converted.” And I see that so much with AI. I’ve seen it with compilers, with IDEs, with paradigms, and with languages.
I’ve been experimenting a lot with AI, and I’ve found it fantastic for comprehending poor code written by others. I’ve also found it great for bouncing ideas around. But the code it writes, beyond boilerplate, is hot garbage. It doesn’t properly reason, it can’t design architecture, and it can’t write code that is comprehensible to other programmers. Treating a codebase as a “black box to be manipulated by AI” just leads to dead ends that can’t be escaped, terrible decisions that take huge amounts of expert coding time to undo, subtle bugs that AI can’t fix and that are super hard to spot (often you can’t understand its code well enough to fix them yourself), and security nightmares.
Testing is insufficient for good code. Humans write code in a way that is designed for general correctness. AI does not, at least not yet.
I do think these problems can be solved. I think we probably need automated reasoning systems, or else vastly improved LLMs that border on automated reasoning much like humans do. Could be a year. Could be a decade. But right now these tools don’t work well. Great for vibe coding, prototyping, analysis, review, bouncing ideas.
Yes, that is the analogy I am making. People argued that bicycles (a tool for humans to use) could not possibly work - even as people were successfully using them.
People use drugs as well but I'm not sure I'd call that successful use of chemical compounds without further context. There are many analogies one can apply here that would be equally valid.
Please tell me which one of the headings is not about increased usage of LLMs and derived tools, and is instead about some improvement on the axes of reliability or any other kind of usefulness.
> The coding agent heading. Claude Code and tools like it represent a huge improvement in what you can usefully get done with LLMs.
Does it? It's all prompt manipulation. Shell scripts are powerful, yes, but not really a huge improvement over having a shell (a REPL interface) to the system. And even then, a lot of programs just use syscalls or wrapper libraries.
> can try the code, see that it doesn't work and fix the problem.
You're welcome to try the LLMs yourself and come to your own conclusions. From what you've posted, it doesn't look like you've tried anything in the last 2 years. Yes, LLMs can be annoying, but there has been progress.
If you mean correct often enough that you can expect it to be a productive assistant that helps solve all sorts of problems faster than you could solve them without it, and which makes mistakes infrequently enough that you waste less time fixing them than you would doing everything by yourself then yes, it's plenty reliable enough now.
I know it seems like forever ago, but Claude Code only came out in 2025.
It's very difficult to argue against the point that Claude Code:
1) was a paradigm shift in terms of functionality, despite, to be fair, at best incremental improvements in the underlying models, and
2) produces results that are, I estimate, an order of magnitude better.
I think it's very fair to distill "AI progress 2025" to: you can get better results (up to a point; better than raw output, anyway; scaling to multiple agents has not worked) without better models, via clever tools and loops (…and video/image slop infests everything :p).
Did more software ship in 2025 than in 2024? I'm still looking for some actual indication of output here. I get that people feel more productive but the actual metrics don't seem to agree.
I'm still waiting for the Linux drivers to be written because of all the 20x improvements that AI hypers are touting. I would even settle for Apple M3 and M4 computers to be supported by Asahi.
I am not making any argument about productivity about using AI vs. not using AI.
My point is purely that, compared to 2024, the quality of the code produced by LLM inference agent systems is better.
To say that 2025 was a nothing burger is objectively incorrect.
Will it scale? Is it good enough to use professionally? Is this like self driving cars where the best they ever get is stuck with an odd shaped traffic cone? Is it actually more productive?
Who knows?
I'm just saying… LLM coding in 2024 sucked. 2025 was a big year.
Whenever someone tells me that AI is worthless, does nothing, scam/slop etc, I ask them about their own AI usage, and their general knowledge about what's going on.
Invariably they've never used AI, or at most very rarely. (If they had used it more than that, it would be an admission that it was useful at some level.)
Therefore it's reasonable to assume that you are in that boat. Now that might not be true in your case, who knows, but it's definitely true on average.
It's not worthless, it's just not world-changing as-is, even in the fields where it's most useful, like programming. If the trajectory changes and we reach AGI then this changes too, but right now it's just a way to:
- fart out demos that you don't plan on maintaining, or want to use as a starting place
- generate first-draft unit tests/documentation
- generate boilerplate without too much functionality
- refactor in a very well covered codebase
It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet, telling it to not do something does not guarantee that it won't do it again), and you as the developer submitting LLM-generated code for review need to review it closely before even putting it up (unless you feel like offloading this to your team) to the point that it's not that much faster than having written it yourself.
This comment is legitimately hilarious to me. I thought it was satire at first. The list of what has happened in this field in the last twelve months is staggering to me, while you write it off as essentially nothing.
Different strokes, but I’m getting so much more done and mostly enjoying it. Can’t wait to see what 2026 holds!
People who dislike LLMs are generally insistent that they're useless for everything and have infinitely negative value, regardless of facts they're presented with.
Anyone that believes that they are completely useless is just as deluded as anyone that believes they're going to bring an AGI utopia next week.
Got a good news story about that one? I'm always interested in learning more about this issue, especially if it credibly counters the narrative that the issue is overblown.
What's not credible about Andy Masley's work on this?
(For anyone else reading this thread: my comment originally just read "Got a good news story about that one?" - justatdotin posted this reply while I was editing the comment to add the extra text.)
Why do the mods allow Simon to spam HN with his blogposts and his comments, which he often posts just for the sake of including a link back to his blog? Seriously, go look at his post history and see how often he includes a link to his blog, however tangentially related, when he posts a comment. I actually flagged this submission, which I never do, and encourage others to do likewise.
Is he, really? Most of his blog posts are little more than opportunistic, buttressing commentary on someone else's blog post or article, often with a bit of AI apologia sprinkled in (for example, marginalizing people as paranoid for not taking AI companies at their word that they aren't aggressively scraping websites in violation of robots.txt, or exfiltrating user data in AI-enabled apps).
EDIT: and why must he link to his blog so often in his comments? How is that not SEO/engagement farming? BTW dang, I wasn't insinuating the mods were in league with him or anything, just that, IMO, he's long past the point at which good faith should no longer be assumed.
(And yes, there I go again linking to something I've written from a comment. It's entirely relevant to the point I am making here. That's why I have a blog - so I can put useful information in one place.)
I'll also note that I don't ever share links to my link blog posts on Hacker News myself - I don't think they're the right format for a HN post. I can't help if other people share them here: https://news.ycombinator.com/from?site=simonwillison.net
But given the volume of LLM slop, it was kind of obvious that even the moderators now put "favourites" over the guidelines.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity. [0]
The blog itself is clearly used as promotion all the time when the original source(s) are buried deep in the post and almost all of the links link back to his own posts.
This is now a first on HN and a new low for the moderators, who, as admitted, have regular promotional favourites at the top of HN.
The operative word there is "primarily". Simon comments on a variety of topics and has far more interactions that don't link to his blog than do.
Simon's posts are not engagement farming by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.
I appreciate his work for being more informative and organized than average AI-related content. Without his blogging, it would be a struggle to navigate the bombastic and narcissistic Twitter/Reddit posts for AI updates. The barrier to entry for AI reporting is so low that you just need to give a bit more care to be distinguished, and he is getting the attention he deserves for doing exactly that in a systematic and disciplined manner. (I do believe many on HN are more than capable but not interested in doing the same.) Personally, I sometimes find his posts more congratulatory or trivial than I like, but I have learned to take what I want and ignore what I don’t.
The difference in model performance between 2024 and 2025 has been stark; that graph really shows it. There are still many people on these forums who seem to think AIs produce terrible code unless ultra-supervised, and I can’t help but suspect some of them tried it a while ago and just don’t understand how different it is now compared to even quite recently.
> hard to argue against the value of current AI
> People are willing to pay $200 per month, and it is getting $1B dollar runway already.
Those are 3 different things. There can be a LOT of fast and significant improvements but still remain extremely far from the actual goal, so far it looks like actually little progress.
People pay for a lot of things, including snake oil, so convincing a lot of people to pay a bit is not in itself proof of value - especially when some people are basically coerced into it; see how many companies changed their "strategy" to mandate AI usage internally, or integrated it for a captive audience, e.g. Copilot.
Finally, yes, $1B is a LOT of money for you and me... but for the largest corporations it's actually not a lot. For reference, Google earned that in revenue... per day in 2023. Anyway, that's still a big number, BUT it has to be compared with how much OpenAI burns. I don't have any public number on that, but I believe the consensus is that it's a lot. So until we know that number, we can't talk about an actual runway.
Still, the gap between the capabilities of a cutting edge LLM and that of a human is only this wide. There are only this many increments it takes to cross it.
From a business standpoint, it makes some sense to throttle the gaming supply some. Not to the point of surrendering the market to someone else probably, but to a measurable degree.
* The year of YOLO and the Normalization of Deviance
* The year that Llama lost its way
* The year of alarmingly AI-enabled browsers
* The year of the lethal trifecta
* The year of slop
* The year that data centers got extremely unpopular
Mostly the training. I put less and less weight on "LLMs are fundamentally flawed" and more and more of it on "you're training them wrong". Too many "fundamental limitations" of LLMs are ones you can move the needle on with better training alone.
The foundation of LLM is flexible and capable, and the list of "capabilities that are exclusive to human mind" is ever shrinking.
I looked into Docker and then realized the problem I'm actually trying to solve was solved in like 1970 with users and permissions.
I just made an agent user limited to its own home folder, and added my user to its group. Then I run Claude Code etc. as the agent user.
So it can only read/write /home/agent, and it cannot read or write my files.
I added myself to the agent group so I can read/write the agent's files.
I run into permission issues sometimes, but it's pretty smooth for the most part.
Oh also I gave it root to a $3 VPS. It's so nice having a sysadmin! :) That part definitely feels a bit deviant though!
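For reference, the whole 1970s-era sandbox described above is a handful of commands (a sketch; usernames, paths, and the agent binary are whatever your setup uses):

```sh
sudo useradd --create-home agent   # dedicated low-privilege user
sudo usermod -aG agent "$USER"     # join the agent's group (re-login to apply)
sudo chmod 770 /home/agent         # group (i.e. you) can read/write its files
sudo -iu agent claude              # run the coding agent as that user
```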
Also, it's just normal backend work - calling a bunch of APIs. What am I missing here?
Actually making a system like this work seems easy, but isn't really.
(Though with the CURRENT generation or two of models it has gotten "pretty easy" I think. Before that, not so much.)
This year honestly feels quite stagnant. LLMs are literally technology that can only reproduce the past. They're cool, but they were way cooler 4 years ago. We've taken big ideas like "agents" and "reinforcement learning" and basically stripped them of all meaning in order to claim progress.
I mean, do you remember Geoffrey Hinton's RBM talk at Google in 2010? [0] That was absolutely insane for anyone keeping up with the field. By the mid-2010s RBMs were already outdated. I remember when everyone was implementing flavors of RNNs and LSTMs. Karpathy's 2015 character-level RNN project was insane [1].
This comment makes me wonder if part of the hype around LLMs is just that a lot of software people simply weren't paying attention to the absolutely mind-blowing progress we've seen in this field for the last 20 years. But even ignoring ML, the worlds of web development and mobile application development have gone through incredible progress over the last decade and a half. I remember a time when JavaScript books would have a section warning that you should never use JS for anything critical to the application. Then there's the work on theorem provers over the last decade... If you remember when syntactic sugar was progress, either you remember way further back than I do, or you weren't paying attention to what was happening in the larger computing world.
0. https://www.youtube.com/watch?v=VdIURAu1-aU
1. https://karpathy.github.io/2015/05/21/rnn-effectiveness/
Funny, I've used them to create my own personalized text editor, perfectly tailored to what I actually want. I'm pretty sure that didn't exist before.
It's wild to me how many people who talk about LLMs apparently haven't learned how to use them for even very basic tasks like this! No wonder you think they're not that powerful. You really owe it to yourself to try them out.
I've worked at multiple AI startups in lead AI Engineering roles, both working on deploying user facing LLM products and working on the research end of LLMs. I've done collaborative projects and demos with a pretty wide range of big names in this space (but don't want to doxx myself too aggressively), have had my LLM work cited on HN multiple times, have LLM based github projects with hundreds of stars, appeared on a few podcasts talking about AI etc.
This gets to the point I was making. I'm starting to realize that part of the disconnect between my opinions on the state of the field and others is that many people haven't really been paying much attention.
I can see if recent LLMs are your first intro to the state of the field, it must feel incredible.
The change hit us so fast a huge number of people don’t understand how capable it is yet.
Also, it certainly doesn’t help that it still hallucinates. One mistake is enough to set someone against LLMs. You really need to push past that - hallucinations are just the weak part of the process - to see the value.
So it is absurdly incorrect to say "they can only reproduce the past." Only someone who hasn't been paying attention (as you put it) would say such a thing.
That is a derived output. That isn't new as in: novel. It may be unique but it is derived from training data. LLMs legitimately cannot think and thus they cannot create in that way.
Why is that kind of thinking required to create novel works?
Randomness can create novelty.
Mistakes can be novel.
There are many ways to create novelty.
Also I think you might not know how LLMs are trained to code. Pre-training gives them some idea of the syntax etc but that only gets you to fancy autocomplete.
Modern LLMs are heavily trained using reinforcement data, which consists of custom tasks the labs pay people to do (or by distilling another LLM that has had the process performed on it).
For a more practical example, creating bindings from dynamic-language-A for a library in compiled-language-B is a genuinely useful task, allowing you to create things that didn't exist before. Those things are likely to unlock great happiness and/or productivity, even if they are derived from training data.
Also, the "derived" argument doesn't really hold: just because you know about two things doesn't mean you'd be able to come up with the third. That's actually very hard most of the time, and arguably requires more than next-token prediction.
Also, a shit-ton of what we do economically is reproducing the past with slight tweaks and improvements. We all do very repetitive things, and these tools cut the time/personnel needed by a significant factor.
This is even more clear in the case of diffusion models (which I personally love using, and have spent a lot of time researching). All of the "new" images created by even the most advanced diffusion models are fundamentally remixing past information. This is really obvious to anyone who has played around with these extensively because they really can't produce truly novel concepts. New concepts can be added by things like fine-tuning or use of LoRAs, but fundamentally you're still just remixing the past.
LLMs are always doing some form of interpolation between different points in the past. Yes they can create a "new" SQL query, but it's just remixing from the SQL queries that have existed prior. This still makes them very useful because a lot of engineering work, including writing a custom text editor, involve remixing existing engineering work. If you could have stack-overflowed your way to an answer in the past, an LLM will be much superior. In fact, the phrase "CRUD" largely exists to point out that most webapps are fundamentally the same.
A great example of this limitation in practice is the work that Terry Tao is doing with LLMs. One of the largest challenges in automated theorem proving is translating human proofs into the language of a theorem prover (often Lean these days). The challenge is that there is not very much Lean code currently available to LLMs (especially with the necessary context of the accompanying NL proof), so they struggle to correctly translate. Most of the research in this area is around improving LLM's representation of the mapping from human proofs to Lean proofs (btw, I personally feel like LLMs do have a reasonably good chance of providing major improvements in the space of formal theorem proving, in conjunction with languages like Lean, because the translation process is the biggest blocker to progress).
When you say:
> So it is absurdly incorrect to say "they can only reproduce the past."
It's pretty clear you don't have a solid background in generative models, because this is fundamentally what they do: model an existing probability distribution and draw samples from that. LLMs are doing this for a massive amount of human text, which is why they do produce some impressive and useful results, but this is also a fundamental limitation.
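If it helps to see that claim mechanically, here is about the smallest possible generative model - a character bigram sampler. It's a toy, but the structure (fit a distribution to past data, then draw samples from it) is the same one LLMs scale up:

```python
import random

corpus = "the theory there then"

# "Training": count which character follows which.
counts: dict[str, dict[str, int]] = {}
for a, b in zip(corpus, corpus[1:]):
    counts.setdefault(a, {})
    counts[a][b] = counts[a].get(b, 0) + 1

def sample_next(ch: str) -> str:
    dist = counts.get(ch, {" ": 1})
    chars, weights = zip(*dist.items())
    return random.choices(chars, weights=weights)[0]

out = "t"
for _ in range(20):
    out += sample_next(out[-1])
# The output remixes the corpus; the model cannot emit a transition
# it has never seen in its training data.
print(out)
```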
But a world where we used LLMs for the majority of work, would be a world with no fundamental breakthroughs. If you've read The Three Body Problem, it's very much like living in the world where scientific progress is impeded by sophons. In that world there is still some progress (especially with abundant energy), but it remains fundamentally and deeply limited.
Put another way, and I hate to throw in the now over-used phrase, but I feel you may be responding to a strawman that doesn't much appear in the article or the discussion here: "Because these tools don't achieve a god-like level of novel perfection that no one is really promising here, I dismiss all this sorta crap."
Especially when I think you are also admitting that the technology is a fairly useful tool on its own merits - a stance which I believe represents the bulk of the feelings that supporters of the tech here on HN are describing.
I apologize if you feel I am putting unrepresentative words in your mouth, but this is the reading I am taking away from your comments.
Do you only take LLM seriously if it can be another Einstein?
> But a world where we used LLMs for the majority of work, would be a world with no fundamental breakthroughs.
What do you consider recent fundamental breakthroughs?
Even if you are right, human can continue to work on hard problems while letting LLM handle the majority of derivative work
Like, I'm sorry, but you're just flat-out wrong and I've got the proof sitting on my hard drive. I use this supposedly impossible program daily.
From what you've described, an LLM has not invented anything. LLMs that can reason have a bit more sleight of hand, but they're not coming up with new ideas outside the bounds of what a lot of words have encompassed, in both fiction and non-fiction.
Good for you that you've got a fun little piece of code that's what you've always wanted, I guess. But this type of fantasy take on LLMs seems to be more and more prevalent as of late. A lot of people defend LLMs as if they're owed something because they've built something with them, or maybe people are getting more and more attached from the conversational angle. I'm not sure, but I've run across more people in 2025 who are way too far in the deep end of personifying their relationships with LLMs.
I for one think your work is pretty cool - even though I haven't seen it, using something you built everyday is a claim not many can make!
If a programmer creating their own software (or contracting it out to a developer) would be a bespoke suit and using software someone or some company created without your input is an off the rack suit, I'd liken these sorts of programs as semi-bespoke, or made to measure.
"LLMs are literally technology that can only reproduce the past" feels like an odd statement. I think the point they're going for is that it's not thinking and so it's not going to produce new ideas like a human would? But literally no technology does that. That is all derived from some human beings being particularly clever.
LLMs are tools. They can enable a human to create new things because they are interfacing with a human to facilitate it. It's merging the functional knowledge and vision of a person and translating it into something else.
Curious, does it perform at the limit of the hardware? Was it programmed in a systems language (like C++, Rust, C, etc.) or in web tech?
Is this such a big limitation? Most jobs are basically people trained on past knowledge applying it today. No need to generate new knowledge.
And a lot of new knowledge is just combining 2 things from the past in a new way.
I remember when we just wanted to rewrite everything in Rust.
Those were the simpler times, when crypto bros seemed like the worst venture capitalism could conjure.
And if so, what happens to those builders once the data center is built?
I haven’t heard about new businesses, job creation and growth in former industrial towns. What have I missed?
But LLMs are certainly a game changer; I can see them delivering an impact bigger than the internet itself. Both require a lot of investment.
I find LLMs incredibly useful, but if you were following along the last few years, the promise was "exponential progress" with a teaser of world-destroying superintelligence.
We objectively are not on that path. There is no “coming of LLMs”. We might get some incremental improvement, but we’re very clearly seeing sigmoid progress.
I can’t speak for everyone, but I’m tired of hyperbolic rants that are unquestionably not justified (the nice thing about exponential progress is you don’t need to argue about it)
First you need to define what it means. What's the metric? Otherwise it's very much something you can argue about.
Language model capability at generating text output.
The model progress this year has been a lot of:
- “We added multimodal”
- “We added a lot of non AI tooling” (ie agents)
- “We put more compute into inference” (ie thinking mode)
So yes, there is still rapid progress, but these ^ make it clear, at least to me, that next gen models are significantly harder to build.
Simultaneously we see a distinct narrowing between players (OpenAI, DeepSeek, Mistral, Google, Anthropic) in their offerings.
That's usually a signal that the rate of progress is slowing.
Remind me what was so great about GPT-5? How about GPT-4 versus GPT-3?
Do you even remember the releases? Yeah. I don't. I had to look it up.
Just another model with more or less the same capabilities.
“Mixed reception”
That is not what exponential progress looks like, by any measure.
The progress this year has been in the tooling around the models, and in smaller, faster models with similar capabilities. Multimodal add-ons that no one asked for, because it's easier to add image and audio processing than to improve text handling.
That may still be on a path to AGI, but it is not an exponential path to it.
That's not a metric; that's a vague, non-operationalized concept that could be operationalized into an infinite number of different metrics. An improvement that was linear in one of those possible metrics would be exponential in another (indeed, one that was linear in one would also be linear in an infinite number of others, as well as exponential in an infinite number of others).
That’s why you have to define an actual metric, not simply describe a vague concept of a kind of capacity of interest, before you can meaningfully discuss whether improvement is exponential. Because the answer is necessarily entirely dependent on the specific construction of the metric.
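A toy illustration of that point, with an invented "capability" series: whether the very same trajectory reads as exponential or linear depends entirely on which transform the metric bakes in.

    import numpy as np

    # The same made-up capability trajectory, read through two metrics.
    years = np.arange(6)
    capability = 2.0 ** years       # doubles every year

    print(capability)               # [ 1.  2.  4.  8. 16. 32.]  "exponential!"
    print(np.log2(capability))      # [0. 1. 2. 3. 4. 5.]        "merely linear!"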
That's not a quantifiable sentence. Unless you put it in numbers, anyone can argue exponential/not.
> next gen models are significantly harder to build.
That's not how we judge capability progress though.
> Remind me what was so great about GPT-5? How about GPT-4 versus GPT-3?
> Do you even remember the releases?
At the GPT-3 level we could generate some reasonable code blocks / tiny features. (An example shown around at the time was "explain what this function does" for a fib(n).) At GPT-4, we could build features and tiny apps. At GPT-5, you can often one-shot whole apps from a vague description. The difference between them is massive for coding capabilities. Sorry, but if you can't remember that massive change... why are you making claims about the progress in capabilities?
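For anyone who wasn't following at the time, the demo looked roughly like this: you'd paste in a function such as the one below (a reconstruction from memory, not the exact snippet) and ask the model what it does.

    def fib(n):
        # The stock GPT-3-era "explain what this function does" demo:
        # classic recursive Fibonacci.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)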
> Multimodal add-ons that no one asked for
Not only does multimodal input training improve the model overall, it's useful for (for example) feeding back screenshots during development.
https://chrisfrewin.medium.com/why-llms-will-never-be-agi-70...
Seems to be playing out that way.
Yeah, probably. But no chart actually shows it yet. For now we are firmly in the exponential zone of the sigmoid curve, and we can't really tell if it's going to end in a year, a decade, or a century.
My own "feeling" is that it's definitely not exponential but again, doesn't matter if it's unsustainable.
Very spurious claims, given that there was no effort made to check whether the IMO or ICPC problems were in the training set or not, or to quantify how far problems in the training set were from the contest problems. IMO problems are supposed to be unique, but since it's not at the frontier of math research, there is no guarantee that the same problem, or something very similar, was not solved in some obscure manual.
By what metric?
Why? Because even the bank teller is doing more than taking and depositing money.
IMO there is an ontological bias that pervades our modern society that confuses the map for the territory and has a highly distorted view of human existence through the lens of engineering.
We don't see anything in this time series, because this time series itself is meaningless nonsense that reflects exactly this special kind of ontological stupidity:
https://fred.stlouisfed.org/series/PRS85006092
As if the sum of human interaction in an economy is some kind of machine that we just need to engineer better parts for and then sum the outputs.
Any non-careerist, thinking person that studies economics would conclude we don't and will probably not have the tools to properly study this subject in our lifetimes. The high dimensional interaction of biology, entropy and time. We have nothing. The career economist is essentially forced to sing for their supper in a type of time series theater. Then there is the method acting of pretending to be surprised when some meaningless reductionist aspect of human interaction isn't reflected in the fake time series.
I think it never did. Still has not.
https://en.wikipedia.org/wiki/Productivity_paradox
LLMs from late 2024 were nearly worthless as coding agents, so given they have quadrupled in capability since then (exponential growth, btw), it's not surprising to see a modestly positive impact on SWE work.
Also, I'm noticing you're not explaining yourself :)
When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
The issue is that you're not acknowledging or replying to people's explanations for _why_ they see this as exponential growth. It's almost as if you skimmed through the meat of the comment and then just re-phrased your original idea.
> When Fernando Alonso (best rookie btw) goes from 0-60 in 2.4 seconds in his Aston Martin, is it reasonable to assume he will near the speed of light in 20 seconds?
This comparison doesn't make sense because we know the limits of cars but we don't yet know the limits of LLMs. It's an open question. Whether or not an F1 engine can make it the speed of light in 20 seconds is not an open question.
My point with the F1 comparison is that a short period of rapid improvement doesn't imply exponential growth; it's about as weird to expect that as it is to expect an F1 car to reach the speed of light. It's possible, you know - the regulations are changing for next season. If Leclerc sets a new lap record in Australia by .1 ms, we can just assume exponential improvements, and surely Ferrari will be lapping the rest of the field by the summer, right?
When people say "fix stuff" I always wonder if it actually means fix, or just make it look like it works (which is extremely common in software, LLM or not).
In general, even with access to the entire code base (which is very small), I find the inherent need in the models to satisfy the prompter to be their biggest flaw since it tends to constantly lead down this path. I often have to correct over convoluted SQL too because my problems are simple and the training data seems to favor extremely advanced operations.
Basically, you're saying it's not perfect. I don't think anyone is claiming otherwise.
But most discussion I see is vague and without specificity and without nuance.
Recognising the shortcomings of LLMs makes comments praising LLMs that much more believable; and recognising the benefits of LLMs makes comments criticising LLMs more believable.
I'd completely believe anyone who says they've found the LLM very helpful at greenfield frontend tasks, and I'd believe someone who found the LLM unable to carry out subtle refactors on an old codebase in a language that's not Python or JavaScript.
It isn't irrational to act in self-interest. If LLMs threaten someone's livelihood, it doesn't matter one bit that they help humanity overall - that person will oppose them. I don't blame them. But I also hope they cannot succeed in opposing it.
I'd assume that around half of the optimists are emotionally motivated this way.
25 years ago I was optimistic about the internet, web sites, video streaming, online social systems. All of that. Look at what we have now. It was a fun ride until it all ended up "enshittified". And it will happen to LLMs, too. Fool me once.
Some developer tools might survive in a useful state on subscriptions. But soon enough the whole A.I. economy will centralise into 2 or 3 major players extracting more and more revenue over time until everyone is sick of them. In fact, this process seems to be happening at a pretty high speed.
Once the users are captured, they’ll orient the ad-spend market around themselves. And then they’ll start taking advantage of the advertisers.
I really hope it doesn’t turn out this way. But it’s hard to be optimistic.
BUT when I hear my executive team talk and see demos of "Agentforce", and every SaaS company becoming an AI company promising the world, I have to roll my eyes.
The challenge I have with LLMs is they are great at creating first draft shiny objects and the LLMs themselves over promise. I am handed half baked work created by non technical people that now I have to clean up. And they don't realize how much work it is to take something from a 60% solution to a 100% solution because it was so easy for them to get to the 60%.
Amazing, game changing tools in the right hands but also give people false confidence.
Not that they are not also useful for non-technical people, but I have had to spend a ton of time explaining to copywriters on the marketing team that they shouldn't paste their credentials into the chat even if it tells them to, and that their vibe-coded app is a security nightmare.
The NVIDIA CEO says people should stop learning to code. Now if LLMs will really end up as reliable as compilers, such that they can write code that's better and faster than I can 99% of the time, then he might be right. As things stand now, that reality seems far-fetched. To claim that they're useless because this reality has not yet been achieved would be silly, but not more silly than claiming programming is a dead art.
LLMs are being driven mostly by grifters trying to achieve a monopoly before they run out of cash. Under those conditions I find their promises hard to believe. I'll wait until they either go broke or stop losing money left and right, and whatever is left is probably actually useful.
You'll note I don't mention AGI or future model releases in my annual roundup at all. The closest I get to that is expressing doubt that the METR chart will continue at the same rate.
If you focus exclusively on what actually works the LLM space is a whole lot more interesting and less frustrating.
I'm just a casual user, but I've been doing the same and have noticed the sharp improvements of the models we have now vs a year ago. I have OpenAI Business subscription through work, I signed up for Gemini at home after Gemini 3, and I run local models on my GPU.
I just ask them various questions where I know the answer well, or I can easily verify. Rewrite some code, factual stuff etc. I compare and contrast by asking the same question to different models.
AGI? Hell no. Very useful for some things? Hell yes.
Eh. I wouldn’t be so quick to speak for the entirety of HN. Several articles related to LLMs easily hit the front page every single day, so clearly there are plenty of HN users upvoting them.
I think you're just reading too much into what is more likely classic HN cynicism and/or fatigue.
When an "AI skeptic" sees a very positive AI comment, they try to argue that it is indeed interesting but nowhere near close to AI/AGI/ASI or whatever the hype at the moment uses.
When an "AI optimistic" sees a very negative AI comment, they try to list all the amazing things they have done that they were convinced was until then impossible.
LLMs have real limitations that aren't going away any time soon - not until we move to a fundamentally different technology that shares almost nothing with them. There's a lot of 'progress-washing' going on, where people claim these shortfalls will magically disappear if we throw enough data and compute at them, when they clearly will not.
If you inherit 9000 tests from an existing project, you can vibe-code a replacement on your phone over a holiday, like Simon Willison's JustHTML port. We are moving from agents semi-randomly flailing around to constraint satisfaction.
Search, as of today, is inferior to frontier models as a product. However, the best case still misses expected returns by miles, which is where the grousing comes from.
Generative art/AI is still up in the air in terms of staying power, but I'd predict it isn't going away.
The most wide-appeal possibility is people loving 100%-AI-slop entertainment like that AI Instagram Reels product. Maybe I'm just too disconnected from normies, but I don't see this taking off. Fun as a novelty, like those Ring cam vids, but I would never spend all day watching AI-generated media.
If "immediate" usefulness is the metric we measure, then the internet and smartphones are pretty insignificant inventions compared to LLM.
(Of course it's not a meaningful metric, as there is no clear line between a dumb phone and a smartphone, or a moderately sized language model and an LLM.)
Outside the verifiable domains I think the impact is more assistance/augmentation than outright disruption (i.e. a novelty which is still nice). A little tiny bit of value sprinkled over a very large user base but each person deriving little value overall.
Even as they use it as search, it is at best an incremental improvement on what they used to do - not life changing.
Kagi’s Research Assistant is pretty damn useful, particularly when I can have it poll different models. I remember when the first iPhone lacked copy-paste. This feels similar.
(And I don’t think we’re heading towards AGI.)
Even if you skip ARPANET, you're forgetting the Gopher days; and even if you jump straight to WWW+email == the internet, you're forgetting the Mosaic days.
The applications that became useful to the masses emerged a decade+ after the public internet and even then, it took 2+ decades to reach anything approaching saturation.
Your dismissal is not likely to age well, for similar reasons.
The opposition to AI comes from people who feel threatened by it: either it threatens their livelihood (or that of family and friends), or they feel they are unable to benefit from AI in the way they did from the internet and mobile phones.
This barrier does not exist for current AI technologies which are being given away free. Minor thought experiment - just how radical would the uptake of mobile phones have been if they were given away free?
Those are some very rosy glasses you've got on there. The nascent Internet took forever to catch on. It was for weird nerds at universities and it'll never catch on, but here we are.
A year after LLMs came out… are you kidding me?
Two years?
10 years?
Today, adding an MCP server to wrap the same API that's been around forever for some system makes the users of that system prefer the NLI over the GUI almost immediately.
I know a lot of "normal" people who have completely replaced their search engine with AI. It's increasingly a staple for people.
Smartphones were absolutely NOT immediately useful in a million different ways for almost every person, that's total revisionist history. I remember when the iPhone came out, it was AT&T only, it did almost nothing useful. Smartphones were a novelty for quite a while.
Lol. It's worse than nothing at all.
Then came genAI. It's become more and more difficult to tell which is AI and which is not, and AI is everywhere. I don't know what to think about it. "If you can't tell, does it matter?"
I look forward to learning from his blog posts and HN comments in the year ahead, too.
> At the end of every month I send out a much shorter newsletter to anyone who sponsors me for $10 or more on GitHub
https://simonwillison.net/about/#monthly
Or targeted prompt injections - like spear phishing attacks - against people with elevated privileges (think root sysadmins) who are known to be using coding agents.
I'd like to believe that, but MCP is quickly turning into an enterprise thing, so I think it will stick around for good.
MCP is a great way for an LLM to connect to an external system in a standardized way and immediately understand what tools it has available, when and how to use them, what their inputs and outputs are, etc.
For example, we built a custom MCP server for our CRM. Now our voice and chat agents that run on elevenlabs infrastructure can connect to our system with one endpoint, understand what actions it can take, and what information it needs to collect from the user to perform those actions.
I guess this could maybe be done with webhooks or an API spec with a well crafted prompt? Or if eleven labs provided an executable environment with tool calling? But at some point you're just reinventing a lot of the functionality you get for free from MCP, and all major LLMs seem to know how to use MCP already.
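To give a feel for why this comes "for free": with the official MCP Python SDK, exposing an action as a tool is roughly this much code - the docstring and type hints become the description the LLM reads. The server name and CRM action below are invented for illustration; this isn't our actual server.

    # Minimal sketch using the official MCP Python SDK (pip install mcp).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("crm")

    @mcp.tool()
    def create_ticket(customer_email: str, summary: str) -> str:
        """Open a support ticket for a customer in the CRM."""
        # A real implementation would call the CRM's API here.
        return f"Created ticket for {customer_email}: {summary}"

    if __name__ == "__main__":
        mcp.run()  # speaks the MCP protocol over stdio by default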
I don't think MCP is going to go away, but I do think it's unlikely to ever achieve the level of excitement it had in early 2025 again.
If you're not building inside a code execution environment it's a very good option for plugging tools into LLMs, especially across different systems that support the same standard.
But code execution environments are so much more powerful and flexible!
I expect that once we come up with a robust, inexpensive way to run a little Bash environment - I'm still hoping WebAssembly gets us there - there will be much less reason to use MCP even outside of coding agent setups.
But generally yes, I think back to Mongo/Node/metaverse/blockchain/IDEs/tablets and pretty much everything has had its boosters and skeptics, this is just more... intense.
Anyway I've decided to believe my own eyes. The crowds say a lot of things. You can try most of it yourself and see what it can and can't do. I make a point to compare notes with competent people who also spent the time trying things. What's interesting is most of their findings are compatible with mine, including for folks who don't work in tech.
Oh, and one thing is for sure: shoving this technology into every single application imaginable is a good way to lose friends and alienate users.
The arguments going back and forth in these threads are truly a sight to behold. I don’t want to lean to any one side, but in 2025 I‘ve begun to respond to everyone who still argues that LLMs are only plagiarism machines, or are only better autocompletes, or are only good at remixing the past: Yes, correct!
And CPUs can only move zeros and ones.
This is likewise a very true statement. But look where having 0s and 1s shuffled around has brought us.
The ripple effects of a machine doing something very simple and near-meaningless, but doing it at high speed, again and again, without getting tired, should not be underestimated.
At the same time, here is Nobel Laureate Robert Solow, who famously, and at the time correctly, stated that "You can see the computer age everywhere but in the productivity statistics."
It took a while, but eventually, his statement became false.
The other day there was that dude loudly arguing about some code they wrote/converted even after a woman with significant expertise in the topic pointed out their errors.
Gen AI has its promise. But when you look at the lack of ethics from the industry, the cacophony of voices of non experts screaming "this time it's really doom", and the weariness/wariness that set in during the crypto cycle, it's a natural tendency that people are going to call snake oil.
That said, I think the more accurate representation here is that HN as a whole is calling the hype snake oil. There's very little question anymore about the tools being capable of advanced things. But there is annoyance at proclamations of it being beyond what it really is at the moment, which is still an expertise-plus-motivation multiplier for deterministic areas of work. It's not replacing that facet any time soon on its current trend (which could change wildly in 2026). Not until it starts training itself, I think. Could be famous last words.
1. LLMs can do some truly impressive things, like taking natural language instructions and producing compiling, functional code as output. This experience is what turns some people into cheerleaders.
2. Other engineers see that in real production systems, LLMs lack sufficient background / domain knowledge to effectively iterate. They also still produce output, but it's verbose and essentially missing the point of a desired change.
3. LLMs also can be used by people who are not knowledgeable to "fake it," and produce huge amounts of output that is basically besides-the-point bullshit. This makes those same senior folks very, very resentful, because it wastes a huge amount of their time. This isn't really the fault of the tool, but it's a common way the tool gets used and so it gets tarnished by association.
4. There is a ridiculous amount of complexity in some of these tools and workflows people are trying to invent, some of which is of questionable value. So aside from the tools themselves people are skeptical of the people trying to become thought leaders in this space and the sort of wild hacks they're coming up with.
5. There are real macro questions about whether these tools can be made economical to justify whatever value they do produce, and broader questions about their net impact on society.
6. Last but not least, these tools poke at the edges of "intelligence," the crown jewel of our species and also a big source of status for many people in the engineering community. It's natural that we're a little sensitive about the prospect of anything that might devalue or democratize the concept.
That's my take for what it's worth. It's a complex phenomenon that touches all of these threads, so not only do you see a bunch of different opinions, but the same person might feel bullish about one aspect and bearish about another.
It's polarizing because it represents a more radical shift in expected workflows. Seeing that range of opinions doesn't really give me a reason to update, no. I'm evaluating based on what makes sense when I hear it.
:)
https://chrisfrew.in/blog/two-of-my-favorite-mcp-tools-i-use...
IMO this is the best balance of getting agentic work done while having immediate access to anything else you may need with your development process.
I push back strongly against this. In the case of the solo, one-machine coder, that's likely the case - but if you're exposing workflows or fixed tools to customers / colleagues / the web at large via an API or similar, then MCP is still the best way to expose it, IMO.
Think about a GitHub or Jira MCP server - driving them from the command line alone, LLMs are sure to make mistakes with REST requests, API schemas, etc. With MCP, the proper known commands are already baked in. Remember always that LLMs will be better with natural language than with code.
Planning depends on a deterministic view of the future. I used to plan (especially annual plans) until about 5 years ago. Now I scan for trends and prepare myself for different scenarios that could come in the future. Even if you get it approximately right, you stand apart.
For tech trends, I read Simon, Benedict Evans, Mary Meeker, etc. Simon is in a better position to make these predictions than anyone else, having closely analyzed these trends over the last few years.
Here I wrote about my approach: https://www.jjude.com/shape-the-future/
I have to think 3 years from now we will be having the same conversation about robots doing real physical labor.
"This is the worst they will ever be" feels more apt.
With knowledge work paying less, the supply of physical labour should increase as well, which drops its price. This means it's actually less likely that the advent of LLMs will make physical labour more automated.
It was my feeling with robotics that the more challenging aspect will be making them economically viable rather than simply the challenge of the task itself.
We can now use natural language to instruct computers to generate stock photos and illustrations that a few years ago would have required a professional artist, discover new molecule shapes, beat the best Go players, build the code for entire applications, or write documents of various shapes and lengths - but painting a wall? An insurmountable task that requires a human to execute reliably, never mind the economics.
With most other knowledge work, I don't think that is the case. Maybe actuarial or accounting work, but most knowledge work exists at a cross section of function and taste, and the latter isn't an automatically verifiable output.
Any data, verifiability, rules of thumb, tests, etc are being kept secret. You pay for the result, but don't know the means.
* Context is debatable/result isn't always clear: The way to interpret that/argue your case is different (i.e. you are paying for a service, not a product)
* Access to vast training data: It's very unlikely that they will train you and give you data on their practice, especially as they are already in a union-like structure/accreditation. It's like paying for a binary (a non-decompilable one) without source code - you get the result, not the source and the validation the practitioner used to get there.
* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.
* Velocity vs ability to make judgement: As a lawyer I prefer to be paid more for less velocity, since it means less judgement, less liability, and less risk overall for myself and the industry. Why would I change that, even at an individual level? There's less of a commons problem here.
* Tolerance to failure is low: You can't iterate, get feedback, and try again until "the tests pass" in a courtroom, unlike with code in a text file. You need to have the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. you can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc.).
* Self employment: There is no tension between say Google shareholders and its employees as per your example - especially for professions where you must trade in your own name. Why would I disrupt myself? The cost I charge is my profit.
TL;DR: Gatekeeping, changing context, and arms race behavior between participants/clients. Unfortunately I do think software, art, videos, translation, etc are unique in that there's numerous examples online and has the property "if I don't like it just re-roll" -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software sadly for us SWE's is the perfect domain for this; and we as practitioners of it made it that way through things like open source, TDD, etc and giving it away free on public platforms in numerous quantities.
Software, by its nature, is practically comprehensively digitized, both in its code history as well as requirements.
You can describe that somewhat formally as:
{What your computer can do} intersect {What you want done (consciously or otherwise)}
Well, a computer can technically calculate any computable task that fits in bounded memory; that is an enormous set, so its real limitations are its interfaces. In which case it can send packets, make noises, and display images.
How many human desires can be met by making noises, displaying images, and sending packets? Turns out quite a few, but it's not everything.
Basically I'm saying we should hope more sorts of physical interfaces come around (like VR and robotics) so we cover more human desires. Robotics is a really general physical interface (like how IP packets are an extremely general interface), so it's pretty promising if it pans out.
Personally, I find it very hard to even articulate what desires I have. I have this feeling that I might be substantially happier if I was just sitting around a campfire eating food and chatting with people instead of enjoying whatever infinite stuff a super intelligent computer and robots could do for me. At least some of the time.
The ability to accurately describe what you want with all constraints managed and with proactive design is the actual skill. Not programming. The day PMs can do that and have LLMs that can code to that, is the day software engineers en masse will disappear. But that day is likely never.
The non-technical people I've worked for were hopelessly terrible at attention to detail. That's primarily what they're hiring me for anyway.
I'm not too worried about my job just yet.
AI slop videos will no doubt get longer and "more realistic" in 2026.
I really hope social media companies plaster a prominent banner over them which screams, "Likely/Made by AI" and give us the option to automatically mute these videos from our timeline. That would be the responsible thing to do. But I can't see Alphabet doing that on YT, xAI doing that on X or Meta doing that on FB/Insta as they all have skin in the video gen game.
They should just be deleted. They will not be, because they clearly generate ad revenue.
Not going to happen as the social media companies realise they can sell you the AI tools used to post slop back onto the platform.
...and the best of them all, OpenCode[1] :)
[1]: https://opencode.ai
I don't see a similar option for ChatGPT Pro. Here's a closed issue: https://github.com/sst/opencode/issues/704
Nvidia, Samsung, SK Hynix and some other vultures I forgot to mention are making serious bank right now.
Will 2026 fare better?
The big labs are (mostly) investing a lot of resources into reducing the chance their models will trigger self-harm and AI psychosis and suchlike. See the GPT-4o retirement (and resulting backlash) for an example of that.
But the number of users is exploding too. If they make things 5x less likely to happen but sign up 10x more people it won't be good on that front.
Same thing with “psychosis”, which is a manufactured moral panic crisis.
If the AI companies really wanted to reduce actual self harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article for AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions, bad policies did.
It’s time to stop parroting slurs like that.
But that one doesn't make headlines ;)
What I find interesting with chat bots is that they're "web apps" so to speak, but with safety engineering aspects that type of developer is typically not exposed to or familiar with.
On the topic of AI agents deleting home folders: I was able to run agents in Firejail by isolating VS Code (most of my agents are VS Code based ones, like Kilo Code).
I wrote a little guide on how I did it https://softwareengineeringstandard.com/2025/12/15/ai-agents...
It took a bit of tweaking (VS Code crashed a bunch of times when it couldn't read its config files), but I got there in the end. Now it can only write to my projects folder. All of my projects are backed up in git.
I posted about my failures trying to get them to review my bank statements [0] and generally got gaslit about how I was doing it wrong - that if I trusted them with full access to my disk and terminal, they could do it better.
But I mean, at that point, it's still more "manual intelligence" than just telling someone what I want. A human could easily understand it, but AI still takes a lot of wrangling and you still need to think from the "AI's PoV" to get the good results.
[0] https://news.ycombinator.com/item?id=46374935
----
But enough whining. I want AI to get better so I can be lazier. After trying them for a while, one feature I think all natural-language AIs need is the ability to mark certain sentences as "Do what I say" (aka Monkey's Paw) and "Do what I mean", like how you wrap phrases in quotes on Google etc. to indicate a verbatim search.
So for example I could say "[[I was in Japan from the 5th to 10th]], identify foreign currency transactions on my statement with "POS" etc in the description" then the part in the [[]] (or whatever other marker) would be literal, exactly as written, but the rest of the text would be up to the AI's interpretation/inference so it would also search for ATM withdrawals etc.
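As far as I know nothing like this exists in today's chat UIs, but the client-side half would be trivial. A hypothetical sketch (the marker syntax and function are invented, not a real assistant feature):

    import re

    def split_prompt(prompt: str):
        # Hypothetical preprocessing for a "Do what I say" marker:
        # [[...]] spans pass through literally, the rest is interpretable.
        verbatim = re.findall(r"\[\[(.*?)\]\]", prompt)
        loose = re.sub(r"\[\[.*?\]\]", "", prompt).strip()
        return verbatim, loose

    v, rest = split_prompt(
        "[[I was in Japan from the 5th to 10th]], identify foreign "
        "currency transactions on my statement"
    )
    print(v)     # ['I was in Japan from the 5th to 10th'] -> literal
    print(rest)  # the remainder, left to the model's interpretation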
Ideally, eventually we should be able to have multiple different AI "personas" akin to different members of household staff: your "chef" would know about your dietary preferences, your "maid" would operate your Roomba, take care of your laundry, your "accountant" would do accounty stuff.. and each of them would only learn about that specific domain of your life: the chef would pick up the times when you get hungry, but it won't know about your finances, and so on. The current "Projects" paradigm is not quite that yet.
not AI’s highlights.
Easy with the hot take.
That's the pure, uncut copium. Meanwhile, in the real world, search on major platforms is so slanted towards slop that people need to specify that they want actual human music:
https://old.reddit.com/r/MusicRecommendations/comments/1pq4f...
That’s what most non-tech-person’s year in LLMs looked like.
Hopefully 2026 will be the year where companies realize that implementing intrusive chatbots can’t make better ::waving hands:: ya know… UX or whatever.
For some reason, they think it's helpful to distractingly pop up chat windows on their site, because their customers need textual kindergarten handholding to… I don't know… find the ideal pocket comb for their unique pocket/hair situation, or ask an unlikely question about that aerosol pan release spray that a chatbot could actually answer. Well, my dog also thinks she's helping me by attacking the vacuum when I'm trying to clean. Both ideas are equally valid.
And spending a bazillion dollars implementing it doesn’t mean your customers won’t hate it. And forcing your customers into pathways they hate because of your sunk costs mindset means it will never stop costing you more money than it makes.
I just hope companies start being honest with themselves about whether or not these things are good, bad, or absolutely abysmal for the customer experience and cut their losses when it makes sense.
Companies have been doing this "live support" nonsense far longer than LLMs have been popular.
I'm on LinkedIn Learning digging into something really technical and practical, and it's constantly pushing the chat fly-out with useless pre-populated prompts like "what are the main takeaways from this video." And they moved their main page search to a little icon on the title bar, and what used to be the obvious, primary, central search field for years now sneakily sends a prompt to their fucking chatbot.
Add to this that all the hardware is already old and the amount of waste we’re producing right now is mind boggling, and for what, fun tools for the use of one?
I don’t live in the US, but the amount of tax money being siphoned to a few tech bros should have heads rolling and I really don’t want to see it happening in Europe.
But I guess we got new version numbers on a few models and some blown-up benchmarks, so that's good - oh, and of course the SVG images we will never use for anything.
I literally said:
"AI data centers continue to burn vast amounts of energy and the arms race to build them continues to accelerate in a way that feels unsustainable."
AND I linked to my coverage from last year, which is still true today (hence why I felt no need to update it): https://simonwillison.net/2024/Dec/31/llms-in-2024/#the-envi...
We want curious conversation here.
https://news.ycombinator.com/newsguidelines.html
HN isn't a place for thinking people any more (a long time coming, but you could squint and pretend until recently). Happy new year and adios, thanks for the 100s of accounts dang. Double pinky swear I won't make another.
2024 was a lot of talk, a lot of "AI could hypothetically do this and that". 2025 was the year where it genuinely started to enter people's workflows. Not everything we've been told would happen has happened (I still make my own presentations and write my own emails) but coding agents certainly have!
Objectively 0->1 lots of backlog.
This is me touting Emacs.
Emacs was a great plus for me over the last year. Integration with various tooling - comint (REPL integration), compile (build and report tools), a TUI (through eat or ansi-term) - gave me a unified experience through Emacs's buffer paradigm. Using the same set of commands boosted my editing process, and the easy addition of new commands makes it simple to fit the editor to my development workflow.
This is how easy it is to write a non-vague "tool X helped me", and I'm not even a native English speaker.
If you don't trust me, I can't conclusively convince you that AI makes me more efficient, but if you want I'm happy to hop on a screen-share and elaborate in what ways it has boosted my workflow. I'm offering this because I'm also curious what your work looks like where AI cannot help at all.
E-mail address is on my profile!
Your example is very vague.
See if you can spot the problem in my review of Excel, in your style:
"It's great and I like how its formula paradigm gave me a unified experience. Its table features boosted my science workflows last year."
they were right
It’s also possible that people more experienced, knowledgable and skilled than you can see fundamental flaws in using LLMs for software engineering that you cannot. I am not including myself in that category.
I'm personally honestly undecided. I've been coding for over 30 years and know something like 25 languages. I've taught programming to postgrad level and built prototype AI systems that foreshadowed LLMs. I've written everything from embedded systems to enterprise, web, mainframes, real time, physics simulation, and research software. I would consider myself a 7/10 or 8/10 coder.
A lot of folks I know are better coders. To put my experience into context: one guy in my year at uni wrote one of the world’s most famous crypto systems; another wrote large portions of some of the most successful games of the last few decades. So I’ve grown up surrounded by geniuses, basically, and whilst I’ve been lectured by true greats I’m humble enough to recognise I don’t bleed code like they do. I’m just a dabbler. But it irks me that a lot of folks using AI profess it’s the future but don’t really know anything about coding compared to these folks. Not to be a Luddite - they are the first people to adopt new languages and techniques, but they also are super sceptical about anything that smells remotely like bullshit.
One of the wisest insights in coding is the aphorism "beware the enthusiasm of the recently converted." And I see that so much with AI. I've seen it with compilers, with IDEs, with paradigms, and with languages.
I've been experimenting a lot with AI, and I've found it fantastic for comprehending poor code written by others. I've also found it great for bouncing ideas around. And the code it writes, beyond boilerplate, is hot garbage. It doesn't properly reason, it can't design architecture, and it can't write code that is comprehensible to other programmers. Treating a codebase as a "black box to be manipulated by AI" just leads to dead ends that can't be escaped, terrible decisions that take huge amounts of expert coding time to undo, subtle bugs that the AI can't fix and that are super hard to spot (often in code you can't understand well enough to fix), and security nightmares.
Testing is insufficient for good code. Humans write code in a way that is designed for general correctness. AI does not, at least not yet.
I do think these problems can be solved. I think we probably need automated reasoning systems, or else vastly improved LLMs that border on automated reasoning much like humans do. Could be a year. Could be a decade. But right now these tools don’t work well. Great for vibe coding, prototyping, analysis, review, bouncing ideas.
Here is the changelog for OpenBSD 7.8:
https://www.openbsd.org/78.html
There's nothing here that says "we made it easier to use more of it." It's about using it better and fixing underlying problems.
Mistakes and hallucinations matter a whole lot less if a reasoning LLM can try the code, see that it doesn't work and fix the problem.
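The shape of that loop is simple enough to sketch. Here `llm` is a stand-in for whatever model API is in play, not a real library call, and the file name is arbitrary:

    import subprocess

    def llm(prompt: str) -> str:
        # Stand-in for a real model call (OpenAI, Anthropic, local...).
        raise NotImplementedError

    def write_and_fix(task: str, max_attempts: int = 5) -> str:
        code = llm(f"Write a Python script that does: {task}")
        for _ in range(max_attempts):
            with open("attempt.py", "w") as f:
                f.write(code)
            result = subprocess.run(["python", "attempt.py"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # a hallucinated API would have crashed above
            # Feed the real error back; this is what makes mistakes cheap.
            code = llm(f"This failed with:\n{result.stderr}\nFix it:\n{code}")
        raise RuntimeError("still failing after retries")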
Does it? It's all prompt manipulation. Shell scripts are powerful, yes, but not really a huge improvement over having a shell (a REPL interface) into the system. And even then, a lot of programs just use syscalls or wrapper libraries.
> can try the code, see that it doesn't work and fix the problem.
Can you really say that happens reliably?
If you mean 100% correct all of the time then no.
If you mean correct often enough that you can expect it to be a productive assistant that helps solve all sorts of problems faster than you could solve them without it, and which makes mistakes infrequently enough that you waste less time fixing them than you would doing everything by yourself then yes, it's plenty reliable enough now.
It's very difficult to argue against the point that Claude Code:
1) was a paradigm shift in terms of functionality, despite (to be fair) at best incremental improvements in the underlying models, and
2) produced results that are, I estimate, an order of magnitude better.
I think it's very fair to distill "AI progress 2025" to: you can get better results (up to a point; better than raw output, anyway; scaling to multiple agents has not worked) without better models, via clever tools and loops (…and video/image slop infests everything :p).
My point is purely that, compared to 2024, the quality of the code produced by LLM inference agent systems is better.
To say that 2025 was a nothing burger is objectively incorrect.
Will it scale? Is it good enough to use professionally? Is this like self-driving cars, where the best they ever get is still stumped by an odd-shaped traffic cone? Is it actually more productive?
Who knows?
I'm just saying… LLM coding in 2024 sucked. 2025 was a big year.
Invariably they've never used AI, or at most very rarely. (If they used AI beyond that, this would be admission that it was useful at some level).
Therefore it's reasonable to assume that you are in that boat. Now that might not be true in your case, who knows, but it's definitely true on average.
- fart out demos that you don't plan on maintaining, or want to use as a starting place
- generate first-draft unit tests/documentation
- generate boilerplate without too much functionality
- refactor in a very well covered codebase
It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, it makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet; telling it not to do something does not guarantee it won't do it again), and you as the developer submitting LLM-generated code for review need to review it closely before even putting it up (unless you feel like offloading this to your team), to the point that it's not that much faster than having written it yourself.
Different strokes, but I’m getting so much more done and mostly enjoying it. Can’t wait to see what 2026 holds!
Anyone that believes that they are completely useless is just as deluded as anyone that believes they're going to bring an AGI utopia next week.
https://news.ycombinator.com/newsguidelines.html
A personal attack would be, e.g., calling him a DC.
All I did was point out the intellectual dishonesty of his argument. That's an attack on his intellectually dishonest argument, not his person.
By all means, go ahead and ban me.
Ditto for "I am very disappointed about your BULLSHIT" in the GP comment.
(For anyone else reading this thread: my comment originally just read "Got a good news story about that one?" - justatdotin posted this reply while I was editing the comment to add the extra text.)
If this post was by anyone other than me would you have any problems with its quality?
Is he, really? Most of his blog posts are little more than opportunistic, buttressing commentary on someone else's blog post or article, often with a bit of AI apologia sprinkled in (for example, marginalizing people as paranoid for not taking AI companies at their word that they aren't aggressively scraping websites in violation of robots.txt, or exfiltrating user data in AI-enabled apps).
EDIT: and why must he link to his blog so often in his comments? How is that not SEO/engagement farming? BTW dang, I wasn't insinuating the mods were in league with him or anything, just that, IMO, he's long past the point at which good faith should still be assumed.
"buttressing commentary on someone else's blog post"
That's how link blogs work. I wrote more about my approach to that here: https://simonwillison.net/2024/Dec/22/link-blog/
(And yes, there I go again linking to something I've written from a comment. It's entirely relevant to the point I am making here. That's why I have a blog - so I can put useful information in one place.)
I'll also note that I don't ever share links to my link blog posts on Hacker News myself - I don't think they're the right format for a HN post. I can't help if other people share them here: https://news.ycombinator.com/from?site=simonwillison.net
But given the volume of LLM slop, it was kind of obvious and known that even the moderators now have "favourites" over guidelines.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity. [0]
The blog itself is clearly used for promotion all the time, with the original source(s) buried deep in the post and almost all of the links pointing back to his own posts.
This is a first on HN and a new low for the moderators, who, as admitted, have regular promotional favourites at the top of HN.
[0] https://news.ycombinator.com/newsguidelines.html
Simon's posts are not engagement farming by any definition of the term. He posts good content frequently which is then upvoted by the Hacker News community, which should be the ideal for a Hacker News contributor.