I’m 20 minutes into the video and it does seem mostly basic and agreeable.
Two arguments from Ng really stuck out and are tripping my skepticism alarm:
1) He mentions how fast prototyping has become now that generating a simple app is easier with AI. This, to me, has always been quick and never the bottleneck at any company I’ve been at, including startups. Validating an idea was simple enough via wireframing. I can maybe see it for selling an idea, where you need some amount of fidelity to impress potential investors… but I would hope places like YC can see the tech behind the idea without seeing the tech itself. Or at least can ignore low fidelity if a prototype shows the meat of the product.
2) Ng talks about how everyone in his company codes, from the front desk to the executives. The “everyone should code” idea has been done and shown to fail for the past 15 years. In fact, I’ve seen it be more damaging than helpful, because it gave people false confidence that they could tell engineers how to do their job rather than a more empathetic understanding of it.
On point 1, it's worse than that. Adding detail and verisimilitude to a prototype is well known to bring negative value.
Prototypes must be exactly as sketchy as the ideas they represent; otherwise they mislead people into thinking the software is built and your ideas can't be changed.
I’ve always said this as well, having done lots and lots of early-stage building and prototyping and suffered plenty of proto-duction foibles. However, my view has shifted a lot on this in the last year or so.
With current models I’m able to throw together fully working web app prototypes so quickly, and iterate on often-sweeping UI and architectural changes so readily, that I’m finding it has changed my whole workflow. The idea of trying to keep things low-fidelity at the start is predicated on the understanding that changes later in the process are much more difficult or expensive, which I think is increasingly no longer the case in many circumstances. Having a completely working prototype and then totally changing how it works in just a few sentences is really quite something.
The key to sustainability in this pattern, in my opinion, is not letting the AI dictate project structure or get too far ahead of your own understanding/oversight of the general architecture. That’s a balancing act to be sure, since purely vibe-coding is awfully tempting, but it’s still far too easy to wind up with a big ball of wax that neither human nor AI can further improve.
I don't think this reasoning holds up anymore now that somewhat polished prototypes are so cheap to create and change. Maybe not everyone is aware of that yet but eventually it will be common knowledge.
> The “everyone should code” idea has been done and shown to fail for the past 15 years

I pretty much completely agree, and this idea reflects an outsized view of programming as some kind of inherently superior activity, and of bringing the ability to program to the masses as some kind of ultimate good.
If you've worked long enough and interacted with people with varied skillsets, you know that people who don't code aren't only there for show. In fact, depending on the type of company you work at, their jobs might be genuinely more important to the company's success than yours.
I spent a very frustrating 20 minutes with someone this week (a nice person I like, which is why I spent the time) explaining that the Python code ChatGPT had provided them would just copy files from one folder to another and was no different from using Windows drag-and-drop copy.
It would not do any of the things they thought (lots of parsing and file renaming that took a while for them to articulate). We also discussed how corporate IT would not be installing a Python interpreter on their computer. Oh, what's that? Let me explain. And so on.
ChatGPT didn't help, in this situation, as it turned out.
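For context, here's a hypothetical reconstruction (paths invented, not their actual script) of the kind of copy-only Python ChatGPT tends to hand out for this ask. Note that it does exactly what Explorer's drag-and-drop does and nothing more:

    # Hypothetical reconstruction: copies every file from one folder to
    # another and nothing else, i.e. drag-and-drop with extra steps.
    import shutil
    from pathlib import Path

    src = Path("C:/Users/me/incoming")   # invented paths
    dst = Path("C:/Users/me/archive")
    dst.mkdir(parents=True, exist_ok=True)

    for f in src.iterdir():
        if f.is_file():
            shutil.copy2(f, dst / f.name)  # no parsing, no renaming

None of the parsing and renaming they actually wanted appears anywhere.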
At my company everybody codes, including PMs and business people. It can definitely be damaging in the long run if done without any supervision from an actual programmer. This is why we assign an engineer to review every PR of a vibe coded project and they don’t really need all of the context to detect bs approaches that will surely fail.
About prototyping: it's much faster, and I don't know how anyone can argue this. PMs can get a full-blown prototype for an MVP working in a day with AI assistance. Sure, it will be thrown in the trash after the demo, but it carries out its purpose of proving a concept. The code is janky but it works for its purpose.
Good lord, I think I'd rather eat a shotgun than be forced to review a billion garbage PRs made by PMs and other non-technical colleagues. It's bad enough reviewing the PRs backenders badly write with AI for FE features (and vice versa); I cannot even imagine the pits of hell this crap is like.
What happens when inevitably the PR/code is horrid? Do they just keep prompting and pushing out slop that some poor overworked dev is now forced to sit through lest he get PIP'd for not being brainwashed by LLMs yet?
> This is why we assign an engineer to review every PR of a vibe coded project and they don’t really need all of the context to detect bs approaches that will surely fail.
I see this trend in many companies as well. Just curious: how do you make sure engineering time is not wasted reviewing so many PRs? Some of them will be good and some definitely bad, and you only need a couple of your bets to take off.
It's commonly understood that whoever is reviewing the PR shouldn't concern themselves with all of the project's context (business or any other).
It really only takes a glance at the PR to see what the author wanted to create, and you can pick up on bad directions the AI took, so you just help the person navigate those choices.
Of course, if the project has to actually grow into a product, at some point you would have to rewrite good chunks of it.
I would love to have access to whatever this guy is smoking, because that is some grade-A mind-rotted insanity right there. I can count on half of one hand the number of good PMs I've had through my career who weren't a net negative on the projects/companies, and even they most definitely cannot build jackshit by throwing a bunch of LLM-hallucinated crap at the wall and seeing what sticks.
But sure, the devs are the ones that are going to be replaced by the clueless middle managers who only exist to waste everyone's time.
That is completely bizarre. I’ve been wondering what’s happening to programmer interviews now that AI makes all the standard formats pointless. I never imagined that they would ADD coding to the process for other roles. Having PMs vibe coding in an interview? That’s idiotic.
We've recently come under new management, and the interview process for ICs changed about a week ago; it is similarly absurd to me.
For the frontend role, we have candidates awkwardly read through an AI-generated document that is split up haphazardly and in general has that AI tone to it, which makes it hard to read because it's extremely generic, non-specific, and devoid of any useful details or indeed any thought put into it. The new head of engineering also wants to be a part of every single one of these, and wants FOUR OTHER PEOPLE in the interview alongside him. Did I mention already that candidates have to read through a 6-page document that they have never seen or been informed about, live, in front of 5 people, including their future manager?
In the interview itself, the head of engineering then asks the candidates to use Cursor (and yes, specifically Cursor and only Cursor; the guy is fucking obsessed with it to the point where I wouldn't be surprised if he's somehow getting paid to shove it everywhere) as much as they possibly can. He refuses to answer their questions should they have any, and tells them to direct all questions to Cursor instead.
There was one person who realized very early on that what we're basically asking for is about as simple a thing as you can possibly imagine: basically a textarea, a button, and a dynamically generated list (just phrased in the most obtuse way possible, for some reason). He completed this task manually with no Cursor in maybe 5 minutes, 15 if you count the 10 minutes to read through the monstrosity of an AI-generated task. He got points docked by our head of engineering for "Not using AI properly and inefficiently spending time manually coding", which is hilarious because literally nobody who relied on Cursor during the interview got even close to where this guy got.
It has so far been a very predictable disaster, with some extremely talented and promising people sending us emails afterwards to the effect of "This has been the worst interview experience of my life, and I don't care how I did, I'm withdrawing my candidacy. As an official GDPR request, please delete everything and anything you might have on me". Head of engineering is steadfast though, and has called pretty much everyone we've interviewed so far "A bad apple, not a great culture fit because of lack of enthusiasm for AI tooling".
There is absolutely no logic to anything currently happening. It is simply one of the most massive hype bubbles in human history, and VCs, C-levels, middle and upper management are DESPERATE for the marketing hype to be seen as the reality and for the untold billions poured into these systems to be the successes they were initially sold as. There's no humility here, there's no thought being put into any of it, it's extremely cult-like and people are just trying any random idiotic idea that crosses their mind because they have a sycophantic black box that will shower them in infinite praise for every idea they lazily shit out onto the text input area of their favorite LLM tool. They are DESPERATE to fire the expensive, cocky engineers who think they're irreplaceable, and god damn it if they won't burn the entire world down for the chance to be proven right.
He's saying that the productivity of devs is increasing so much, especially during the prototyping phase, that gathering feedback is becoming the bottleneck; hence more PM labor is needed. He didn't say anything about reducing the quantity of dev labor needed.
Not sure why this has drawn silence and attacks - whence the animus to Ng? His high-level assessments seem accurate, he's a reasonable champion of AI, and he speaks credibly based on advising many companies. What am I missing? (He does fall on the side of open models (as input factors): is that the threat?)
He argues that the landscape is changing (at least quarterly), and that services are (best) replaceable (often week-to-week) because models change, but that orchestration is harder to replace, and that there are relatively few orchestration platforms.
So: what platforms are available? Are there other HN posts that assess the current state of AI orchestration?
(What's the AI-orchestration acronym? not PAAS but AIOPAAS? AOP? (since aspect-oriented programming is history))
I'm guessing because this is basically an AI for Dummies overview, while half of HN is deep in the weeds with AI already. Nothing wrong with the talk! Except his focus on "do everything" agents already feels a bit stale as the move seems to be going in the direction of limited agents with a much stronger focus on orchestration of tools and context.
> I'm guessing because this is basically an AI for Dummies
I second this, for the silence at least. I listened to the talk because it was Andrew Ng, and it is good, or at least fun, to listen to talks by famous people, but I did not walk away with any new key insights. Which is fine; most talks are not that.
And he’s been doing it forever, all from the original idea that he could offer a Stanford education on AI for free on the Internet, which is why he created Coursera. The dude is cool.
And between that and the rap group there’s this important movie:
Shaolin and Wu Tang (1983)
> The film is about the rivalry between the Shaolin (East Asian Mahayana) and Wu-Tang (Taoist Religion) martial arts schools. […]
> East Coast hip-hop group Wu-Tang Clan has cited the film as an early inspiration. The film is one of Wu-Tang Clan founder RZA's favorite films of all time. Founders RZA and Ol' Dirty Bastard first saw the film in 1992 in a grindhouse cinema on Manhattan's 42nd Street and would found the group shortly after with GZA. The group would release its debut album Enter the Wu-Tang (36 Chambers), featuring samples from the film's English dub; the album's namesake is an amalgamation of Enter the Dragon (1973), Shaolin and Wu Tang, and The 36th Chamber of Shaolin (1978).

https://en.wikipedia.org/wiki/Shaolin_and_Wu_Tang
Yea haha, the Chinese-to-English gets confusing, because it's not a 1:1 thing, it's N:1, given the number of different Chinese languages, different tones, and semi-malicious US immigration agents who botched the shit out of people's names in the late 19th and early 20th centuries.
Wu and Ng in Mandarin and Cantonese may be the same character. But Wu the common surname and Wu for some other thing (e.g. that mountain) may be different characters entirely.
It gets even more confusing when you throw a third Chinese language in, say Taishanese:
Wu = Ng (typically) for Mandarin and Cantonese et al. But if it's someone who went to America earlier, suddenly it's Woo. But even though they're both Yue Chinese languages, Woo != Woo in Cantonese and Taishanese. For that name, it's Hu (Mandarin) = Wu / Wuh (Cantonese) = Woo (Taishanese, in America). Sometimes. Lol. Sometimes not.
I have never seen a Chinese name that's just two consonants and ZERO vowels. Is Ng some kind of special case? Also interestingly if you put his Chinese name 吳恩達 into Google Translate, you literally get "Andrew Ng"
I couldn't tell you, but what I can contribute to that discussion is that orchestration of AI in its current form tends to focus on one of two approaches: consistent outputs despite the non-deterministic state of LLMs, or consistent inputs that lean into the non-deterministic state of LLMs. The problem with the former (output) is that you cannot guarantee the output of an AI on a consistent basis, so a lot of the "orchestration" of outputs is largely just brute-forcing tokens until you get an answer within the acceptable range; think the glut of recent "Show HN" stuff where folks built a slop-app by having agents bang rocks together until the code worked.
On the input side of things, orchestration is less about AI itself and more about ensuring your data and tooling is consistently and predictably accessible to the AI such that the output is similarly predictable or consistent. If you ask an AI what 2+2 is a hundred different ways, you increase the likelihood of hallucinations; on the other hand, ensuring the agent/bot gets the same prompt with the same data formats and same desired outputs every single time makes it more likely that it'll stay on task and not make shit up.
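To make the input-side idea concrete, here's a minimal sketch of my own (the template and field names are illustrative assumptions, not anything from the talk): every request funnels through one choke point that produces an identical prompt shape every time.

    # Sketch: normalize every request into one fixed prompt shape so the
    # model never sees the same task phrased a hundred different ways.
    import json

    TEMPLATE = (
        "Task: {task}\n"
        "Input (JSON): {data}\n"
        "Reply with JSON matching this schema: {schema}\n"
    )

    def build_prompt(task: str, data: dict, schema: dict) -> str:
        # sort_keys makes identical data serialize identically, so equal
        # requests always produce byte-identical prompts
        return TEMPLATE.format(
            task=task,
            data=json.dumps(data, sort_keys=True),
            schema=json.dumps(schema, sort_keys=True),
        )

    print(build_prompt("add", {"a": 2, "b": 2}, {"result": "number"}))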
My engagement with AI has been more on the input side, since that's scalable with existing tooling and skillsets in the marketplace, unlike the output side, which requires niche expertise in deep learning, machine learning, model training and fine-tuning, etc. In other words, one set of skills is cheaper and more plentiful while also having impacts throughout the organization (because everyone benefits from consistent processes and clean datasets), while the other is incredibly expensive and hard to come by, with minimal impacts elsewhere unless a profound revolution is achieved.
One thing to note is that Dr. Ng gives the game away at the Q&A portion fairly early on: "In the future, the people who are the most powerful are the people who can make computers do exactly what you want it to do." In that context, the current AI slop is antithetical to what he's pitching. Sure, AI can improve speed on execution, prototyping, and rote processes, but the real power remains in the hands of those who can build with precision instead of brute-force. As we continue to hit barriers in the physical capabilities of modern hardware and wrestle with the effects of climate change and/or poor energy policies, efficiency and precision will gradually become more important than speed - at least that's my thinking.
Really valid points. I agree with the bits about "expertise in getting the computer to do what you want" being the way of the future, but he also raises really valid points about people having strong domain knowledge (a la his colleague with extensive art history knowledge being better at Midjourney than him) after saying it's okay to tell people to just let the LLM write code for you and learn to code that way. I am having a hard time with the contradictions; maybe it's me. Not meaning to rag on Dr. Ng, just furthering the conversation. (Which is super interesting to me.)
EDIT: rereading and realizing I think what resonates most is we are in agreement about the antithetical aspects of the talk. I think this is the crux of the issue.
> The problem with the former (output) is that you cannot guarantee the output of an AI on a consistent basis
Do you mean you cannot guarantee the result based on a task request with a random query? Or something else? I was under the impression that LLMs are very deterministic if you provide a fixed seed for the samplers, fixed model weights, and fixed context. In cloud providers you can't guarantee this because of how they implement this (batching unrelated requests together and doing math). Now you can't guarantee the quality of the result from that and changing the seed or context can result in drastically different quality. But maybe you really mean non-deterministic but I'm curious where this non-determinism would come from.
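For what it's worth, here's a minimal sketch of that claim using Hugging Face transformers and the small gpt2 checkpoint (my illustration, single CPU run assumed; GPU kernels can add their own nondeterminism):

    # Fixed weights + fixed context + fixed sampler seed => identical output.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def sample(prompt: str, seed: int) -> str:
        set_seed(seed)  # seeds Python, NumPy, and torch RNGs
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, do_sample=True, max_new_tokens=20,
                             pad_token_id=tok.eos_token_id)
        return tok.decode(out[0], skip_special_tokens=True)

    assert sample("2+2=", seed=0) == sample("2+2=", seed=0)  # repeats exactly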
> I was under the impression that LLMs are very deterministic if you provide a fixed seed for the samplers, fixed model weights, and fixed context.
That's all input-side, though. On the output side, you can essentially give an LLM anxiety by asking the exact same question in different ways, and the machine doesn't understand anymore that you're asking the exact same question.
For instance, take one of these fancy "reasoning" models and ask it variations on 2+2. Try two plus two, 2 plus two, deux plus 2, TwO pLuS 2, etc, and observe its "reasoning" outputs to see the knots it ties itself up in trying to understand why you keep asking the same calculation over and over again. Running an older DeepSeek model locally, the "reasoning" portion continued growing in time and tokens as it struggled to provide context that didn't exist to a simple problem that older/pre-AI models wouldn't bat an eye at and spit out "4".
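If you want to reproduce something like this at home, a rough sketch (assumes a local Ollama server; the model tag is my assumption, not the commenter's exact setup):

    # Send spelling variants of the same trivial question to a local model
    # and compare how long the responses (reasoning preamble included) get.
    import requests

    VARIANTS = ["2+2", "two plus two", "2 plus two", "deux plus 2", "TwO pLuS 2"]

    for prompt in VARIANTS:
        r = requests.post(
            "http://localhost:11434/api/generate",  # Ollama generate endpoint
            json={"model": "deepseek-r1:7b", "prompt": prompt, "stream": False},
            timeout=300,
        )
        print(f"{prompt!r}: {len(r.json()['response'])} chars")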
Trying to wrangle consistent, reproducible outputs from LLMs without guaranteeing consistent inputs is a fool's errand.
Ok yes. I call that robustness of the model as opposed to determinism which to me implies different properties. And yes, I too have been frustrated by the lack of robustness of models to minor variations in input or even using a different seed for the same input.
Pointing out that LLMs are deterministic as long as you lock down everything is like saying an extra-bouncy ball doesn't bounce if you leave it on a flat surface, reduce the temperature to absolute zero, and make sure the surface and the ball are at rest before starting the experiment.
It’s true but irrelevant.
One of the GP’s main points was that even the simplest questions can lead to hundreds of different contexts; they probably already know that you could get different outcomes if you could instead have a fixed context.
The platforms I've seen live on top of Kubernetes, so I'm afraid it is possible: nvidia-docker, all the CUDA libraries and drivers, NCCL, vLLM... Large-scale distributed training and inference are complicated beasties, and the orchestration for them is too.
AOP always felt like a hack. I used it with C++ early on, and it was a preprocessor inserting ("weaving") aspects at function entries/exits. It was mostly useful for logging, but that can be somewhat emulated using C++ constructors/destructors.
Maybe it can be also useful for DbC (Design-by-Contract) when sets of functions/methods have common pre/post-conditions and/or invariants.
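For anyone who hasn't seen the pattern, here's the entry/exit "weaving" idea sketched in Python (a decorator standing in for the C++ preprocessor; my sketch, not the parent's tool):

    # The classic AOP logging aspect: behavior woven around function
    # entry and exit without touching the function body itself.
    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.info("enter %s", fn.__name__)    # aspect: entry
            try:
                return fn(*args, **kwargs)
            finally:
                logging.info("exit %s", fn.__name__)  # aspect: exit
        return wrapper

    @logged
    def transfer(amount: float) -> float:
        return round(amount * 0.99, 2)  # plain business logic, aspect-free

    transfer(100.0)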
My two takeaways are:
1) Having a precise vision of what you want to achieve
2) Being able to control / steer AI towards that vision
Teams that can do both of these things, especially #1, will move much faster. Even if they are wrong, it's better than vague ideas that get applause but not customers.
Yes, this! The observation that being specific rather than general about the problems you want to solve makes for a better startup plan is true for all startups ever, not just ones that use LLMs to solve them. Anecdotal/personal startup experience supports this strongly, and I read enough on here to know that I am not alone…
What's the balance between being specific in a way that's positive and allows you to solve good problems, and not getting pigeonholed and unable to pivot? I wonder if companies that pivot are the norm, or if you just hear of the most popular cases.
Are you a student of Robert Fritz? He says exactly this. The only two things you need are 1) a vision and 2) the ability to see present reality clearly. Beyond this it's all about the skill to nudge a creation towards the vision without being tied to a prescribed process. The art is knowing, at any point during the nudging, when to just use the status quo tool and when to try something new. Based on his teachings I can easily see vibe coding fitting into the creation process. Where it becomes tricky is "seeing current reality clearly". If you have been vibe coding for two weeks and are perhaps a weak programmer, or worse, have no technical ability, can you actually see reality at that point? Probably not. It requires understanding the software's structure. Maybe. It's all up in the air right now. But I truly believe that LLMs make software creation more like creating art.
I have had reservation about Ng from a lot of his past hype, but I thought this talk was extremely practical and tactical. I recommend watching it before passing judgement.
This talk is deceptively simple. The most sage advice, which founders routinely forget: what concrete idea are you going to implement, and why do you think it will work? There has to be a way to invalidate your idea, and as a corollary you must have the focus to collect the data and properly invalidate it.
Not a single word about the overwhelming replacement of humans with AI. Nothing about countless jobs lost. Nothing about ever-increasing competition and rat-race (speaking of software, but it applies to all industries). His rose-tinted view is somewhere between optimism-in-denial and straight-up lunacy. If this is the leader we have been following, this should be a wake-up call.
A good chunk of Ng's work these days seems to be around AI Fund [0], which he explicitly mentions in the first 5 seconds of the video, and which involves co-founding these startups and being in the weeds with the initial development.
Additionally, he does engage pretty closely with the teams behind the content of his deeplearning.ai lectures and does make sure he has a deep understanding of the products these companies are highlighting.
He certainly is a businessman, but that doesn't exclude the possibility that he remains highly knowledgeable about this space.
Except they aren't pay-to-play, unless you consider doing the work for the course the "payment". There's certainly an exchange, since there is a lot of work involved: DLAI provides a team to help design, structure, and polish the course, and the team creating the course does the majority of the work creating the content, but there's no financial exchange.
The DLAI team is also pretty good about ensuring the content covers a topic in general, not a product.
The content is a repackaging of previously existing, publicly available notebooks, docs, and YouTube videos. I wouldn't be surprised if the repackaging was done by AI.
Ng built Baidu's AI department and kicked off their start in various sectors with actual AI system design, so yes, he isn't a failed startup entrepreneur like the vibe startup makers who already want to stop and give advice.
Maybe you can help me hire a vibe coder with 10 years experience?
Right.. He's just a giant, not a midget with a step ladder.
But I do question why anyone who played a significant role in the foundation of the current AI generation would teach an obvious new Zuckerberg generation who will apparently think they are the start of everything if they get a style working in the prompt.
If not for 3 people in 2012, I find it highly unlikely that a venture like OpenAI could have occurred, and without Ng in particular I wouldn't be surprised if the field were missing a few technical pieces as well as the hireable engineers.
He sold courses (great ones!) long before there was AI-gold rush. He's one of the OG players in online education and I think he deserves praise, not blame for that.
I think this is an interesting question, and I’d like to genuinely attempt an answer.
I essentially think this is because people prefer to optimize what they can measure.
It is hard to measure the quality of work. People have subjective opinions, the size of opportunities can differ, etc., making quality hard to pin down. It is much easier to measure the time required for each iteration on a concept. Additionally, I think it is generally believed that a project with more iterations tends to have higher quality than a project with fewer, even putting aside the concern about measuring quality itself. Therefore, we put aside the discussion of quality (which we’d really like to improve) and instead make the claim that we can actually measure (time to do something), with the strong implication that this _also_ will tend to increase quality.
Energy consumption and data protection were a thing and then came AI and all of a sudden it doesn’t matter anymore.
Between all the good things people create with AI I see a lot more useless or even harmful things.
Scams and fake news get better and harder to distinguish to a point where reality doesn’t matter anymore.
I think quality takes time and refinement, which is not something that LLMs have solved very well today. They are merely okay at it, except for very specific targeted refinements (Grammarly, SQL editors).
However, they are excellent at building from 0->1, and the video is suggesting that this is perfect for startups. In the context of startups, faster is better.
In the end, what if technically sharp designers and well-rounded developers actually end up pushing out incompetent managers?
Could be wishful thinking but you never know.
(the comments are especially revealing)
Kinda aligns with what you’re saying
I doubt even 10% have written a custom MCP tool... and probably some don't even know what that means.
ng*, ng-*, or *-ng is typically "Next Generation" in software nomenclature. Or Star Trek (TNG). Alternatively, "ng-" is also from AngularJS.
Ng in Andrew Ng is just his name, like Wu in Chinese.
Similarly, Mei = Mai = Moy
One difference is Mandarin pinyin vs. other romanization schemes.
Like in Mandarin pinyin 子 turns into zi, but a lot of Cantonese transliterations will have it as tsz.
(Notably, not the more "official" Cantonese transliterations, where it would be written as zi or ji)
It is still pretty rare though, yea. I can't even think of others off the top of my head
AOP is very much alive; the people doing AOP have just forgotten what the name is, and many have simply reinvented it poorly.
https://en.wikipedia.org/wiki/Aspect-oriented_programming#Cr...
Additionally, Baidu wasn't a startup when he joined in 2014.
Like with actual mortar, brick by brick?
I want an Andrew Ng Agent.
(I'll see myself out ...)
Why faster and not better with AI?
Most of the time the problem is quality, but everyone only seems eager to ship as fast as possible.
"Move fast and break things" already happened, and now we are adding more speed.
"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
Or, for the more sophisticated:
https://en.wikipedia.org/wiki/The_Physicists
DOGE acts like a startup and we all fear the damage.
I would prefer better startups over faster ones any time.
Now I fear AI will just make the haystack bigger and the needles harder to find.
Same with artists, writers, musicians: they drown in the flood of mass-produced AI content.