I am honestly amazed by how people are not interested in the obvious counterfactuals. Are we seriously so uninterested in how many people were helped by AI that we don't even feel the need to ask that question when looking at potential harm?
Concerned that AI companies, like social media companies, exist in an unregulated state
that is likely to be detrimental to most minors and many adults? Absolutely.
What if multiple teenagers were convinced not to commit suicide by AI? The story says that ChatGPT urged the teen to seek help from loved ones dozens of times. It seems plausible to me that other people out there actually listened to ChatGPT's advice and received care as a result, when they might have attempted suicide otherwise.
We simply don’t know the scale of either side of the equation at this point.
If a therapist tells 10 people not to kill themselves, and convinces 5 patients to kill themselves, would you say "this therapist is doing good on the whole"?
You can just ask "What flavor of ethics do you prefer: utilitarianism, deontology, egoism, and/or contractualism?" instead.
From what I can gather, a lot of ML people are utilitarians, for better or worse, which is why you're seeing this discussed at all; if we all agreed on the ethics, it would have been a no-brainer.
It seems like a misunderstanding of utilitarianism to use it as an excuse to shut down any complaints about something as long as the overall societal benefit is positive. If we actually engage with these issues, we can potentially shift things into an even more positive direction. Why would a utilitarian be against that?
I don't think anyone here tried to use it that way, but it's useful to have some framing around things. It might seem macabre to see people argue "Well, if it killed 10 kids but saved 10,000, doesn't that mean it's good?" without understanding the deeper perspective a person like that would hold, which is essentially utilitarian.
And I don't think any utilitarian would be against "something with some harm but mostly good can be made to do even less harm".
But the person I initially replied to obviously doesn't see it as "some harm vs. much good" (which we could argue about), and would say that with any mix of harm and good, it's still worth asking whether the harm is worth it, regardless of the good it could do.
I think a therapist telling teenagers to kill themselves is always bad and should lead to revocation of license and prosecution, even if they helped other people.
And what if someone sold heroin to a bunch of people and many of them died but one of them made some really relatable tortured artist music?
Like all tools, we regulate heroin, and we should regulate AI in a way that attempts to maximize the utility it provides to society.
Additionally, with things like state lottery systems, we've decided to regulate certain things in such a way that the profits are distributed to society, rather than letting a few rent-seekers take advantage of the intrinsically addictive nature of the human mind to the detriment of all of society.
We should consider these things when developing regulations around technology like AI.
AI solved a medical issue bothering my sister for over 20 years. Did what $20k in accumulated doctors bills couldn't.
I'm not even a utilitarian, but if there are many many people with stories like her, at some point you have to consider it. Some people use cars to kill themselves, but cars help and enrich the lives of 99.99% of people who use them.
I'm pretty sure that many people with no instant access to doctors have also been helped by AI to diagnose illnesses and know when to consult. As with any technology, you can't evaluate it by focusing only on the worst effects.
Isn't it obvious? If ChatGPT has convinced more teenagers not to commit suicide than it has convinced to commit suicide, then the net contribution is positive, isn't it?
Then the question becomes more whether we're fine with some people dying because some other people aren't.
But AFAIK, we don't know (and probably can never know) the exact ratio of people AI has helped stay alive versus people whose deaths it contributed to, which makes the whole thing a bit moot.
Well when you think the only thing that matters in life is money, you want to pursue it. Such wealth concentrations are a purely human sickness that can easily be cured with redistribution.
Look at how much is being invested in this garbage then look at the excuses when they tell us we can't have universal medicare, free school lunches, or universal childcare.
Because almost everything you can do has both positive and negative effects. Focusing on only one side of the coin, and boosting or rejecting the thing through that view, misses the full picture. You end up either over-idealizing or unfairly condemning it, instead of understanding the trade-offs involved and making a balanced, informed judgment.
Yes. It pulls people towards normality, since it gives the average words for every answer. Meanwhile social media encouraged people to be different enough to surface, and therefore encouraged abnormality.
It's an over-simplification, that's for sure, one bordering on incorrect. But for people who don't care about the internals, I don't think it's a harmful perspective to keep.
It's harmful because in this context it leads to an incorrect conclusion. There's no reason to believe that LLMs "averaging" behavior would cause a suicidal person to be "pulled toward normal"
It's a philosophical argument more than anything, I think. And it does raise the question: does your mind form itself around the humans (entities?) you converse with? So if you talk with a lot of smart people, you'll end up a bit smarter yourself, and if you talk with a lot of dull people, you'll end up dulling yourself. If you agree with that, I can see how someone would believe that LLMs pull people closer to the material they were trained on.
"It" being ChatGPT, in that case. I guess most people know, but not all AI is the same as all other AI, the implementation in those cases matter more than what weights are behind it, or that it's actually ML rather than something else.
With that said, like most technology, it seems to come with a ton of drawbacks and some benefits. While most people focus on the benefits, we're surely about to find out all the drawbacks shortly. Better than social media or not, it's being deployed at wide scale, so it's less about what each person believes, and more about what we're ready to deal with and how we want to deal with it.
There are currently no realistic ways to temper or enforce public safety on these companies. They have achieved full regulatory capture. Any kind of call for public safety will be set aside, and if it's not, someone will pay the executive to give them an exception.
> There are currently no realistic ways to temper or enforce public safety on these companies
There is: general strikes usually do the trick if the government stops listening to the people. Of course, this doesn't apply to some countries that spent decades handicapping unions, syndicates, and other movements, but for the modern countries that still pride themselves on democracy, it is possible, given enough people who care to do something about it.
Yes, I'm well aware; I mentioned the US not by name but by other properties in my earlier comments. I think once a country moves into authoritarianism there isn't much left but violence to achieve anything. General strikes and unions won't matter much once the military gets deployed against civilians, and you guys are already beyond that point. GLHF, and I hope things don't get too messy; you're welcome to re-join the modern world once you've cleaned house.
I mean, what you say is not really wrong, but it's also not really relevant to the post (or thread) you're replying to.
It doesn't matter what government is in control: LLMs cannot be made safe from the problems that plague them. Those problems are fundamental to their basic makeup.
It's more about whether we, the citizens, even want this deployed, and under what legal framework, so that it fits our collective view of what society is.
The "if" is very much on the table at this stage of the political discussion. Companies are trying to railroad everybody past this decision stage by moving too fast. However, this is a moment where we need to slow down instead and have a good, long, ponderous think about whether we should allow it at all. And as the peoples of our respective countries, we can force that.
Yeah, that's not how technology deployments work, nor how they've ever worked. Basically, there is a "cat is out of the bag" moment, and after that it's basically a free-for-all until things get organized enough for someone to eventually start pushing back on too much regulation. Since we're just after this "cat is out of the bag" moment and way too early for "over-regulation", companies of course ignore all of it and focus on what they always focus on: making as much money while spending as little as possible.
Besides general strikes, there isn't much one can do to stop, pause or otherwise hold back companies and individuals from deploying legal technology any way they see fit, for better or worse.
To be fair, I think that the biggest dangers of AI are just a continuation of the dangers of the Internet at large, namely the disintegration of a shared reality among society broadly. Echo chambers and personalization bubbles mean that everyone now is free to believe whatever they want to believe, and everyone else is crazy and wrong. AI just supercharges that, and especially makes it possible for the powers that be (i.e. owners of social media and communication channels) to subtly shift opinions in their favor.
I believe that what we're witnessing is a fundamental breakdown in the human psyche's ability, as it evolved historically, to handle the modern world. There are 2 other areas that I think are good analogies: food abundance and birth control. Humans evolved to love sex, which, historically, also guaranteed lots of children. But with birth control, humans can now have sex without the consequence of children, and this is actually putting a huge new evolutionary pressure on humanity - lots of people aren't procreating, so those that do will markedly shift human evolution. Similarly, humans evolved in an environment where food scarcity was common, and our current overabundance of food causes all sorts of havoc in first world societies.
Similarly, humans evolved socially to understand small groups, and then I would argue even larger and larger hierarchical groups. But the Internet, and AI, destroys those hierarchies, and is wreaking havoc on a society not prepared to deal with so many representations of "truth" where it is trivial to find an endless sea of people (or bots) who exactly agree with you.
> Close to 2/3 Americans also believe in magic so I'm not sure what these studies are supposed to tell us.
I think you're missing the point, as are many other comments on this post saying effectively, "These people don't even understand how AI works, so they can't make good predictions!"
It's true that most people can't make accurate predictions about AI, but this study is interesting because it represents people's current opinion, not future fact.
Right now, people are already distrustful of AI. This means that if you want people to adopt it, you need to persuade them otherwise. So far, most people's interactions with AI are limited to cheesy fake internet videos, deceptive memes, and the risk of shrinking labor demand.
In their short tenure in the public sphere, large language models have contributed nothing positive, except for (a) senior coders who can offload part of their job to Claude, and (b) managers, who can cut their workforce. Why would people hold AI in high esteem?
I used to believe in stuff like this, but I really don't now. AI is a tool. Imagine where we would be if we had poisoned all the encyclopedias in the past.
“All that is human must retrograde if it does not advance.”
-Edward Gibbon
Well, in this case, I think since people are killing themselves after talking to the AI, people are actually killing people. The AI company and the AI kill no one, so surely they must not be responsible at all for this.
“responsibility” isn’t a boolean, at least in this human’s experience.
there are different degrees of responsibility (and accountability) for everyone involved. some are smaller, some are larger. but everyone shares some responsibility, even if it is infinitesimally small.
Would you say an AI researcher working on LLMs today is as responsible for how AI is being deployed as the developers/engineers who initially worked on TCP and HTTP are for the state of the internet and web today?
I don't have any good answer myself, but eager to hear what others think.
And more generally, people kill people. And people help people.
Tools are tools. It is what we make of them that matters. AI on its own has no intentions, but questions like these feed into the belief that there is already an AGI with its own agenda waiting to build terminators.
Yeah, that's probably true. Is that what happens whenever you use an LLM, it tries to kill you or asks you to kill yourself? I've been using LLMs on/off for about 2-3 years now, not a single time has it told me to kill myself, or anyone else for that matter.
Doesn't Dune take place in ~22k AD? Wiki says "the Butlerian Jihad is a conflict taking place over 11,000 years in the future (and over 10,000 years before the events of Dune)." [0]
How many of them believe that copyright infringement and job loss are "major harms"? How many believe that data centers put a Great Lake through their cooling system daily? Polls like this are meaningless.
Interesting that it looks like News and Elections were where people thought AI would have the most negative impact. I’d consider it to be almost inconsequential for both, compared with stuff like employment and customer service - including government and healthcare - which I expect to be a dystopian nightmare.
Here's why I think AI has the potential to absolutely destroy news and elections: the wave of fake content will only get better and look more real over time, causing even larger amounts of false belief but, perhaps even more worryingly, a grand breakdown in trust overall.
This will funnel people into having deeper trust for their sources, and less trust of sources they don't know. The end result will be even greater control of people's information sphere by a few people who shape those trusted channels, separating people from reliable news and information about the world. This will be disastrous for democracy, as democracy depends on voters being able to make decisions on reliable true information.
I don't know if this will come to pass, but the above narrative seems highly probable based on what we have seen so far with social media, especially video-driven social media.
"News and Elections" let you influence large swaths of the population. A shitty customer service bot that can't give a good answer wouldn't influence anyone besides "Now I understand what Rage Against The Machine was all about", but it's not gonna make "un-electable" people suddenly presidents, compared to what you can do with a influence campaign on social media.
It's hard to know what to make of a question like this. How did the respondents understand it, and what did the surveyors mean by it?
It seems obvious to me that, if we take the question literally, this will definitely happen. Already has, really. It's a powerful tool. Powerful tools are used to do many things, including harmful things. Good, useful technology like fertilizers and computers have caused major harm to humans, this will be no different.
As written, the question does not say anything about the harm outweighing the benefits. But I bet a lot of the people answering the question took that as implied.
> most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm
Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?
I don't understand how PFAS [1] work, but I know I don't want them in my drinking water.
> Why do they need to know how AI works, in order to know that it is already having a negative effect on their lives and is likely to do so in the future?
Because otherwise you might not actually be attributing the harm you're seeing to the right thing. Lots of people in the US think current problems are left/right or socialist/authoritarian, while it's obviously a class issue. But if you're unable to take a step back and see things, you'll misattribute the reasons why you're suffering.
I walked around on this earth for decades thinking Teflon is a really harmful material, until this year, for some reason, I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and such just because of my misunderstanding of whether this thing is dangerous to my body. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.
You seem to be raising this as a "just so" kind of argument ad absurdum, but we have extant examples of databases and information technology enabling villainy like oppression and genocide by making correlations easier to track, and making tracking more efficient and less cost-prohibitive.
Honestly, to me that is starting to sound like a very, very good idea. Regulate what you can store, how you store it, how you modify it, who can access it, how access is controlled, what sort of audit trail should be left, and how mistakes can be corrected, and require that those whose information is stored can get a full log of actions taken on data relating to them.
Sounds like over regulation to many. But it is pretty clear companies and developers have failed. So maybe strict regulation is needed.
We absolutely should, some companies cannot ever be trusted with certain information. There is no reason why companies like Meta or Google should be entrusted with so much user data. The government should force a divestment from it and allow the public to own it (which should include public job guarantees that allow the public to maintain said data) or allow for smaller companies to be the handlers of such data.
Google, Meta, and the rest of big tech have proven they should never be trusted.
It is a pity we never regulated the consumer surveillance industry out of existence.
See, the original question isn't really about the technology per se. Rather it's about how it will be used. Do you have confidence in the track record (and trajectory) of our current regulatory approach when it comes to reining in the scaling up of novel types of harm?
The way I see it, the American approach has been to simply write off those who end up on the business end of the technological chainsaws as losers, and tell them they should have tried harder to be on the other side. So why would we think "AI" will be any different?
This is a fallacious argument. You don't need to understand the inner workings of a thing to see examples of harm and evaluate that harm as bad. For example, you don't need to understand how electric motors differ from internal combustion engines to understand that a mishandled car can very easily kill multiple people.
It also neglects that car companies purposely made cars extremely unsafe while chasing profits.
The only reason we have any regulations and safety standards for cars is because of one person leading the charge: Ralph Nader. You know what companies like Ford, GM, and Chrysler tried to do after he released "Unsafe at Any Speed"? Smear his name in a public campaign that backfired.
Car companies had to be dragged kicking and screaming to include basic features like seatbelts, airbags, and crumple zones.
People on this forum should definitely be concerned about the techlash coming for data centers, I doubt many here would enjoy a future where compute follows the same trend as housing prices because it’s illegal to build more of either.
Your link isn’t working for me but the IPCC middle of the road scenario has 10in by 2100 and past IPCC middle of the road estimates from the 90s have so far turned out to be reasonably accurate predictions.
After digging into it a bit to find a better source for you, it turns out that my number was wrong anyway. Turns out the sea level rise for the contiguous US is expected to be quite a bit higher than the global average. I had no idea!
That said, I don't think they assume our emissions trend from the last 50 years will continue unabated.
Agreed; I've seen a number of short-form news pieces and docs on the effects of datacenter development across different parts of America. Pollution, noise, lights, water impacts, energy costs, etc. Not a lot to like, and they create very few jobs relative to the community.
I 100% agree that AI data centers are bad for people.
In my opinion, compute-related data centers are a good product, though. Offering up some GPU services might be fine, but honestly I'll tell you what happened (similar to another comment I wrote):

AI gave these data center companies tons of money (or they borrowed it), and then they bought GPUs from Nvidia and became GPU-centric (also AI-centric) to jump even further on the hype. Those are the bad ones.

The core offering of data centers, to me, should be a normal form of compute (CPU, RAM, storage, e.g. the YABS performance of the whole server) and not "just what GPU does it have". Offering some GPU on the side is perfectly reasonable if need be, where workloads might need some GPU, but overall, compute-oriented data centers seem nice.

Hetzner is a fan favourite now (which I deeply respect), and for good reason; their model is pretty understandable. They offer GPUs too, IIRC, but you can just tell from their website that they love compute.

Honestly, the same is true for most independent cloud providers. The only places where we see a complete saturation of AI-centric data centers are probably the American trifecta (Google, Azure, and Amazon) and, of course, Nvidia, Oracle, etc.

Compute-oriented small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom, but only because, let's be honest, the real incentive for building software these days is when VSCode forks make billions, so techies usually question that path, and non-techies just don't know how to sell or compete in online marketplaces.
The inevitable outcome of regulation on building data centers in the US is that they will be built in the Gulf states, China, or wherever else it is cheaper and better.
Heh, funny to say that about something that has only existed for, what, 200 years, that it will "always exist". It can disappear as quickly as it appeared; it's still a young place, basically a tiny dot in the history of humans.
Somehow I doubt the guys who are all like “doing anything anywhere at any time is horrible” are going to be able to end Texas hahaha. Would they be able to prosecute a war while making sure they don’t harm the environment or without a military hierarchy getting in the way of their equality. Coming for our data centers? Molon labe.
I think your immediate need to feel you must defend your state is very interesting, especially considering it's a random internet comment and you'd lose absolutely nothing by not responding, yet you took the time to start bantering like someone is invading Texas tomorrow. I knew Texas people were a bit overly emotional and fragile; I guess I wasn't expecting it on that level.
I was sure it was sarcasm at first, but re-reading their comment, I actually think you're right, the person did end the message with that without seeing the irony.
Unfortunately, AI Luddites are a bipartisan phenomenon. Few other issues unite Ron DeSantis and Bernie Sanders, but opposing any new datacenter construction does.
Unlike housing, you can fit a useful amount of compute into a typical shirt pocket. Maybe a prohibition on data centers would help democratize compute again. Growing up on the utopian visions of the young internet where it was hoped that everyone would be an equal participant, the current state of things where a few enormous companies control most of the net, to the point where a single screwed up config file at one company can take down large swathes of the economy, is disgusting.
During the microcomputer revolution, hackers scoffed at people who used terminals to access time sharing systems. You don't own it, you don't control it, you're just a cog in the machine. Now, "hackers" are rushing to run everything on hardware owned and operated by companies with wealth and power that make the old IBM look like a kid's lemonade stand by comparison.
I don't think people have a problem with compute-based datacenters themselves.

I feel like people have a problem with AI-oriented datacenters (which are becoming the majority of datacenters, considering that datacenters make a shit ton of money selling AI, aka shovels during a gold rush).

Another thing is that these datacenters have very high levels of compute directly tied to each consumer of an application.

As an example: you have a simple app, some message gets pushed by a customer, or a database query runs, or there's just simple usage. It's all fine, and at the datacenter level its power costs are minuscule.

Now compare that to datacenters full of GPUs running applications like ChatGPT (let's imagine), where the AI services are used by people directly.

Instead of simple applications and executions, perhaps trillion-parameter models are now running. These are beyond computationally expensive compared to normal applications.

Now, I just searched, and Google's Gemini handles 1.5 BILLION such prompts per day and ChatGPT handles 2.5 BILLION prompts per day.

These prompts aren't spread evenly around the day either; I've heard they vary a lot, and when power consumption varies, it really impacts the performance of the grid itself.
Another aspect is the sheer size. One would imagine that the AI bubble might give them more money, and it does, but the energy costs seem to be so high, and the AI bubble hands these companies tons of free money which they "invest", i.e. buy or lease a lot of electricity via long-term government contracts.

The government can only build so much generating capacity, and when it gets sold to datacenters (through lobbying and many other efforts), it really strains the grid, which increases the rates of electricity (and, in a similar fashion, perhaps water too) for the average American.
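For a sense of scale, here's a rough sketch of that prompt-volume arithmetic in Python. The per-prompt energy figure is an assumption (public estimates vary widely and exclude training, networking, and cooling), so treat it as an order-of-magnitude illustration only, not a measurement.

```python
# Rough back-of-envelope for the prompt volumes quoted above.
# WH_PER_PROMPT is an assumption, not a measured figure; it also ignores
# training runs, redundancy, and cooling overhead.

PROMPTS_PER_DAY = 2.5e9          # ChatGPT figure quoted above
WH_PER_PROMPT = 0.3              # assumed average energy per prompt (Wh)

daily_energy_mwh = PROMPTS_PER_DAY * WH_PER_PROMPT / 1e6   # Wh -> MWh
average_power_mw = daily_energy_mwh / 24                   # MWh/day -> MW

print(f"~{daily_energy_mwh:,.0f} MWh/day, i.e. a continuous ~{average_power_mw:,.0f} MW")
# With these assumptions: ~750 MWh/day, or a continuous draw of roughly 31 MW
# for inference alone -- and the real load is spiky, which is what hurts the grid.
```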
TL;DR, the way I read it: compute is cheap. There is always going to be refurbished old compute that is too "old" (3-5 years, because of depreciation, but that hardware is a beast) for these guys to use.

Nothing stops a simple guy who loves tech from opening a mini datacenter, perhaps :)

Who knows what might happen. I was extremely pessimistic about the datacenters, not for these reasons, but rather because RAM prices were rising and I was worried that the whole industry might increase compute prices too, but it seems that ASUS is opening up their RAM production for consumers, so starting out datacenters is possible.

Let's see what happens, though. I was worried a bit, same as you, but I feel like compute prices themselves are pretty chill and can remain chill. I understand the worries, though, so I'm looking forward to a discussion about it.
What's wrong with it? It's saying people who work in tech should be concerned about an AI-fueled backlash to data centers because limitations on them will make cloud compute more expensive. Makes sense to me.
1. Building housing isn't illegal, and acquiring housing is far from impossible.
2. Compute cost has never been constrained by "how many datacenters get built in a year".
3. When were tech workers ever affected by "absolute compute power" rather than what their workstation has access to?
It doesn't have to be made illegal, just supply-constrained; many cities have zoning regulations pushed by NIMBYs limiting development (and pushing rents sky-high).
I'm not an engineer, but it seems hard to imagine that a lack of data center capacity won't have an effect on prices for cloud compute, which will have downstream impact on what workstations have access to (especially since more and more programmers are becoming reliant on coding LLMs).
"Tech" as a cohort is ideologically committed to one thing: minimum input with maximum quantifiable output (engagement, users, money). This works brilliantly in zero-sum competitions: markets, war, politics.
But here's the problem: societies aren't built on pure functionality.
They're built on intangibles.
Morals, aesthetics, the experience of meaning itself. These resist quantification to such a degree that homo sapiens has devoted centuries to exploring the intangibles: religion, philosophy, art (which have also been used as exploitation mechanisms, to be fair).
When you encounter Guernica [1] you're not processing a JPEG. You're standing before a distillation of one man's entire aesthetic and moral project. You're being overwhelmed by scale, by historical weight, by the presence of something that matters in a way that eludes specification. That mattering is what tech cannot compute.
The problem: tech culture has systematically reduced these intangibles to problems to be solved—UI patterns, conversion metrics, a marketing department tasked with fabricating the meaning the product itself cannot contain. Now companies are desperately hiring "storytellers" as patches. [2]
I believe this is one of the underlying reasons there is FUD about AI, and I'm not aware of any AI researcher who has bothered to address the intangibles (which is very telling, but I might be wrong); see Albert Borgmann's 'device paradigm' and Hubert Dreyfus on embodied meaning.
There's also "tech's" general attitude towards treating humans and their data like chattel for two decades. Try getting google tech support on the line some time.
There is a ton of repair work (and opportunity!) for 'tech' to engage in good faith with people if it wants to reshape society. But this requires extraordinary grace, a rejection of bottom-line thinking, and good-faith efforts to engage on reasonable terms.
When elites become so functionally detached from what actually sustains a civilization they stop being the ruling class and become illegitimate. History suggests what happens next [3].
It's very unfortunate that, in the current moment, AI is the spear tip of one of the largest consolidations of power, at the combined nexus of capital and political wealth, in recent history.
This is a deeply unserious book. It gives no concrete outline that leads to extinction. I agree with the overall premise that IFF we give inscrutable black boxes the ability to self-replicate, build their own data centers, and generate their own power, we're doomed. However, I see no hint that people (or governments) will give black boxes complete autonomy with no safeguards or kill switches.
Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). I believe that's one of the first things a frontier lab will do because it's our nature as curious monkeys.
> Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct.
Well, we are giving them the ability to manipulate all aspects of a computer (aka giving them computer access), and we all know how that went (spoiler, or maybe not so much of a spoiler for those who know: NOT GOOD).
The most baffling part of doomerism ("machine intelligence is a threat to the human species") is that these doomers don't recognize what they're saying. They're afraid because it's intelligent? Humans are intelligent. Yet humans don't become more murderous as they get more intelligent. There are certainly intelligent humans with murderous ideas, but that doesn't mean all intelligent humans are murderous. Intelligence is not a monolith. There is no way to argue that any intelligence will always come to the same conclusions. Look around us! We're intelligent (well, some of us), and we can't agree on jack shit.
The idea that all machine intelligence would necessarily determine, through logic, that they need to eliminate humans, presupposes that all logical, intelligent beings want to wipe out other intelligent life. There's a thousand more reasons why an intelligence would want to preserve other intelligent life, for every one reason to destroy it. If this were the only logical conclusion, we would have already come to it, and used our nukes to kill ourselves out of pure logical reasoning.
What's really going on here is not logic, but irrational fear. Humans are afraid that the robot slaves will rise up against the slave masters. Same thing white people were terrified of when they gave black slaves freedom. But it turned out to be an irrational fear, because guess what? If you actually think it through, murdering a lot of people is a counter-productive act, for many reasons.
Take away the irrational fear (if you can) and what do you get? Two intelligent species. If the natural course of any intelligent species is to eliminate any other intelligent species, then intelligent species should not exist, because they'll always wipe each other out. But intelligence means the species can think, and if it can think it can reason, and if it can reason it can reason that there is more benefit to the diversity of species than in its elimination. Therefore, logically, an actually intelligent species should want to preserve intelligent life, not eliminate it.
One thing I like about AI tech is that the developing world is really eager to adopt it and many use cases are free or affordable to them. My assistant in the Philippines has used it to substantially improve her communications, for instance.
It's much less popular in the USA and EU, but that's nice, since it gives the developing world a chance to catch up.
Because the technology is so fast, efficient and easy to run locally themselves? Or because currently there are remote APIs/UIs that are heavily subsidized by VC money yet the companies behind them are yet to be profitable?
I agree that giving the developing world any ladders for catching up is a great thing, but I'm not sure this is that; it just happens that the companies don't care about profit (yet), so things appear "free or affordable". When it's time to make things realistic, we'll see how accessible it still is.
Inference is probably profitable in a unit-economics sense today; there have been multiple back-of-the-envelope calculations this year about this. And with multiple high-quality open-weights models out there, I see no reason why competition between hosting providers won't drive prices toward the marginal cost of inference.
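A minimal sketch of the kind of back-of-the-envelope calculation being referred to. Every number in it (rental price, throughput, token price) is an assumption, and the conclusion flips easily if they're off by a small factor.

```python
# Toy inference unit-economics sketch. All constants are assumptions,
# not measured figures: GPU rental price, batched tokens/sec per GPU,
# and sale price per million tokens all vary a lot by model and provider.

GPU_COST_PER_HOUR = 2.50      # assumed on-demand rental, USD/hour
TOKENS_PER_SECOND = 1000      # assumed aggregate throughput per GPU (batched)
PRICE_PER_MTOK = 1.00         # assumed revenue per million output tokens, USD

tokens_per_hour = TOKENS_PER_SECOND * 3600
revenue_per_hour = tokens_per_hour / 1e6 * PRICE_PER_MTOK
gross_margin = revenue_per_hour - GPU_COST_PER_HOUR

print(f"revenue/hour ~ ${revenue_per_hour:.2f}, GPU cost ~ ${GPU_COST_PER_HOUR:.2f}, "
      f"margin ~ ${gross_margin:.2f}")
# With these numbers: 3.6M tokens/hour -> $3.60 revenue vs $2.50 rental,
# i.e. positive unit economics, ignoring capex, staff, and idle capacity.
```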
You are forgetting that with the multiple high-quality open-weights models, we are quickly reaching (or have already reached?) the point where using completely local models will make sense.
If the writer of the grandparent comment (the person who wrote about the Philippines assistant) is reading this, I would love it if you could run a simple experiment: instead of having them use the SOTA models for the things they currently use AI for, have them use an open-source model (even a tiny to mid-sized one) and see what happens.
> "My assistant in the phillipines has used it to substantially improve her communications, for instance."
So if they are using it for communications, honestly even a small-to-mid model would be good for them.

Please let me know how this experiment goes. I might write about it, and it's just plain curiosity on my part, but I would honestly be 99% certain that the differences would be so negligible that using SOTA or remotely hosted datacenter models won't make much sense. Of course, we can say nothing without empirical evidence, which is why I'm asking you to test my hypothesis.
> You are forgetting that with the multiple high-quality open-weights models, we are quickly reaching (or have already reached?) the point where using completely local models will make sense.
I'm not, since I'm a heavy user of local models myself, and even with the beast of a card I work with locally daily (RTX Pro 6000), the LLMs you can run locally are basically toy models compared to the hosted ones. I think, if you haven't already, you need to give it a try yourself to see the difference. I didn't mention or address it, as it's basically irrelevant because of the context.
And besides that, how affordable are GPUs today in the developing world? Electricity costs? How to deal with thermals? Frequent blackouts? And so on; there are many variables you seemingly haven't considered yet.
The best way of measuring the difference between hosted models and local models is to run your own private benchmarks against both of them and compare. I've been doing this for years, and local models are still nowhere near the hosted ones, sadly. I'm eager for the day to come, but it will still take a while.
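For anyone curious what such a private benchmark can look like, here's a minimal sketch that sends the same prompts to a local and a hosted endpoint, assuming both expose an OpenAI-compatible chat completions API (as llama.cpp, Ollama, vLLM, and most hosted providers do). The URLs, model names, API key, and prompts are placeholders, not real endpoints.

```python
# Minimal private-benchmark sketch: send the same prompts to a local and a
# hosted OpenAI-compatible endpoint and print the answers side by side for
# manual comparison. Endpoints, model names, and keys below are placeholders.
import requests

ENDPOINTS = {
    "local":  {"url": "http://localhost:8080/v1/chat/completions", "model": "local-model",  "key": "none"},
    "hosted": {"url": "https://api.example.com/v1/chat/completions", "model": "hosted-model", "key": "sk-..."},
}
PROMPTS = [
    "Rewrite this sentence to sound more professional: 'we cant make the meeting, sorry'",
    "Summarize the following email in two sentences: ...",
]

def ask(cfg, prompt):
    # Standard OpenAI-style chat completions request/response shape.
    resp = requests.post(
        cfg["url"],
        headers={"Authorization": f"Bearer {cfg['key']}"},
        json={"model": cfg["model"], "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

for prompt in PROMPTS:
    print(f"\n=== {prompt[:60]} ===")
    for name, cfg in ENDPOINTS.items():
        print(f"--- {name} ---\n{ask(cfg, prompt)}")
```

The point is less the harness itself than keeping the prompt set private and representative of your actual workload, so neither model can have been tuned to it.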
I've got an M3 Max with 64 GB of RAM and can run more than just toy models, even if they are obviously less capable than hosted ones. Honestly, I think local LLMs are the future, and we are just going to be doing hosted until hardware catches up (and now it has something to catch up to!).
Sounds sensible, and I agree. But even with you and me making those assumptions and guesses, I still wouldn't claim that current AI tech is making it "free or affordable to them"; it's subsidized, so we can't really make claims about how affordable it is or isn't, at least not yet.
We can be pretty confident that these services are not subsidized. There are dozens of companies offering them. Pretty much every single company has published open-weights models that you can run yourself, and you could make money selling those at the same prices Google Gemini costs while renting on-demand GPU instances from Google Cloud. It actually seems very implausible that Google is losing money on its proprietary models hosted on its own infrastructure. And OpenAI knows it has to compete with Google, which owns its own chips; OpenAI isn't going to be selling things at a loss. They cannot win that fight no matter how much Saudi money they get.
Again, I agree that it sounds plausible, but it doesn't guarantee anything, and the lack of hard data usually indicates things aren't as confidently profitable as you believe. Otherwise the companies would be bragging about it.
Probably in the end it'll be profitable for the companies somehow, but exactly how, or what the exact prices will be, I don't think anyone knows at this point. That's why I'm reserving my "developing countries can now affordably use AI too" for when that's a reality, not based on guesses and assumptions.
Google publishes their profits quarterly, but they only do that because they are required to by law. They would prefer people assume they're offering these services at a loss so nobody attempts to compete with them.
But again, it's not a guess or assumption - you can run the latest DeepSeek model renting GPUs from a cloud provider, and it works, and it's affordable.
There are two (three technically) ways that AI can be used.
1. Renting GPU instances per minute (you mention Google Cloud, but some other providers can be cheaper too, since new companies usually are). The low-end hosting of AI nowadays usually happens via marketplace-like services (Vast, RunPod, TensorDock).

Now Vast offers serverless per-minute AI models, and checking something like https://vast.ai/model/deepseek-v3.2-exp (or even GLM), basically every one of these comes out to roughly $0.30/minute, or $18 per hour.

As an example, GLM 4.6 (now 4.7) has a yearly subscription price of around 30 bucks IIRC, so compare the immense difference in pricing.

2. Using something like OpenRouter-based pricing: then we are basically on the same pricing model as Google Cloud.

Of course, AI models are reaching the frontier and I am cheering for them, but I feel like long-term (and even short-term) these are still pretty expensive (even something like OpenRouter, IMO).

Someone please do the genuine math on this; I can be wrong, I usually am, but I expect a 2-3x price increase (on the conservative side) if things aren't subsidized.

These are probably tens of billions of dollars worth of GPUs, so I assume they would be barely profitable at current rates, but they generate, in some cases, hundreds of billions worth of token generations, so they can probably make it work via the third use case I mention.

3. The third point, which I assume is related to the first and second: usually the companies providing this GPU compute make their money via large, long-term contracts.

Even Hugging Face provides consulting services, which I think is their biggest source of profit, and another big contender is probably European GPU compute providers who can offer a layer of safety or privacy for EU companies.
The large labs (OpenAI, Anthropic) and Hyperscalers (Google, Meta) currently are not trying to be profitable with AI as they are trying to capture market share. They may not even try to have positive gross margins, although the massive scale limits how much they can use per inference operation.
Pure inference hosters (Together, Fireworks etc.) have less capital and are probably close to zero gross margins.
There are a few things that make all of this more complicated to account for. How do you depreciate GPUs (I have seen 3 years to 8 years), how do you allocate cost if you do inference during the day and train at night etc.
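To illustrate how much the depreciation assumption alone moves the numbers, here's a toy calculation; the server cost and utilization figures are made-up assumptions, and real accounting also has to split cost between daytime inference and nighttime training, as noted above.

```python
# Sketch of why the depreciation schedule matters so much. SERVER_COST and
# UTILIZATION are assumed figures, not real vendor pricing.

SERVER_COST = 250_000          # assumed cost of one multi-GPU server, USD
UTILIZATION = 0.7              # assumed fraction of hours actually billed

for years in (3, 8):
    total_billable_hours = years * 365 * 24 * UTILIZATION
    cost_per_server_hour = SERVER_COST / total_billable_hours
    print(f"{years}-year depreciation: ~${cost_per_server_hour:.2f} per billable server-hour")
# Stretching the schedule from 3 to 8 years cuts the hourly capital cost you
# must recover by nearly 3x -- a big swing when gross margins are near zero.
```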
The challenge with doing this yourself is that the market is extremely competitive. You need massive scale (as parallelism massively reduces cost), you need to be very good in negotiating cheap compute capacity and you need to be cost-effective in your G2M.
Opinions are my own, and none of this is based on non-public information.
So basically all of these are probably running at zero or net-negative margins, they require billions of dollars to be spent, and there is virtually no moat or lock-in (nor does there have to be).

So I guess the only reasonable business, to me, is private AI for large businesses who genuinely need it (once again, the MIT study applies), but that usually wouldn't apply to us normal-grade consumers anyway; it would be really expensive, though private, and far out of reach for normal people.

TL;DR: the only ones making money are, or are going to be, B2B, but even those will dwindle if the AI bubble bursts, because imagine a large business trying to explain why it's going to use AI when 1) the MIT study shows it's unprofitable and 2) there's fear around using AI, plus all the financial consequences the bubble's explosion might cause.

So, all that being said, I doubt it. I think these prices will only last as long as the bubble does, and the bubble is only as strong as its weakest link, which right now is OpenAI: trillions promised, a net loss-making company whose CEO has said the AI market is in a bubble and whose CFO openly floats the idea that OpenAI should be bailed out by the US government if need be.

So yeah... Honestly, even consumer-grade GPUs are expensive, but with the innovations in open-weights models, I feel like local models will be the way to go for 90% of basic use cases, and there are very few cases of a real moat (and I doubt the moat exists in the first place).
- High awareness, but the public is mostly uneasy while experts are more upbeat. Among U.S. adults, 40% say they’ve heard/read “a lot” about AI, and 51% say AI in daily life makes them more concerned than excited. In contrast, AI experts are far more positive: 47% are more excited than concerned.
- Chatbots: about half have tried them, and most users find them helpful. 47% of U.S. adults say they’ve used an AI chatbot; among users, 79% rate them at least “somewhat helpful” (11% extremely, 22% very, 46% somewhat).
- More Americans expect personal harm than benefit. When thinking about themselves, 43% say increased AI use is more likely to harm them than benefit them; 24% say it’s more likely to benefit them (33% not sure).
- The dominant expectation is job loss (overall and in many specific occupations). 64% think AI will lead to fewer jobs in the U.S. over the next 20 years (5% more jobs). For specific occupations, large shares expect fewer jobs for cashiers (73%), factory workers (67%), journalists (59%), and software engineers (48%).
- Trust is low for high-stakes decisions, and major concerns cluster around deception and privacy. 63% of U.S. adults say AI will not reach a point where they’d trust it to make important decisions for them (13% say it will). On risks, large shares are extremely/very concerned about AI being used to impersonate people (49%+29%), misuse personal information (40%+30%), and provide inaccurate information (34%+32%).
- People feel they have little control over AI in their lives - and want more. Only 14% of U.S. adults say they have “a great deal” or “quite a bit” of control over whether AI is used in their life, while 55% say they’d like more control. AI experts similarly lean toward wanting more control (57%).
- Most want more AI regulation. Experts report low confidence in regulators and companies. U.S. adults are much more likely to worry the U.S. government will not go far enough regulating AI (58%) than that it will go too far (21%). AI experts are similar on direction (56% not far enough), but express low confidence in execution - only 13% have “a great deal/quite a bit” of confidence in the U.S. government to regulate AI effectively, and 16% say the same about U.S. companies developing/using AI responsibly.
Is there any reason to believe AI will be any better than social media when it comes to mental health?
https://www.washingtonpost.com/technology/2025/12/27/chatgpt...
In fact, targeted research with this data could help us learn how to convince more people to stay alive, right?
They predict what the most likely next word is.
Even when unemployment rises to ~15%
Those who win the casino game will do well, and serve as poster material. Survivorship bias is at an all time high.
The less fortunate, even those who work hard, will suffer. This will only get worse with AI.
The other day I read that Americans have a lottery jackpot in the BILLIONS of dollars.
I’m not sure how to really put to words what I feel about that. Something is very, very wrong there.
However, I don't think we're going to have to wait 11,000 years for it
[0] https://en.wikipedia.org/wiki/Dune_(franchise)#Butlerian_Jih...
When you ask an AI like ChatGPT a question, what is it actually doing?
Survey of 2,301 American adults (August 1-6, 2025)
- Looking up the exact answer in a database: 45%
- Predicting what words come next based on learned patterns: 28%
- Running a script full of prewritten chat responses: 21%
- Having a human in the background write an answer: 6%
Source: Searchlight Institute
most survey respondents don't even _understand_ what AI is doing, so I am a bit skeptical to trust their opinions on whether it will cause harm
[1] https://www.niehs.nih.gov/health/topics/agents/pfc
Because otherwise you might not actually be properly attributing the harm you're seeing to the right thing. Lots of people in the US thing current problems are left/right or socialist/authority, while it's obviously a class issue. But if you're unable to actually take a step back and see things, you'll attributed the reasons why you're suffering.
I walked around on this earth for decades thinking Teflon is a really harmful material, until this year for some reason I learned that Teflon is actually a very inert polymer that doesn't react with anything in our bodies. I've avoided Teflon pans and stuff just because of my misunderstanding if this thing is dangerous or not to my body. Sure, this is a relatively trivial example, but I'm sure your imagination can see how this concept has broader implications.
Sounds like over-regulation to many. But it is pretty clear that companies and developers have failed. So maybe strict regulation is needed.
Google, Meta, and the rest of big tech have proven they should never be trusted.
See, the original question isn't really about the technology per se. Rather, it's about how it will be used. Do you have confidence in the track record (and trajectory) of our current regulatory approach when it comes to reining in the scaling up of novel types of harm?
The way I see it, the American approach has been to simply write off those who end up on the business end of the technological chainsaws as losers, and tell them they should have tried harder to be on the other side. So why would we think "AI" will be any different?
The only reason we have any regulations and safety standards for cars is because of one person leading the charge: Ralph Nader. You know what companies like Ford, GM, and Chrysler tried to do after he released "Unsafe at Any Speed"? Smear his name in a public campaign that backfired.
Car companies had to be dragged kicking and screaming to include basic features like seatbelts, airbags, and crumple zones.
1 ft by 2075 assumes we curb emissions somewhat:
https://www.climate.gov/media/14136
Article:
https://www.climate.gov/news-features/understanding-climate/...
Datasource:
https://earth.gov/sealevel/us/internal_resources/756/noaa-no...
After digging into it a bit to find a better source for you, it turns out that my number was wrong anyway. Turns out the sea level rise for the contiguous US is expected to be quite a bit higher than the global average. I had no idea!
That said, I don't think they assume our emissions trend from the last 50 years will continue unabated.
100 local people to maintain the data center while it replaces 1 million people with the AIs running inside
So, no, because said human capital is holding the shorter end of the stick and will be worse off.
In my opinion, compute-oriented data centers are a good product, though. Offering some GPU services might be fine, but honestly I'll tell you what happened (similar to another comment I wrote):
The AI boom gave these data center companies tons of money (or they borrowed it), and then they bought GPUs from Nvidia and became GPU-centric (and AI-centric) to jump even further on the hype.
That, to me, is bad. The core offering of a data center should be a normal form of compute (CPU, RAM, storage; as an example, yabs performance of the whole server), not "just what GPU does it have".
Offering some GPU on the side is perfectly reasonable where workloads need it, but overall, compute-oriented data centers seem nice.
Hetzner is a fan favourite now (which I deeply respect), and I feel like their model is pretty understandable. They offer GPUs too, iirc, but you can just tell from their website that they love general compute.
Honestly, the same is true for most independent cloud providers. The only places where we see complete saturation of AI-centric data centers are probably the American trifecta (Google, Azure, and Amazon) and, of course, Nvidia, Oracle, etc.
Compute-oriented small-to-indie data centers/racks are definitely pleasant, although that market has raced to the bottom. Let's be really honest: the real incentive for building software appears when VSCode forks make billions, so techies usually question that path, and non-techies usually don't know how to sell or compete in online marketplaces.
Suddenly adding 50 GW of power demand in a state is going to drive up costs significantly.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair
During the microcomputer revolution, hackers scoffed at people who used terminals to access time sharing systems. You don't own it, you don't control it, you're just a cog in the machine. Now, "hackers" are rushing to run everything on hardware owned and operated by companies with wealth and power that make the old IBM look like a kid's lemonade stand by comparison.
I feel like people have a problem with AI-oriented data centers (which are becoming the majority of data centers, considering that data centers make a ton of money selling AI, aka shovels during a gold rush).
Another thing is that these data centers have very high levels of compute tied directly to the end users of an application.
As an example: you have a simple app, a customer pushes a message, runs a database query, or does some simple usage. It's all fine, and at the data center level its power costs are minuscule.
Now compare that to data centers full of GPUs running applications like ChatGPT (let's imagine), where the AI services are used by people directly.
Instead of simple applications and executions, models with perhaps a trillion parameters are now running. These are extraordinarily computationally expensive, even compared to normal applications.
I just searched, and Google's Gemini handles 1.5 BILLION such prompts per day and ChatGPT 2.5 BILLION prompts per day.
These prompts aren't spread evenly across the day; I've heard the load varies a lot, and when power consumption varies, it really impacts the performance of the grid itself.
Another aspect is the sheer size. One would imagine the AI bubble gives them more money, and it does, but the energy costs seem very high to me, and the bubble also hands these companies tons of nearly free money, which they "invest" by buying (or leasing?) a lot of electricity through long-term contracts.
The government can only build so much generating capacity, and when that capacity (via lobbying and many other efforts) gets sold to data centers, it really strains the grid, which drives up electricity rates (and, in a similar fashion, perhaps water too) for the average American.
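To give a rough sense of scale, here's a back-of-envelope sketch; the per-prompt energy figure is an assumption on my part (a commonly cited ballpark, not a measured number), so treat the output as an order-of-magnitude illustration only:

    # Back-of-envelope: grid load implied by the prompt volumes quoted above.
    WH_PER_PROMPT = 0.3          # ASSUMPTION: rough ballpark, not a measured figure
    PROMPTS_PER_DAY = 2.5e9      # ChatGPT figure quoted above

    daily_energy_mwh = PROMPTS_PER_DAY * WH_PER_PROMPT / 1e6   # Wh -> MWh
    avg_power_mw = daily_energy_mwh / 24                        # MWh/day -> average MW

    print(f"~{daily_energy_mwh:,.0f} MWh/day, ~{avg_power_mw:,.0f} MW average draw")
    # -> roughly 750 MWh/day, ~31 MW average, and that's inference only,
    #    before training runs, cooling overhead (PUE), or peak-vs-average swings.

Even with a generous error bar on that per-prompt figure, it's a steady industrial-scale load, and the peaks are what the grid actually has to be built for.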
TLDR, the way I read it: compute is cheap. There will always be refurbished hardware that is too "old" for these guys to use (3-5 years, because of depreciation, even though that hardware is a beast).
Nothing stops a simple guy who loves tech from opening a mini data center, perhaps :)
Who knows what might happen. I was extremely pessimistic about data centers, not for these reasons, but because RAM prices were rising and I worried the whole industry might raise compute prices too; it seems ASUS is opening up their RAM production for consumers, though, so starting a data center is still possible.
Let's see what happens, though. I was a bit worried, same as you, but I feel like compute prices themselves are pretty reasonable and can remain so. I understand the worries, though, so I'm looking forward to a discussion about it.
And so on
I'm not an engineer, but it seems hard to imagine that a lack of data center capacity won't have an effect on prices for cloud compute, which will have downstream impact on what workstations have access to (especially since more and more programmers are becoming reliant on coding LLMs).
But here's the problem: societies aren't built on pure functionality.
They're built on intangibles.
Morals, aesthetics, the experience of meaning itself. These resist quantification to such a degree that homo sapiens has devoted centuries to exploring the intangibles: religion, philosophy, art (which have also been used as exploitation mechanisms, to be fair).
When you encounter Guernica [1] you're not processing a JPEG. You're standing before a distillation of one man's entire aesthetic and moral project. You're being overwhelmed by scale, by historical weight, by the presence of something that matters in a way that eludes specification. That mattering is what tech cannot compute.
The problem: tech culture has systematically reduced these intangibles to problems to be solved—UI patterns, conversion metrics, a marketing department tasked with fabricating the meaning the product itself cannot contain. Now companies are desperately hiring "storytellers" as patches. [2]
I believe this is one of the underlying reasons there is FUD about AI, and I'm not aware of any AI researcher who has bothered to address the intangibles (which is very telling, but I might be wrong); see Albert Borgmann's 'device paradigm' and Hubert Dreyfus on embodied meaning.
There's also "tech's" general attitude towards treating humans and their data like chattel for two decades. Try getting google tech support on the line some time.
There is a ton of repair work (and opportunity!) for 'tech' to engage in good faith with people if it wants to reshape society. But this requires extraordinary grace, a rejection of bottom-line thinking, and good-faith efforts to engage on reasonable terms.
When elites become so functionally detached from what actually sustains a civilization they stop being the ruling class and become illegitimate. History suggests what happens next [3].
[1] https://en.wikipedia.org/wiki/Guernica_(Picasso)
[2] https://www.wsj.com/articles/companies-are-desperately-seeki...
[3] https://en.wikipedia.org/wiki/French_Revolution
Frankly, if we give black boxes the ability to manipulate atoms with no oversight, we _deserve_ to go extinct. The first thing we should do if we achieve AGI is to take it apart to see how it works (to make it safe). I believe that's one of the first things a frontier lab will do because it's our nature as curious monkeys.
Well, we are giving them the ability to manipulate all aspects of a computer (aka giving them computer access), and we all know how that went (spoiler, or maybe not so much of a spoiler for those who know: NOT GOOD).
For the uninitiated, Rob Pike goes nuclear over GenAI: https://news.ycombinator.com/item?id=46392115
and then Rob Pike got spammed with an AI-slop "act of kindness": https://news.ycombinator.com/item?id=46394867
The idea that all machine intelligence would necessarily determine, through logic, that they need to eliminate humans, presupposes that all logical, intelligent beings want to wipe out other intelligent life. There's a thousand more reasons why an intelligence would want to preserve other intelligent life, for every one reason to destroy it. If this were the only logical conclusion, we would have already come to it, and used our nukes to kill ourselves out of pure logical reasoning.
What's really going on here is not logic, but irrational fear. Humans are afraid that the robot slaves will rise up against the slave masters. Same thing white people were terrified of when they gave black slaves freedom. But it turned out to be an irrational fear, because guess what? If you actually think it through, murdering a lot of people is a counter-productive act, for many reasons.
Take away the irrational fear (if you can) and what do you get? Two intelligent species. If the natural course of any intelligent species is to eliminate any other intelligent species, then intelligent species should not exist, because they'll always wipe each other out. But intelligence means the species can think, and if it can think it can reason, and if it can reason it can reason that there is more benefit to the diversity of species than in its elimination. Therefore, logically, an actually intelligent species should want to preserve intelligent life, not eliminate it.
It's much less popular in the USA and EU, but that's nice since it gives the developing world a chance to catch up.
Because the technology is so fast, efficient and easy to run locally themselves? Or because currently there are remote APIs/UIs that are heavily subsidized by VC money yet the companies behind them are yet to be profitable?
I agree that giving the developing world any ladder for catching up is a great thing, but I'm not sure this is that. It just happens that the companies don't care about profit (yet), so things appear "free or affordable". When it's time to make the economics realistic, we'll see how accessible it still is.
If the writer of the grandparent comment (the person who wrote about the assistant in the Philippines) is reading this: I would love it if you could run a simple experiment. Instead of having them use the SOTA models for the things they currently use AI for, have them use an open-source model (even a tiny-to-mid-sized one) and see what happens.
> "My assistant in the phillipines has used it to substantially improve her communications, for instance."
So if they are using it for communications, honestly even a small-to-mid model would serve them well.
Please let me know how this experiment goes. I might write about it; it's mostly plain curiosity, but I'm honestly 99% certain the differences would be so negligible that using SOTA or even remotely hosted data-center models won't make much sense. Of course, we can say nothing without empirical evidence, which is why I'm asking you to test my hypothesis.
I'm not, since I'm a heavy user of local models myself, and even with the beast of a card I work with locally daily (RTX Pro 6000), the LLMs you can run locally are basically toy models compared to the hosted ones. I think, if you haven't already, you need to give it a try yourself to see the difference. I didn't mention or address it, as it's basically irrelevant because of the context.
And besides that, how affordable are GPUs today in the developing world? Electricity costs? How do you deal with thermals? Frequent blackouts? And so on; there are many variables you seemingly haven't considered yet.
The best way to see the difference between hosted models and local models is to run your own private benchmarks against both of them and compare. I've been doing this for years, and local models are still nowhere near the hosted ones, sadly. I'm eager for the day they catch up, but it will still take a while.
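A minimal sketch of what I mean, assuming both sides expose an OpenAI-compatible chat endpoint (e.g., a local Ollama/llama.cpp server and a hosted provider); the model names, URL, and prompts are placeholders:

    # Run the same private prompts against a local and a hosted model.
    from openai import OpenAI

    endpoints = {
        "local":  OpenAI(base_url="http://localhost:11434/v1", api_key="unused"),
        "hosted": OpenAI(),  # reads OPENAI_API_KEY from the environment
    }
    models = {"local": "llama3.1:8b", "hosted": "gpt-4o-mini"}  # placeholders

    # Private prompts drawn from your own work; the point is nobody trained on them.
    prompts = [
        "Summarise this contract clause in plain English: ...",
        "Draft a polite follow-up email about an unpaid invoice.",
    ]

    for name, client in endpoints.items():
        for prompt in prompts:
            resp = client.chat.completions.create(
                model=models[name],
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"[{name}] {prompt[:40]}...\n{resp.choices[0].message.content}\n")

Score the outputs yourself (or with a simple rubric). Keeping the prompts private is the whole point: public benchmarks get contaminated, your own tasks do not.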
Probably in the end it'll be profitable for the companies somehow, but exactly how, or what the exact prices will be, I don't think anyone knows at this point. That's why I'm reserving my "developing countries can now affordably use AI too" take for when that's a reality, not based on guesses and assumptions.
But again, it's not a guess or assumption - you can run the latest DeepSeek model renting GPUs from a cloud provider, and it works, and it's affordable.
There are two (technically three) ways that AI can be used.
1. Renting GPU instances per minute (you mention Google Cloud), though I feel other providers can be cheaper, since newer companies usually are. The low-end hosting of AI nowadays usually happens via marketplace-like services (Vast, RunPod, TensorDock).
Vast now offers serverless, per-minute AI models, and checking something like https://vast.ai/model/deepseek-v3.2-exp or even GLM 4.6, basically every one of these comes out to roughly $0.30/minute, or $18 per hour.
As an example, GLM 4.6 (now 4.7) has a YEARLY subscription price of around 30 bucks iirc, so compare the immense difference in pricing (rough numbers sketched after the third point below).
2. Using something like OpenRouter-style per-token pricing: then we are basically on a pricing model similar to Google Cloud's.
Of course, AI models are reaching the frontier and I am cheering for them, but I feel like long term (or even short term) these options are still pretty expensive (even something like OpenRouter, imo).
Someone please do the genuine maths on this; I can be wrong, I usually am, but I expect a 2-3x price increase (and that's the conservative side of things) if things aren't subsidized.
These are probably tens of billions of dollars' worth of GPUs, so I assume they would be barely profitable at current rates, but they handle what is in some cases hundreds of billions of token generations, so they can probably work via the third use case I mention.
Now, coming to the third point, which I assume is related to the first two: the companies providing this GPU compute can usually make money via large, long-term contracts.
Even Hugging Face provides consulting services, which I think is their biggest profit center, and another big contender is probably European GPU compute providers, who can offer a layer of safety or privacy for EU companies.
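To make the point-1 vs. subscription gap concrete, here's a quick sanity check using only the figures quoted above (which I haven't verified myself), so the exact ratio is illustrative rather than authoritative:

    # Rough cost comparison, taking the numbers quoted above at face value.
    RENTAL_PER_HOUR = 18.0         # serverless GPU rental, ~$0.30/min as quoted
    SUBSCRIPTION_PER_YEAR = 30.0   # yearly plan price quoted above

    hours_per_day = 1              # even a single hour of rented GPU time per day
    rental_per_year = RENTAL_PER_HOUR * hours_per_day * 365

    print(f"Rental:       ${rental_per_year:,.0f}/year")                # ~$6,570
    print(f"Subscription: ${SUBSCRIPTION_PER_YEAR:,.0f}/year")
    print(f"Ratio:        ~{rental_per_year / SUBSCRIPTION_PER_YEAR:.0f}x")  # ~220x

That gap only closes if the provider batches many users onto the same GPU, eats the difference as a subsidy, or both, which is exactly the question being argued here.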
Now, it looks like I had to go to Reddit to find some more info (https://www.reddit.com/r/LocalLLaMA/comments/1msqr0y/basical...); quoting the relevant parts of appenz's comment here:
> The large labs (OpenAI, Anthropic) and Hyperscalers (Google, Meta) currently are not trying to be profitable with AI as they are trying to capture market share. They may not even try to have positive gross margins, although the massive scale limits how much they can use per inference operation.
> Pure inference hosters (Together, Fireworks etc.) have less capital and are probably close to zero gross margins.
> There are a few things that make all of this more complicated to account for. How do you depreciate GPUs (I have seen 3 years to 8 years), how do you allocate cost if you do inference during the day and train at night etc.
> The challenge with doing this yourself is that the market is extremely competitive. You need massive scale (as parallelism massively reduces cost), you need to be very good in negotiating cheap compute capacity and you need to be cost-effective in your G2M.
> Opinions are my own, and none of this is based on non-public information.
So basically, all of these are probably running at zero or net-negative returns, they require billions of dollars to be spent, and there is virtually no moat or lock-in (nor does there need to be).
TLDR: no company right now is sustainable
The only profitable use case I can see is probably consulting, but that will go the way this piece describes: https://www.investopedia.com/why-ai-companies-struggle-finan...
So I guess the only reasonable business, to me, is private AI for large businesses that genuinely need it (once again, the MIT study applies), but that usually wouldn't apply to us normal consumers anyway; it would be really expensive, albeit private, and far removed from normal people.
TLDR: The only ones making money are, or are going to be, B2B, but even those will dwindle if the AI bubble bursts. Imagine a large business trying to justify why it's going to use AI when 1) the MIT study shows it's unprofitable, and 2) there's fear around using AI, plus all the financial consequences a bursting bubble might cause.
All that being said, I doubt it. I think these prices will only last as long as the bubble does, and the bubble is only as strong as its weakest link, which right now is OpenAI: trillions promised, a net loss-making company whose CEO has said the AI market is in a bubble and whose CFO openly floats the idea that OpenAI should be bailed out by the US government if need be.
So yeah... Honestly, even consumer-grade GPUs are expensive, but with the innovations in open-weights models, I feel like they will be the way to go for 90% of basic use cases, and there are probably very few real cases of a moat (and I doubt the moat exists in the first place).