
Change is a constant in history. Stuff happens, and then we adjust. Big changes may cause short-term confusion, anger, etc.: all the classic signs of the five stages of grief, basically.

If you step back a little, a lot of people simply don't see the forest for the trees and they start imagining bad outcomes and then panic over those. Understandable but not that productive.

If you look at past changes where that was the case, you can see some patterns. People project both utopian and dystopian views, and there's a certain amount of hysteria and hype around both. But neither usually plays out as people hope or predict. The inability to look beyond the status quo and imagine a future that isn't just an extrapolation of it is very common. It's the whole cars vs. faster horses thing. I call this an imagination deficit. It usually sorts itself out over time as people find different ways to adjust and the rest of society adjusts itself around that. Usually this also involves things few people predicted. But until that happens, there's uncertainty, chaos, and also opportunity.

With AI, there's going to be a need for some adjustment. Whether people like it or not, a lot of what we do will likely end up being quite easy to automate. And that raises the question of what we'll do instead.

Of course, the flip side of automating something is that it lowers its value. That moderates the rollout and brings diminishing returns. We'll automate all the easy and expensive stuff first, and that will keep us busy for a while. Ultimately we'll pay less for it and do more of it. But that just means we start looking for more valuable things to do and buy. We'll effectively move the goalposts and raise the ambition. That's where the economic growth will come from.

This adjustment process is obviously going to be painful for some people. But the good news is that it won't happen overnight. We'll have time to learn new things and figure out what we can do that is actually valuable to others. Most things don't happen at the speed the most optimistic person expects. Just looking at inference cost and energy, there are some real constraints on what we can do at scale in the short term. And energy costs just went up by quite a lot. Lots of new challenges where AI isn't the easy answer just yet.



We are the horses, though.

At some point those became almost fully obsolete in a productive economic sense (they're basically just fancy toys now). No 'raising the ambition' is ever going to change that. They are what they are, and they can do what they can do.

I don't know about you, but if the "something" in "we'll find something to do" is becoming a toy for AI or for very rich people, I'm not exactly hopeful about the future.


I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome. Clinging to the past has never really worked that well.

As for rich people, they get richer and richer until people correct them. Sometimes violently. The current concentration of wealth in particularly the US seems more related to political changes since about the Reagan era than to any recent innovations related to technology.


> I try to not be fatalistic. As I was trying to argue, it's historically inaccurate and it doesn't actually change the outcome.

This is false. Being fatalistic and 'panicking' can definitely influence, and thus change, the outcome. Your logic is similar to what is (incorrectly) used to dismiss the Y2K problem, for instance: looking back, it seems like there was no need to panic, but that is only because a lot of people recognized the urgency, worked their asses off, and succeeded in preventing things from going horribly wrong.

See: https://en.wikipedia.org/wiki/Preparedness_paradox

Your handwaving is doing harm by lulling people into a false sense of security. Your initial comment amounts to "Ah, it'll be fine, don't worry about it. We'll adapt, we always have.", even though you provide absolutely no arguments specific to this enormous force of insanely rapid change in an already incredibly unstable fragile world. We might adapt, but it will require serious thought rather than handwaving and leaning back; even then it might come with massive societal upheaval and a lot of suffering.


I'm wrong to not be fatalistic?! You lost me here.

A lot of people seem to be wasting a lot of energy insisting it is all going to end in tears because <fill in reasons>. All I'm doing here is pointing out that people like this come out of the woodwork with pretty much every big change in society, and then people adapt and society fails to collapse.

I'm not arguing there won't be changes and that they won't be disruptive to some people. Because they will and people will need to adjust. But I am arguing that a lot of the dystopian outcomes are as unlikely to happen with this particular change as they have been with previous rounds of changes. I just don't see a basis for it. I do see a lot of people who want this to be true mainly because they are afraid of having to adapt.

> already incredibly unstable fragile world

There are a lot of people arguing that things are better than ever by most metrics you might want to apply. The reason you might feel stressed about the news is that dystopian headlines sell better, and you are being influenced by those. That's also why Y2K got far more media attention than it deserved, and why a lot of people indeed freaked out over it. Of course, much of that got caught up with people who believed, for other reasons, that we were all doomed and the apocalypse was coming. It made for amusing headlines. And then the clock ticked over and society failed to collapse.


You largely ignored what I said and displayed exactly the fallacious behavior I was pointing out. Again, Y2K was not a problem precisely because people 'freaked out' (took the problem seriously). Similarly, AI will only not be a problem because of people who spend time and effort mitigating its issues, not because of people like you pretending that since nothing went seriously wrong in the past, nothing automatically will this time (because you "just don't see the basis for it").


This is the key point that HN commenters frequently miss: We are not the transportation owners trading in horses for cars. We are the horses.





