> It is there to reduce our agency, to make it easier to fire us, to put us in an even more precarious position
Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”
It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside of that group.
I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that allow me to spend my time on things I care about (both within the business but also spending time with my kids).
It is, fortunately and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians.
> It's also worth noting that the "our" in that sentence is just SWEs
It isn't; it's just a matter of seeing ahead of the curve. Delegating stuff to AI and agents by necessity leads to atrophy of the skills being delegated. Using AI to write code reduces people's capability to write code. Using AI for decision-making reduces capability for making decisions. Using AI for math reduces capability for doing math. Using AI to formulate opinions reduces capability to formulate opinions. Using AI to write summaries reduces capability to summarize. And so on. And, by nature, less capability means less agency.
Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.
Not to mention utilizing AI for control, spying, surveillance and coercion. Do I need to explain how control is opposed to agency?
I'll grant that it does extend beyond SWEs, but whether AI atrophies skills is entirely up to the user.
I used to use a bookkeeper, but I got Claude a QuickBooks API key and have had it doing my books since then. I give it the same inputs and it generates all the various journal entries, etc. that I need. The difference between using it and my bookkeeper is I can ask it all kinds of questions about why it's doing things and how bookkeeping conventions work. It's much better at explaining than my bookkeeper and also doesn't charge me by the hour to answer. I've learned more about bookkeeping in the past month than in my entire life prior - very much the opposite of skill atrophy.
Claude does a bunch of low-skill tasks in my business, like copying numbers from reports in different systems into a centralized Google Sheet. My muscle memory at running reports and pulling out the info I want has certainly atrophied, but who cares? It was a skill I used because I needed the outcome, not because the skill was useful.
You say that using AI reduces all these skills as though that's an unavoidable outcome over which people have no control, but it's not. You can mindlessly hand tasks off to AI, or you can engage with it as an expert and learn something. In many cases the former is fine. Before AI ever existed, you saw the same thing as people progressed in their careers. The investment banking analyst gets promoted a few times and suddenly her skill at making slide decks has atrophied, because she's delegating that to analysts. That's a desirable outcome, not a tragedy.
Less capability doesn't necessarily mean less agency. If you choose to delegate a task you don't want to do so you can focus on other things, then you are becoming less capable at that skill precisely because you are exercising agency.
Now in fairness I get that I am very lucky in that I have full control of when and how I use AI, while others are going to be forced to use it in order to keep up with peers. But that's the way technology has always been - people who decided they didn't want to move from a typewriter to a word processor couldn't keep up and got left behind. The world changes, and we're forced to adapt to it. You can't go back, but within the current technological paradigm there remains plenty of agency to be had.
> but whether AI atrophies skills is entirely up to the user
The thing with society is that we cannot simply rely on the self-discipline and self-control of individuals. For the same reason we have a universal and legally enforced education system. We would still live in a mostly illiterate society if people were not forced to learn, or forced to send their children to school.
Analogies to past inventions are limited because AI doesn't automate physical labor, hard or light - it automates, or at least its overlords claim it automates, a lot of cognitive and creative labor. Thinking itself, at least in some of its aspects.
From a sociological and political perspective there is a huge difference between the majority of the population losing the capability to forge swords or sew dresses by hand, and losing the capability to formulate coherent opinions and communicate them.
> It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
Lmfao, LLMs can barely count rows in a spreadsheet accurately. This is just batshit crazy.
edit: also, the solution here isn't that everyone writes their own software (based on open source code available on the internet, no doubt). We just use that open source software, and people learn to code and improve it themselves instead of off-loading it to a machine.
This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s.
LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet".
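To make the distinction concrete, here is a minimal sketch of the kind of script meant here. The CSV data is invented for illustration; a real script would open a file path instead of an in-memory string.

```python
import csv
import io

# A tiny in-memory CSV standing in for the spreadsheet.
data = """name,amount
alice,10
bob,20
carol,30
"""

reader = csv.reader(io.StringIO(data))
rows = list(reader)

# Exclude the header row from the count.
row_count = len(rows) - 1
print(row_count)  # prints 3
```

The point is that the LLM doesn't have to count anything itself; it only has to produce a script like this, and the interpreter does the counting deterministically.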
Yes, for some definition of OS. It could build a DOS-like or other TUI, or a list of installed apps that you pick from. Devices are built on specifications, so that's all possible. System API it could define and refine as it goes. General utilities like file management are basically a list of objects with actions attached. And so on... the more that is rigidly specified, the better it will do.
It'll fail miserably at making it human-friendly, though, and attempt to pilfer existing popular designs. If it builds a GUI, it'd be a horrible mashup of Windows 7/8/10/11, various versions of OS X/macOS, iOS, and Android. It won't 'get' the difference between desktop, laptop, mobile, or tablet. It might apply HIG rules, but that would end up with a clone at best.
In short, it would most likely make something technically passable but nightmarish to use.
Given 100 years though? 100 years ago we barely had vacuum tubes and airplanes.
Given a century, the only unreasonable part is oneshotting with no details, context, or follow-up questions. If you tell Linus Torvalds "write a python script that generates an OS", his response won't be the script, it'll be "who are you and how did you get into my house".
Considering how simple "an OS" can be, yes, and in the 2020s.
If you're expecting OSX, AI will certainly be able to make that and better "in the next 100 years". Though perhaps not oneshotting off something as vague as "make an OS" without followup questions about target architecture and desired features.
JFYI, LLMs still can't solve 7x8, and quite possibly never will. A more rudimentary text processor shoves that into a calculator for consumption by the LLM. There's a lot going on behind the scenes to keep the illusion flying, and that lot is a patchwork of conventional CS techniques that have nothing to do with cutting-edge research.
To many interested in actual AI research, LLMs are known to be the very flawed and limited technique they are, and the growing disconnect between that and the table stakes, where they are front and center at every AI shop, carrying a big chunk of global GDP on their back, is annoying and borderline scary.
This is false. You can run a small open-weights model in ollama and check for yourself that it can multiply three-digit numbers correctly without having access to any tools. There's even quite a bit of interpretability research into how exactly LLMs multiply numbers under the hood. [1]
When an LLM does have access to an appropriate tool, it's trained to use the tool* instead of wasting hundreds of tokens on drudgery. If that's enough to make you think of them as a "flawed and limiting technique", consider instead evaluating them on capabilities there aren't any tools for, like theorem proving.
* Which, incidentally, I wouldn't describe as invoking a "more rudimentary text processor" - it's still the LLM that generates the text of the tool call.
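To illustrate that division of labor, here is a hypothetical sketch of a host-side tool loop. Everything here is invented for illustration (`fake_model_reply` stands in for an actual LLM call, and the JSON shape is an assumption, not any vendor's real API); the point is only that the model generates the *text* of the tool call, and the host parses and executes it.

```python
import ast
import json
import operator

def calculator(expression: str) -> str:
    """Safely evaluate simple arithmetic like '7*8' without eval()."""
    ops = {ast.Mult: operator.mul, ast.Add: operator.add,
           ast.Sub: operator.sub, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expression, mode="eval").body))

TOOLS = {"calculator": calculator}

def fake_model_reply(prompt: str) -> str:
    # Stand-in for the LLM: it *generates* a tool call as text.
    return json.dumps({"tool": "calculator",
                       "arguments": {"expression": "7*8"}})

def run_turn(prompt: str) -> str:
    reply = fake_model_reply(prompt)
    call = json.loads(reply)
    # The host, not the model, actually executes the tool.
    return TOOLS[call["tool"]](**call["arguments"])

print(run_turn("What is 7x8?"))  # prints 56
```

In a real system the tool result would be fed back into the model's context for the final answer, but even in that loop the model's contribution is text generation at both ends.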
> Heck, one company built a (prototype but functional) web browser
No, they built something which claimed to be a web browser but which didn't even compile. Every time someone says "look an LLM did this impressive sounding thing" it has turned out to be some kind of fraud. So yeah, the idea that these slop machines could build an OS is insane.
I personally observe AI creating phenomenally good code, much better than I can write. At insane speed, with minimal oversight. And today's AI is the worst we will ever have.
Progress in AI can easily be measured by the speed at which the goalposts move - from "it can't count" to "yeah but the entire browser it wrote didn't compile in the CI pipeline".
What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime.
Look, from the point of view of a person outside the US, you are all fascists, "democrats" and trumpists alike. Don't take this as "trolling", but as a sincere opinion (I don't care about your internal brawls, I care about what you do to others).