> AIs are really good at writing code but really bad at debugging -- it's amazing to use Claude to prompt an app into existence, and pretty frustrating when that app doesn't work right and Claude is all thumbs fixing the problem.
LLMs are not "really good at writing code". They generate statistically relevant text based on their training data set.
Expecting people who do not understand code to use LLMs for building solutions is like expecting non-pilots to successfully fly a 747.
This is the worst kind of pedantry. There is now code in a text file that wasn't there before. It doesn't really matter to me whether the code came from human fingers, an auto-generator tool, an LLM, or a Ouija board. It looks like written code and it compiles like written code, so it's written code. You can rightly criticise the use of LLMs in many ways, but it is more useful to focus on the actual reasons they cause harm than to draw red lines over language.
> It looks like written code and it compiles like written code - it's written code.
And more often than not it's just plain wrong code. LLMs aren't actually writing code; they are guessing at what might satisfy an input. Writing code is more than guessing: it's about assembling instructions with intent, for a purpose. LLMs lack that intent and purpose.
> You can rightly criticise the use of LLMs in many ways but it is more useful to focus on the actual reasons they cause harm than to set red lines over language.
The "actual reasons" they cause harm are inherent to how LLMs work. The real problem is people not understanding how they work, placing too much trust in them, and believing the hype. They are not some miracle, they aren't even all that helpful, and in my experience they are more often a waste of my time.