Hacker News | new | past | comments | ask | show | jobs | submit | ceayo's comments

Of course! Think of the dangers of an unsupervised child... (SHOCK WARNING) cooking... a *gasp* MEAL!

You're so right... Some of these patterns are, to their very core, parts of what make these social media bad.

I'm not really sure if the author (i.e. generative language model) is being serious or being sarcastic...


so much for the rule of law, I guess...


> drove its founding

IMO drove its funding.


Yes, the "spontaneous lab experiment that proved wildly successful" mythology around many of these companies is just elaborate camouflage.


What I believe the author did: instead of teaching their child that they may not talk to strangers, they believed there is simply a magic button that makes these strangers not exist.


If that's the case, then the rules weren't clearly stated, if they were stated at all.


> yet for the remaining 20% you have to be there.

Shouldn't you trust your children to come to you in that 20%?


How will they identify that 20%, if the previous comment was referring to them not (yet) being able to understand it?


I am opposed to the whole concept of these "parental controls". Instead of a bond of trust between a parent and their child, the surveillance economy has given us the ability to experience the top of the surveillance pyramid ourselves. As Google and Meta spy on the world, we spy on little Timmy. In fact, you are a bad parent if you don't spy on little Timmy. I really can't wrap my head around how asking "how was your day" has evolved into "I saw on your GPS tracker that you walked a different route to school today... Do you have something to tell me?" If you look at everyone now joining the workforce, and the generations to come, you'll see the thing they lack most is independence.


You obviously don't have kids. You can't trust a child's judgement because they don't have the experience to exercise good judgement. Your job as a parent is to look out for them while helping them develop.


> You can't trust a child's judgement because they don't have the experience to exercise good judgement.

How can a child get any experience, if they are only ever exposed to perfect make-believe fairyland?


So after reading a link posted here yesterday, I decided to make my own, more optimized implementation for checking the evenness of numbers.
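The actual implementation isn't shown, so this is only a guess at the kind of deliberately over-engineered, seek-based evenness check being described; the function name, file name, and approach are all hypothetical:

```python
import os

def is_even(n: int) -> bool:
    """Absurd evenness check via file seeking (hypothetical sketch):
    write n bytes to a scratch file, then seek backward two bytes at a
    time from the end. Landing exactly on offset 0 means n is even.
    O(n) storage and O(n) seeks, versus the boring O(1) of n % 2."""
    path = "evenness_probe.bin"
    with open(path, "wb") as f:
        f.write(b"\x00" * n)
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)       # jump to offset n
        pos = f.tell()
        while pos > 1:
            f.seek(-2, os.SEEK_CUR)  # step back two bytes
            pos = f.tell()
    os.remove(path)
    return pos == 0

print(is_even(10))  # True
print(is_even(7))   # False
```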


Yeah, but where are the performance numbers? :D


I don’t see why that needs the “:D”

Is there any standard that says anything about the performance of fseek?

If not, how can one claim that this is O(1)? :D
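To the question itself: as far as I know, neither the C standard nor POSIX says anything about the cost of fseek; on regular files it typically just updates the file-position indicator, which is why people casually call it O(1). A small sketch (using Python's os.lseek, which wraps the same syscall family) only shows that the offset update succeeds even far past EOF without touching any data; it is not a complexity guarantee:

```python
import os
import tempfile

# Seeking far beyond end-of-file on a regular file just sets the
# position indicator; nothing is read or written. Typical platform
# behavior, not something the standards promise.
fd, path = tempfile.mkstemp()
try:
    results = [os.lseek(fd, off, os.SEEK_SET) for off in (1, 10**6, 10**12)]
finally:
    os.close(fd)
    os.remove(path)

print(results)  # each seek returns the requested offset
```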


> We can limit the damage before it's too late.

Maybe we should begin by waiting to see the scale of this so-called damage. Right now there have been maybe a few incidents, but there are no real rates, no "x people kill themselves a year because of AI", and as long as x remains an unknown variable, it would be foolish to rush into limiting everybody over what may be just a few people.


It's like you didn't even read their statement...

>Trying to fix the problems _____ now that they're deeply rooted global issues and have been for decades is hard

The number of people already losing touch with reality because of AI is high. And we know that people develop all kinds of screwed-up behaviors around things like cults. It's not hard to see that yes, AI is causing, and will cause, more problems around this.


To emphasize your point: there are literally multiple online communities of people dating and marrying corporate-controlled LLMs. This is getting out of hand. We have to deal with it.


"Married to Microsoft" [shudders]


For real though, right? A bunch of nerds at OpenAI, Microsoft, etc. make it so a computer can approximate a person bordering on the sociopathic, with its groveling and affirmations of the user’s brilliance, and then people fall in love with it. It’s really unsettling!

