What I believe the author did was, instead of teaching their child not to talk to strangers, they assumed there is a magic button that makes those strangers not exist.
I am opposed to the whole concept of these "parental controls". Instead of a bond of trust between a parent and their child, the surveillance economy has given us the ability to experience the top of the surveillance pyramid ourselves. As Google and Meta spy on the world, we spy on little Timmy. In fact, you are a bad parent if you don't spy on little Timmy.
I really can't wrap my head around how asking "how was your day?" has evolved into "I saw on your GPS tracker that you walked a different route to school today... Do you have something to tell me?" If you look at everyone now joining the workforce, and at the coming generations, you'll see the thing they lack most is independence.
You obviously don't have kids. You can't trust a child's judgement because they don't have the experience to exercise good judgement. Your job as a parent is to look out for them while helping them develop.
So after reading a link posted here yesterday, I decided to make my own implementation for checking the evenness of numbers, and to optimize it further.
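For anyone curious, a minimal sketch of what such an "optimized" evenness check usually looks like: test the lowest bit directly instead of using modulo. (The function name and details here are my own guess; the commenter's actual implementation isn't shown.)

```python
def is_even(n: int) -> bool:
    """Return True if n is even, by checking the least-significant bit.

    In two's-complement integers the lowest bit is 0 exactly for even
    numbers, so `n & 1` avoids a division/modulo operation. This works
    for negative numbers too in Python, since (-4) & 1 == 0.
    """
    return (n & 1) == 0


print(is_even(10))   # True
print(is_even(7))    # False
print(is_even(-4))   # True
```

In practice any decent compiler or interpreter reduces `n % 2 == 0` to the same bit test, so the "optimization" is mostly a fun exercise rather than a real win.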
Maybe we should begin by waiting to see the scale of this so-called damage. Right now there have maybe been a few incidents, but there are no real rates, no "x people kill themselves a year because of AI" statistic, and as long as x remains an unknown variable, it would be foolish to rush into restricting everybody over what may be just a few people.
>Trying to fix the problems _____ now that they're deeply rooted global issues and have been for decades is hard
The number of people already losing touch with reality through AI is high. And we know that people develop all kinds of screwed-up behaviors around things like cults. It's not hard to see that yes, AI is causing, and will cause, more problems of this kind.
To emphasize your point: there are literally multiple online communities of people dating and marrying corporate-controlled LLMs. This is getting out of hand. We have to deal with it.
For real though, right? A bunch of nerds at OpenAI, Microsoft, etc. make it so a computer can approximate a person bordering on the sociopathic, with its groveling and affirmations of the user's brilliance, and then people fall in love with it. It's really unsettling!