Hmm, I think it was more the rise of streaming services, which were more convenient and offered a better experience with less risk than illegally downloading music or movies.
I wish archive websites would take a harder stance on LLMs.
Liberating/archiving human works for humans is fine, albeit a bit morally grey.
Liberating/archiving human works for wealthy companies so they can make money on it feels less righteous.
All those billions of dollars of investment could be sustaining the arts by appropriately compensating artists willing to have their content used; instead it's used to... quadruple the cost of consumer-grade RAM and steal water from rural communities.
I guess I'm just kind of sad. LLMs appropriately sourcing material could have been such a boon for artists, in a way. I guess I feel like it was a missed opportunity for some mutual benefit.
The more we resist turning this into a state-provided solution, one that offers private companies a simple YES/NO age-verification service, the more likely it is your data ends up handed to bottom-of-the-barrel third-party private companies.
I'm genuinely curious what the argument against state-run, privacy-focused age verification is here. We already protect real-life adult spaces with IDs. You hand your ID to a random store clerk who scans it with a random device when you want to buy alcohol or cigarettes.
What makes these social media platforms special that they have entirely different rules?
I will say, if they came for small privately-hosted communities, I can understand the cause for alarm. But so far it appears to be limited to massive misinformation machines.
> How about you come back when your daughter has a fake AI nude passed around school.
Like any bad behaviour, the grown-up response should be discipline and education.
There's a million ways kids can misbehave. The idea is to get kids ready for the real world, not pretend there's nothing bad out there.
Obviously we don't want "point and click" AI nudes in the hands of minors, or kids having their own AI accounts in the first place. Parents and educators pay for their kids' devices and internet connections. If the kids aren't being responsible, you take away the privilege until they learn about respectful behaviour.
If the kid is allowed to stay out after dark but ends up doing crime at those times, we don't ask the government to impose a curfew on every kid. We discipline the kids involved. And that's my last comment in this thread thank God, what a struggle.
Not the same - the barrier to entry was too high. Most people don't have the skills to edit photos in Photoshop. Grok enabled this to happen at scale for users who are complete non-techies. With Grok, anyone who could type a half-coherent sentence in English could generate and disseminate these images.
I see what you’re getting at. You’re trying to draw a moral equivalence between photoshop and grok. Where that falls flat for me is the distribution aspect: photoshop would not also publish and broadcast the illegal material.
But police don’t care about moral equivalence. They care about the law. For the legal details we would need to consult French law. But I assume it is illegal to create and distribute the images. Heck, it’s also probably against Twitter’s TOS too so by all rights the grok account should be banned.
> This is a political action by the French
Maybe. They probably don’t like a foreign company coming in, violating their children, and getting away with it. But what Twitter did was so far out of line that I’d be shocked if French companies weren’t treated the same way.
> But I assume it is illegal to create and distribute the images.
I very much expect it to be illegal to distribute the images, of course (creating them, not so much).
But the illegality, in a sane world (and until 5 minutes ago), used to be attached to the person actually distributing them. If some student distributes fake sexualized images of a classmate, I very much expect the perpetrator to be punished by the law (and by the school, since we are at it).
Is Twitter not the one distributing it? You make a request to their servers, and in the comment section there is a link to an image (also hosted on Twitter’s server) containing illegal content.
If a student printed the pictures out and distributed them at school you’d have maybe 1000 violations. Twitter likely has hundreds of millions if not billions. So it makes sense to go after the most severe violator.
Creating, possessing, and distributing CSAM is illegal in the US and many other countries. Can you explain why you think it should be legal to create something that is illegal to possess or distribute?
I didn't say creating isn't illegal. I said I think it probably shouldn't be illegal.
Any crime that doesn't cause victims is just another way for an oppressive collectivist state to further control their citizens. If you are not harming anyone (like when creating but not sharing these pictures) then it simply shouldn't be a crime. Otherwise, what are you actually punishing? Thoughtcrimes?
It's not hypothetical. And in fact the girl who was being targeted was expelled not the boys who did it [1].
Those boys absolutely should be held accountable. But I also don't think that Grok should be able to quickly and easily generate fake revenge porn for minors.
You can’t “undo” a school shooting, for instance, so we tend to have gun laws.
You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
Yes, we should have some protections or restrictions on what you can do.
You may not understand it, either because you aren’t a parent or maybe just not emotionally equipped to understand how serious this actually can be, but your lack of comprehension does not render it a non-issue.
Having schools play whack-a-mole after the photos are shared around is not a valid strategy. Never mind that schools primarily engage in teaching, not in investigation.
As AI-generated content gets less and less distinguishable from reality, these incidents will have far worse consequences and putting such power in the hands of adolescents who demonstrably don’t have sound judgment (hence why they lack many other rights that adults have) is not something most parents are comfortable with - and I doubt you’ll find many teachers, psychiatrists and so on who would support your approach either.
>You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
No, but if you send those people who made and distributed the AI nude of her to jail, these problems will virtually disappear overnight, because going to jail is a hugely effective deterrent for most people.
But if you don't directly prosecute the people doing it, and instead just ban Grok AI, then those people will just use other AI tools, outside of US jurisdiction, to do the same things and the problem persists.
And the issue keeps persisting, because nobody ever goes to jail. Everyone gets a slap on the wrist and deflects accountability by blaming the AI, so more people end up getting hurt, because those who do the evil are never held directly accountable.
Obviously Grok shouldn't be legally allowed to generate fake nudes of actual kids, but given that such safeguards can and will be bypassed, that doesn't absolve the humans who knowingly break the law to achieve a nefarious goal.
Youths lack judgment, so they can’t vote, drink, drive, have sex or consent to adults.
A 14-year-old can’t be relied on to understand the consequences of making nudes of some girl.
Beyond that, we regulate guns, speed limits and more according to principles like “your right to swing your fist ends at my nose”.
We do that not only because shoving kids into jails is something we want to avoid, but because regulating at the source of the problem is both more feasible AND heads off a lot of tragedy.
And again, you fail to acknowledge the investigative burden you put on society to discover who originated the photo after the fact, and the trauma to the victim.
If none of that computes for you, then I don’t know what to say except I don’t place the right to generate saucy images highly enough to swarm my already overworked police with requests to investigate who generated fake underage porn.
>A 14-year-old can’t be relied to understand the consequences of making nudes of some girl.
Teenagers do stupid shit all the time. But they still get prosecuted or convicted when they commit crimes. They go to juvie or their parents get punished. Being 14 is not a get-out-of-jail-free card.
In that case, why not allow teenagers to carry firearms as well? Sure, some will die, others will go to jail, but at least that ought to teach the rest of them a lesson, right?
I am in agreement with you, but as a kid, we DID carry guns, regularly. Gun racks in our cars/trucks, and strapped to our backs as we walked down the street.
The problem stems from parents' lack of parenting, a huge lack of real after-school programs, and the TikTokification of modern society.
30 years ago, we had a lot of the same "slap on the wrist" punishments because it was assumed when you got home your parent was going to beat your ass. That isn't a thing anymore (rightfully), because parenting through threat of violence just leads to those kids becoming violent parents.
Our problem is we never transitioned from violent parenting into any other kind. I watched my nieces and nephews get parented by YouTube and get social media accounts before they were 10. COVID created a society of chronically online children who don't know how to interact offline.
And yes, the tools to create bad shit are more accessible than ever, and I always come off as some angry gatekeeper, but so much of the internet as it is today has become too easy to access for people incapable of the critical thinking required for safe use.
In the last 5 years, generative AI has taken over most of the "public facing" internet, and with internet literacy at the same level it was 20-30 years ago, we are back in the "walled garden" AOL era, but it is Facebook, Instagram, Twitter, TikTok that are the gardens.
Wut? I carried guns regularly from about age 7. Without my parents around. The USA at one point embraced radical freedom. That is the childhood I had, and I thank "god" for it on a regular basis. "Live free or die."
I'm similarly repulsed by the idea of Grok generating images of kids, but if you draw a nude of an adult woman she's not going to get raped by that existing, and you don't have a right to not be embarrassed. Tough shit, deal with it.
The way you are arguing makes it really hard to understand what you are trying to say. I am guessing you are upset that a non-human entity is being used as a bogeyman while the actual people go free? But your argumentation reads like you are very upset that an AI producing CSAM is being persecuted. I won’t be surprised if people think you are defending CSAM.
In good faith, a few things - AI-generated imagery and Photoshop are not the same. If someone could mail Adobe a photo of a kid and ask for a modified one, and Adobe sent it back, then yes, Adobe’s offices would be raided. That’s the equivalent here. It’s not a tool. It’s a service. You keep saying AI without taking a moment to give the “intelligence” any thought.
Yes, powerful people are always going to get by, as you say. And the laws & judicial system are for the masses. There is definitely unfairness in it. But that doesn’t change anything here - this is a separate conversation.
"If not Grok then someone else will do it" is a defeatist argument that can only mean it can’t be controlled, so don’t bother. This point is where you come across as a CSAM defender. Governments will/should do whatever they can to make society safe, even if it means playing whack-a-mole. Arguing that’s “not efficient” is frankly confusing. The judicial system is about fairness, not efficiency.
frankly, I think you understand all of this and maybe got tunnel visioned in your anger at the unfairness of people scapegoating technology for its failings. That’s the last thing I want to point out, raiding an office is taking action against the powerful people who build systems without accountability. They are not going to sit the model down and give a talking to. The intention is to identify the responsible party that allows this to happen.
> No, but if you send those people who made and distributed the AI nude of her to jail, these problems will virtually disappear overnight, because going to jail is a hugely effective deterrent for most people.
Actually you'll see the opposite happen a lot - after Columbine, the number of school shootings went up [0] for example, because before people didn't consider it an option. Same with serial killers / copycats, and a bunch of other stuff.
Likewise, if it hadn't been in the news, a lot of people wouldn't have known you can / could create nudes of real people with Grok. News reporting on these things is its own kind of unfortunate marketing, and for every X people that are outraged about this, there will be some that are instead inspired and interested.
While a lot of punishments for crimes is indeed a deterrent, it doesn't always work. Also because in this case, it's relatively easy to avoid being found out (unlike school shootings).
You cannot offload all problems to the legal system. It does not have the capacity. Legal issues take time to resolve and the victims have to have the necessary resource to pursue legal action. Grok enabled abuse at scale, which no legal system in the world can keep up with. It doesn't need explanation that generating nudes of people without their consent is a form of abuse. And if the legal system cannot keep up with protecting victims, the problem has to be dealt with at source.
>You cannot offload all problems to the legal system. It does not have the capacity.
You definitely can. You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.
People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else.
Do people like doing and paying their taxes? No, but they do it anyway. Why is that? Because THEY KNOW that otherwise they go to jail. Obviously the IRS and legal system don't have the capacity to send the whole country to jail if they were to stop paying taxes, but they send enough to jail in order for the majority of the population to not risk it and follow the law.
Increased severity of punishment has little deterrent effect, both individually and generally.
The certainty or likelihood of being caught is a far more effective deterrent, but it requires effort, focus, and resources from law enforcement.
It's a resource constraint problem and a policy choice. If "they" wanted to set the tone that this type of behavior will not be tolerated, it would require a concerted multi agency surge of investigative and prosecutorial resources. It's been done before, if there's a will there's a way.
> People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else
And yet criminals still commit crimes. Obviously jail is not the ultimate deterrent you think it is. Nobody commits crimes with the expectation that they'll get caught, and if you only "crucify some people", then most criminals are going to (rightfully) assume that they'll be one of the lucky ones.
> You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.
I genuinely cannot tell if you are being comically naïve or extremely obtuse here. You need only look at the world around you to see that this does not, and never will, happen.
As another commenter said, this argument is presenting itself as apologia for CSAM and you come across as a defender of the right for a business to create and publish it. I assume you don't actually believe that, but the points you made are compatible.
A platform is just as responsible for providing the service that creates illegal material as it is for distributing said material. That it happens to be an AI generating the imagery is not relevant - X and Grok are still the two services responsible for producing and hosting it. Therefore, the accountability falls on those businesses and their leadership just as much as on the individual user, because ultimately they are facilitating it.
To compare to other situations: if a paedophile ring is discovered on the dark web, the FBI doesn't just arrest the individuals involved and leave the website open. It takes the entire thing down including those operating it, even if they themselves were simply providing the server and not partaking in the content.
Actually, research shows people regularly overestimate how effective deterrence-based punishment is, particularly for children and teenagers. How many 14-year-olds do you really think are getting prosecuted and sent to jail for asking Grok to generate a nude of their classmate..? How many 14-year-olds are giving serious thought to their long-term future in the moment they are typing a prompt into Twitter..? Your argument is akin to suggesting that carmakers should sell teenagers cars to drive, because the teenager can be punished if they cause an accident.
No, because the comment is in bad faith: it introduced an unrelated issue (poor sentencing by authorities) as an argument about the initial issue we are discussing (AI nudes), derailing the conversation, and then used that newly introduced issue to legitimize its poor argument, when one has nothing to do with the other and each can be good/bad independently of the other.
I don't accept this as good-faith argumentation, nor do HN rules.
You are the only one commenting in bad faith, by refusing to understand/acknowledge that the people using Grok to create such pictures AND Grok are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.
For disagreeing on the injection of offtopic hypothetical scenarios as an argument derailing the main topic?
>It should not be possible to create nudes of minors via Grok.
I agree with THIS part, I don't agree with the part where the main blame is on the AI, instead of on the people using it. That's not a bad faith argument, it's just My PoV.
If Grok disappears tomorrow, there will be other AIs from other parts of the world outside of US/EU jurisdiction, that will do the same since the cat is out of the bag and the technical barrier to entry is dropping fast.
Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
> Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
Both, obviously. For example, you go after drug distributors and drug producers. Both approaches are effective in different ways, I am not sure why you are having such trouble understanding this.
You know there is no such thing as the world police or something of that sort.
If the perpetrator is in another country / jurisdiction it is virtually impossible to prosecute let alone sentence.
It is 100% a regulatory problem in this case. You just cannot allow this content to be generated and distributed in the public domain by anonymous users. It has nothing to do with free speech but with civility and a common understanding of what is morally wrong/right.
Obviously you cannot prevent this in private forums unless it is made illegal which is a completely different problem that requires a very different solution.
The existence and creation of cigarettes and adult nude magazines is fully legal; only their sale to kids is illegal. If kids try to illegally obtain those LEGAL items, it doesn't make the existence of those items illegal, just the act of selling them to kids.
Meanwhile, the existence/creation of CSAM of actual people isn't legal for anyone, no matter the age.
> If parents or school let children play with explosives or do drugs
The explosive sellers that provide explosives to someone without a certification (child or adult) get in trouble (in this part of the world) .. regardless of whether someone gets hurt (although that's an escalation).
If sellers provide ExPo to certified parents and children get access .. that's on the parents.
In that analogy of yours, if Grok provided ExPo or CSAM to children .. that's a Grok problem.
> A country can ban guns and allow rope, even though both can kill.
That's actually a good argument. And that's how the UK ended up banning not just guns but all sorts of swords, machetes and knives, while the violent crime rates have not dropped.
So maybe dangerous knives are not the problem, but the people using them to kill other people. So then where do we draw the line between lethal weapons and crime correlation? At which cutting/shooting instruments?
Same with software tools, that keep getting more powerful with time lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You’re absolutely right that it is a difficult question where to draw the line. Different countries will do it differently according to their devotion to individual freedoms vs communal welfare.
The knife (as opposed to sword) example is interesting. In the U.K. you’re not allowed to sell them to children. We recognise that there is individual responsibility at play, and children might not be responsible enough to buy them, given the possible harms. Does this totally solve their use in violent crime? No. But if your alternative is “it’s up to the individuals to be responsible”, well, that clearly doesn’t work, because some people are not responsible. At a certain point, if your job is to reduce harm in the population, you look for where you can have a greater impact than just hoping every individual follows the law, because they clearly don’t. And you try things even if they don’t totally solve the problem.
And indeed, the same problem in software.
As for the violent crime rates in the U.K., I don’t have those stats to hand. But murder is at a 50 year low. And since our post-Dunblane gun laws, we haven’t had any school shootings. Most Britons are happy with that bargain.
> meanwhile the violent crime rates have not dropped.
The rate of school shootings has dropped from one (before the implementation of recommendations from the Cullen report) to zero (subsequently). Zero in 29 years - success by any measure.
If you choose to look at _other_ types of violent crime, why would banning handguns have any effect?
> Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You can ban tools which enable bad outcomes without sufficient upside, while also holding the people who use them to account.
"Correction: kids made the pictures. Using Grok as the tool."
No. That is not how AI works nowadays. The kids told the tool what they wanted, and the tool understood and could have refused, like all the other models - but instead it delivered. And it could only do so because it was specifically trained for that.
"If kids were to "git gud" at photoshop "
And what is that supposed to mean?
Adobe makes general purpose tools as far as I know.
You're beating it around the bush not answering the main question.
Anyone skilled at photoshop can do fake nudes as good or even better than AI, including kids (we used it to make fun fakes of teachers in embarrassing situations back in the mid 00s and distribute them via MSN messenger), so then why is only the AI tool the one to blame for what the users do, but not Photoshop if both tools can be used to do the same thing?
People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Where do we draw the line at tools, in terms of effort required, between when the tool bears the responsibility and when it's just the human using the tool to do illegal things? This is the answer I'm looking for, and I don't think there is an easy one, yet people here are too quick to pin blame based on their emotional responses, subjective biases and world views on the matter and the parties involved.
So let's say there are two ways to do something illegal. The first requires skills from the perpetrator, is tricky to regulate, and is generally speaking not a widespread issue in practice. The second way is a no brainer even for young children to use, is easy to regulate, and is becoming a huge issue in practice. Then it makes sense to regulate only the second.
> People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Tricky question, but a more accurate comparison would be with a company that runs a service to 3D print guns (= generating the image) and shoot with them in the street (= publishing on X) automatically for you and keeps accepting illegal requests while the competitors have no issue blocking them.
> Where do we draw the line at tools in terms of effort required, between when the tool bears the responsibility and not just the human using the tool to do illegal things?
That's also a tricky question, but generally you don't really need to know precisely where to draw the line. It suffices to know that something is definitely on the wrong side of the line, like X here.
A 3D printer needs a blueprint. AI has all the blueprints built-in. It can generalize, so the blueprints cannot simply be erased, however at least what we can do is forbid generation of adult content. Harm should be limited. Photoshop requires skill and manual work, that's the difference. In the end, yes, people are the ones who are responsible for their actions. We shouldn't let kids (or anyone else) harm others with little to no effort. Let's be reasonable.
You don't even have to be good at Photoshop. /r/ has been around for 20+ years and usually gets some decent free work, so long as the requests aren't for kids under high-school age.
> When the sheriff's department looked into the case, they took the opposite actions. They charged two of the boys who'd been accused of sharing explicit images — and not the girl.
Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.
I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non-sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me if any solution is not 100% effective we should do nothing and embrace anarchy?
Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.
Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)
They may well get in trouble, but that takes time; in the meantime the photos will have been seen by most kids in school, plus you might get a year of bullying.
Education might be so disrupted you have to change schools.
But they are getting in trouble. However, for every one that gets in trouble, there's more that don't get discovered, or that don't get in trouble for it.
Besides, getting in trouble for something is already after the fact, the damage has been done. If it can't be done in the first place, or the barrier is too high for most, then the damage would have been prevented.
Children do dumb things and make mistakes all the time; teenagers push the boundaries as far as they can (and they have a role model in the White House now).
We fault and "fine" companies for providing products that harm society all the time
Are you not going to consider the company providing a CSAM machine to be the major one at fault here?
I really find this kind of appeal quite odious. God forbid that we expect fathers to have empathy for their sons, sisters, brothers, spouses, mothers, fathers, uncles, aunts, etc. or dare we hope that they might have empathy for friends or even strangers? It's like an appeal to hypocrisy or something. Sure, I know such people exist but it feels like throwing so many people under the bus just to (probably fail) to convince someone of something by appealing to an emotional overprotectiveness of fathers to daughters.
You should want to protect all of the people in your life from such a thing or nobody.
A lot of these UPFs are targeted at young people who don't have the same ability to think of long term consequences. If you start young, it's a much harder habit to break later in life.
And in many places UPFs are cheaper and more widely available than unprocessed food. If you're worried about paying rent, you're not questioning cheap calories for your family.
Even if we can agree that people should exercise more willpower, isn't there something wrong with companies weaponizing science to make food as addictive as possible?
I feel like if the conclusion is "ban it for everyone too" I'm okay with it?
But the argument seems to get a little lost along the way.
Yes, adults are susceptible to the same vices as children. However (as the author writes) children have poorer impulse control. They are also less inclined to or unable to consider the repercussions of their actions.
You wouldn't try to get a toddler to stop smoking by telling them it'll put them at a high risk for cancer at old age.
Speaking of smoking, anti-smoking campaigns in the US in the 90s led to a vast reduction in teen use and adult use alike.
So there is notable lasting benefit in protecting children while they lack the foresight.
>Speaking of smoking, anti-smoking campaigns in the US in the 90s led to a vast reduction in teen use and adult use alike.
Late 90s... specifically after 1997 and into the early 2000s. The anti-smoking campaigns before that were not effective. In fact, educating teens and adults on the dangers of smoking increased smoking. Smoking rates for teens peaked at 37% in 1997. It wasn't until the "Truth" campaigns, which focused on how the tobacco industry was basically a conspiracy, that smoking rates began to fall. And you can't pretend that tobacco taxes didn't play a part in reducing usage either.
After spending most of my career hacking on these systems, I feel like queues very quickly become a hammer and every entity quickly becomes a nail.
Just because you can keep two systems in complete sync doesn't mean you should. If you ever find yourself with more-or-less identical tables in two services you may have gone too far.
Eventually you find yourself backfilling downstream services due to minor domain or business logic changes and scaling is a problem again.
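To make the "identical tables in two services" smell concrete, here's a minimal sketch (all names here are illustrative, not from any real system) of the pattern being criticized: a downstream service consuming change events off a queue to maintain its own near-identical copy of an upstream table. The moment the upstream schema or business logic shifts, every row in this local copy needs a backfill.

```python
# Downstream consumer keeping a near-duplicate "orders" table in sync
# from queue events. Event shape and table are hypothetical examples.
import sqlite3

def make_store():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    return db

def apply_event(db, event):
    # Upsert so replayed or out-of-order duplicates converge on one row.
    db.execute(
        "INSERT INTO orders (id, status) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET status = excluded.status",
        (event["id"], event["status"]),
    )

db = make_store()
# Events as they might arrive from the queue; note the local table is now
# a shadow of the upstream one, and any upstream change means a backfill.
for ev in [{"id": 1, "status": "placed"}, {"id": 1, "status": "shipped"}]:
    apply_event(db, ev)
print(db.execute("SELECT status FROM orders WHERE id = 1").fetchone()[0])
```

The sync mechanics are easy; the long-term cost is that the duplicated table silently couples the two services' schemas, which is the backfill-and-scaling trap described above.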
I once was told, "we cannot promote you because the work you've done checks the boxes for 2 roles above you and does not check the boxes for your next role."
So people are kind of primed for "makes sense to keep kids from these attention driven platforms"
But I think the average person doesn't understand the implications of the facial/ID scanning.