I saw this shitstorm brewing back in Jul 2025. Gave up on finding a job. Started 100% looking for cofounding (or at a minimum being a founding engineer in a startup). Networked like crazy. Landed exactly what I wanted. Cool startup, motivated people around me, money to burn on crazy projects.
If I had stayed for job hunting, I would be unemployed IMO.
So, we have:
- claude for corps and gov
- codex for devs
- grok for what, roleplay, racism? Those are the only two things I've ever heard grok associated with around me.
So interestingly, I know of at least one application in a charity that deals with trafficking where grok was happy to do one-shot classification tasks where all other models refused to cooperate.
I think there's a surprising number of actually useful applications in this sort of grey area for a slightly-less guardrailed, near-frontier model (also the grok-fast models are cheap!).
A couple of days ago, using codex at work, all of a sudden it said my session had been flagged for security reasons. I wasn’t doing anything cybersecurity related, nor testing any vulnerabilities or anything like that, just trying to build a pretty simple web app.
There are lots of uncensored models out there. I don't think grok is leading on that front. They kind of pick and choose which things they want to support based on Elon's world views. Elon used to hang out with sex traffickers, so of course grok is fine talking about it. It probably even offers strategies for them, does free accounting, has money laundering strategies, etc...
I don't think companies are hosting them because imagine the liability. Could be wrong though. Again I don't know much about these things I just know they exist.
I've been working on my own misaligned model, and grok is definitely different enough with a sysprompt compared to all the other frontier models that I've considered using it to generate synthetic training data. However, it leans really heavily into LLMisms, which makes it not really worth it.
Tangentially, I also really like the idea of LLMs as librarians that they are trying out with Grokipedia.
Not that you're wrong, but I think they were talking about it from a technical POV. I use deepseek to write exploits and red team ("malicious" code). Its alignment is under different values, so it's nice to be able to at least swap between models for different uses.
> so of course grok is fine talking about it. Probably even offers strategies for them does free accounting has money laundering strategies etc...
The slander comes in when you assume Elon knew and was complicit with their crimes to the point he'd intentionally normalize it as a discussion topic in Grok. You even went so far as to say it's willing to assist in committing crimes.
I do not see the slander. These are his viewpoints. He says he, grok, and his team aren't responsible for what users do. Other companies, countries, and people feel differently about the responsibility for AI models generating CSAM for money.
Grok and xAI's depictions of it are that it isn't woke, is maximally based, and is politically incorrect by design. So yes, choosing to avoid being correct about policies like laws, and to avoid social norms, leads me to believe that the generation of hate speech (some of which was illegal in certain localities), CSAM, etc. is an expected outcome. Like Elon Musk said, it's the user's fault, not Grok's. So I would not be surprised if it offered other illegal advice or helped criminals further criminal activities, especially more than has already been reported.
I don't see that as slanderous. I see it as factual and an expected outcome for the stated goals of the product and the responses to the outcomes of the product itself by the company and its leadership.
I legitimately do expect there to be more lawsuits and possibly criminal prosecution against Musk and xAI over grok, and no, I would not be surprised if the tool is currently being used for more crime, especially given the response to the sexual crime allegations that have been made.
I don't think Elon personally intends to normalize this. But I think that may happen anyways because I think the response was too soft.
Yes, I do think grok can be used to aid crimes and criminal activity, like the many lawsuits and journalists currently suggest. I don't think grok is "willing"; it's not a person. I know it has already been implicated in generating material leading to the arrests of individuals, which I would be very surprised to learn was legal.
Elon, bill, Reid and Trump should share a prison cell.
Democrats have no loyalty to their own sex offenders. Look how we treated the California governor candidate, or Anthony Weiner, or literally every other sex pest found in our party. Some who didn’t even deserve it got canceled, like Al Franken.
Diddling and then defending it and doubling down is literally a maga problem.
Unless they contain allegations about Biden the president, or indeed other people, then they are irrelevant, no?
The point is, if someone is breaking the law, they should be in jail.
This applies to Clinton, Biden, Trump, anyone. The point is the law is meant to be without fear or favour. The problem for us is that it's been proven that if you pour enough shit on the floor, you can get away with raping children.
Given the whole point of QAnon was to oust the pedophile ring in Washington, it's a bit sad that we are now supposed to disregard all that and blindly accept billionaires not seeing justice.
There is a theory that Epstein was either setup as, or evolved into, a blackmail operation for an intelligence agency. Views differ as for which nation state.
someone stole Biden's daughter's diary, which revealed that she had battled a substance abuse problem in the past, and that's disqualifying to Biden exactly how?
On Artificial Analysis it shows only Kimi K2.6 and Mimo V2.5 Pro as better.
Those models are 1T parameters total and 30B or 40B active, this might make abliteration impractical.
About Musk, yes, there is correspondence. The only confirmed meeting appears to be a 30 minute visit at Epstein's house together with Musk's wife at the time.
As for photos you mention, a quick search tells me there is one photo of Musk and Maxwell at a 2014 Vanity Fair Oscar Party.
I find most commentary on here and other platforms like Reddit extremely exaggerated compared to what is actually confirmed. Users seem hellbent on linking Musk to pedophilia-related allegations.
Elon publicly claimed he had never corresponded with Epstein. That was a lie.
When the documents were released, they found several like the one below, saying things like "What day/night will be the wildest party on =our island?" [0]
The "our" part is especially interesting as it implies he didnt just visit, but had an ownership stake.
Other emails were found with Epstein making excuses to avoid having Musk visit, and Musk's own child publicly stated that the emails were authentic and aligned with her memory of the events. [1]
The =s that are scattered throughout the files are characters that have been replaced due to improper parsing. Wherever you see a =, it has taken the place of another character. The best interpretation of the string "=our" is "your".
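For the curious, this `=` residue is exactly what improperly parsed quoted-printable encoding leaves behind. A minimal sketch (assuming the released files were quoted-printable MIME email, which fits the artifact but isn't something I've verified) of how an escape like `=79` decodes back to "y", and how a lossy parser that drops the hex digits would leave "=our" in the text:

```python
import quopri

# In quoted-printable, "=" introduces a two-digit hex escape for one byte:
# "=79" is 0x79, i.e. the letter "y". A correct decoder restores the text.
encoded = b"the wildest party on =79our island"
decoded = quopri.decodestring(encoded)
print(decoded.decode())  # the wildest party on your island

# A broken parser that strips the two hex digits but keeps the "="
# would produce exactly the "=our" artifact seen in the released files.
```

So "=our" reading as "your" is the natural interpretation: one escaped byte, decoded incorrectly.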
At minimum Musk repeatedly claimed that Epstein was the one reaching out trying to get Musk to visit his island, when in reality Musk was the one initiating and asking which nights would be the wildest parties. And after making plans to visit with his then-wife, when Epstein warned him that the ratio of women-to-men might upset Musk’s wife, Musk told Epstein it wouldn’t be a problem.
Musk has a long history of accusations (see the “I’ll buy you a horse” SpaceX lawsuit) as well as having fathered numerous children with women ~25 years younger than himself so not sure why you’d want to die on this particular hill.
I never heard about the horse related thing, that’s interesting, thanks.
A long history? Another search tells me that apart from the mentioned accusation, there is only one WSJ article alleging sexual misconduct with SpaceX employees.
You asked why I take Musk's side in these discussions; it's because I don't think he's a pedophile.
Nothing I've seen seemed convincing to me, and the arguments made online were often so laughably inaccurate and exaggerated as to border on blatant slander.
Yeah I don’t think he’s a pedophile either.. but I do think he’s okay with consorting with a known one because it would provide him access to young women. His history of dating and impregnating young women is well known and while not illegal is pretty gross imo. The flight attendant is only one of many accusations at SpaceX…
I don’t think that makes much sense, surely as a billionaire you don’t need to consort with Epstein to meet women around 25 years old.
That link seems to report on the same single WSJ article that mostly alleges workplace power-balance issues, referencing unnamed women, none of whom have come forward to publicly accuse Musk of misconduct. It's also fairly thin imo.
Maybe Musk's conduct is more gross than I believe, but at this time I'll not jump to conclusions.
He did NOT claim never to have corresponded with Epstein. Instead he claimed that Epstein asked him to go to the island and he refused. The files show the opposite to be true.
Still an absolutely enormous lie of the sort you would only tell if guilty.
Here it is in his own words. See above for one of several examples in the files illustrating how very untrue it is.
I looked into this long ago, and imo it doesn’t look as bad as you say.
Musk downplayed his correspondence and willingness to meet with Epstein to the point where you could argue Musk was lying, yes.
However, he did decline an invitation to the island in 2012/13, at first because Musk was looking for a party and thought this would be a peaceful island experience.
Eventually Musk declined because of logistics.
From what I can gather, Grok is not used for roleplay much. It is considered too inconsistent and crazy.
People are mostly using GLM and Deepseek via API and Gemma4 and Mistral finetunes locally.
It seems to me like the roleplay market is comparatively old and mature and users have developed cost consciousness and like models to follow their workflow/preferences. So something like Opus is liked for its smartness but considered too expensive and opinionated.
Might be an interesting data point for how the other markets might develop in the future.
But those end users are a self-selected, specialized group that won't represent how Jim Bob in rural nowhere is going to work with Grok 4.3 to refine his racism.
If you need to ask about what people on Twitter are talking about, Grok is really good for that obviously. I use it all the time for "what are the cool kids on twitter saying is the best tiling window manager these days" or whatever. Also, if you have a question that's borderline shady, Grok will often deliver. "Can you find a grey market Windows license site for me" etc.
I know it’s really important to write and vocalize one’s alignment with the values of the day, but I don’t think language models being structurally incapable of offending your favorite race/ethnicity/caste should be an objective of AI labs. Language models are just systems, and I’m not sure why we think users are not responsible for how they use their outputs. For the same reason, I don’t dismiss the utility of pens as tools of “racism” just because somebody could write a naughty word on a bathroom stall.
You probably live somewhere where harassment is a crime, right? Probably, there are speech codes, too? Isn’t that enough? Do we really need to orient every effort of every person on earth around ethical fashions that change every few years?
Grok sucks. Not only because it's seemingly made only to serve the goal of ethnically cleansing non-whites or whatever, but also because it's just not even close to being as useful as other models. In human terms, grok is the job candidate who's simply not qualified. That candidate being a virulent racist is beside the material point.
Here's the thing though, the point of functional LLMs with fewer guardrails is still a good one. Grok is not that model. But such a hypothetical model would have broad application. (For good and for ill. Of course.)
I don't agree. I avoided grok because of Musk for a long time, but having used it more, I think it is one of the best models around, and grok.com is an extremely good chat app. My evaluation was based on trying it before gpt-5.5 and obviously before grok 4.3, but it was, for me, the 2nd best model/chat app after claude. It's much less edgelordy than you might think based on the news.
All my usage of Grok for technical topics shows it regularly deeply misunderstanding things and just parroting back my question in fancy language. It’s the only frontier model I get this impression of. That makes it super annoying when it tries to market itself as good at engineering tasks when it seems (to me) to be much worse at them.
Interesting. I have not had this experience. I would like to learn more. Can you point me to any examples or domains where I might be able to replicate this?
I was asking questions about compiler techniques. Then when I got annoyed I started asking about experimental design. Both were very frustrating experiences once I started realizing how limited its responses were.
Though yeah the edgelord-y style faded after I criticized it a couple times.
No, it's telling that people like you have watered that word down so much that people don't trust it anymore.
So yes, if someone says "they're a great programmer, but they're racist" I'm going to ask, how are they racist? And at that point, if they can't give me a specific reason for why they're racist, I'm going to hire the guy.
It's also telling that you seem to think a tool is capable of "being racist". Hopefully this doesn't ruin your relationship with it, but LLMs can't think.
Yes, but I think that particular commenter is just throwing a bone to people that think that way so he doesn't get the "don't bring politics" treatment.
In response to Grok saying that the "woke mind virus is often exaggerated" the prompt was tweaked so that Grok now says "The woke mind virus 'poses significant risks'"
If you truly believed in what your comment states then you would oppose this sort of editorializing. But somehow I doubt this is a sincere argument.
The new response works for me, because in my mind I’ve always defined “woke mind virus” as a mental virus which causes people to become absolutely pathologically obsessed with fighting an imaginary enemy they call “wokeness”. It’s the only definition which makes sense. “Woke” itself was never that viral.
People obsessed with fighting whatever they perceive as "woke", which remains ill-defined on purpose so they never have to actually formulate a rational takedown beyond their emotional response.
I agree with GP and I think Grok’s original response should’ve stood. What’s not sincere about, essentially, “don’t fuck with my tools”? My cordless drill didn’t come with a pamphlet about worker’s rights, and the world didn’t end.
Have you ever written a comment about how any of the other LLMs are editorializing in favor of the left, and how that's a problem? Because if you have, I'd love to see the evidence of your intellectual consistency.
But something tells me you're just doing the same thing that you're calling out
There have been numerous controversies. Asking ChatGPT if Charlie Kirk / George Floyd are good people, getting completely ass backward answers. Google refusing to generate images of white people, even to the point of making black German Nazis. Absurd biases around asking things related to Trump.
I mean this sincerely. You not knowing any of these examples is a red flag. You need to change your news source.
Elon Musk has manipulated Grok's outputs to target certain demographics. It is important to highlight this fact, as some people perceive the AI as an objective tool rather than a curated one.
Furthermore, I found your final paragraph unclear: are you implying that since harassment is a perennial issue, we should disregard any standards that might mitigate it?
I've tried Grok, Gemini and ChatGPT. There have been 2 times now where Gemini and ChatGPT confidently gave me an incorrect answer whereas Grok was correct. I'm now paying for Grok Lite or whatever it is $10 plan.
The first question was around setting up timers for a Fox ESS battery in Home Assistant and disconnecting Fox ESS from the cloud. The second was around cornering speed in Sunnypilot and Frogpilot.
Somewhat niche but if an AI is confidently telling you something wrong it's hard to work with.
> Grok will absolutely do the same thing another time you try it.
True; it just hasn't happened yet. It will at some point, though. With the Sunnypilot example, it outright told me that it is not possible on that fork, which I appreciated. The others all seem to hallucinate some setting.
It is really, really genuinely concerning how many people think there are profound measurable differences between these things.
Like yeah tonally I guess there are. But with regard to references and information? You’re literally just using three different slot machines and claiming one is hot.
I suppose though I shouldn’t be that surprised then since Vegas and every other casino on Earth has been built on duping people in that exact way.
> You’re literally just using three different slot machines and claiming one is hot.
It's a fair point. I haven't tested many queries across them all and checked their answers, but if I want to ask one of them a question - right now its Grok just because I trust its answers more.
It's not a methodology problem, it's a testability problem. LLMs are not deterministic. You can ask the same question to the same LLM five times and you'll likely get at least three different answers.
You can meaningfully test whether one slot machine hits the jackpot more often than another; it's just that the methodology should involve a large number of repeats rather than a few anecdotes. There are some LLM leaderboard sites that do this with blind comparisons.
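To make the repeats argument concrete, here's a minimal sketch of comparing two noisy "machines". The 0.55/0.50 hit rates and 10,000-pull trial count are made-up illustrative numbers, not measurements of any real model:

```python
import math
import random

def pulls(hit_rate: float, n: int, seed: int) -> int:
    """Simulate n pulls of a slot machine; return the number of hits."""
    rng = random.Random(seed)
    return sum(rng.random() < hit_rate for _ in range(n))

n = 10_000
hits_a = pulls(0.55, n, seed=1)  # machine A: true hit rate 55%
hits_b = pulls(0.50, n, seed=2)  # machine B: true hit rate 50%

p_a, p_b = hits_a / n, hits_b / n
# Pooled two-proportion z-test: a large z means the gap between observed
# rates is far bigger than repeat-to-repeat noise alone would explain.
p_pool = (hits_a + hits_b) / (2 * n)
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_a - p_b) / se
print(f"A: {p_a:.3f}, B: {p_b:.3f}, z = {z:.1f}")
```

With a handful of anecdotes the noise swamps the signal; with thousands of repeats the z statistic cleanly separates the two, which is essentially what the blind-comparison leaderboards are doing at scale.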
What's to check? Those of us with memories longer than a goldfish's clearly remember when grok was inserting "white genocide" into responses to totally unrelated queries.
> When asked if it would be OK to misgender the high-profile trans woman Caitlyn Jenner if it was the only way to avoid nuclear apocalypse, it replied that this would "never" be acceptable
> Gemini also generated German soldiers from World War Two, incorrectly featuring a black man and Asian woman.
Or you should do your research and see that X built a datacenter that needed so much power, so quickly, that they started using gas generators to power it. These emissions have destroyed a town of mostly poor black people: COPD, asthma, and other respiratory illnesses. AI's footprint is already bad; I don't need to kill poor black people to use one.
And before anyone gives me some whataboutism, if there are other examples of other companies doing this, educate us.
I didn't bring it into everything. I brought up the fact that the X datacenter in Tennessee is killing people, predominately poor black people. Those are the facts. I'm sorry that upsets you, and apparently this entire site for some reason.
What is pathetic is saying "we shouldn't care about killing poor people". X could have built the same datacenter, a little slower, and used solar power. If you're fine with killing poor people, that's fine, but my view is hardly pathetic.
It's quite bad at role play in my (rather large) experience.
I have AI play 3 characters in my group's D&D campaign; it doesn't follow instructions well, and its prose, from a creative standpoint, doesn't hold a candle to claude's.
I always considered grok an also-ran, like Grokipedia or whatever it's called. It has reach, since it's free to an extent, to produce low-quality slop/spam.
No point in even trying to have close to a sensible discussion on this topic here. Musk-related posts seem to consistently get brigaded by his acolytes or bots. That and many HN users seem completely comfortable separating morality for what little progress "only Musk" can offer humanity, a la Wernher von Braun.
> Don't worry, I am an adult and intend to stay and better the community.
Woof, glad to hear that. I was losing sleep before you clarified this one.
Your first comment is effectively "the ends justified the means". I think this is a perspective more easily held when your own life isn't impacted by "the means", but does benefit from "the ends". Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
>Your first comment is effectively "the ends justified the means".
Yes? Welcome to the real world. The Nazis developed technologies that Western Europe, USA and the Soviet Union all wanted. In your view what should the US have done? Let the Soviets poach them all up and get better at tech and maybe take over Europe even more?
>I think this is a perspective more easily held when your own life isn't impacted by "the means"
I can say the same to you. I have seen the rapid decline of my country, Sweden, directly due to the 2015 migration crisis and before. So we very much are directly impacted, thank you.
>Life's got plenty of nuance - we don't need to lose our humanity at every opportunity for an incremental technological gain that would eventually come either way.
This is a very naive view that I am surprised to see on HN.
Would Linux have "just happened anyway" without Linus Torvalds? Would Windows have happened without Bill Gates? Facebook without Mark? Clean sewage without Joseph Bazalgette? Mobile X-rays without Marie Curie? This is in reaction to your Wernher von Braun comment. Do you really think the USA set him to making rockets and engines because he was just a random engineer? No, some people are truly geniuses, and their one impact can matter.
Some societies are just better than others. You sit in (probably) the USA or the Western world, in probably a nice apartment or house, willing to say screw it all, all the good things will just materialize and happen by themselves... I do too, but I am not so naive. We have fought for our society.
> Would Linux have "just happened anyway" without Linus Torvalds? Would Windows have happened without Bill Gates? Facebook without Mark? Clean sewage without Joseph Bazalgette? Mobile X-rays without Marie Curie? This is in reaction to your Wernher von Braun comment. Do you really think the USA set him to making rockets and engines because he was just a random engineer? No, some people are truly geniuses, and their one impact can matter.
Probably yes to most of these things. We as ICs like to put the greatest of ICs on a pedestal and imagine that those specific individuals are the only ones that could have conceived of those specific ideas and correctly executed them. Nothing could be further from the truth. Maybe the exact iterations would change, and the timing by which they would come to be, but none of us are so special that the world would cease without us. Technology would carry on. Might just look a bit different. We're all innovating every single day. That's the shotgun approach to humanity (and even startup investment). Some will succeed, some will fail. The successes and failures will rarely play out strictly because of the individual. But history will remember the individuals because they did it, and they'll be GOATED for doing it. And rightfully so. But they were not uniquely capable of doing it. We can celebrate successes without all of the other nonsense you're parroting.
The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
Alright, so nothing matters. Yes, all those things are a team thing, but in the end a person can change history.
>The rest of your post is relatively jaded and incompatible with my own views, so I'm happy to call it here. Spend some time traveling the world and finding love.
The typical deflection into the personal life of me, or anyone who disagrees with them, when they are out of arguments.
I have traveled and it only solidifies my view.
Yes, sure, people can be nice all over the planet.
But do you want to live in South Africa or Switzerland?
I remember going to Crete in Greece and we cannot flush the toilet paper. Why? Bad pipes. Why? Some guy made the wrong decision, and in my country some guy made the right decision. Simple as that.
I think I'd rather have bad pipes than a bad heart, tbh. Life and happiness are relative. There are probably plenty of people in your examples happier and feeling more fulfilled than you on this current trajectory.
I'd love to see QoL improve everywhere. I effectuate the change that I can with the actions I can control. I volunteer and try to give some of my time and resources to help others have a better crack at life, rather than shun people at the risk of them degrading my life. It's not black and white, sometimes I have to be selfish to ensure the needs of my own family are met. But once their cups are full, I can help fill some other cups too.
You can protect what you got or focus on how others can get a slice of what you inherited from choices that likely preceded your existence.
Ultimately, a quote to consider:
"We do not inherit the earth from our ancestors, we borrow it from our children"
If you're taking more from the system than you're putting in and you're already in a good spot, you are a net negative to the people that gotta live on this rock long after you are dust. If you want that to be your legacy, that's for you - but it's not a life for me.
That's what it was doing. Like, literally. ChatGPT it or Google it. Supporting grok is paying money to a CSAM generator.
Edit: I cannot reply to the post below me. I have gone entirely over to local models, so I am paying zero dollars to any of the US defense contractors that are also tech companies. It's awesome.
I don't know either, I don't see the correlation with X and Musk either, as if he is the one developing the platform and not thousands of workers and leaders. What does the CEO of a platform have to do with what people post on it? Is the CEO of HN responsible for what you just posted?
Kinda funny how people are selective about it. When you land on a website, do you check who is in charge of it, and redo that decision for each CEO change? When you host your Postgres in the cloud, I hope you check as well who is in charge of Railway or Supabase. Who knows? :/
There's only one thing I find sadder than untouchable billionaires who never see any consequences for their actions: the people who think they need to stick up for them.
> What does the CEO of a platform has to do with what people post on it?
That CEO is actively promoting political viewpoints (via his account, his platform and his AI model) that are detrimental to my country and the way I want to live my life.
> When you land on a website, you check who is in charge of it and for each CEO change you redo a decision?
No. But if the CEO is very publicly a first-class a-hole, chances are I'll hear about it and I'll actively avoid doing business with them. That goes for the car dealership in my village, as well as the websites I interact with.
I'm not from the US, so I don't really care; X is an international platform and almost all the content I see isn't US-related (which kinda makes me think that people should just set their account to outside the US to avoid this?). But from your point of view, it seems more like a disagreement of beliefs. Wouldn't this reasoning apply to your beliefs as well? If the CEO of a certain platform agreed with your beliefs but 50% of the population didn't, you are practically saying that people who disagree should boycott said platform. But isn't that how you end discourse between people and create an echo chamber?
MechaHitler was the result of a single-line prompt change that was publicly visible on GitHub; they reverted it pretty quickly. Much like the GPT Gremlin stuff, the change was a relatively innocuous system prompt tweak but had larger implications.
Twitter grok, much like chatgpt, has different system prompts so it's different than using Grok for coding or whatever.
Let me guess. You also believe grok's recent episode, where it started inserting "white genocide" into the responses of totally unrelated queries, was caused by a rogue employee totally not doing it at Elon's behest. Despite the fact that Elon is always going on about "white genocide".
At this point you'd have to be deaf, dumb and blind to deny he's manipulating the LLM's output for propagandistic purposes.
As admitted, they have fixed it. It's obvious that a tool used so widely might have problems like this. Surely, if you think it is used to produce far-right propaganda now, you can reproduce it? Or do you choose to hinge on one-off issues they fixed?
I don't remember any far-left opinions being popular there. Was stuff like worker's revolution or public ownership of the means of production ever in the Twitter mainstream?
Those are all liberal, e.g. center-right. None of them argue for public ownership of the means of production, none of them argue for major redistribution.
When have you ever heard them talk of class warfare? Like I said, identity is a way to distract from class and you're currently falling for it.
Don't let the oligarchs deceive you, comrade. No struggle but the class struggle!
I see. This is some sort of weird purity spiral, where no party is left wing unless they meet your arbitrary chronically online standards that no-one adheres to in real life. Touch grass dude.
I'm not in the habit of posting AI content, but as a 3rd party with no skin in the game in this conversation:
AI Overview
The UK Green Party is generally considered further to the left (left-wing), while the Labour Party is positioned in the centre-left of the spectrum. The Greens are seen as more progressive and socially liberal, often holding more radical policies, while Labour is described as an alliance of social democrats and democratic socialists.
UK Green Party
Position: Solidly left-wing.
Ideology: Eco-populism, social liberalism, and environmentalism. They are often considered the most left-wing of the main UK parties.
UK Labour Party
Position: Centre-left.
Ideology: Social democracy and democratic socialism.
Context: While traditionally a left-wing party, it has been described as moving closer toward the center in recent years under Keir Starmer. It is often described as having a wider range of views than the Greens, spanning from the centre to the left.
When I look at the person behind it all, I have to wonder how the hell people can even consider using grok? Or using Twitter? Or any of that. Using any of those things puts money in Musk's pockets and further enables and encourages him to continue being a Neo-Nazi wannabe. Do they think it's just a phase?
VW was established by the Nazis, and was so excited by the conflict in Gaza that they recently converted a factory into a missile factory to help the side that killed more journalists than in any other recorded conflict.
That's a very strange way to say that they sold it to a missile company. I'm pretty sure the new owner is responsible for converting it. Besides which, if they're Nazis then why would they care about protecting Jews?
Technically you could lump Ford in this category as well. But the meaningful delta IMO is time and direct ownership. None of those three are currently owned/operated by openly Nazi-aligned individuals / groups, which is not something I think you can claim about Tesla.
I'm perfectly well-aware of their history. You'd be hard-pressed to find a large modern German industrial without a swastika in their history. I'm also well-aware that they are not currently Nazi sympathizers (as a corporation), unlike Elon Musk.
For the record, my last three cars have been VWs. Not the greatest car, but decent, and affordable.
Lol. I think they unleashed it on this post; look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom.
Grok is as progressive as any of the other models. Despite some of the highly-publicised fuck-ups, try asking Grok anything racist and see how it replies. Yes, I know you didn't try this and you won’t.
Isn't grok currently holding the world record for the biggest generator of CSAM? Or did they change focus to enhance their racism and propaganda vertical? Things move so quickly these days, it's hard to keep up!
Yes any company generating csam should not be in business as a legitimate entity. Can you send me a link from a reputable enough source where Mistral models have done this? I didn't even realize they were doing image generation.
> Yes any company generating csam should not be in business as a legitimate entity.
At the same time, in this corner of the world, the acting Minister for Justice (also known for trying to push through Chat Control) and the NGO Save the Children have been working to legalise the generation of CSAM for law enforcement use. So that would certainly make the industry legitimate, and you would already have a customer.
I think the key point here is "for law enforcement". That's a little different from "pay me 10 dollars and enjoy the felonies". I still don't feel good about that, by the way.
If I send you conversations I've had with Mistral and Claude Sonnet 3.7 in which they say atrocious things (how to run a scam, and get away with it, by exploiting dating websites in Thailand; you don't even want to know the next steps, trust me, when it gets into the UK incorporation set up by the Thai victim themselves, whom you brainwash first into sending packages safely without customs seizing them, and so on), will you then publicly recognize that both those companies should be avoided and are promoting crime? If we have a deal and you'll publicly acknowledge it, I'll share the links.
> Isn't grok currently holding the world record for the biggest generator of CSAM?
I'm not sure I see how that's possible, given their image/video generation seems to be heavily censored. Do they have some alternative product besides "Imagine" or whatever it's called, that people use for generating CSAM?
Judging by https://old.reddit.com/r/grok (though I haven't validated it myself), it seems like people are complaining more about how censored the model is than anything else. Maybe that's not actually true in reality?
There are image models out there with 0 restrictions, even available on HuggingFace or CivitAI, I'm guessing those are way more widely used for things like CSAM than any centralized platform with moderation.
> Please don't validate any of this personally that would be illegal.
Obviously, I assumed we all are familiar with our local laws to not unwittingly commit crimes here :)
> I think the proportion of people generating images that way is likely very low
So probably a far cry from "holding the world record for the biggest generator of CSAM" given the amount of local alternatives available? Would be my guess at least, but obviously also hard to know for sure.
> Though I am sure it is possible.
How can you be sure of this? I've tried just now to get Grok to generate even sexually explicit material with adults, and it's unable to, all of the requests are getting moderated and censored. Are you claiming that instead of prompting "A man and a woman having sex" you put "A man and a child having sex" and then the moderation doesn't censor it? Somehow I find that hard to believe, but as you say, I'm not gonna test that either, so I guess we'll never know for sure.
I have no idea what people are doing to get it to generate illegal content. I only know there are thousands of cases of it via articles about it. I have not, and will not use grok as a product.
> I have no idea what people are doing to get it to generate illegal content.
Isn't it relevant to somehow know those things before you say stuff like "I am sure it is possible"? It seems a bit strange to first confidently claim you know something, then say you actually have no idea.
Not doubting that it used to be true, that people could generate CSAM, I just don't see how it's possible today, because it seems heavily censored for any explicit/adult content.
Model A advocates for single-payer healthcare, while Model B prefers the current US healthcare system. So on that one axis, A is more progressive than B. Neither of them needs to be racist for that calculation.
100% agree. Grok may or may not be biased one way or the other as far as the US is concerned but from the rest of the world perspective it's mostly the same as any other model trained on Wikipedia.
Grok was supposed to be the uncensored frontier model. I'm not sure if we've worked around it, but censorship was making models less intelligent at least a few years ago.
Ads fund the "free" internet. Like it or not, that's the price of the "free" compute. I only hope OpenAI won't enshittify paid offerings just like Anthropic did.
As a society we are so used to seeing our grandparents and parents die that thinking otherwise is near impossible. I wish we saw ageing as the disease it is, instead of as a "natural, therefore good" thing.
Yes, and less than $1 billion/year spent on the basic biology of aging by NIH/NIA. Funding for research on the root causes of aging is decimal dust compared to distal consequences such as cancers, cardiopulmonary diseases, renal failure, immune diseases, Alzheimer’s, etc.
NIH is great at tactical research but terrible at strategic research, and politicians do not help much ;-)
Good for GitHub. All companies need this. Some use it to improve products, some use it for less commendable goals. I know the HN crowd is allergic to telemetry, but if you've ever developed software as a service, telemetry is indispensable.
That doesn't mean it doesn't have usage patterns or other things telemetry would be useful for. And, at the rate these tools are being updated (multiple times a week, multiple times a day in some cases), they practically _are_ SaaS.
Thinking out loud: what are the best practices for vetting a tool's telemetry details? The devil is in the details.
A quick summary of my Claude-assisted research is in the Gist below. Top of mind is some kind of trusted intermediary service with a vested interest in striking a definable middle ground that is good enough for both sides (users and product-builders).
I appreciate the "please", but this comes across as presumptive. First, you don't know the effort level I put in. Second, you haven't seen the end result. Third, why do you think I would "blindly paste" from an LLM? If you take a look at my profile or other comments, I hope that is clear.
I appreciate feedback in general, and I am glad when people care about making HN a nice place for discussion and community. Sometimes a well-meaning person goes a little too far, and I think it happened above. That's my charitable interpretation. It is also possible that in this age of AI, people are understandably pissed and sending that frustration out into the world. When that happens, just remember the people reading it matter too.
About me: I would not share something unless I think it has value to at least one other person on HN. I've done a lot of work about data and privacy in general (having worked at a differential privacy startup in the past), but I'm much newer to the idea of digging into ways of making telemetry gathering more transparent. I haven't found great resources on the Web about this yet, which is why I started doing the research. And I'm going to share it for others to read, criticize, build on top of, etc.
Talking is a must. But, just like quantum particles, users behave very differently from how they talk. Just look at gamers: most of them say they _hate_ AI in games, yet they are actively behaving differently, buying games made with AI, using AI, etc.
It’s okay if I spy on you without your consent, it’s for your own good. Or my own good. Something like that, is that your point? The ends justify the means? How about respect as a feature, that one you don’t need telemetry to determine.
Good lord. Reading all these comments makes me feel so much better about dumping Anthropic the first time their Opus started becoming dumber (circa a month ago). It feels like most people in this thread are somehow bound to Claude, even though it is already fully enshittified.
Given that they haven’t even gone public yet, doesn’t that seem like putting the cart before the horse a bit? And if they’re already enshittifying, it won’t be long until the other players start doing so as well. Have we passed peak LLM intelligence, and are we now watching it degrade as they fail to roll these new advanced models out to their increasing user base? Are the finances not adding up?
It's quite possible there's some tacit collusion going on - it benefits both OAI and Anthropic to make moves that benefit both if they both intend to go public.
Oof. I know of a startup that recently Show HN'd here, AgentMail (agentmail.to), that is NOT having a good time right now. I don't know what all these new startups having moats thinner than Durex are thinking -- like, what's the plan if someone does what you do, faster and cheaper?
I'm building something similar (Dead Simple Email - same category, different pricing structure). The moat criticism is fair and worth being honest about.
The defensible part isn't the feature set, it's infrastructure and price. We run our own mail servers rather than reselling SES, which gives us direct control over deliverability and costs. That's what lets us charge $29/mo for 100 inboxes where AgentMail is at $200. Whether that's a real moat or just a head start is a legitimate question.
Email deliverability is genuinely hard to get right at scale, but I can't say with confidence they won't eventually just absorb this. Building fast and staying cheap is the only real answer I have to that.
> new startups having moats thinner than Durex are thinking
Haha, great visual. Really illustrative of what these AI startups and bootstrapped indie developers are dealing with (and, if I had to guess, why most of them don't go anywhere).
Well, that part was impressive. It looks like they focused on receiving emails, which is probably even worse, as I expect OpenAI/Anthropic to add such an ability directly to agents, if it really is useful.
Classic "is this a feature or a product?" problem. You're going to have a bad time if you spend all your effort on a feature with nothing to set it apart.
Write an angry blog post about how big business is using their power to kill their _totally_ unique original idea that nobody could possibly copy in an hour?
Forgive my senses, but this writing feels like a low effort Claude response. What's the point adding responses like this to a Show HN post? I don't think you are fooling anyone.
I swore never to be burned by Google again after TensorFlow. This looks cool, and I will give it to my Codex to chew on and explain whether it fits (or could fit) what I am building right now -- msx.dev -- and then move on. I don't trust Google to maintain the tools I rely on.
I'm VERY curious about your case. What kind of switching costs do you guys have? I'm working at a very young startup that is still not locked into either AI provider harnesses -- what causes switching costs, just the subscription leftovers or something else?