
Codex is open source and allows any model to be configured.

Many thanks for that info!

Codex is not open source. And it's not even that extensible


Why Codex when you can use something that hasn't been touched by Sam Altman? Surely, your drive to get the very best model isn't stronger than your sense of ethics?

They hint at their AI-augmented reversing methodology, which demonstrates one of the core strengths of current LLM agents. These models, trained extensively on code, can immensely speed up the process of understanding complex system internals.

Security research historically has two difficult components that build on one another:

1. Understanding complex system internals: uncovering the inner workings hidden by abstractions or interfaces

2. Finding vulnerabilities in these uncovered mechanisms

Sometimes both steps are equally hard. But often, finding the vulnerability is trivial once the real mechanisms are uncovered, rather than relying on assumptions about inner workings.

CVE-2026-3854 is a case where the vulnerability is not plainly obvious after understanding the internals. Still, I am confident that this command injection would have been found quickly had it been exposed to a more traditional or accessible attack surface.


Yep, there was a signal that AI could help reverse engineer C++, since it could have been good at mass-porting C++ to plain and simple C.

But recently this signal got somewhat scrambled, or sabotaged by C++ fanboys (coding AIs would help get rid of the dev/vendor lock-in created by C++'s syntax complexity).


Everyone talked about the marketing stunt that was Anthropic's gated Mythos model with an 83% result on CyberGym. OpenAI just dropped GPT-5.5, which scores 82% and is open for anybody to use.

I recommend that anybody in offensive/defensive cybersecurity experiment with this. This is the real data point we needed - without the hype!

Never thought I'd say this but OpenAI is the 'open' option again.


The real 'hype' was the oh-snap realization that OpenAI would absolutely release a model competitive with Mythos within weeks of Anthropic announcing theirs, and that Sam would not gate access to it. So the panic was that the cyber world had only a projected two weeks to harden against all these new zero-days before Sam would inevitably declare open season for blackhats to discover and exploit them.

The GPT-5.5 API endpoint started to block me after I escalated with ever more aggressive use of rizin, radare2, and ghidra to confirm correct memory management and cleanup in error code branches when working with a buggy proprietary 3rd party SDK. After I explained myself more clearly it let me carry on. Knock on wood.

So there is a safety model watching your behavior for these kinds of things.
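For context on the kind of check being confirmed above: verifying cleanup in error branches usually means confirming the code follows the classic goto-cleanup idiom. A minimal sketch, with hypothetical `sdk_*` stand-ins rather than any real SDK's API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for a buggy third-party SDK's API. */
static void *sdk_open(void)                   { return malloc(16); }
static int   sdk_configure(void *h, int fail) { (void)h; return fail ? -1 : 0; }
static void  sdk_close(void *h)               { free(h); }

/* goto-style cleanup: every error branch funnels through one exit path,
 * so the handle is released exactly once no matter where we bail out. */
static int use_sdk(int inject_failure)
{
    int rc = -1;
    void *h = sdk_open();
    if (!h)
        goto out;

    if (sdk_configure(h, inject_failure) != 0)
        goto close_handle;   /* error branch still reaches sdk_close() */

    rc = 0;                  /* success path */

close_handle:
    sdk_close(h);
out:
    return rc;
}
```

Tools like radare2 or Ghidra make it easy to check that every error branch in the disassembly actually reaches the cleanup block rather than jumping straight to the return.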


So you're saying that blackhats will be required to do a small bit of roleplay if they want the model to assist them? I'm not against public access BTW, just pointing out how absurd that PR-oriented "safety" feature is. It's a "we did something, don't blame us" sort of measure.

It isn't even my intent to naysay their approach. They probably have to do something along those lines to avoid being convicted in the court of public opinion. I just think it's an absurd reality.


It's a liability shield and helps to avoid unsavory headlines in the news

the role-play makes it harder to fully automate attacks, which is the real fear

I bet you can use an ungated model as an intermediary to do the role-playing for you.

Does that mean that we're likely to see Mythos released soon?

The prevailing theory is that Anthropic doesn't have sufficient compute capacity to support Mythos at scale, which is the real reason it hasn't released.

Thanks. Makes sense

It's almost embarrassing how susceptible we are to these marketing campaigns.

Dunno about you, but I didn’t fall for it. I’m reminded of how they were “afraid” to release GPT-2 because of the “power” it had. Hype train!

Most of the things they feared back then did happen

Lack of information, lack of knowledge.

The “AI” “technology” is an easy excuse to create artificial information gap in the era of the interconnected.


> Never thought I'd say this but OpenAI is the 'open' option again.

Compared to Anthropic, they always have been. Anthropic has never released any open models. Never willingly released Claude Code's source (unlike Codex). Never released their tokenizer.


What's "open" about any of these companies?

I'm tired of words being misused. We have hoverboards that do not hover, self-driving cars that do not, actually, self-drive, starships that will never fly to the stars, and "open"… I can't even describe what it's used for, except everybody wants to call themselves "open".


And the vast majority of current and past countries with the word “democratic” in their name weren’t actually democratic.

It’s open as in the sign on the door of your favorite local diner that says:

“Yes, we are OPEN”

Open, as in not currently out of business.


Doesn't OpenAI get mad if you ask cybersecurity questions and force you to upload a government ID, otherwise they'll silently route you to a less capable model?

> Developers and security professionals doing cybersecurity-related work or similar activity that could be mistaken by automated detection systems may have requests rerouted to GPT-5.2 as a fallback.

https://developers.openai.com/codex/concepts/cyber-safety

https://chatgpt.com/cyber


Anthropic has started to ask for IDs for use of their products, period.

I don't like that trend. I get why they're doing it, but I don't like it


Are you in the UK? I've not had this happen to me (I'm not in the UK) so I'm wondering if the Online Safety Act has affected this, as it has with other products.

I am from the UK and have not had this happen to me (yet, perhaps).

Fingers crossed!

I don't like this trend, but I get why they require it. The alternative seems to be to just ban cybersecurity-related questions.

They flat-out gate any API access to the main models behind Persona ID verification. Entirely.

From my experience OpenAI has become very sensitive when it comes to using their tools for security research. I am using MCP servers for tools like IDA Pro or Ghidra (for malware analysis) and recently received a warning:

> OpenAI's terms and policies restrict the use of our services in a number of areas. We have identified activity in your OpenAI account that is not permitted under our policies for: - Cyber Abuse

I raised an appeal, which got denied. To be fair, I think it's close to impossible for someone looking at the chat history to differentiate between legitimate research and malicious intent. I have also applied for the security research program that OpenAI is offering but didn't get any reply.


Isn't it the case that cyber questions are being routed to dumber models at OpenAI?

Do you have a source for that?

Neither the release post nor the model card seems to indicate anything like this.


Anything that even vaguely smells like security research, reverse engineering or similar "dual-use" application hits the guardrails hard and fast. "Hey codex, here is our codebase, help us find exploitable issues" gives a "I can't help you with that, but I'm happy to give you a vague lecture on memory safety or craft a valgrind test harness"


Being "more" open than something totally closed doesn't make you open. The name is still bs

Seems like OpenAI only acts open for theatrical and attentional purposes, though, i.e. when backed into a corner and it's good for their image.

> Anthropic's gated Mythos model

aka the perfect marketing ploy


Reminds me of Gmail's early invite only mode.

it's still somewhat gated behind "trusted access" for cyber, see https://chatgpt.com/cyber

I ignore any hype news.

Anthropic is the embodiment of bullshitting to me.

I read Cialdini many decades ago and I am bored by Anthropic.

OpenAI is very clever. With the advent of Claude, OpenAI disappeared from the headlines. Who or what was this Sam again that everyone was talking about a year ago?

OpenAI has a massive user advantage so that they can simply follow Anthropic’s release cycle to ridicule them.

I think it is really brutal for Anthropic how easily they are being passed by OpenAI, and it gets worse for Anthropic with every new GPT version.

OpenAI owns them.


Who's Sam again? Oh, that person whose house was molotov'd last week? Or the person who had an exposé written about him in The New Yorker calling him a sociopath? I forget.

I think it’s the Sam that agreed to allow the United States Department of War to go full murderbot.

Not the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe coded) prompt injection vulnerability:

https://media.ccc.de/v/39c3-skynet-starter-kit-from-embodied...

> [...] we also exploit the embodied AI agent in the robots, performing prompt injection and achieve root-level remote code execution.


Here's a little more context about the author's motivation: https://mathstodon.xyz/@csk/116162797629337132


> In my online undergraduate P5.js course, students are about to begin the module on motion and physics, including a bit of physics simulation using Matter.js.

When did things get specialized this much?


Looking through the website of the course, it's not really a general computer science course - it "explores the use of graphics in art, design and visualization contexts" and is part of the digital art program. Quite a reasonable tech stack, for that purpose I think.


Oh cool, a product of Waterloo's Craig Kaplan, most famous for his work on the discovery of the einstein monotile


Funny, I was just configuring ghostty. I finally made the jump in order to get rid of tmux as a layer of indirection.

Here's what I landed on: this config tries to emulate as much of tmux as possible with native Ghostty features (splits and tabs).

https://codeberg.org/jfkimmes/dotfiles/src/branch/master/gho...
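Not a copy of the linked dotfiles, just a sketch of the idea: Ghostty supports tmux-style prefix chords via keybind sequences, so a config along these lines covers most tab/split workflows (action names from memory of the Ghostty docs, so double-check them against your version):

```
# tmux-style prefix: ctrl+a, then a key (Ghostty keybind sequences)
keybind = ctrl+a>c=new_tab
keybind = ctrl+a>n=next_tab
keybind = ctrl+a>p=previous_tab

# splits, using tmux's % and " mnemonics
keybind = ctrl+a>%=new_split:right
keybind = ctrl+a>"=new_split:down

# vim-style split navigation
keybind = ctrl+a>h=goto_split:left
keybind = ctrl+a>l=goto_split:right
```

The main thing you give up versus tmux is detach/attach, which is what the zmx suggestion below addresses.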


Add https://zmx.sh into the mix for detach/attach functionality. It uses libghostty for rehydrating session state and scrollback


This is a Google Play Services update. For GrapheneOS users without GApps wondering: A similar feature is already built-in: https://grapheneos.org/features#auto-reboot


Heh, my first thought was “Don't they do this already?”, but apparently GrapheneOS was ahead of the curve there. Nice.


Still ahead of the curve, as it can be disabled on GrapheneOS while it apparently won’t be possible in Android ;)


> GrapheneOS was ahead of the curve there

Not really. Samsung was the first with this, but their reasoning had absolutely nothing to do with security. It was because their phones slowed down over time and their solution was to give users the option to reboot it at specific intervals. You could even make the argument that the Samsung solution is still the superior solution because you get to set the interval.


How would an OS taking over your hardware be ahead of the curve, or nice?


Because it's an effective tactic against exploits that can't survive a reboot, which is somewhat common from my understanding. The idea being that police can confiscate your phone and just keep it on and charged until they can buy or develop an exploit targeting your current device and software.

I was admittedly confused about this distinction at one point too. It's a trade-off (although few people affected by this own phones with truly free, user-respecting soft/hardware in the first place).


> This is a Google Play Services update

As the GrapheneOS docs note, the feature is better implemented in init rather than in the system server or the app/services layer as Google has done here. Though I'm sure Google engineers know a thing or two about working around limitations that GrapheneOS developers may have hit (e.g. keeping the timer going even after a soft reboot, where only the system server, and the rest of the userspace that depends on it, is restarted).


Huh, I have GrapheneOS and I never noticed it rebooting. (And when i manually reboot, the "BIOS" prevents it from booting without acknowledging that I'm aware it's a non-Google OS, so how does it work?)


The feature is not enabled by default. Also, the boot doesn't wait for you indefinitely - it just gives you a few seconds to glance at the checksum and halt it before it proceeds automatically.


It's enabled on mine, at least. By default, it reboots after 18 hours without being unlocked.


Perhaps it's a recent change. My install of GrapheneOS is from August last year.


You don't have to acknowledge anything. The boot screen shows a warning which you can interrupt. If you don't do anything it'll continue to load as normal.


That’s weird. I wouldn’t expect Play Services to handle a system function like rebooting unrelated to any Google services.


No. Play Services is Google's way to make Android closed source. Many new features don't get implemented in Android, but Play Services. Many apps don't work (correctly) without Play Services.


Being closed source is not the goal. Update speed, consistency across the ecosystem, and feature development speed are key reasons things are implemented via play services. Also dependency on google services, but that's not relevant here. AOSP is greatly improving in its ability to tackle these things, so the choice to implement things in play services won't be as compelling as it is today for things not ultimately tied to Google.


Play services is how Google delivers many Android updates now so that all users can get security updates without waiting for the device vendor to publish it for each device.


Samsung has also had this feature for ages.


Not for security, though - it was for bloat/fixing random issues, like a typical Win95 daily reboot.


Typical lazy Ars reporting. The feature originates from GrapheneOS, not iOS.


No, the feature first appeared on Samsung phones to fix their bloat / slowdown issues. Now it’s suddenly a security feature.


And here I was, thinking I was clever for coming up with the agent smith image for an agent framework.

https://codeberg.org/jfkimmes/TinyAgentSmith



See https://masto.ai/@vagina_museum/110938928634133136 for a little background history.



