Hacker News: Nition's comments

The other problem I always notice on top of all this is that when you pluck a string, the pluck temporarily adds tension, so the pitch when you first play the note is a little higher than the pitch once it settles down. The louder you play, the stronger the effect.
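For a rough sense of the size of that effect, here's a back-of-the-envelope sketch in Python. The string dimensions and the 2% tension bump are assumed illustrative numbers, not measurements; the point is just that an ideal string's frequency scales with the square root of its tension, so extra tension from a hard pluck makes the note start sharp.

```python
import math

def string_freq(length_m, tension_n, mass_per_m):
    """Fundamental frequency of an ideal string: f = sqrt(T/mu) / (2L)."""
    return math.sqrt(tension_n / mass_per_m) / (2 * length_m)

# Rough guitar-string-style numbers (assumed, not measured).
L, T, mu = 0.648, 70.0, 0.0004
f_rest = string_freq(L, T, mu)          # settled pitch
f_pluck = string_freq(L, T * 1.02, mu)  # hard pluck adding ~2% extra tension

# Since f goes as sqrt(T), a small dT raises pitch by roughly dT/(2T).
cents = 1200 * math.log2(f_pluck / f_rest)
print(f"{f_rest:.1f} Hz -> {f_pluck:.1f} Hz (+{cents:.1f} cents)")
```

With these numbers the pluck starts the note around 17 cents sharp, which is well within what a careful ear (or a tuner) can notice.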

In heavy metal, especially in modern down-tuned genres, this is often used as an artistic choice.

If Microsoft hadn't killed XNA (what MonoGame is based on) a decade ago, they could be packaging it with Copilot right now as the ideal code-first AI-assisted game engine. Easy to use, easy to test, no visual editor where AI will struggle like with Unity/Unreal/Godot.

> AI will struggle like with Unity/Unreal/Godot.

I am automating Unity with headless method invocation of agent authored editor scripts. I don't think "struggle" is the word I'd use to describe how GPT5.4 is currently performing.

I can tell the agent things like "iterate over all scenes. Wrap lightmap baking in a 5 minute timeout. Identify all scenes that exceed baking time. Inspect the scene objects and identify static geometry with poorly configured light map scale relative to their world space extents."


That's fair. I suppose instead of saying "struggle" re the others I should have said "be even more effective" re XNA.

Quite curious about this. Does the agent get its own repo and deliver with commits?

No. Not yet, anyway. I maintain autonomy over source control at the moment. Headless activity is verified in a separate Unity editor instance before I push any commits. I might look into source control tools once I get through perspective and orthographic screenshot tools. Giving the agent a way to see the final composed scene seems much more valuable than SCM automation right now.

If they hadn't killed it, it would have a visual editor by now. Or worse, dominated by Maya integrations.

Microsoft's head died a long time ago. A corpo parasite has taken complete control of the body.

For Lazarus (an IDE with visual components similar to Delphi) I switched to code-first components and did away with the form files. You can probably do this with all of these frameworks.

We’re building an AI agent for Delphi — and a major part is it supporting visual form editing. It works. You can see the form change live in the designer as the AI does its stuff.

It’s not publicly available yet but has an active group of beta testers. https://www.remobjects.com/codebot/delphi.aspx


Right, good that you've got that going; congratulations to you and your team. Design-time editing does make human development much easier, since you can see the working prototype run as you design it. But I'm not sure what the point is when AI is doing it, since the value is in shortening the development cycle, which the AI has no use for.

There's a little throwaway thing in the book (or maybe it was in the prequel) that I always liked, re understanding human tendencies. They're still using Unix time, starting on Jan 1st 1970, but given that their culture is so space-travel-focused, they assume the early humans set it to coincide with man's first trip to the moon.

That's from the prequel, A Deepness in the Sky. (Which is also excellent.)

A Deepness in the Sky probably has the first sci-fi aliens I've read who didn't feel like humans wearing alien suits.

Fantasy sometimes does this better but usually with specific tropes.


If you liked that and you haven't read it yet, give "Dragon's Egg" by Robert L. Forward a read.

I was sure the parent comment was a joke about OpenAI's recent deal with the DoD. But no, there it is: disallowing violence drops from 90.9% of the time to 83.1%.

No, I was just remarking how ridiculous it is to pretend to do violence safely. It's like a fat score for butter.

Sorry, I meant the grandparent comment, by theParadox42.

I was slightly more inclined to think it might be some bored employee somewhere acting in a sort of Robin Hood capacity just because it's unusually accurate and thorough for a test message. I'd expect more like TEST TEST test DFOIUHDFUOHDFOIUHDFROIHDSFOIHDSF LOREM IPSUM 999999.

Sometimes enthusiastic or particularly bored developers do put in the effort to write things out like a real message though.


From the screenshot it looks like he actually received "only" around 2TB of free mobile data.

You may be right, but whatever happens with OpenAI and the military, I'd rather not be personally contributing money towards it.

Are you paying taxes?

Not to the USA.

When this was first created, how did people usually navigate back to the previous page? I notice there are no "previous" or "home" links here. Was there a "back" button/key, or would you have to edit the URL directly?

Edit: Answered my own question I think. If you choose the option to browse "using the line-mode browser simulator", you can literally type in "Back" to go back.


This site has a way to experience it as it once was. I'm on mobile now, but from what I remember when I tried it, each link opened a new document window, so the idea of going back wasn't relevant. You'd simply close the window.

https://worldwideweb.cern.ch/


Yeah, I just wrote an edit to my comment actually after I noticed that. It in fact has an explicit Back command you can run; one of the few commands it supports.

It looks like you can also shorten "Back" to "b".

So far, I like this line-mode browser simulator much more than what is commonly available for the command line (lynx or links2). Does anyone know of a modern implementation of it? (One where links are numbered instead of the user having to navigate around the document.)


There are browser extensions such as Vimium C that provide keyboard-based navigation.

We used telnet. There were no graphics per se. Before www the "interactive" internet was gopher and wais and co.

Navigation was moving a cursor around to highlight points of interest, some of which would be links to further stuff or controls to do something like go back or forwards.

Install lynx or links2 (ie text mode browsers) and you'll get the idea.

The vaguely graphic efforts with browsable content that you might recognise before www were the likes of Compuserve. That got you a sort of forum style interface.

It's quite hard to explain just how fast things have moved over the last 40-odd years (I was born in 1970, so I'm 55). I should also point out that my granddad saw rather a lot of change from 1901 to 1989. To be honest, the last 15-odd years have been even madder than the previous 25, and that's just my own personal recollection.


They're talking about electronic instruments there. The comment is about how electronic instruments don't generally match the physical expressiveness of acoustic instruments (like the cello).

There have been some interesting keyboard input devices coming out that allow for more expression than normal piano keys, using a sort of hack on the MIDI system called MPE (MIDI Polyphonic Expression). For example, the Seaboard Rise or the Osmose. Depending on the instrument, it's possible to do per-note pitch bends, change pressure while holding notes, perform vibrato, etc. Visually the physical movement is not as interesting as electric guitar though, so yours probably still wins.

