Hacker News | new | past | comments | ask | show | jobs | submit | cube2222's comments

Spacelift | Remote (Europe) | Full-time | Senior Software Engineer | $80k-$110k+ (can go higher)

We're a VC-funded startup (recently raised $51M Series C) building an infrastructure orchestrator and collaborative management platform for Infrastructure-as-Code – from OpenTofu, Terraform, Terragrunt, CloudFormation, Pulumi, Kubernetes, to Ansible.

On the backend we're using 100% Go with AWS primitives. We're looking for backend developers who like doing DevOps'y stuff sometimes (because in a way it's the spirit of our company), or have experience with the cloud native ecosystem. Ideally you'd have experience working with an IaC tool, e.g. Terraform, Pulumi, Ansible, CloudFormation, Kubernetes, or SaltStack.

Overall, we have a deeply technical product, we're trying to build something customers love to use, and we have a lot of happy, satisfied customers. We promise interesting work, the ability to open source parts of the project which don't give us a business advantage, as well as healthy working hours.

If that sounds like fun to you, please apply at https://careers.spacelift.io/jobs/3006934-software-engineer-...

You can find out more about the product we're building at https://spacelift.io and also see our engineering blog for a few technical blog posts of ours: https://spacelift.io/blog/engineering


This seems to agree with my own previous tests of Sonnet vs Opus (not on this version). If I give them a task with a large list of constraints ("do this, don't do this, make sure of this"), like 20-40, Sonnet will forget half of it, while Opus correctly applies all directives.

My intuition is that this is just related to model size / its "working memory", and will likely be fixed neither by training Sonnet with Opus nor by steadily optimizing its agentic capabilities.


I'd agree that this effect is probably mainly due to architectural parameters such as the number and dimensions of attention heads and the hidden dimension, but not so much to overall model size (number of parameters) or the amount of training.

Saw something about Sonnet 4.6 having had a greatly increased amount of RL training over 4.5.


Attention is, at its core, quadratic wrt context length. So I'd believe that to be the case, yeah.
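For a concrete sketch of where the quadratic cost comes from (minimal NumPy, not how production attention is implemented): for sequence length n, the score matrix QKᵀ has n×n entries, so doubling the context quadruples the work.

```python
import numpy as np

def attention_weights(q, k):
    """Scaled dot-product attention weights for (n, d) query/key matrices."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (n, n) matrix: the quadratic part
    # Numerically stable softmax over each row.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)

n, d = 8, 4
rng = np.random.default_rng(0)
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
w = attention_weights(q, k)
assert w.shape == (n, n)  # n^2 entries, regardless of d
```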


So, TL;DR, it seems like it's:

- a reasonable improvement over Sonnet 4.5, esp. with agentic tool use

- generally worse than Opus 4.6

Probably not worth it for coding, but a win for anybody building agentic AI assistants of any sort with Sonnet.


It’s similar to or better than Opus 4.5 as per benchmarks, while being 2x-3x cheaper; definitely worth it over Opus 4.6 if cost per token is the concern.

As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.


Yes, but Opus 4.6 is a massive step up. Some applications don’t need that power, though.



I'm not an expert, but I've done a bunch of reading on this previously, and also skimmed the article which also mentions some parts of this.

First, when taking omega 3 supplements, you generally care about increasing the ratio of omega 3 to omega 6. Hemp hearts have much more omega 6 than omega 3, so they're not very effective for improving the ratio.

Second, hemp hearts contain ALA, while what you generally want to improve is EPA and DHA (this is also covered in TFA). The body can convert ALA to EPA and DHA, but it's not efficient.

So, all in all, if Omega 3 for the article's stated benefits is what you want, hemp hearts are not the way. I recommend looking into eating more fish, or if you want a vegan route, algae-based supplements. [0] is a decent source from the NIH about foods and their Omega 3 content, split by ALA/EPA/DHA.

[0]: https://ods.od.nih.gov/factsheets/Omega3FattyAcids-HealthPro...


For something to count as a good source of Omega 3, the ratio of Omega 6 to Omega 3 needs to be below 4:1; hemp hearts are at 3:1, which is why they're listed as a good source.

Flax seeds are even better just for Omega 3 at 1:3, but hemp hearts have other benefits, like more protein, which is why I called them out. That said, I eat a fair amount of flax seeds as well.
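To make the ratio arithmetic concrete, here's a tiny sketch; the gram figures are illustrative placeholders chosen to match the ratios above, not exact nutritional data.

```python
def omega_ratio(omega6_g: float, omega3_g: float) -> float:
    """Omega-6 : omega-3 ratio; lower is better if omega-3 is the goal."""
    return omega6_g / omega3_g

# Hypothetical grams per 100 g, picked to reproduce the quoted ratios.
hemp = omega_ratio(27.0, 9.0)   # 3:1
flax = omega_ratio(6.0, 18.0)   # 1:3
threshold = 4.0                 # the 4:1 "good source" cutoff

assert hemp < threshold and flax < threshold
```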


Just to reiterate, both of those (hemp hearts and flaxseed) only contain ALA, while what you're generally looking for is EPA and DHA. TFA also explicitly mentions it's only talking about EPA.

This is not to say that they're unhealthy of course.

EDIT: see the sibling comment by code_biologist, it's much more comprehensive than what I've written.


Your body converts ALA into EPA and DHA, however, so plants are fine sources of both.


I think the main problem in estimating projects is unknown unknowns.

I find that the best approach to solving that is taking a “tracer-bullet” approach. You make an initial end-to-end PoC that explores all the tricky bits of your project.

Making estimates then becomes quite a bit more tractable (though still has its limits and uncertainty, of course). Conversations about where to cut scope will also be easier.


But how long will it take you to make that PoC? Any idea? :P


Yeah, I have written multiple almost completely vibecoded linters since Claude Code came out, and they provide very high value.

It’s kind of a best case scenario use-case - linters are generally small and easy to test.

It’s also worth noting that linters now effectively have automagical autofix - just run an agent with “fix the lints”. Again, one of the best case scenarios, with a very tight feedback loop for the agent, sparing you a large amount of boring work.
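As a rough illustration of how small such a linter can be, here's a toy Python one (the rule, flagging bare print calls, is made up purely for the example):

```python
import ast

def lint(source: str) -> list[str]:
    """Return a message for every bare `print(...)` call in the source."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            problems.append(f"line {node.lineno}: print call found")
    return problems

# Tiny, easy to test: feed it source, check the messages.
assert lint("x = 1\nprint(x)\n") == ["line 2: print call found"]
assert lint("y = 2\n") == []
```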


I bought a W-OLED monitor for office work and gaming, being very happy with my OLED TV. I returned it after a couple of days.

I got unbearable eye strain from it, even though I use rather large fonts, and the PPD was the same as with my previous IPS. Yes, the “more fuzzy” text was very much noticeable too.

Maybe it varies by person, maybe it’s influenced by things like astigmatism, but I totally see where the author is coming from, and I too am waiting for the new OLED panels to see if there’s an improvement.


(Author here.)

I do have astigmatism. You do make me wonder if this plays a part as well...


In my experience, it seems to. My astigmatism (or other eye stuff) seems to shift different colours by different amounts, leading to wider RGB pixels and making things like ClearType so much worse. So people were enjoying ClearType while I was hating the obvious colour changes and fringes that somehow they weren't seeing. I assume some people are lucky enough to have aberrations that actually make ClearType more pleasant.


I do too. Combined with progressive lenses and I have significant chromatic aberration issues. Blue and red pixels require different focus, which is sometimes an issue when solid blues and reds are on screen in close proximity. I turn off pure blue colors in my terminal emulator, for example.


That sounds familiar. I also have ever so slight green-brown color blindness. It's only really noticeable in low light (like in the woods in evenings), but that could well all stack up to be a problem.

I also have significant problems with blue LEDs around the house, to the point where I've removed, replaced, or covered almost all of them. They really, really bother me because it feels like my eyes never focus on them and they leave me feeling slightly disoriented.


I’ve gone through this series of videos earlier this year.

In the past I’ve gone through many “educational resources” about deep neural networks - books, Coursera courses (yeah, that one), a university class, the fastai course - but I don’t work with them at all in my day to day.

This series of videos was by far the best, most “intuition building”, highest signal-to-noise ratio, and least “annoying” content to get through. Could of course be that his way of teaching just clicks with me, but in general - very strong recommend. It’s the primary resource I now recommend when someone wants to get into lower level details of DNNs.


Karpathy has a great intuitive style, but sometimes it's too dumbed down. If you come from adjacent fields it might drag a bit, but it's always entertaining.


>Karpathy has a great intuitive style, but sometimes it's too dumbed down

As someone who has tried some teaching in the past, it's basically impossible to teach to an audience with a wide array of experience and knowledge. I think you need to define your intended audience as narrowly as possible, teach them, and just accept that more knowledgeable folk may be bored and less knowledgeable folk may be lost.


When I was an instructor for courses like "Intro to Programming", this was definitely the case. The students ranged from "have never programmed before" to "I've been writing games in my spare time", but because it was a prerequisite for other courses, they all had to do it.

Teaching the class was a pain in the ass! What seemed to work was to do the intro stuff, and periodically throw a bone to the smartasses. Once I had them on my side, it became smooth sailing.


I think this is where LLM-assisted education is going to shine.

An LLM is the perfect tool to fill the little gaps that you need to fill to understand that one explanation that's almost at your level, but not quite.

