cylemons's comments | Hacker News

Apparently the AKID and SKID extensions are used instead these days.

TCP still works this way?

When I personally use ChatGPT and friends, I don't see any slowdowns, which suggests their servers can handle the load just fine. So why are these companies spending so much building new capacity if the current capacity is enough?

Frontier labs' flagship models are ~2T parameters at the moment, but they intend to ship 10T-parameter models like Claude Mythos, which would require substantial datacenter expansion. Same thing for training.

Where did you get the 10T figure from? I thought it was a big secret.

Rumors, and extrapolation from the token price.

Oh, we have estimated token pricing for Mythos? Man, I'm missing out on all the juicy gossip.

so it's 5x as expensive as Opus then.


Microsoft itself is 73% owned by institutional investors, so more of the same really.

see: https://www.nasdaq.com/market-activity/stocks/msft/instituti...


Yeah why is this so common in .NET?


Enterprise usage. Devs know companies will just pay out. Easier than trying to get sponsored.


What about the extensions? Are they widely supported?


That is always one check away: https://vulkan.gpuinfo.org/listextensions.php


VK_KHR_buffer_device_address has 91.3% support

and

VK_KHR_variable_pointers has 98.66% support

looks good to me
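Given a list of extension names a device reports (in the form shown on vulkan.gpuinfo.org), that kind of check boils down to a set difference. A minimal sketch; the `missing_extensions` helper and the example device list are purely illustrative:

```python
# Extensions the renderer needs (names as listed on vulkan.gpuinfo.org).
REQUIRED = {
    "VK_KHR_buffer_device_address",
    "VK_KHR_variable_pointers",
}

def missing_extensions(reported):
    """Return the required extensions absent from a device's report."""
    return sorted(REQUIRED - set(reported))

# Hypothetical device report: supports variable pointers but not BDA.
device = ["VK_KHR_variable_pointers", "VK_KHR_swapchain"]
print(missing_extensions(device))  # ['VK_KHR_buffer_device_address']
```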


You mean the locking would be done in software?


I assume it was to save on resources: even if your algorithm isn't much more taxing in silicon, maybe the designers at Intel and AMD just didn't think optimizing split locks was worth it.
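For context, a "split lock" is a locked operation whose operand straddles a cache-line boundary, so the lock can't be handled within a single line. The condition can be sketched as follows, assuming a typical 64-byte x86 cache line:

```python
CACHE_LINE = 64  # typical x86 cache-line size (assumption)

def is_split_lock(addr: int, size: int) -> bool:
    """True when a locked access of `size` bytes at `addr`
    crosses a cache-line boundary (a split lock)."""
    return addr // CACHE_LINE != (addr + size - 1) // CACHE_LINE

# A 4-byte atomic starting at offset 62 straddles the line at 64:
print(is_split_lock(62, 4))  # True
print(is_split_lock(60, 4))  # False
```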


Biggest pump and dump in history


There is a limit on how much Copilot can do in one request. It's pretty generous, but after some time VS Code will say "this request is taking very long, do you want to continue", and that would count as a separate request.


> but after some time VS Code will say "this request is taking very long, do you want to continue", and that would count as a separate request

I don't think that's true. In VS Code, that's also configurable via the chat.agent.maxRequests setting.

There was absurd latency in the Copilot Opus 4.6 model on April 1st and 2nd, though, which led to a lot of my requests timing out with nothing to show.
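For illustration, the setting goes in VS Code's settings.json. The value below is arbitrary, and the actual default varies by VS Code version:

```jsonc
{
  // Upper bound on consecutive requests agent mode may make
  // before asking the user to confirm continuing.
  "chat.agent.maxRequests": 25
}
```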


> chat.agent.maxRequests

"Maximum number of requests that copilot can make using agents"

I don't get how this setting is relevant?

