Yup. The Dolby/Disney vs Snapchat lawsuit is going to be the first one. So far it's only been filed.
The big question is whether AOMedia is going to make good on their Mutually Assured Destruction promise of using their patent and financial war chest to countersue into oblivion anyone trying to go after AV1 adopters.
Since we're doing anecdata, I experience the exact opposite.
What's craziest to me is how almost all boomers are somehow more addicted to smartphones than Gen Z and Gen Alpha. They'll have their grandkids over, and they'll be glued to their smartphone instead of interacting with those kids.
As a boomer, I'm sure it's because we didn't grow up with smartphones and therefore never learned good habits around them. Hell, I was probably near 50 when I got my first one.
I think it's similar to kids who grow up with alcohol vs those who don't. The ones not exposed go off to college and go completely nuts.
I have this with a TV webshop in the Netherlands that stiffed my parents when their €430 TV broke while the warranty still had a few months left.
Anytime anyone in my social circle asks for a TV recommendation, I specifically tell them not to order from that shop, explaining they have a habit of stiffing people on warranties. I also tell those people to tell anyone they know not to order from there. I do the same whenever TVs in general or that webshop come up on Tweakers, the biggest Dutch tech site.
I've been at it for quite some years, and roughly estimating, it's costing them ±20 TV sales a year, averaging €650 per TV. That's €13,000 in lost sales per year. Working my way towards €100k cumulative, at which point the score feels settled.
Losing €100k in sales over not honoring the warranty on a €430 TV. A nice, solid x233 loss multiplier :)
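For the curious, the back-of-the-envelope (a quick Python sketch; the numbers are just my rough estimates from above):

    lost_sales_per_year = 20      # rough estimate of diverted TV sales
    avg_tv_price = 650            # EUR, average price of those TVs
    original_tv = 430             # EUR, the TV they refused to cover

    yearly_loss = lost_sales_per_year * avg_tv_price  # 13,000 EUR/year
    target = 100_000                                  # EUR, "score settled"
    print(target / original_tv)   # ~233x loss multiplier
    print(target / yearly_loss)   # ~7.7 years to get there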
If you have a vindictive streak in you, see this as your clarion call. You can cause some real cost to a company's bottom line with relatively little effort. And the more of us do this, the worse the pain gets for crappy companies.
Oh, I meant within! I guess that is ambiguous, I figured within = inside, and outside = expired. I'll edit.
Honestly what really egged me on was that I told them I might take them to small claims, and their response was sending a bunch of small claims cases they won.
That's not the point. They were gleeful about their behaviour. It's even more despicable than the faux-kind "oh we are so sorry for your trouble and you are a valued customer, but computer says no."
Linux desktop (and the kernel) felt awful for such a long time because everyone was optimizing for server and workstation workloads. It's the reason CachyOS (and before that the Zen and Liquorix kernels) are a thing.
For good UX, you heavily prioritize latency over throughput. No one cares if copying a file stalls for a moment or takes 2 seconds longer if that ensures no hitches in alt-tabbing, scrolling, or mouse movement.
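To make "prioritize latency" concrete, this is the kind of knob involved, as a minimal Linux-only Python sketch (the pid and priority are made up for illustration, and real-time policies need privileges):

    import os

    ui_pid = os.getpid()  # pretend this is the UI / input thread
    # SCHED_FIFO work preempts normal (SCHED_OTHER) work whenever it is
    # runnable, so input handling wins over a background file copy.
    os.sched_setscheduler(ui_pid, os.SCHED_FIFO, os.sched_param(10))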
When Con Kolivas introduced a scheduler optimized for desktop latency, about 15 years ago, the amount of abuse he got from Linux developers was astonishing, and he ended up quitting for good. I remember compiling it on my laptop and noticing what a huge improvement it made in the usability of X and the desktop environment.
Just to add a dissenting voice to all the complainers:
- autofill on desktop is rock-solid, it virtually never fails, much less so than any other password manager autofill
- it works great with passkeys, again rock-solid, and again the best UX of any password manager. passkeys themselves are also great
- OTP code integration (only use this for non-important stuff) works great too, again best-in-class
- the switch to Electron was great for most: the Windows app sucked and there was nothing on Linux, and now we have a good application across all 3 desktop platforms, although it was a slight downgrade for Mac users
- autofill works fine on Android 99% of the time
- 1Password CLI and SSH agent are interesting additions but SSH has a lot of paper cuts
In general, they have by far the nicest UX and UI of all password managers. And they really seem to care. They were the first to introduce stuff like "no automatic autofill" because of security implications, their vault spec is open source (in case they go belly up), they get audited regularly. They were the first to add passkeys and actually made a site (name escapes me) that shows which services have passkeys and how to activate them.
The problem with opt-in telemetry is that 95% of users don't change defaults, and the 5% who do are your power users. They're not representative of the average user. And only a subset of them will turn it on.
Ironically enough the opposite happens with opt-out telemetry, for the same reason: a lot of power users will turn off telemetry, thus you will never see their usage patterns and will have to infer them. Dogfooding helps.
A subset of power users want their usage to be profiled (me, for companies I trust: Brave, Mozilla, Mullvad, 1Password, Bitwarden, Valve, companies like that). But most power users will not want that because of privacy worries.
From that you get two situations (toy simulation after the lists):
Opt-in:
- Regular users: click all 'ok' through setup at lightning speed, no telemetry enabled.
- Most power users: consciously don't check the box to opt-in because of privacy worries.
- Big picture power users: consciously check the opt-in box given they trust you (because they want their usage patterns to be profiled and optimized for).
Opt-out:
- Regular users: click all 'ok' through setup at lightning speed, telemetry enabled.
- Most power users: consciously check the box to opt-out because of privacy worries.
- Big picture power users: consciously don't check the opt-out box given they trust you (because they want their usage patterns to be profiled and optimized for).
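A toy simulation of that skew in Python; all the population shares here are invented for illustration, the point is the shape, not the numbers:

    population = {"regular": 0.90, "wary_power": 0.08, "bigpic_power": 0.02}

    # Probability each group ends up in your telemetry under each default:
    opt_in  = {"regular": 0.00, "wary_power": 0.00, "bigpic_power": 1.00}
    opt_out = {"regular": 1.00, "wary_power": 0.00, "bigpic_power": 1.00}

    for name, consent in (("opt-in", opt_in), ("opt-out", opt_out)):
        sampled = {g: population[g] * consent[g] for g in population}
        total = sum(sampled.values())
        mix = {g: round(s / total, 2) for g, s in sampled.items()}
        print(f"{name}: {total:.0%} of users visible, sample mix: {mix}")

    # opt-in:  2% of users visible, all of them power users
    # opt-out: 92% visible, power users invisible/underrepresented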
If they really were, they would turn it off. And stop using Gmail and Android.
The overwhelming majority of people don't care about digital privacy because the cost is opaque to them.
Also, telemetry when done right isn't "spying". Again, it is anonymized and used to see, for example, where the hot paths and paper cuts in applications are.
i think that in a free society, you should be able to sell the product you want to sell. but you should give the customer information about what you are selling.
if it has telemetry, then it is a tool the customer buys that also has the function of listening and reporting to others how it is being used.
you want to sell it - no problem. but tell the customer, "look, this is bugged, and it's going to tell me what you are doing. but it's a great product." anything with opt-out telemetry needs a big version of that warning on the top of the page.
personally i am not a buyer. but that's my preference.
Again: telemetry isn't "spying" and it isn't "bugging" the application. It collects usage patterns: how often is which button being pressed by which type of user.
It is not collecting data on you personally nor is it collecting the actual data you enter.
Of course the question remains whether a company has "maskAllImages: true, maskAllTextInputs: true" (and I also wonder if they hide UI elements like message titles / contents), but that's why I mentioned I only turn on telemetry for companies that seem to explicitly, consistently and robustly care about privacy and security.
Telemetry (if it’s truly telemetry) is nowhere close to “tracking”. People conflate the two all the time. One can provide useful, anonymous metrics (e.g. “user enabled feature X”) without doing anything but incrementing the counter for “feature X”.
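Roughly, the difference in code; a minimal sketch where every name is invented:

    from collections import Counter

    feature_counts = Counter()            # telemetry: anonymous aggregate
    def record_feature_enabled(feature: str) -> None:
        feature_counts[feature] += 1      # no user id, no content, just a tally

    events_by_user: dict = {}             # tracking: tied to an identity
    def track(user_id: str, event: str, payload: dict) -> None:
        events_by_user.setdefault(user_id, []).append((event, payload))

    record_feature_enabled("feature X")   # all anyone learns: X was enabled once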
The “Firefox Problem” is that all the power users disable telemetry, so all the “cool” features that power users like (but never get used by “regular people”) get ignored or removed instead of improved because, according to the metrics, “nobody uses them”.
The user doesn't conflate the two, the developers do, and that's why we turn off telemetry: because it's damn close to tracking.
Knowing what (vulnerable) version of software a user is running, transmitted in the clear, was absolutely part of the NSA's monitoring of error information from Windows crash logs https://www.schneier.com/blog/archives/2017/08/nsa_collects_... - so forgive me if I do not trust the developer to know what makes me unsafe or not.
If you enable telemetry by default I will do my best to never use your product.
If Charmin put sensors in toilet paper rolls to optimize the wiping experience, it would be dystopian. Why do we give software a pass? Privacy is a right not a telemetry problem and opt-out by default is non-consensual surveillance.
In fairness Charmin is probably backed by millions of dollars of market research on simple user questions like softness, tendency to crumble, size, etc., while free software faces more criticism for issues that are exponentially more difficult to express.
Ok, replace Charmin with a toilet paper startup disrupting the industry. They wouldn’t be given a pass either. Still disgusting.
It should probably be noted that if there’s no agreement, collecting telemetry without opt-in probably violates several state and federal laws. Not that these are enforced, but it would be nice if they were.
The GPUs are bottom-of-the-barrel for compute-focused industries. It is mobile-grade hardware that arguably can't even scale to prior Mac Pro workloads.
> The GPU is monstrously good. Depending on the workload, the M1 series GPU using 120W could beat an RTX 3090 using 420W.
You're just listing the TDP max of both chips. If you limit a 3090 to 120W then it would still run laps around an M1 Max in several workloads despite being an 8nm GPU versus a 5nm one.
> It is kind of sad Apple neglects helping developers optimize games for the M-series
Apple directly advocated for ports like Death Stranding, Cyberpunk 2077 and Resident Evil internally. Advocacy and optimization are not the issue; Apple's obsession with reinventing the wheel with Metal is what puts the Steam Deck ahead.
Edit (response to matthewmacleod):
> Bold of them to reinvent something that hadn't been invented yet.
Vulkan was not the first open graphics API, as most Mac developers will happily inform you.
I'm confused how anyone ever thought the NPU would be a good idea. The GPU is almost always underutilized on Macs and could have done the brunt of the work for inference had it embraced GPGPU principles from the start. Creating a dedicated hardware block to alleviate a theoretical congestion issue is... bewildering. That goes for most NPUs I've seen.
Apple had the technology to scale down a GPGPU-focused architecture just like Nvidia did. They had the money to take that risk, and had the chip design chops to take a serious stab at it. On paper, they could have even extended it to iPhone-level edge silicon similar to what Nvidia did with the Jetson and Tegra SOCs.
I think they built the NPU with whatever models they needed to run on the iPhone in mind vs trying to build a general purpose chip, and then got lucky it was also useful for LLMs.
(Like “I want to do object detection for cutting people into stickers on device without blowing a hole in the battery, make me a chip for that”.)
I'm not sure even Apple thought that, given that they don't officially provide access to ANE internals under macOS (barring unsupported hacks). But if that was fixed, it could then be useful for improving the power efficiency of prefill, where the CPU/GPU hardware is quite weak (especially prior to the M5 Neural Accelerators).
I very recently ran the numbers on these GPUs for an upcoming blog post. The token generation performance is bad, but the prefill performance is _really_ bad.
For a Qwen 3.6 35B / 3B MoE, 4-bit quant:
- parsing a 4k prompt on an M4 MacBook Air takes 17 seconds before it generates a single token.
- on an M4 Max Mac Studio it's faster: 2.3 seconds.
- on an RTX 5090, it's 142 ms.
The RTX 5090 uses more power than an M4 Max Mac Studio, but it's not 16x more power.
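Back-of-the-envelope on those numbers (treating a 4k prompt as 4096 tokens):

    prompt_tokens = 4096
    for name, seconds in (("M4 MacBook Air", 17.0),
                          ("M4 Max Mac Studio", 2.3),
                          ("RTX 5090", 0.142)):
        print(f"{name}: {prompt_tokens / seconds:,.0f} prefill tokens/s")

    # M4 MacBook Air:    ~241 tokens/s
    # M4 Max Mac Studio: ~1,781 tokens/s
    # RTX 5090:          ~28,845 tokens/s (~16x the Max, ~120x the Air)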
Somehow Apple has always been able to sell their stuff as Magic. Remember the megahertz myth? Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.
> Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.
The thing that Apple has always been excellent at is efficiency - even during the Intel era, MacBooks outclassed their Windows peers. Same CPU, same RAM, same disks, so it definitely wasn't the hardware; it was the software that allowed Apple to pull much more real-world performance out of the same clock cycles and power usage.
Windows itself, but especially third-party drivers, is disastrous when it comes to code quality, and it has to be much, much more generic (and thus inefficient) compared to Apple with its very small number of SKUs. Apple insisted on writing all drivers, and IIRC even most of the firmware for embedded modules, themselves to achieve that tight control... which was (in addition to the 2010-ish lead-free Soldergate) why they fired NVIDIA from making GPUs for Apple - NV didn't want to give Apple the specs any more to write drivers.
> NV didn't want to give Apple the specs any more to write drivers.
I think that's a valid demand, considering Nvidia's budding commitment to CUDA and other GPGPU paradigms. Apple, backing OpenCL, would have every reason to break Nvidia's code and ship half-baked drivers. They did it with AMD's GPUs later down the line, pretending like Vulkan couldn't be implemented so they could promote Metal.
Apple wouldn't have made GeForce more efficient with their own firmware, they would have installed a Sword of Damocles over Nvidia's head.
> They did it with AMD's GPUs later down the line, pretending like Vulkan couldn't be implemented so they could promote Metal.
It was even worse than that: they just stopped updating OpenGL for years before either Vulkan or Metal existed at all. Taking a MacBook and using Boot Camp would instantly raise the GPU feature level by several generations just because Apple's GPU drivers were so fucking old and outdated.
On Geekbench 5, the M1 hits 483 FPS and the RTX 3090 hits 504 FPS.
There are other workloads where the M1 actually beats the 3090.
Apple does plenty of hyping but it's always cute when irrational haters like you put them down. The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.
What geekbench 5 fps are you talking about? Geekbench only has OpenCL and Vulkan scores for the 3090 as far as I can tell, and the M1 Ultra is less than half the OpenCL score of the 3090. And the M1 Ultra was significantly more expensive.
Find or link these workloads you think exist, please
> The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.
The GTX 1660 also smokes the 3090 in perf per watt. Being more efficient while being dramatically slower is not exactly an achievement, it's pretty typical power consumption scaling in fact. Perf per watt is only meaningful if you're also able to match the perf itself. That's what actually made the M1 CPU notable. M-series GPUs (not just the M1, but even the latest) haven't managed to match or even come close to the perf, so being more efficient is not really any different than, say, Nvidia, AMD, or Intel mobile GPU offerings. Nice for laptops, insignificant otherwise
Here you go[0]. 'Aztec Ruins offscreen'. Although I misremembered the exact FPS, the 3090 is at 506 FPS.
Also note how the M1 Ultra is pushing 2/3 of the FPS of the 3090 despite 1/3 of the power budget and the game itself being poorly optimized for the M-series architecture.
And here[1] you have it smoking an Intel i9 12900K + RTX 3090. The difference doesn't look too impressive until you realize the power envelope for that build is 700-800W.
Also, the GTX 1660 (technically the same Turing generation as the RTX 2000 series, but whatever) is about 26% less efficient than a 3090[2].
> Being more efficient while being dramatically slower
That's my whole point and what you're refusing to see. The M1 is not dramatically slower than an i9 or 3090 despite having dramatically lower power use.
The proof for this will really start to come once Qualcomm and MediaTek have gotten a handle on their PC ARM chips and Valve decides they're good enough for a Steam Deck 2 or 3. You'll get to see 2-3x the battery life along with a modest performance increase.
> Here you go[0]. 'Aztec Ruins offscreen'. Although I misremembered the exact FPS, the 3090 is at 506 FPS.
Oh, GFXBench not geekbench.
Realistically that 506 FPS result is probably CPU-bottlenecked, not that Aztec Ruins is all that relevant. It's a very old benchmark, released in 2018, that was designed for mobile GPUs, so realistically it's using a 2010-ish GPU feature set.
If that's your use case, great. But it's not significant at all.
> And here[1] you have it smoking an Intel i9 12900K + RTX 3090.
Not using the GPU, so irrelevant. Also not using 700-800W.
> Also, the GTX 1660 (technically the same Turing generation as the RTX 2000 series, but whatever) is about 26% less efficient than a 3090[2].
"bestvaluegpu" I've never heard of but holy AI slop nonsense batman. Taking 3dmark score and dividing it by TDP is easily one of the worst ways to compare possible.
macOS has had laptop suspend solved since the 2000s. Windows and Linux still struggle with it, especially since the switch from S3- to S0ix-style sleep.
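If you want to see which flavor a Linux box is on, a tiny sketch (the sysfs path is standard on modern kernels; the bracketed entry is the active one):

    # "deep" = S3-style suspend, "s2idle" = S0ix-style suspend
    with open("/sys/power/mem_sleep") as f:
        print(f.read().strip())   # e.g. "[s2idle] deep"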
Modern Apple laptops seem less special now but you also have to look at them through the lens of their introduction.
A similar thing is true for Sonos. They don't seem all that special now, but you have to realize they have been offering multi-room synced audio with a good UX since 2006. That's before the iPhone was even released.
Yeah, it's not that hard if the hardware is high quality and limited to a small number of known types.
Windows and Linux are judged by whether they work on any hardware, including the so-cheap-it-should-never-have-been-produced machines that will obviously just plain suck. No amount of software can save shitty hardware.
I feel like Linux proselytizers are always talking about how Linux will revive or improve low-powered hardware, and that’s one of the reasons it’s so great. Then when it’s still a poor experience, the same Linux users say things like this, that no software can save bad hardware. You can’t have your cake and eat it too.
Also, Linux expressly aims to run on a wide array of hardware, and macOS doesn’t. So Linux should be judged across a large range of hardware and macOS shouldn’t, in the same way a Jeep should be judged on its off-roading abilities and a Civic shouldn’t.
A supersonic airplane is not better than a bicycle, nor is the reverse true. They are just... different, and only marginally related.
Also, "revive" a device is more of a niche thing. What's more generally in line with linux's philosophy is it scaling down to embedded-like hardware, but also scaling up to supercomputers. Neither end is "a bad experience", and none of the other mainstream desktop OSs can even hold a candle next to it.
Countless other things about the way they work and how they handle what you want to do with them? We're not comparing radically different things; I was intentional about my comparison of Jeep vs Civic: they're the same basic tool, with different applications and contexts where they shine. This isn't an airplane and a bicycle.
Not really - a Jeep and a Civic are still pretty similar in use case. The Mac would be more like a tram that can only go on rails vs perhaps a bus, if we want to make some useless comparison.
Hard to agree with those critics when the OS is doing the right thing but the hardware won't play ball. The reason there's so much code in the Linux kernel is the various shenanigans hardware vendors came up with. Yesterday I was looking at how HDMI audio is implemented. From the spec it looks quite nice, with support for PCM and the supported rates sent via EDID, but there are like 5 implementations of that one thing, 3 of them handling hacks by the GPU vendors.
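For a sense of what "rates sent via EDID" looks like, a hedged Python sketch that decodes the audio descriptors a display advertises (assumes a CTA-861 extension block is present; the sysfs path varies per connector, and error handling is omitted):

    RATES = [32, 44.1, 48, 88.2, 96, 176.4, 192]  # kHz, bits 0..6 of SAD byte 1

    def audio_descriptors(edid: bytes):
        if len(edid) < 256 or edid[128] != 0x02:  # need a CTA-861 extension
            return
        ext = edid[128:256]
        i, end = 4, ext[2]                        # data blocks end where DTDs start
        while i < end:
            tag, length = ext[i] >> 5, ext[i] & 0x1F
            if tag == 1:                          # audio data block: 3-byte SADs
                for j in range(i + 1, i + 1 + length, 3):
                    fmt = (ext[j] >> 3) & 0x0F    # format code 1 == LPCM
                    chans = (ext[j] & 0x07) + 1
                    rates = [r for b, r in enumerate(RATES) if ext[j + 1] & (1 << b)]
                    yield fmt, chans, rates
            i += 1 + length

    edid = open("/sys/class/drm/card0-HDMI-A-1/edid", "rb").read()  # path varies
    for fmt, chans, rates in audio_descriptors(edid):
        print(f"format {fmt}: up to {chans}ch at {rates} kHz")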
Are we talking about the same Google? They still haven't fixed Android gesture navigation after almost a decade.