jeremyscanvic's comments | Hacker News

It's very insightful how they explain the difference between dataframes and SQL tables / standard relational structures!


Like other commenters, I was thrown off by the tone of this post, but I was really impressed by the design of the website. Congrats on building it; it shows your hard work and taste!


What I usually do when I have to read a large man page like bash(1) is render it as a PDF:

man -Tpdf bash | zathura -

Replace zathura with any PDF viewer that reads from stdin, or just save the PDF. Hope that's useful to someone!
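For instance, saving a copy instead of piping is just a redirect (this assumes your man implementation forwards -T to groff, as man-db does):

man -Tpdf bash > bash.pdf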


my manpager is `vim -`, can't beat that


You probably can — by using neovim:

https://wiki.archlinux.org/title/Neovim#Use_as_a_pager

https://neovim.io/doc/user/filetype/#_man

I've also been running (neo)vim as a manpager. You get the same features as with vim (like easily copying text or opening referenced files/other manpages without using the mouse), but neovim also parses the page and creates a table of contents, which can be used for navigation within the page. It doesn't always work perfectly, but is usually better than nothing.
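If you want to try it, the neovim docs linked above boil down to one line in your shell rc:

export MANPAGER='nvim +Man!'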


For those interested, you can also look up opto-electronic transfer functions (OETF) and electro-optical transfer functions (EOTF).
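To make that concrete, here's a minimal Python sketch of the sRGB pair: the EOTF maps encoded values back to linear light, and the OETF is its inverse (the constants are the standard IEC 61966-2-1 ones):

def srgb_eotf(v):
    # Encoded sRGB value v in [0, 1] -> linear light (display side).
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_oetf(l):
    # Linear light l in [0, 1] -> encoded sRGB value (camera/encoder side).
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055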


Is it possible in practice to control the side effects of making changes in a huge legacy code base?

Maybe the software crashes when you write 42 in some field, and you're able to tell it's due to a missing division-by-zero check deep down in the code base. Your gut tells you to add the check, but who knows if something relies on this bug somehow; plus, you've never heard of anyone having issues with values other than 42.

At this point you decide to hard code the behavior you want for the value 42 specifically. It's nasty and it only makes the code base more complex, but at least you're not breaking anything.
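In code it might look something like this hypothetical guard (all names and values made up for illustration):

# Hypothetical sketch of the "embrace the mess" fix; names are invented.
KNOWN_GOOD_RESULT_FOR_42 = 1.0  # the behavior we actually want for 42

def legacy_compute(field_value):
    # Stands in for the deep legacy path that divides by zero at exactly 42.
    return 100.0 / (field_value - 42)

def compute(field_value):
    # Nasty but contained: special-case the one value known to crash and
    # leave every other code path exactly as it was.
    if field_value == 42:
        return KNOWN_GOOD_RESULT_FOR_42
    return legacy_compute(field_value)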

Does anyone have experience with this mindset of embracing the mess?


I've never seen code truly get that bad, but I can already think of several problems with that approach.

Do you really know all of the expected behavior you're hardcoding in? What happens if your hardcoded behavior is just incorrect enough that it breaks something somewhere else? How can you be sure that your test for that specific value is even correct?

I think the better approach is to let things break naturally and open a bug with your findings. You'd be surprised how often someone else knows exactly what's going on and can fix it correctly. Your hacks aren't just pouring gasoline onto the fire; they're drilling an oil well directly underneath it that will keep it burning for a long time.


I believe this is called Microsoft Driven Development

(seriously though, this book has answers for you: Working Effectively with Legacy Code, by Michael Feathers)


You misspelled Oracle.


All. The. Time. And I hate it. Imagine giving a customer a rebate based on buggy code. You fix the bug, the customer comes back and wants to check that the rebate they got last time was correct. Now you have to somehow hard-code the rebate they did get so that your (slightly less buggy) code gives the same result. But hard-coding risks introducing other errors of its own. Oh yes, and you never have enough time to do things properly, because of Customers (or maybe Management). A tangled mess of soul-destroying, lifeblood-sucking code and pressures ensues.
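A minimal sketch of what that pinning ends up looking like (every name and number here is hypothetical):

# Hypothetical: pin rebates already paid out under the buggy code so the
# fixed code reproduces them, and compute everything else correctly.
GRANDFATHERED_REBATES = {
    ("customer_123", "2023-Q4"): 41.70,  # what the buggy code actually paid
}

def rebate(customer_id, period, amount):
    pinned = GRANDFATHERED_REBATES.get((customer_id, period))
    if pinned is not None:
        return pinned  # must match what the customer was told last time
    return round(amount * 0.05, 2)  # the (slightly less buggy) formula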


How would you refer to it in French, out of genuine curiosity?


"La triade mortelle" would fit. Perhaps "Tiercé mortel" if your audience is equine oriented.


The missing piece of the puzzle is how to determine the blur kernel from the blurry image itself. There's a whole body of literature on that, called blind deblurring.

For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...


Blur is, perhaps surprisingly, one of the degradations we know best how to undo. It's been studied extensively because there are just so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and the underlying image might look like. My advisors and I were even able to train deep neural networks using only blurry images, under a really mild assumption of approximate scale-invariance at the training-dataset level [1].

[1] https://ieeexplore.ieee.org/document/11370202
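For the curious, once the kernel is known, the classic inversion trick fits in a few lines of NumPy: Wiener-style deconvolution in the Fourier domain. This is a minimal sketch, with the regularization constant standing in for a noise-to-signal ratio you'd normally estimate:

import numpy as np

def wiener_deconvolve(blurry, kernel, nsr=1e-2):
    # blurry: 2D image; kernel: PSF zero-padded to the same shape,
    # centered at index [0, 0].
    K = np.fft.fft2(kernel)
    B = np.fft.fft2(blurry)
    # Naive inversion B / K blows up wherever K is close to zero;
    # the nsr term damps exactly those frequencies.
    X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))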


Just to add to this: intentional/digital blur is even easier to undo, as the source image is still mostly there. You just have to find the inverse transform.

This is how one of the more notorious pedophiles[1] was caught[2].

1 - https://en.wikipedia.org/wiki/Christopher_Paul_Neil

2 - https://www.bbc.com/news/world-us-canada-39411025


Isn't that roughly (ok, very roughly) how generative diffusion AIs work when you ask them to make an image?


You're absolutely right! Diffusion models basically invert noise (random Gaussian samples added independently to every pixel), but they can also work with blur instead of noise.

Generally, when you're dealing with a blurry image, you can reduce the strength of the blur up to a point, but there's always some amount of information that's impossible to recover. At that point you have two choices: either you leave the image a bit blurry and call it a day, or you introduce (hallucinate) information that isn't actually there. Diffusion models generate images by hallucinating information at every stage so the final result is crisp, but in many deblurring applications you'd rather stay faithful to what's actually in the image, so you live with the tiny amount of blur left at the end.
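To make the noise part concrete, here's a minimal NumPy sketch of the DDPM-style forward (noising) step that such a model learns to undo; alpha_bar is the cumulative noise-schedule value at step t:

import numpy as np

def noisy_sample(x0, alpha_bar, rng=None):
    # One jump of the forward process: x_t given the clean image x0.
    # alpha_bar near 1 -> barely noised; near 0 -> almost pure Gaussian noise.
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(x0.shape)  # i.i.d. Gaussian noise per pixel
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps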


I believe diffusion image models learn to model a reverse-noising function, rather than reverse-blurring.


Most of them do, but it's not mandatory; deblurring can be used instead [1].

[1] Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise, Bansal et al., NeurIPS 2023


I didn't learn about this trick (deconvolution) until grad school, and even then it seemed like a spooky mystery to me.


Really cool! Any specific reason for choosing Oklab instead of, say, HSL/HSV?


Oklab is a great color space that does what you expect[0] much better than HSL/HSV.

[0] https://bottosson.github.io/posts/oklab/. The better a color space matches human perception, the easier it is to perform certain processing operations, such as converting to grayscale while preserving the perceived brightness.
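To illustrate the grayscale point, here's a rough sketch of Oklab's lightness computation, with the matrix coefficients copied from Ottosson's post (inputs are assumed to be linear sRGB):

import numpy as np

def oklab_lightness(r, g, b):
    # Linear sRGB -> approximate cone (LMS) responses.
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Cube-root nonlinearity, then the L row of the LMS' -> Lab matrix.
    return (0.2104542553 * np.cbrt(l)
            + 0.7936177850 * np.cbrt(m)
            - 0.0040720468 * np.cbrt(s))

# A perceived-brightness-preserving grayscale keeps L and zeros out a and b.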


Thanks!


Any reference you can share on this? I'm genuinely curious, speaking as a PhD student working on image processing for computer vision.

