
When I first started working with dataflow computation I was fortunate to have a computer scientist point me in the direction of an introductory compiler textbook.

It's worth considering that the dataflow graph (as an abstract mathematical graph), the computation graph (the partial order of function executions required to compute the data), the traversal strategy, the runtime data structure representing the graph, and the runtime data structures for efficient reactive update are all separate but related aspects.

For instance, push and pull both operate over directed graphs with the same connectivity, but with the direction of the arrows reversed, and you can only efficiently traverse edges in the direction that you represent. A dataflow graph has edges pointing from sources to sinks; a data dependency graph has edges pointing from sinks to sources. [Side note: if a computation can produce multiple results, the data dependency graph and the computation dependency graph are not exactly the same thing and you need to be clear on the distinction, but I am assuming single-output nodes here.]

In a dataflow graph you want to evaluate the changed nodes prior to evaluating the downstream nodes that depend on them. As TFA states, this necessitates a postorder (children first) traversal of the data dependency graph, starting at all terminal sinks and terminating at sources or already-visited nodes. You can use a sense-reversing "visited" flag on each node to avoid a reset pass. As noted in the article, this traversal need only be performed when the graph topology changes; for a stable traversal order, the topological sort can be cached in an array. Needless to say, arrays are much faster to iterate over than any kind of pointer chasing. [Witness the rise of Entity-Component systems over OO models.] I suspect that there is a cut-over point where it is more efficient to iterate the entire array (perhaps with memoized results, or JIT compilation) than to perform a more surgical "update only what is downstream of the changes" approach.

Another approach is to assign all nodes a contiguous integer id and maintain a dirty-node bitmask where bit indices correspond to node ids. In addition, each source has a bitmask with a 1 for every downstream dependent node. When a source changes, bitwise-or the source's downstream_dependents bitmask into the global dirty_nodes bitmask. To evaluate (not necessarily immediately), iterate in topological order, processing only the dirty nodes.

In any case, the point I'm trying to make is that the data structure that is best for building or manipulating the graph could very well be different from the data structure that is best for computing the desired results. There will be trade-offs to be made. For this reason alone it's best to keep the graph-theoretic properties and the implementation data structures separate in your head.
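
To make the bitmask scheme concrete, here is a minimal sketch (names are mine, and I'm assuming at most 64 nodes so a single uint64_t serves as the mask):

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct Node {
        std::function<void()> evaluate;  // recompute this node from its inputs
        uint64_t downstream_dependents;  // bit i set if node i is downstream of us
    };

    struct Graph {
        std::vector<Node> nodes;      // indexed by contiguous node id
        std::vector<int> topo_order;  // cached topological sort, sources first
        uint64_t dirty_nodes = 0;     // bit i set if node i needs re-evaluation

        // Called when a source node's value changes.
        void mark_changed(int source_id) {
            dirty_nodes |= (1ull << source_id)
                         | nodes[source_id].downstream_dependents;
        }

        // Evaluate (not necessarily immediately after the change): iterate
        // the cached topological order, recomputing only the dirty nodes.
        void update() {
            for (int id : topo_order)
                if (dirty_nodes & (1ull << id))
                    nodes[id].evaluate();
            dirty_nodes = 0;
        }
    };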

In my view the interesting requirements raised by the article are (1) lazy evaluation (e.g. of expensive or conditionally required data), which might be where control flow graphs of basic blocks enter the story; and (2) dynamic reconfiguration during node evaluation. Some questions I'd be asking about dynamic reconfiguration are: what happens if you delete a node that has yet to be evaluated? Will new subgraphs be "patched in" to the existing graph (how exactly?), or are they always disconnected components that can be evaluated after the current graph traversal completes?
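
For (1), a minimal sketch of a lazily evaluated (pull-style) node might look something like this -- hypothetical, memoized, nothing recomputes until the value is actually requested:

    #include <functional>

    struct LazyNode {
        std::function<double()> compute;  // pulls values from upstream nodes
        double cached = 0.0;
        bool valid = false;

        double get() {                    // evaluate on demand, memoized
            if (!valid) { cached = compute(); valid = true; }
            return cached;
        }
        void invalidate() { valid = false; }  // caller must also invalidate
                                              // downstream dependents
    };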


> exponential functions remain (scaled) exponential when passed through such operations.

See also: eigenvalue, differential operator, diagonalisation, modal analysis
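
The connection: exponentials are eigenfunctions of linear shift-invariant operators. E.g. for the differential operator,

    d/dt e^{at} = a e^{at}

so passing an exponential through such an operator just scales it by the eigenvalue.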


Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.* There is a book "Math You Can't Use" that discusses this.

* not a legal definition, IANAL.


> Pretty sure in the USA you can patent mathematics if it is an integral part of the realisation of a physical system.

Yes, that's true. In that example you're not patenting mathematics, you're patenting a specific application, which can be patented. From my reading, mathematics per se is an abstract intellectual concept and thus not patentable (reference: https://ghbintellect.com/can-you-patent-a-formula/).

There is plenty of case law in modern times where the distinction between an abstract mathematical idea and an application of that idea was the issue under discussion.

An obligatory XKCD reference: https://xkcd.com/435/

And IANAL also.


I would think you could only patent a particular usage of it.


Moreover, The Unreasonable Effectiveness of Linear, Orthogonal Change of Basis.



To be more precise, when working with sampled data at a uniform sample rate you use the Discrete Time Fourier Transform (DTFT), not the Fourier Transform. Nonetheless, you still end up with an approximate spectrum: the signal spectrum convolved with the spectrum of the window function.

In my view the Fourier Transform is still useful in the real world. For example, you can use it to analytically derive the spectrum of a given window.
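
A standard example: a rectangular window of duration T has the transform

    W(f) = T * sin(pi f T) / (pi f T)

whose mainlobe width and sidelobe decay tell you what spectral resolution and leakage to expect.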

But I think the parent is hinting at a wavelet basis.


> Honestly, just pick up the Art of Electronics

I got this advice in 1998. I have the book. I found it useful for the "art" part. It got me through the projects that I was working on at the time, but personally it didn't help me with the fundamentals. Paraphrasing what has been said on this site many times in the past: AoE is a great first book in practical electronics if you already have an undergraduate degree in physics. I showed my brother AoE when he was building guitar pedals and he couldn't make sense of it and said it was obviously assuming things that he didn't know (he had no high-school science background).

There are a lot of potential and/or assumed prerequisites even for basic electronics: high school physics, first-year calculus, maybe a differential equations course, certainly familiarity with complex numbers. As I understand it, EEs take vector calculus and classical electromagnetism; that's a long road for self-study. For that reason it's hard to give general advice about where to begin.

For someone starting out I think the first things to study are DC and then AC analysis of passive circuits (networks of resistors, capacitors, and inductors), starting with networks of resistors: Ohm's Law, what current and voltage actually mean, and a basic introduction to the physics of passive components. These are the basics, and I don't see AoE getting anyone over this hump. This could be learnt in many ways; electronics technicians and amateur radio people know this stuff, and there are no doubt courses outside university, both online and in person. If we're talking books, get a second-hand copy of Grob's "Basic Electronics."

Once that's covered you can move on to semiconductors. I can recommend Malvino's "Electronic Principles," but this book won't teach you about resistors, capacitors, and inductors. After that I think The Art of Electronics would be approachable, and also more specialised topics like digital design or operational amplifier circuits.

A book that usually gets a mention is Paul Scherz's "Practical Electronics for Inventors." I got that book later; I personally found it a bit overwhelming with its mixture of really basic practical material and more advanced circuit theory, but it's no doubt popular for a reason.

Another standard recommendation is to buy one ARRL Handbook from each decade (I have 1988); the older ones have less advanced (hence more accessible) material. But reading the "Electronics Fundamentals" chapter is no substitute for Grob and Malvino.


Seconding an old ARRL handbook.


Alex Forencich has been live streaming a rebuild of Corundum, starting a few weeks ago: https://www.youtube.com/@AlexForencich/streams

As I understand it Taxi is where new development is happening: https://github.com/fpganinja/taxi


This is correct, and the result of those streams has been released as corundum-proto here: https://github.com/fpganinja/taxi/tree/master/src/cndm_proto . Note that this simplified design is intended for educational purposes only; the "production" variants will be much more capable (corundum-micro, corundum-lite, and corundum-ng).


In general, symbolic execution will consider all code paths. If it can't (or doesn't want to) prove that the condition is always true or always false, it will check that the code is correct in two cases: (1) true branch taken, (2) false branch taken.
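
A toy example (illustrative only, not specific to Xr0):

    int f(int x) {
        if (x > 0)
            return 100 / x;       // path 1, constraint: x > 0 (so x != 0, safe)
        return 100 / (x + 1);     // path 2, constraint: x <= 0;
    }                             //   the solver finds x == -1 divides by zero

The executor forks at the branch and checks each path under its accumulated constraint; here the second path has a reachable bug.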


I understand how this works in general. I studied static analyzers at Uni, I know lattice theory and all this - I am just wondering how Xr0 handles it.


> we don't even use static analysis and validators for c or C++

There is some use, though how much I don't know; I'd guess it should be established best practice by now. Also, run test suites under valgrind.
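
For example, something along the lines of:

    valgrind --leak-check=full --error-exitcode=1 ./run_tests

(run_tests being your hypothetical test binary, with the non-zero exit code making memory errors fail the CI job).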

Historically, many of the C/C++ static analyzers were proprietary. I haven't checked lately, but I think Coverity was (is?) free for open source projects.


The part about task-initiation-induced stress -> fight or flight -> distraction/relief-seeking resonated with me. I hadn't noticed that before. The small steps bit reminds me of BJ Fogg's "brush one tooth."

One common failure mode of "do the smallest/easiest thing first" that the article didn't address is that sometimes it's so easy to "buy the running shoes" that you end up with a house full of "easy first steps." I think a better approach is to aim to eliminate unnecessary complexity in moving towards the goal: aim for the smallest, easiest, simplest first step that simultaneously maximises progress towards the goal. E.g. "I want to make a stand to hold my XYZ." Bad first step: buy a 3D printer. Good first step: improvise something out of cardboard.


Ha--totally agree about the 'house full of easy first steps'. I have a few.

But I think it all still applies; the key is to keep taking small steps toward the thing, not just to 'keep taking small steps'. You look at a successful small step and (like I wrote) ask 'what's the next step that will build on it?'


Love BJ Fogg's Tiny Habits!

2025 was the first time I have been able to implement and maintain a series of routines for the entire year (still going strong), and the concept of starting tiny was a key epiphany for me. I wrote about my experiences with it recently on my blog[0], but the point you make about good first steps is a great one.

A phrase I heard some time back that has stuck with me is "don't buy something hoping to be someone." In other words, don't buy running shoes hoping to become a runner.

In my personal experience, a good first step is the smallest version of doing the thing you ultimately want to be doing. "Brush one tooth" is a great example. Doing one push-up is another. For running, maybe just getting dressed, walking outside, and doing some stretching. The idea is that it's the stuff you would have to do anyways if you were going to do a more robust/thorough version of the thing you're trying to ultimately do. Buying shoes, on the other hand, is just purchasing more stuff.

[0] https://onebadbit.com/posts/2025/12/year-in-review/


And hunting for the perfect first step product becomes a dopamine chasing activity itself.

