middling code, delivered within a tolerable time frame and budget, without taking excessive risk, is good enough for many real-world commercial software projects. homogeneous middling code, written by humans or extruded by machines, is arguably even a positive for the organisation: lots of organisations are more interested in software delivery being predictable, or in having a high bus factor due to the fungibility of the folks (or machines) building and maintaining the code, than in depending upon excellence.
in contrast, a toddler equipped with an ion thruster & a modest quantity of xenon propellant could achieve enough delta-v to attain cheetah-escape velocity, provided the initial trajectory during the first 31 hours of the mission was through a low-cheetah-density environment
That initial trajectory also needs to go through a low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h before the drag force from the air equals the thrust from the ion thruster.
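For the curious, here's a back-of-the-envelope sketch of the drag-limited top speed. All the numbers are made up for illustration (a fairly beefy ~0.25 N ion thruster and a rough guess at a toddler's drag profile), so treat the result as order-of-magnitude only:

```python
import math

# Assumed, illustrative numbers -- not from any real thruster datasheet:
thrust_n = 0.25          # ion thruster thrust, newtons
air_density = 1.225      # kg/m^3 near sea level
drag_area = 0.2          # Cd * frontal area for a toddler, m^2 (a guess)

# Top speed is where drag (0.5 * rho * Cd*A * v^2) equals thrust.
top_speed = math.sqrt(2 * thrust_n / (air_density * drag_area))
print(f"drag-limited top speed: {top_speed:.2f} m/s "
      f"({top_speed * 3.6:.1f} km/h)")
```

With these assumptions you land in the single-digit km/h range, consistent with the ~10 km/h estimate above; a cheetah's ~100 km/h remains comfortably out of reach at sea level.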
The only way that ion thruster might save the toddler is if it was used to blast the cheetah in the face. It would take a pretty long time to actually cause enough damage to force the cheetah to stop, but it might be annoying enough and/or unusual enough to get it to decide to leave.
> low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h
agreed. this also provides an explanation for the otherwise surprising fact that prey animals in the savannah have never been observed to naturally evolve ion thrusters.
+1 to pavel_lishin's Splendor suggestion. There's also Splendor Duel [1], which is a more complex version of the game designed for 2 players.
Another quick, low-complexity game that is easy to teach & pretty good fun is Century: Spice Road [2].
Chinatown [3] (re-themed as Waterfall Park [3b]) is a simple, highly interactive game that is basically 100% negotiation between players who are trying to make real estate deals with each other. Can be played in 90 minutes, including rules explanation, and plays up to 5. For a more complex asymmetric game that's more focused on engine building, with a healthy dose of negotiation, check out Sidereal Confluence [4].
For more complex games that take a bit longer to teach and play, and that are largely focused on players doing their own thing ("multiplayer solitaire"), building their engines without much negative player interaction, check out Ark Nova [5] or Terraforming Mars [6]. These might take 3-4 hours or so to finish, provided there's an experienced player to teach everyone the rules.
For another moderately complex strategy game with a little more player interaction, check out Brass: Birmingham [7]. Takes around 4.5 hours to finish a 4 player game, including the rules explanation. If you have a group that enjoys complex strategy games and wants something with spikier negative player interactions, where one player's actions can completely wreck another player's plans, check out Barrage [8].
This probably doesn't help "without spending much money"! One trick is to find or create a regular board gaming group where everyone brings along different games. That way, if everyone buys a new game or two every year, there's a lot of variety without everyone needing to buy heaps of games.
This is the kind of positivity that I love finding once you go down the rabbit hole of board games.
So many amazing suggestions in this list, including two of my favorites: Terraforming Mars and Brass Birmingham.
Just chiming in with an opinion that Brass Birmingham is high on the complexity scale for beginner board gamers. Or more specifically, high on a frustration scale, because there are so many placement restrictions that there are often only 1-2 legal moves to play, and figuring out what they are is quite a challenge for people playing the first time. (From our own experience, as well as that of several other groups we know, on their respective first plays.)
> Brass Birmingham [...] there are often only 1-2 legal moves to play and figuring out what they are is quite a challenge for people playing the first time.
Also, some of those legal moves will set up a board state that the player taking a turn immediately after you can exploit for a lot more benefit than you got, so not only are the legal builds hard to identify for new players, half of those legal moves are also traps! If new players aren't comfortable learning the hard way, the player who is teaching the game can always call these out, explain what is going to happen & give people the opportunity to redo their move.
An alternative strategy game that is less complex than Brass is Friedemann Friese's classic Power Grid (2004) [1]. It has some of the same elements (network expansion, building stuff to make money) and parts of it are highly interactive (auctions!), but it is less complex and doesn't feature so many negative player interactions. The main downside of Power Grid is that some of the "admin" rules are pretty fiddly, but provided there is an experienced player to teach the game & be responsible for the admin, players who are learning don't need to care about the details.
standard algorithms for single-source/single-dest pathfinding scale log-linearly in the size of the graph, so compared to other combinatorial optimisation problems, optimal pathfinding is incredibly easy for computers to do & scales pretty well to industrial-sized problems. computers can also do optimal pathfinding for problems that humans would not be able to solve easily (because the graphs don't easily embed in 2d or 3d, say, so we can't bring our vision systems to bear)
other combinatorial optimisation problems - like the traveling salesman you mention - are much harder than pathfinding to solve optimally or even approximately
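as a concrete example of that log-linear scaling, here's a minimal sketch of textbook Dijkstra with a binary heap (not production code, and the tiny example graph is made up):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Runs in O((V + E) log V): the log-linear scaling mentioned above.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# tiny example graph: going a -> b -> c (cost 2) beats a -> c (cost 4)
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 2}
```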
what a great talk with many little highlights: code archaeology of a codebase passed between many teams & companies with the full revision history being lost somewhere along the way, detective work figuring out there had been an accidental regression in floating point precision when SIMD was enabled, obtaining higher performance by specialising/simplifying the 2d geometry algorithm to axis-aligned rectangular obstacles instead of the prior convex hull code, automatically fuzz-testing the proposed "obviously valid" algorithm by AI vs AI matches & using logging/invariants to identify and harvest nasty counterexamples, growing a unit test suite of 100 harvested nasty counterexamples while fixing the identified defects in the new algorithm, finally shipping it and receiving player feedback
My first job out of school was on one of the HD expansions at SkyBox Labs. It was mostly grunt feature work and desync fixes, but I remember that some of the handcoded ASM from the legacy pathfinding had been one-to-one translated to C++.
I always wondered if that contributed to the pathfinding regressions that were talked about online. You learn about compiler-induced accidental UB in school, and part of me wondered if something like that was happening there.
The underlying argument this article seems to be making is that an appropriate algorithm for any given application isn't always the one with the most efficient asymptotic performance for sufficiently large n -- for a given application (in this case routing), we have data on typical values of n that appear in reality, and we can choose an algorithm that offers good enough (or optimal) performance for n in that constrained range, as well as potentially having other benefits such as being simpler to implement correctly in a short amount of engineering time.
This argument is very much in line with Mike Acton's data-driven design philosophy [1] -- understand the actual specific problem you need to solve, not the abstract general problem. Understand the statistical distribution of actual problem instances, they'll have parameters in some range. Understand the hardware you're writing software for, understand its finite capacity & capability.
It's common that new algorithms or data-structures with superior asymptotic efficiency are less performant for smaller problem sizes vs simpler alternatives. As always, it depends on the specifics of any given application.
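A toy illustration of the small-n effect (not a rigorous benchmark, and the exact timings will vary by machine): membership testing in a tiny collection via an O(n) list scan vs an O(1) average-case hash lookup. For very small n the "worse" asymptotic choice is often competitive because of constant factors and cache behaviour:

```python
import timeit

# A tiny collection: asymptotics barely matter at this size.
small = list(range(8))
small_set = set(small)

# Time both membership tests; which wins depends on n and hardware,
# so we print rather than assert a winner.
t_list = timeit.timeit(lambda: 7 in small, number=100_000)
t_set = timeit.timeit(lambda: 7 in small_set, number=100_000)
print(f"list scan: {t_list:.4f}s   set lookup: {t_set:.4f}s")
```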
> understand the actual specific problem you need to solve, not the abstract general problem. Understand the statistical distribution of actual problem instances, they'll have parameters in some range. Understand the hardware you're writing software for, understand its finite capacity & capability.
Very much in line with what James Coplien and colleagues described with "Commonality and Variability Analysis" and "Family-oriented Abstraction, Specification, and Translation" (FAST) for Software Engineering in the 90's. Coplien's PhD thesis titled Multi-Paradigm Design and book titled Multi-Paradigm Design for C++ is based on this approach.
I came to realize that logs don't matter. O(log(n)) and O(1) are effectively the same thing.
The reason is that real computers have memory, accessing memory takes time, and bigger n needs more memory, simply to store the bits that represent the number. Already, we have a factor of O(log(n)) here.
But also each bit of memory takes physical space, we live in a 3D world, so, best case, on average, the distance to a memory cell is proportional to the cube root of the amount of memory cells we will have to access. And since the speed of light is finite, it is also a cube root in time.
So "constant time" is actually cube root on a real computer. And I am generous by saying "real". Real real computers have plenty of stuff going on inside of them, so in practice, complexity analysis at the O(log(n)) level is meaningless without considering the hardware details.
Going from log(n) to log^(2/3)(n) is just an exercise in mathematics; it may lead to practical applications, but by itself, it doesn't mean much for your software.
Hash tables are indeed O(1) in theory while binary search is O(log(N)) in theory, no problem with that.
But in practice, accessing a physical memory cell is not O(1), it is O(cuberoot(N)).
For a binary search, you need O(log(N)) operations, but each operation (accessing a memory cell) is O(cuberoot(N')) where N' is the number of elements up to the level of the tree you are at, ending at N for the last iteration. So that's O(sum(k=0, log(N), cuberoot(N/2^k))), which simplifies to O(cuberoot(N)): exactly the same!
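A quick numeric check of that geometric sum (reading each step of the binary search as halving the number of candidate elements, so level k costs cuberoot(N / 2^k)):

```python
import math

def memory_cost_sum(n):
    """Sum of cuberoot(n / 2^k) for k = 0 .. log2(n): the total
    memory-access cost of a binary search under the cube-root model."""
    return sum((n / 2**k) ** (1 / 3) for k in range(int(math.log2(n)) + 1))

# The sum is geometric: cuberoot(n) * sum(2^(-k/3)), which converges to
# cuberoot(n) / (1 - 2^(-1/3)) ~ 4.85 * cuberoot(n).  So the whole thing
# is O(cuberoot(n)), matching the claim above.
for n in (2**10, 2**20, 2**30):
    ratio = memory_cost_sum(n) / n ** (1 / 3)
    print(f"n=2^{int(math.log2(n))}: sum / cuberoot(n) = {ratio:.3f}")
```

The ratio stays bounded by a small constant (approaching ~4.85) as n grows, which is the "simplifies to O(cuberoot(N))" step made concrete.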
In real practice, the choice of binary search vs hash table depends on considerations like cache behavior. Hybrid methods combining hash tables with hierarchical data structures are often used when N is large. But that's my point: you won't choose a hash table over a binary search because one is O(1) and the other is O(log(n)); you choose one over the other because of how it fits your hardware architecture.
Contrast with, say, O(n.log(n)) over O(n^2). When n is large, barring space-time tradeoffs, you should always pick the O(n.log(n)) algorithm, because the reduction in complexity will always beat any sane hardware considerations.
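Rough arithmetic makes the gap concrete. For a million items, the difference between n.log(n) and n^2 operation counts dwarfs any realistic constant-factor or memory-hierarchy effect:

```python
import math

# Rough operation counts for n = one million items.
n = 1_000_000
nlogn = n * math.log2(n)   # ~2e7 operations
nsq = n * n                # 1e12 operations
print(f"n log n ~ {nlogn:.2e}, n^2 ~ {nsq:.2e}, "
      f"ratio ~ {nsq / nlogn:,.0f}x")
```

A ratio in the tens of thousands is far beyond what hardware sympathy can recover, which is why the asymptotic choice dominates here.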
Please explain to me how you can hash n distinct strings into O(n) buckets in O(1) time. Please note that this process needs to work as n goes to infinity.
Hash tables are O(log n) structures when you don't hand-wave away the "compute a hash" part. The thing is, search trees are far worse than that in practice and you aren't hand-waving away the "compare two elements" part. That's where the real speed savings come from.
What I think you are saying is that computing the hash needs to process the entire string, and the length of that string roughly corresponds to log n, therefore it's O(log n). Not sure I am entirely convinced by that reasoning, but let's roll with it for now.
Because if you apply it to binary search, you need to compare the strings at every step, and by that logic, each of these operations is O(log n), which means your binary search is now O(log^2 n).
I guess the crux is that we are still comparing apples to oranges (or multiplication operations to comparison operations), and in the end what probably makes hashing faster is that we are not branching.
Still I don't think it makes sense to think of both hash tables and binary search as O(log n).
Most of this is very true, except for one caveat I'll point out: a space complexity of O(P(n)) for some function P implies at least an O(cuberoot(P(n))) time complexity, but many algorithms don't have high space complexity. If you have constant space complexity, this doesn't factor into time complexity at all. Some examples would be exponentiation by squaring, Miller-Rabin primality testing, Pollard's rho factorization, etc.
Of course, if you include the log(n) bits required just to store n, then sure, you can factor the log of the cube root of n into the time complexity, but that's just log(n) / 3, so the cube root doesn't matter here either.
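Exponentiation by squaring, one of the examples mentioned, makes the constant-space point concrete: logarithmically many multiplications while only ever holding a handful of integers. A minimal sketch (equivalent to Python's built-in three-argument pow):

```python
def pow_mod(base, exp, mod):
    """Modular exponentiation by squaring: O(log(exp)) multiplications,
    constant working space beyond the few integers involved."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # low bit set: fold this power in
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exp >>= 1
    return result

print(pow_mod(7, 560, 561))   # same as Python's built-in pow(7, 560, 561)
```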
Heh, funny you should post this. My example of poor pathfinding is OpenRA, but I quit following that project when it was clear they were heavily biased towards their own gameplay preferences and focused on odd features over making the basics (pathfinding) work.