It's easy to remember if you apply the operation to the whole number line instead of trying to memorize the cases.
There are only two cases: either you truncate toward zero or you floor toward negative infinity.
The behavior around zero, where the symmetry breaks, is the part worth remembering.
n * truncate(a/n) + rem(a,n) = a
n * floor(a/n) + mod(a,n) = a
Now suppose that "a" is any real number. The quotient a/n is shared by both identities, but truncate and floor do different things to the negatives.
truncate "moves" numbers to zero
..............................................................
>>.>>.>>.>>.>>.>>.>>.>>.>>.>>0<<.<<.<<.<<.<<.<<.<<.<<.<<.<<.<<
while floor "moves" numbers to -inf
..............................................................
<<.<<.<<.<<.<<.<<.<<.<<.<<.<<0<<.<<.<<.<<.<<.<<.<<.<<.<<.<<.<<
When we then multiply by n, it's clear that for positive "a" the two agree; but for negative "a", floor < truncate, and so mod is positive while rem is negative.
For a < 0: floor(a/n) < truncate(a/n), unless n divides a, in which case they are equal.
The other case is negative n, but that just flips the sign inside both the division and the multiplication, which turns floor into ceiling and truncate into truncate-away-from-zero. That flips the sign of mod but leaves rem alone, because rem is symmetric about zero.
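The two identities are easy to check mechanically. A quick sketch in Python, building `rem` and `mod` from `math.trunc` and `math.floor` (hypothetical helper names, just for illustration):

```python
import math

def rem(a, n):
    """Truncating remainder: pairs with truncate(a/n)."""
    return a - n * math.trunc(a / n)

def mod(a, n):
    """Flooring remainder: pairs with floor(a/n); same as Python's a % n."""
    return a - n * math.floor(a / n)

for a in (7, -7):
    for n in (3, -3):
        # both identities hold for every sign combination
        assert n * math.trunc(a / n) + rem(a, n) == a
        assert n * math.floor(a / n) + mod(a, n) == a

print(rem(-7, 3), mod(-7, 3))  # -1 2  (rem negative, mod positive)
print(rem(7, -3), mod(7, -3))  # 1 -2  (negative n flips mod, not rem)
```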
There are only two cases... except when there are four, as the article covers, and with wildly different choices made by different languages, so there is no single interpretation / sane mental model that can be applied everywhere.
Mod is "fun". I agree; I simply avoid negative numbers entirely and handle them more explicitly when needed. That way it's language independent.
There are actually as many cases as there are rounding modes, which is seven, and for most of them, unlike `mod` and `rem`, there is no common name for the corresponding remainder operation:
- RoundNearest
- RoundNearestTiesAway
- RoundNearestTiesUp
- RoundToZero — rem, div
- RoundFromZero
- RoundUp
- RoundDown — mod, fld (i.e. floor(x/y) but without incorrect corner cases)
Most languages have no way of doing most of these, but then again, they're mostly pretty useless: they only matter when you're pairing division under a specific rounding mode with a remainder that needs to match. Example of wacky remainder behavior in the "familiar" RoundNearest mode (the default rounding mode for floating point):
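The original code sample didn't survive into this thread; as a stand-in, Python's `math.remainder` implements exactly this IEEE 754 round-to-nearest remainder:

```python
import math

# IEEE-754-style remainder: a - 4 * round(a / 4), ties-to-even on the quotient
for a in range(9):
    print(a, math.remainder(a, 4))
# 2 maps to 2.0 but 6 maps to -2.0: which sign the halfway
# case gets alternates with each trip around the ring
```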
Wild, huh? Output range for modulus 4 is -2:2 and whether you get -2 or 2 alternates with each cycle around the ring. (Of course, Julia has comprehensive support for all of these because we're a bunch of nerds for this kind of thing.)
The `rem`, `div` and `divrem` functions also all optionally take a rounding mode argument that lets you have the behavior that matches any rounding mode, where `RoundToZero` matches `div` and `RoundDown` matches `mod`, but there are actually a total of seven rounding modes. Most of them are pretty useless, but if you need some style of division and the remainder to match, this is very helpful.
Rust's rem_euclid is still wrong for negative moduli. To see why, remember that we want n = remainder + quotient * divisor. Now look at the horrible mess that is div_euclid:
> In other words, the result is self / rhs rounded to the integer q such that self >= q * rhs. If self > 0, this is equal to round towards zero (the default in Rust); if self < 0, this is equal to round towards +/- infinity.
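A sketch of that spec in Python makes the quotient's behavior visible (the function names mirror Rust's methods, but this is a hypothetical re-implementation, not Rust's actual code):

```python
def rem_euclid(a, b):
    # Euclidean remainder: always in [0, |b|), whatever the signs
    return a % abs(b)

def div_euclid(a, b):
    # quotient chosen so that a == div_euclid(a, b) * b + rem_euclid(a, b)
    return (a - rem_euclid(a, b)) // b

for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    q, r = div_euclid(a, b), rem_euclid(a, b)
    assert q * b + r == a
    print(a, b, q, r)
#  7  3 ->  2 1   (rounded toward zero)
# -7  3 -> -3 2   (rounded toward -inf)
#  7 -3 -> -2 1   (rounded toward zero)
# -7 -3 ->  3 2   (rounded toward +inf)
```

The rounding direction depends on the sign of the dividend, which is the quoted spec's "horrible mess" in action.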
Python gets it right. Division is simply flooring division, and modulo is defined to satisfy n = remainder + quotient * divisor. That immediately gets you all the nice behavior such as (-1) % 10 = 9, and 1 % -10 = -9.
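A quick check of that invariant in Python:

```python
for a, b in [(-1, 10), (1, -10), (13, 3), (-13, 3)]:
    q, r = a // b, a % b       # floored quotient, matching modulo
    assert a == r + q * b      # dividend = remainder + quotient * divisor
    print(f"{a} % {b} = {r}")
# -1 % 10 = 9
# 1 % -10 = -9
# 13 % 3 = 1
# -13 % 3 = 2
```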
The operator precedence is wrong on the second one: to take -1 mod 10, you'd want to write (-1_i32).rem_euclid(10), which gives the expected result of 9.
Because always returning a positive result from the remainder is incredibly unnatural. You can get this behavior through Python's definition trivially using `a % abs(b)`, making rem_euclid less flexible as well.
The reason it's incredibly unnatural is not obvious when looking at the remainder in isolation. But the remainder is always one half of a pair: remainder and quotient. The quotient associated with rem_euclid is batshit insane.
The quotient associated with Python's modulo definition is simply floor(a/b).
Note that I'm deliberately wording it very strongly when I say it is wrong. It follows its spec faithfully and isn't 'bugged'; its spec is just poorly and unnaturally chosen.
But 13 is also equivalent to -2 and 1, mod 3. Why are you doing the modulo operation in the first place?
From a mathematical standpoint, I've never seen a use for returning -2 here. I want something unique for the group, so I can ask e.g. `x mod 3 == y mod 3` and have it work regardless of whether x or/and y are negative.
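That property is exactly what a flooring `%` gives you directly; a minimal sketch (the `congruent` helper is hypothetical, for illustration):

```python
def congruent(x, y, n):
    # works for any sign of x and y because % always lands in [0, n)
    return x % n == y % n

assert congruent(13, -2, 3)      # 13 ≡ -2 ≡ 1 (mod 3)
assert congruent(-7, 2, 3)       # -7 % 3 == 2
assert not congruent(13, -1, 3)  # -1 % 3 == 2 != 1
```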
Just as a caveat, ((a % b) + b) can overflow. For example, with 32-bit signed integer arithmetic, mod(1000000000, 2000000000) using that formula will yield -1294967296 instead of 1000000000.
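Python integers don't overflow, but the wraparound can be simulated to confirm that figure (a hypothetical 32-bit model, not real C code):

```python
import math

def wrap32(x):
    """Emulate 32-bit two's-complement wraparound."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def c_rem(a, b):
    """C-style truncated remainder (fits in 32 bits for these operands)."""
    return int(math.fmod(a, b))

a, b = 1_000_000_000, 2_000_000_000
total = wrap32(c_rem(a, b) + b)  # a % b + b = 3e9, which wraps to -1294967296
print(c_rem(total, b))           # -1294967296, not the hoped-for 1000000000
```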
According to the Wikipedia page (linked in the article), Perl uses the floored definition of mod, which is promoted by Knuth, and also used by APL, Fortran and Common Lisp.
The one that returns a negative value for negative first argument has never done anything useful for me, and always made things that are actually useful harder to do.
The only useful behavior is that crossing 0 does not change the pattern. Change my mind.
Is it really faster? If you want to take a modulo by a power of 2, you can bit-and with that power of two minus 1; that has the good behavior and is definitely faster than what the variant that outputs negative values does.
Or do you mean if based on integer division? I don't know what is fastest to implement in hardware for that one to be honest.
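For power-of-two moduli, the bit trick looks like this (shown in Python, whose ints behave like infinitely wide two's complement):

```python
n = 8           # the modulus, a power of two
mask = n - 1    # 0b111
for a in (-11, -3, 5, 13):
    # bit-and matches the flooring (non-negative) mod, even for negative a
    assert a & mask == a % n
print(-3 & 7)   # 5, i.e. (-3) mod 8
```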
At least that's the reason I've heard for preferring C-style mod. Or rather: round-to-zero integer division is faster than round-to-negative-infinity integer division, and arguably you should choose / and % such that a = (a / b) * b + (a % b) always holds. This then forces the strange convention for %.
> round-to-zero integer division it faster than round-to-negative-infinity integer division
Do you have a source for this? Obviously, the big instruction sets natively implement round-to-zero division because that's what everyone already uses, but I've always attributed that to historical accident rather than first principles.
I'd even argue that flooring division is more useful than round-to-zero, and it is also what binary shifting does. At least more useful for my use cases (things like dealing with coordinates in game worlds, ...), but unlike with modulo, here I can at least imagine use cases where the other option, round to zero, is also useful (e.g. financial, ...).
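The shifting point, checked in Python (arithmetic right shift agrees with floored division, not truncation):

```python
for a in (-7, -1, 0, 7):
    assert a >> 1 == a // 2            # arithmetic right shift floors
print(-7 >> 1, -7 // 2, int(-7 / 2))   # -4 -4 -3: truncation disagrees
```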
divMod is for when you want Z_n: you actually care about the equivalence classes and their "canonical" representatives.
quotRem is for symmetry: the magnitudes of the remainders stay the same, but you get the sign flip for free if you want it.
So, for example: suppose you had some number of hours and you wanted to "convert to days". If you want symmetric outputs, quotRem will do that and divMod won't.
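Python only ships the flooring pair natively, so a truncating `quot_rem` (a hypothetical helper mirroring Haskell's `quotRem`) shows the difference on the hours-to-days example:

```python
def quot_rem(a, b):
    """Truncating division: negating a negates both quotient and remainder."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q
    return q, a - q * b

print(divmod(36, 24), divmod(-36, 24))      # (1, 12) (-2, 12)  -- asymmetric
print(quot_rem(36, 24), quot_rem(-36, 24))  # (1, 12) (-1, -12) -- symmetric
```

With quotRem, -36 hours is "-1 day and -12 hours", the mirror image of the positive case; with divMod it is "-2 days plus 12 hours".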
Your terminology is a little weird. The idea that 0 <= remainder < divisor is part of the Division Algorithm, not modular arithmetic. And modular arithmetic, unlike remainders, is completely fine with negative numbers. But you're calling the function that gives you a correct remainder "divMod" and the one that only holds itself to modular equivalence "quotRem".
If you divide -13 by 3, the "quotient" is -5 and the "remainder" is 2.
> If you divide -13 by 3, the "quotient" is -5 and the "remainder" is 2.
You have this exactly backwards. If 13 divided by 3 answers the question "how often does 3 fit into 13?" and the answer is 4, then -13 / -3 should obviously also be 4, and -13 / 3 and 13 / -3 should be -4: it makes no sense to say that 3 fits into 13 four times but into -13 -5 times. Changing the sign of an argument changes the sign of the result, not its magnitude; equivalently, integer division rounds towards zero. The remainders follow from this and are all +1 or -1.
Modular arithmetic, on the other hand, partitions the integers into equivalence classes and represents each equivalence class with a canonical member, usually the smallest non-negative one. So the integers modulo 3 - and of course also modulo -3 - form the following three equivalence classes: {..., -3, 0, 3, ...}, {..., -2, 1, 4, ...}, and {..., -1, 2, 5, ...}.
You see how Division Algorithm is capitalized in my comment? I wasn't just having a seizure. The Division Algorithm is a famous theorem (the name is very old) that is generally taken to define the concepts of "quotient" and "remainder".
By the standard approach, there is no such thing as a negative remainder.
> Modular arithmetic on the other hand partitions the integers into equivalence classes and represents each equivalence class with a canonical member
This is just false. Modular arithmetic doesn't represent each equivalence class with a canonical member. You either work with raw numbers ("1"), or you represent the equivalence classes directly ("[-2]").
> You see how Division Algorithm is capitalized in my comment?
Missed that. But this just shows that Euclidean division is a bad choice when dividing signed quantities; the fact that sign changes cause magnitude changes makes no sense in that case.
> This is just false. Modular arithmetic doesn't represent each equivalence class with a canonical member. You either work with raw numbers ("1"), or you represent the equivalence classes directly ("[-2]").
I don't really understand what you mean. If you say [13] - [24] = [-11] mod 3, that is certainly true, but what's the point then? Wouldn't you at least want [13] - [24] = [1] mod 3, even if you skip explicitly reducing [13] - [24] to [1] - [0]?
The clearest solution is to disambiguate the two as Zig does.
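For reference (from memory of Zig's builtins, so treat the exact names as an assumption): Zig spells the flooring pair `@divFloor`/`@mod` and the truncating pair `@divTrunc`/`@rem`, forcing you to say which one you mean. The same split can be spelled out in Python:

```python
import math

a, b = -13, 3
print(a // b, a % b)                            # -5 2  (flooring pair)
print(math.trunc(a / b), int(math.fmod(a, b)))  # -4 -1 (truncating pair)
```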