So we all know that dividing by zero is an error in programming, but do you know why?
It turns out it's not really an error so much as a math problem, and not so much a problem as just how the math works out. Here's how it goes, as I understand it:
- The function y = 1 * x has a domain of all real numbers
- The function y = 1 / x has a domain of x != 0; the line x = 0 is a vertical asymptote
If you look at the graph of y = 1 / x and think about it, 1 / x grows toward infinity as x moves toward zero, but x never actually reaches zero. Obviously we can write y = 1 / 0 on paper, but as an actual operation it doesn't make sense, because 0 is outside the domain of 1 / x. Why is it outside the domain? Division undoes multiplication, and there is no number you can multiply by 0 to get 1, so 1 / 0 has no value to give. At the end of the day, the vertical line x = 0 and the curve y = 1 / x never intersect.
y = 1 / x and x = 0
https://www.desmos.com/calculator/2bw3hvkpap
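If you'd rather see that asymptote in numbers than in a graph, here's a minimal C++ sketch (the exponent range is just an arbitrary choice for illustration) that prints 1 / x as x shrinks toward zero from the right:

```cpp
#include <cstdio>
#include <cmath>

int main() {
    // Watch 1 / x blow up as x shrinks toward zero from the right.
    for (int k = 0; k <= 6; ++k) {
        double x = std::pow(10.0, -k);   // 1, 0.1, 0.01, ..., 0.000001
        std::printf("x = %.6f   1 / x = %.1f\n", x, 1.0 / x);
    }
    return 0;
}
```

The value of 1 / x keeps growing without bound, but there's never a step where x actually equals zero.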
If the C++ compiler detects that you've written a / 0, it treats it as undefined behavior: it may warn you, or it may silently assume that code path can never run and optimize it away. If it optimizes it away, it may not tell you, and that can be a sneaky source of bugs. If you actually hit a / 0 at run time, you'll usually get some sort of crash or exception.
Because the optimizer is free to reorder operations for performance, and because undefined behavior lets it assume the division by zero never happens, the crash can surface before a statement you expected to be executed, or somewhere the source code doesn't even hint at. In other words, the effects of x / 0 are not confined to the line where you wrote it, and that's bad.
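Here's a hedged sketch of the kind of code where this bites (the function name scale is made up for illustration). If the divisor can ever be zero, the division is undefined behavior, and the standard stops guaranteeing anything about the program:

```cpp
#include <cstdio>

// Hypothetical helper: nothing stops a caller from passing den == 0.
int scale(int num, int den) {
    std::printf("about to divide %d by %d\n", num, den);
    return num / den;   // undefined behavior when den == 0
    // Once UB occurs, the standard makes no guarantees about the program,
    // including whether the printf above is ever observed -- the crash may
    // not line up with the order things appear in the source.
}

int main() {
    // Typically traps at run time on desktop platforms (e.g. SIGFPE on Linux).
    std::printf("result: %d\n", scale(10, 0));
    return 0;
}
```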
Avoiding this behavior requires checking the divisor for zero before the division is attempted.
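Here's a minimal sketch of that check, using std::optional so the caller is forced to handle the "no answer" case (the name safe_divide is just something I made up):

```cpp
#include <cstdio>
#include <optional>

// Returns std::nullopt instead of dividing when the divisor is zero.
std::optional<int> safe_divide(int num, int den) {
    if (den == 0) {
        return std::nullopt;  // no value exists, just like 1 / 0 has no value
    }
    return num / den;
}

int main() {
    if (auto result = safe_divide(10, 0)) {
        std::printf("quotient: %d\n", *result);
    } else {
        std::printf("division by zero avoided\n");
    }
    return 0;
}
```

The nice part of returning an optional (or an error code, or throwing, depending on your codebase) is that the "zero divisor" case shows up in the source code, instead of as undefined behavior at some unpredictable spot.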