Floating-point math is not inherently "broken", but it does have limitations and inaccuracies that follow directly from how floating-point numbers are represented and stored in hardware.
The core limitation is finite precision: a floating-point number holds a fixed number of binary digits (an IEEE 754 double-precision value has 53 significand bits), so most decimal fractions cannot be represented exactly. Every operation rounds its result to the nearest representable value, and those rounding errors can accumulate across a computation.
For example, the decimal number 0.1 is a repeating fraction in binary (0.000110011001100…), so it cannot be represented exactly as a binary floating-point number; the computer stores the nearest representable double instead. This is why comparing floating-point results for exact equality is unreliable, and why small discrepancies appear when doing arithmetic on values with many decimal places.
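You can see this directly in Python, where `repr` shows enough digits to expose the stored approximation. The sketch below demonstrates the classic `0.1 + 0.2` case and the standard fix: compare with a tolerance (here via `math.isclose` from the standard library) instead of `==`.

```python
import math

# 0.1 and 0.2 are stored as the nearest representable doubles,
# so their sum is not exactly the double nearest to 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# Compare with a tolerance instead of exact equality.
print(math.isclose(a, 0.3))  # True
```

The same behavior appears in any language using IEEE 754 doubles; only the number of digits printed by default differs.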
Despite these limitations, floating-point math is fast, widely supported in hardware, and accurate enough for most practical purposes. When the rounding behavior is a concern (for example, in financial calculations), you can switch to a representation that avoids it: decimal arithmetic, exact rational (fraction) arithmetic, or a library that provides arbitrary-precision numbers.
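As a sketch of those alternatives, Python ships two of them in the standard library: `decimal.Decimal` stores base-10 digits exactly (construct it from a string, not a float, or you inherit the float's error), and `fractions.Fraction` keeps exact rationals with no rounding at all.

```python
from decimal import Decimal
from fractions import Fraction

# Decimal stores decimal digits exactly, so 0.1 + 0.2 == 0.3 holds.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Fraction performs exact rational arithmetic with no rounding.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

The trade-off is speed: both are implemented in software and are considerably slower than hardware floats, which is why floats remain the default.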
In short, floating-point math is not "broken"; it is a deliberate trade of exactness for speed and range, and its rounding behavior simply has to be taken into account when you work with it.