Reasonably interesting. They're largely making the point that for a number of recurrences, floating-point arithmetic converges to the wrong value, and that precision doesn't save you: any finite-precision representation can converge to the wrong value. They also show that differing levels of precision can change the computation in surprising ways: there's an example of a recurrence that converges correctly in float32 but incorrectly in float64.
Just to clarify (as the paper does make this distinction): when I say "correct" here I mean correct in the mathematical sense, not the correctly rounded value from floating-point arithmetic.
the initial value x1 := 61/11 is not representable in binary floating point format in any precision, so that for the input data stored in the computer, the limit 100 is correct.
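Given the initial values 11/2 and 61/11 and the limit of 100 mentioned here, this appears to be Muller's recurrence, x(n+1) = 111 - 1130/x(n) + 3000/(x(n)*x(n-1)), whose exact limit is 6 but which drifts to the repelling fixed point 100 under any finite-precision rounding. Assuming that recurrence, a quick sketch in Python contrasts float64 with exact rationals:

```python
from fractions import Fraction

def muller(x0, x1, steps):
    # Muller's recurrence: x_{k+1} = 111 - 1130/x_k + 3000/(x_k * x_{k-1})
    # The type of x0/x1 decides the arithmetic: float or exact Fraction.
    for _ in range(steps):
        x0, x1 = x1, 111 - 1130 / x1 + 3000 / (x1 * x0)
    return x1

# float64 rounding pushes the sequence toward 100
print(muller(11 / 2, 61 / 11, 30))
# exact rational arithmetic stays on course toward the true limit 6
print(float(muller(Fraction(11, 2), Fraction(61, 11), 30)))
```

With exact rationals the iterates approach 6 from below; any rounding at all, in any base or precision, eventually switches the basin to 100, which is the paper's point.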
Maybe some day we'll invent computers that are good at math.
The problem is not that computers are bad at math (they are arguably perfect at doing math for the operations they are given, as defined); it is that we think we are using real numbers and their operations when we are actually using IEEE-754 floating-point numbers and their operations.
IEEE-754 is actually quite good at what it does (fast computations on numbers covering a wide range, while staying accurate across a large number of operations despite being encoded in finite precision), but it is not the real numbers, and it might not be optimal for a given application. Maybe you want perfect precision and don't care about speed; maybe you want speed and don't care that much about the result. IEEE-754 sits somewhere in the middle, used by people who care enough about speed to run their code on supercomputers but want precision good enough to predict whether their plane will fly given the simulation results.
The key point is that operations on floats do not represent the same operations on the reals: adding two floats (both of which are real numbers) gives you another float (also a real number), but not necessarily the same result as adding the two reals.
You can do math with purely rational numbers (in Python and Haskell, for instance, they are included in the standard library). That representation holds 61/11 exactly, along with every other rational number, and you can do simple arithmetic in this domain, like summing a series.
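For instance, Python's stdlib `fractions.Fraction` sums a series with no rounding at all; the partial harmonic sum below is one arbitrary example:

```python
from fractions import Fraction

# exact partial sum of 1/1 + 1/2 + ... + 1/10
s = sum(Fraction(1, n) for n in range(1, 11))
print(s)         # 7381/2520
print(float(s))  # only converted to a float at the very end
```

Every intermediate result is an exact rational; you pay for that with unbounded numerators and denominators rather than fixed-size words.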
There is no good general representation for irrational numbers, though.
Irrationals are properly represented as a "symbol plus algorithm" that computes the value to any desired precision. E.g. pi is "pi" plus a series expansion for it. This is effectively what textbooks and papers mean by pi, so it makes sense that software would use that definition, too.
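That "symbol plus algorithm" idea is easy to sketch with the stdlib `decimal` module. Using sqrt(2) rather than pi, since the stdlib ships an arbitrary-precision square root while pi would need a hand-rolled series:

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    # "symbol plus algorithm": the constant is a name plus a routine
    # that produces it to any requested number of significant digits
    getcontext().prec = digits
    return Decimal(2).sqrt()

print(sqrt2(5))   # 1.4142
print(sqrt2(30))  # 30 significant digits of sqrt(2)
```

The object you pass around is the function, not any particular approximation; a caller picks the precision at the point of use.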
Yes, there are neat representations for some irrationals, like pi, or sqrt(2). But in the general case, there is none. You can, with an increasing degree of inconvenience, operate on expression trees that use rationals at leaf nodes to represent computations involving irrational numbers. You have a limited ability to do even simple math with them though; if you use infinite series to represent two of them exactly, you can't even always add them, to say nothing of multiplication.
And our ability to run numeric stuff like integration or gradient descent is limited to integers and floating-point approximations of real numbers.
In the general case, you need infinitely many bits to represent an irrational number. Almost all irrational numbers (all but countably many) are not computable by a Turing machine, because the set of algorithms for a Turing machine is countable and the irrationals are not. So it seems a bit silly to talk about "the general case" of irrational numbers; it's extremely unlikely you care about an irrational number that is not computable. If an irrational number doesn't have a "neat representation", then it doesn't come up in your problem.
It would be reasonable to have floats with a base-ten exponent that would let you represent a number like 0.3 exactly.
Doing rationals with arbitrary factors like 11 in the denominator is a hassle because the denominator almost always gets bigger when you do math:
11/2 + 61/11 = 243/22
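You can watch that growth with the stdlib `Fraction` type; the update rule in the loop below is arbitrary, chosen only to show how fast denominators blow up under repeated exact arithmetic:

```python
from fractions import Fraction

# the sum from the example above, computed exactly
print(Fraction(11, 2) + Fraction(61, 11))  # 243/22

# denominators grow rapidly under repeated exact operations
x = Fraction(61, 11)
for _ in range(4):
    x = x * x + 1  # arbitrary update, just to illustrate growth
    print(x.denominator)
```

Each squaring roughly squares the denominator (here powers of 11), which is why long exact computations get slow and memory-hungry.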
I have been thinking about writing a program that uses interval math to generate fractal pictures, hoping I can somehow improve the image quality enough to reveal the structure of the chaotic areas of Hamiltonian maps.
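The core of interval math can be sketched in a few lines: track a lower and upper bound, and after each operation widen the endpoints outward by one ulp so the true real result is always enclosed. This is a hypothetical minimal version (using `math.nextafter`, Python 3.9+), not a substitute for a real interval library with proper directed rounding:

```python
import math

def iadd(a, b):
    # interval addition: compute endpoints, then widen outward by one
    # ulp each so the enclosure is guaranteed despite rounding
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

x = (0.1, 0.1)
y = (0.2, 0.2)
print(iadd(x, y))  # a tight interval guaranteed to contain the real 0.3
```

For fractal iteration you'd need the same trick for multiplication (taking the min/max over the four endpoint products), and the intervals tend to balloon in exactly the chaotic regions you'd want to resolve.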
Decimal floating point isn't necessarily the same as BCD.
For instance, you can represent integers in binary, base 3, BCD, or any other base, and they are still the same integers: you can write the same number 75412 in any of those systems. The notation differs, but the same numbers exist in all of them.
Floats on the other hand are a subset of rationals. In the case of binary floats the denominator is 2^N, in the case of decimal floats it is 10^N.
In this case it is not just a difference of notation. The number 1/4 exists in both the decimal and binary floats, but 1/10 exists in the decimal floats but not the binary floats. Thus you have the problem
0.1 + 0.2 != 0.3
because (a) none of the numbers involved really exist in the binary floats; (b) if you ask for 0.1 you get some other number that is close to 0.1 and prints out as 0.1 but (c) isn't really 0.1, so what should be an equality is an inequality.
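Python's stdlib `decimal` module makes the contrast easy to see side by side:

```python
from decimal import Decimal

# binary floats: 0.1, 0.2, 0.3 all round to nearby non-equal values
print(0.1 + 0.2 == 0.3)     # False
print(format(0.1, ".20f"))  # the nearby binary value that "0.1" names

# decimal floats: 1/10 is exactly representable, so equality holds
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Note the strings passed to `Decimal`: constructing `Decimal(0.1)` from the float instead would faithfully capture the already-wrong binary value.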
You could represent the mantissa (the numerator) of decimal floats in base 2 or base 10, which makes a difference in coding efficiency, how fast calculations are, how fast decimal input/output is, etc. It's the base-2 vs. base-10 exponent that makes binary floats appear defective.