The accuracy of a numeric representation in a computer is limited by the number of bits used to store the number. For example, the binary format for a double-precision floating-point number occupies 64 bits, and its significand (mantissa) has a precision of 53 bits, or about 16 decimal digits.
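A minimal sketch of these limits, assuming Python (whose `float` is an IEEE 754 double), can inspect them directly via the standard library's `sys.float_info`:

```python
import sys

# A Python float is an IEEE 754 double: 64 bits total,
# with a 53-bit significand (about 15-16 significant decimal digits).
print(sys.float_info.mant_dig)  # 53 significand bits
print(sys.float_info.dig)       # 15 decimal digits guaranteed round-trippable
print(sys.float_info.epsilon)   # 2**-52, the gap between 1.0 and the next float

# A consequence of finite precision: decimal fractions like 0.1 are not
# exactly representable in binary, so rounding error appears immediately.
print(0.1 + 0.2 == 0.3)         # False
```

The `epsilon` value (about 2.2e-16) is the machine epsilon for doubles, which is where the "about 16 decimal digits" figure comes from.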