Compute the shortest decimal representation of an IEEE 754 double-precision binary floating-point number



The input is a 64-bit value x representing an IEEE 754 double-precision binary floating-point number (a.k.a. a "double"). x is assumed to represent a real number (not a NaN or an infinity).


The goal is to print/output the shortest string containing a decimal representation for which x is the closest IEEE 754 representation. Scientific notation is allowed. You can choose whether or not to allow a missing 0 in front of the ".".

For example, x = 0x4000CCCCCCCCCCCD could be represented as "2.10000000000000008881784197001", but also as "2.1", which is much shorter. Likewise, x = 0x3DDB7CDFD9D7BDBB can be represented as "1e-10", which is much shorter than the alternative "1.0000000000000000364321973155E-10".


In addition to being correct, the program must run in a reasonable amount of time on a modern CPU, ruling out some forms of exhaustive search.

This being code golf, entries should be judged by concision, but voters should feel free to award points based on a discretionary judgment of elegance and efficiency.

A few test cases

In these test cases, I represent the input as a hexadecimal number to make it clear that an IEEE 754 64-bit double is passed. You can pass it as a double if the language supports it, or as a 64-bit integer, at your convenience.

f(0x4000CCCCCCCCCCCD) == "2.1" 
f(0x3DDB7CDFD9D7BDBB) == "1e-10"
f(0x40934A456D5CFAAD) == "1234.5678" // N.B. not 1.2345678e3
f(0x3FEFFFFFFAA19C47) == "0.99999999"
f(0xC088A8CF13CEE9DD) == "-789.101112"
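For illustration (not a golfed entry), the requirement can be sketched in Python: widen the precision of %g until the string parses back to exactly the same double. The function name `shortest` is mine; this matches the test cases above, though %g's choice between fixed and scientific form is not guaranteed to give the globally shortest string in every case.

```python
import struct

def shortest(bits):
    # Reinterpret the 64-bit pattern as an IEEE 754 double.
    x = struct.unpack('<d', struct.pack('<Q', bits))[0]
    # 17 significant digits always round-trip a double, so try 1..17.
    for p in range(1, 18):
        s = f'{x:.{p}g}'
        if float(s) == x:
            return s

print(shortest(0x4000CCCCCCCCCCCD))  # 2.1
print(shortest(0x3DDB7CDFD9D7BDBB))  # 1e-10
```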

Arthur B

Posted 2014-12-02T12:08:26.187

Reputation: 310

Pretty much. Very efficient solutions require long pieces of code, but there are some simple ones which can be implemented concisely. – Arthur B – 2014-12-15T20:36:12.767

If by "C will parse" you're talking about C compilers, the standard says that "the result is either the nearest representable value, or the larger or smaller representable value immediately adjacent to the nearest representable value, chosen in an implementation-defined manner." If you're talking about scanf there is still implementation-specific behaviour with regards to rounding. To make this question answerable, I think it needs to be rewritten to talk not about C's parsing but about the string-to-double conversion of the language used in the answer. – Peter Taylor – 2014-12-02T12:49:55.867

The cleanest way of expressing it is probably to require that the answer be a function (whose length is counted for the score) and also include a test framework (length not counted) which demonstrates that the string returned by the function round-trips to the original value. – Peter Taylor – 2014-12-02T12:51:46.963

OK I'll edit in a few hours with a few test cases.

The requirement isn't compiler specific, it's a standard. But I'll replace with what the standard actually says... – Arthur B – 2014-12-02T13:10:28.927


Can I submit an entry for Python 3.1 with f=str? Because Python will always choose the shortest representation.

– kennytm – 2014-12-02T15:51:22.180
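A quick sanity check of that claim (my own snippet; `bits_to_double` is a made-up helper name):

```python
import struct

def bits_to_double(bits):
    # Reinterpret a 64-bit integer as an IEEE 754 double.
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

# CPython 3.1+ already emits the shortest round-tripping representation.
print(str(bits_to_double(0x4000CCCCCCCCCCCD)))  # 2.1
print(str(bits_to_double(0x3DDB7CDFD9D7BDBB)))  # 1e-10
```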

I suppose that technically counts, though I'd prefer to see an implementation of Dragon4 – Arthur B – 2014-12-02T16:25:25.147

@KennyTM, no.

– Peter Taylor – 2014-12-02T21:59:22.943

@PeterTaylor: All 3 current answers are effectively just f=str though. – kennytm – 2014-12-03T08:34:06.217

@ArthurB: Also may I suggest adding a test case for f(0x400921fb54442d18) == "3.141592653589793"? – kennytm – 2014-12-03T08:38:17.157

I don't think this challenge is worthwhile unless solutions that are effectively calls to printf are disqualified. – hobbs – 2014-12-06T07:17:56.330



C - 47 45 characters

Counting just the function f, and not including #includes and main.

In C, the %g printf conversion specifier chooses the shorter of the %e and %f forms. Add a precision equal to the usual maximum precision of a double and a bit of casting hocus-pocus to reinterpret the int64_t bits as a double, and Bob's your uncle.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

// The following line is what I'm counting...
f(int64_t u){printf("%.14g\n",*(double*)&u);}

int main()
{
    assert(sizeof(int64_t) == sizeof(double));
    f(0x4000CCCCCCCCCCCD);
    f(0x3DDB7CDFD9D7BDBB);
    f(0x40934A456D5CFAAD);
    f(0x3FEFFFFFFAA19C47);
    f(0xC088A8CF13CEE9DD);
    return 0;
}

Output is as per specs:

2.1
1e-10
1234.5678
0.99999999
-789.101112

Summary: C rocks!


Posted 2014-12-02T12:08:26.187


You may pass the test set, but not the general requirement. %g picks the shortest of %f or %e, but %f will not always reproduce all the digits necessary to represent the number. – Arthur B – 2014-12-15T20:38:38.953

@ArthurB - %.14g will propagate the 14 to %f or %e. /YR – None – 2014-12-15T21:23:57.243

Which doesn't help you particularly. – Arthur B – 2014-12-17T22:53:25.093

Maximum (round-tripping) precision of double is 15 significant digits, and to print an unambiguous representation of every given binary number you need up to 17. – Ruslan – 2017-01-26T19:19:32.223
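A concrete Python illustration of that point (my own addition, not part of the thread):

```python
x = 0.1 + 0.2                     # 0.30000000000000004, not exactly 0.3
assert float(f'{x:.17g}') == x    # 17 significant digits always round-trip
assert float(f'{x:.14g}') != x    # 14 digits round to 0.3, a different double
```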

You can save a character by using uint64_t in place of long long, or even 2 characters if you use int64_t. – Paul R – 2014-12-02T17:32:15.093

@PaulR - Thank you, will try that out! – None – 2014-12-02T18:19:28.730


Common Lisp - 138 bytes

 (ql:quickload'(ieee-floats cl-ppcre))(lambda(x)(#1=ppcre:regex-replace"e0$"(#1#"\\.?d"(format()"~a"(ieee-floats:decode-float64 x))"e")""))

(138 = 101 for the actual function + 37 for quickload'ing libraries)


The default representation of floating-point numbers in CL is already quite close to the requested output. However, there are some discrepancies. For example, double-precision numbers are written with a d exponent character (e.g., 2.10d0). That is why I modify the output of the formatted string with regexes:

(ql:quickload '(ieee-floats cl-ppcre))
(lambda (x)
  (cl-ppcre:regex-replace          ; remove "e0" at end of string
    "e0$"
    (cl-ppcre:regex-replace        ; change ".d" or "d" to "e"
      "\\.?d"
      (format nil "~a" (ieee-floats:decode-float64 x))
      "e")
    ""))

;; NB: * is set by the REPL to the last returned value. 
;; Here, we store the previously defined lambda:
(defparameter *fun* *) 

;; Test cases
(defparameter *sample*
  '(#x4000CCCCCCCCCCCD #x3DDB7CDFD9D7BDBB #x40934A456D5CFAAD
    #x3FEFFFFFFAA19C47 #xC088A8CF13CEE9DD))

;; Default representation
(mapcar #'ieee-floats:decode-float64 *sample*)
=> (2.1d0 1.d-10 1234.5678d0 0.99999999d0 -789.101112d0)

;; Custom conversion
(mapcar *fun* *sample*)
=> ("2.1" "1e-10" "1234.5678" "0.99999999" "-789.101112")


Here, we convert the printed representations back to IEEE-754 doubles.

;; read values as double-precision numbers
(let ((*read-default-float-format* 'double-float))
  (mapcar #'read-from-string *))
=> (2.1 1.e-10 1234.5678 0.99999999 -789.101112)

;; Encode
(mapcar #'ieee-floats:encode-float64 *)
=> (4611911198408756429 4457293557087583675 4653144502051863213 4607182418709945415 13873524259458836957)

;; Check
(equal * *sample*)
=> T


Posted 2014-12-02T12:08:26.187

Reputation: 6 292


Julia, 20 characters

r(h)=hex2num(hex(h))


Function to test the results for correctness:

function decrep()
    tests=[0x4000CCCCCCCCCCCD 0x3DDB7CDFD9D7BDBB 0x40934A456D5CFAAD 0x3FEFFFFFFAA19C47 0xC088A8CF13CEE9DD]
    results=[2.1 1e-10 1234.5678 0.99999999 -789.101112]

    for i=1:length(tests)
        println("input   : $(hex(tests[i]))")
        println("result  : $(r(tests[i]))")
        println("expected: $(results[i])")
    end
end

Executing the test function:

julia> decrep()
input   : 4000cccccccccccd
result  : 2.1
expected: 2.1
input   : 3ddb7cdfd9d7bdbb
result  : 1.0e-10
expected: 1.0e-10
input   : 40934a456d5cfaad
result  : 1234.5678
expected: 1234.5678
input   : 3feffffffaa19c47
result  : 0.99999999
expected: 0.99999999
input   : c088a8cf13cee9dd
result  : -789.101112
expected: -789.101112

If the output absolutely has to be a string, then the solution, at 28 characters, would be:

julia> h=0x4000cccccccccccd

julia> s(h)=string(hex2num(hex(h)))
s (generic function with 1 method)

julia> s(h)
"2.1"

Summary: Julia rocks, too ;)


Posted 2014-12-02T12:08:26.187

Reputation: 2 865