Create the slowest growing function you can in under 100 bytes


Your job is to create the slowest growing function you can in no more than 100 bytes.

Your program will take as input a nonnegative integer, and output a nonnegative integer. Let's call your program P.

It must meet these two criteria:

  • Its source code must be less than or equal to 100 bytes.
  • For every K, there is an N, such that for every n >= N, P(n) > K. In other words, lim(n->∞)P(n)=∞. (This is what it means for it to be "growing".)

Your "score" is the growth rate of your program's underlying function.

More specifically, program P grows slower than Q if there is an N such that for all n >= N, P(n) <= Q(n), and there is at least one n >= N such that P(n) < Q(n). If neither program is better than the other, they are tied. (Essentially, which program is slower is based on the value of lim(n->∞)P(n)-Q(n).)

The slowest growing function is defined as the one that grows slower than any other function, according to the definition in the previous paragraph.

This is a slowest-growing-function contest, so the slowest growing program wins!

Notes:

  • To assist in scoring, try to put what function your program calculates in the answer.
  • Also put some (theoretical) input and outputs, to help give people an idea of how slow you can go.
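For instance, a minimal Python sketch of a program that meets the spec (purely illustrative, not a competitive entry; the function choice here is hypothetical):

```python
import math

# Illustrative only: P(n) = floor(log2(n + 2)) is nonnegative and
# tends to infinity, just slowly, so it satisfies the challenge spec.
def P(n):
    return int(math.log2(n + 2))
```

Sample values: P(0) = 1, P(2^20) = 20 — unbounded, but it takes exponentially larger inputs to gain each +1.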

PyRulez

Posted 2017-06-08T18:39:49.743

Reputation: 6 547

Related. – Martin Ender – 2017-06-08T18:47:34.610

An effective strategy is to write a fast-growing function and take its inverse, i.e. find the smallest input to it that produces at least the required value. Perhaps this is a dupe? – xnor – 2017-06-08T18:59:52.913

A third of the "More specifically" paragraph was missing because Markdown thinks a < followed by a letter is the start of an HTML tag. Preview your questions before you post them please :P – ETHproductions – 2017-06-08T20:11:54.387

What large cardinal axioms can we assume? – Peter Taylor – 2017-06-08T20:46:21.927

@Peter Taylor uhm, whichever you want I guess – PyRulez – 2017-06-08T20:47:28.517

Is the time machine to test our answers provided? – Magic Octopus Urn – 2017-06-09T16:20:52.097

Your definitions of "growing" and "slowest growing" are exceedingly hard for me to understand, and that is made much worse by using both n and N as different variables. It's gibberish when read aloud. – David Conrad – 2017-06-23T00:14:32.860

@DavidConrad essentially, it's asymptotic – PyRulez – 2017-06-23T00:20:31.623

How is this any different than saying "For every K there is an N such that P(N) > K"? Why did you introduce two N's? Just to be confusing? Sorry, I guess I'm just not smart enough to understand this. But naming both terms "N", distinguished only by case, is definitely a bad, bad idea. – David Conrad – 2017-06-23T02:04:32.623

@DavidConrad I can change it if you like (I didn't try reading it out loud). – PyRulez – 2017-06-25T05:19:09.547

Can we ignore 0 as an input? – MilkyWay90 – 2019-04-26T02:48:17.267

Answers

13

Haskell, 98 bytes, score = f_ε0⁻¹(n)

_#[]=0
k#(0:a)=k#a
k#(a:b)=1+(k#(([1..k]>>fst(span(>=a)b)++[a-1])++b))
f n=[k|k<-[0..],k#[k]>n]!!0

How it works

This computes the inverse of a very fast growing function related to Beklemishev’s worm game. Its growth rate is comparable to f_ε0, where f_α is the fast-growing hierarchy and ε0 is the first epsilon number.

For comparison with other answers, note that

  • exponentiation is comparable to f_2;
  • iterated exponentiation (tetration or ↑↑) is comparable to f_3;
  • ↑↑⋯↑↑ with m arrows is comparable to f_(m+1);
  • the Ackermann function is comparable to f_ω;
  • repeated iterations of the Ackermann function (constructions like Graham’s number) are still dominated by f_(ω+1);
  • and ε0 is the limit of all towers ω^ω^⋯^ω.
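A rough Python transliteration of the Haskell worm computation (hypothetical helper names; only practical for n ≤ 4, since the step count for k = 3 is already astronomically large):

```python
def worm_steps(k, a):
    # One worm list, per the Haskell `#` operator: a leading zero is
    # dropped for free; a nonzero head h with tail `rest` is replaced by
    # k copies of (prefix-of-rest-with-elements >= h, then [h-1]),
    # prepended to rest, and that counts as one step.
    steps = 0
    a = list(a)
    while a:
        head, rest = a[0], a[1:]
        if head == 0:
            a = rest
            continue
        prefix = []
        for v in rest:
            if v >= head:
                prefix.append(v)
            else:
                break
        a = (prefix + [head - 1]) * k + rest
        steps += 1
    return steps

def slow(n):
    # f n = least k such that k # [k] > n, as in the Haskell `f`
    k = 0
    while worm_steps(k, [k]) <= n:
        k += 1
    return k
```

For example, worm_steps(2, [2]) is 5, so slow(n) stays at 2 for n from 1 through 4, and reaching 3 already requires computing the worm game for k = 3.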

Anders Kaseorg

Posted 2017-06-08T18:39:49.743

Reputation: 29 242

I like the description here better.

– PyRulez – 2017-06-09T21:00:55.177

You could put in a link to Googology Wiki's introduction to the fast growing hierarchy – MilkyWay90 – 2019-06-28T03:05:39.853

18

Brachylog, 100 bytes

llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll

Try it online!

This is probably nowhere near the slowness of some other fancy answers, but I couldn't believe no one had tried this simple and beautiful approach.

Simply, we compute the length of the input number, then the length of this result, then the length of this other result... 100 times in total.

This grows as fast as log(log(log(...log(x)...))), with 100 base-10 logs.

If you input your number as a string, this will run extremely fast on any input you could try, but don't expect to ever see a result higher than 1 :D
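In Python terms, the same idea can be sketched with the decimal digit count standing in for Brachylog's l:

```python
def f(n):
    # Apply "number of decimal digits" 100 times; each application is
    # roughly a base-10 logarithm, so this is log iterated 100 times.
    for _ in range(100):
        n = len(str(n))
    return n
```

Anything below 10↑↑99 collapses to 1 almost immediately: after the first couple of applications the value is tiny, and len(str(1)) == 1 is a fixed point.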

Leo

Posted 2017-06-08T18:39:49.743

Reputation: 8 482

+1 just for pure insanity :o Fun fact: Also works in Jelly if you make it all caps. :P – HyperNeutrino – 2017-06-11T21:34:50.910

The first number that outputs 2 is 10↑↑99. – Post Rock Garf Hunter – 2017-06-22T13:49:43.010

11

JavaScript (ES6), Inverse Ackermann Function*, 97 bytes

*if I did it right

A=(m,n)=>m?A(m-1,n?A(m,n-1):1):n+1
a=(m,n=m,i=1)=>{while(A(i,m/n|0)<=Math.log2(n))i++;return i-1}

Function A is the Ackermann function. Function a is supposed to be the inverse Ackermann function. If I implemented it correctly, Wikipedia says that it will not hit 5 until m equals 2^2^2^2^16. I get a stack overflow at inputs around 1000.

Usage:

console.log(a(1000))

Explanations:

Ackermann Function

A=(m,n)=>                           Function A with parameters m and n
         m?                   :n+1  If m == 0, return n + 1; else,
           A(m-1,n?        :1)       If n == 0, return A(m-1,1); else,
                   A(m,n-1)          return A(m-1,A(m,n-1))

Inverse Ackermann Function

a=(m,n=m,i=1)=>{                                                Function a with parameter m, with n preinitialized to m and i preinitialized to 1
                while(A(i,m/n|0)<=Math.log2(n))                 While the result of A(i, floor(m/n)) is less than log₂ n,
                                               i++;             Increment i
                                                   return i-1}  Return i-1
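The same idea can be sketched in Python. Note this uses a simpler inverse variant (least m with A(m, m) ≥ n) rather than the answer's exact m/n and log2 formula, and adds memoization so small values actually finish:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(1 << 16)

@lru_cache(maxsize=None)
def A(m, n):
    # Standard two-argument Ackermann function
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

def alpha(n):
    # Least m with A(m, m) >= n: an inverse-Ackermann-style function
    m = 0
    while A(m, m) < n:
        m += 1
    return m
```

Since A(0,0)=1, A(1,1)=3, A(2,2)=7 and A(3,3)=61, alpha stays at 3 until the input reaches A(4,4), which is already a tower of 2s far beyond any physical computation.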

Stephen

Posted 2017-06-08T18:39:49.743

Reputation: 12 293

Isn't Stack Overflow good though? – NoOneIsHere – 2017-06-21T16:49:09.737

Your statement that it won't hit 5 until m=2^^7 is wrong. It won't hit 5 until m=2^^7-3, but at 2^^7-1, it *is* 5. I know that -3 is *very* small compared to 2^^7, but 5A5=2^^7-3<2^^7. (^^ represents tetration) – user75200 – 2018-01-13T17:46:03.507

8

Pure Evil: Eval

a=lambda x,y:(y<0)*x or eval("a("*9**9**9+"x**.1"+",y-1)"*9**9**9)
print a(input(),9**9**9**9**9)//1

The statement inside the eval creates a string of length 7*10^10^10^10^10^10^8.57 which consists of nothing but more calls to the lambda function, each of which will construct a string of similar length, on and on until eventually y becomes 0. Ostensibly this has the same complexity as the Eschew method below, but rather than relying on if-and-or control logic, it just smashes giant strings together (and the net result is getting more stacks...probably?).

The largest y value I can supply and compute without Python throwing an error is 2, which is already sufficient to reduce an input of max-float to a return value of 1.

A string of length 7,625,597,484,987 is just too big: OverflowError: cannot fit 'long' into an index-sized integer.

I should stop.
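A scaled-down sketch of the same trick that actually runs: with the repetition count cut from 9**9**9 to 3, the eval string is just a(a(a(x**.1,y-1),y-1),y-1), small enough to build and evaluate:

```python
# Scaled-down version of the eval trick: 3 repetitions instead of
# 9**9**9, so the generated string is tiny.
a = lambda x, y: (y < 0) * x or eval("a(" * 3 + "x**.1" + ",y-1)" * 3)

# With y=2 the tenth root ends up applied 1 + 3 + 9 = 13 times, so
# a(1e300, 2) = (1e300)**(0.1**13), a value only barely above 1.
```

The count follows the recurrence applications(y) = 1 + 3·applications(y-1), which is exactly what explodes when 3 is replaced by 9**9**9.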

Eschew Math.log: Going to the (10th-)root (of the problem), Score: function effectively indistinguishable from y=1.

Importing the math library is restricting the byte count. Let's do away with that and replace the log(x) function with something roughly equivalent: x**.1, which costs approximately the same number of characters but doesn't require the import. Both functions have a sublinear output with respect to input, but x^0.1 grows even more slowly. However, we don't care a whole lot; we only care that it has the same base growth pattern with respect to large numbers while consuming a comparable number of characters (e.g. x**.9 is the same number of characters, but grows more quickly, so there is some value that would exhibit the exact same growth).

Now, what to do with 16 characters. How about...extending our lambda function to have Ackermann Sequence properties? This answer for large numbers inspired this solution.

a=lambda x,y,z:(z<0)*x or y and a(x**.1,z**z,z-1)or a(x**.1,y-1,z)
print a(input(),9,9**9**9**99)//1

The z**z portion here prevents me from running this function with anywhere close to sane inputs for y and z, the largest values I can use are 9 and 3 for which I get back the value of 1.0, even for the largest float Python supports (note: while 1.0 is numerically greater than 6.77538853089e-05, increased recursion levels move the output of this function closer to 1, while remaining greater than 1, whereas the previous function moved values closer to 0 while remaining greater than 0, thus even moderate recursion on this function results in so many operations that the floating point number loses all significant bits).

Re-configuring the original lambda call to have recursion values of 0 and 2...

>>>1.7976931348623157e+308
1.0000000071

If the comparison is made to "offset from 0" instead of "offset from 1" this function returns 7.1e-9, which is definitely smaller than 6.7e-05.

The actual program's base recursion (z value) is 10^10^10^10^1.97 levels deep; as soon as y exhausts itself, it gets reset with 10^10^10^10^10^1.97 (which is why an initial value of 9 is sufficient), so I don't even know how to correctly compute the total number of recursions that occur: I've reached the end of my mathematical knowledge. Similarly I don't know if moving one of the **n exponentiations from the initial input to the secondary z**z would improve the number of recursions or not (ditto reverse).

Let's go even slower with even more recursion

import math
a=lambda x,y:(y<0)*x or a(a(a(math.log(x+1),y-1),y-1),y-1)
print a(input(),9**9**9e9)//1
  • n//1 - saves 2 bytes over int(n)
  • import math,math. saves 1 byte over from math import*
  • a(...) saves 8 bytes total over m(m,...)
  • (y>0)*x saves a byte over y>0and x
  • 9**9**99 increases byte count by 4 and increases recursion depth by approximately 2.8 * 10^x, where x is the old depth (a depth nearing a googolplex in size: 10^10^94).
  • 9**9**9e9 increases the byte count by 5 and increases recursion depth by...an insane amount. Recursion depth is now 10^10^10^9.93; for reference, a googolplex is 10^10^10^2.
  • lambda declaration increases recursion by an extra step: m(m(...)) to a(a(a(...))) costs 7 bytes

New output value (at 9 recursion depth):

>>>1.7976931348623157e+308
6.77538853089e-05

Recursion depth has exploded to the point at which this result is literally meaningless except in comparison to the earlier results using the same input values:

  • The original called log 25 times
  • The first improvement calls it 81 times
    • The actual program would call it (1e99)^2 or about 10^10^2.3 times
  • This version calls it 729 times
    • The actual program would call it (9^9^99)^3 or slightly less than 10^10^95 times.

Lambda Inception, score: ???

I heard you like lambdas, so...

from math import*
a=lambda m,x,y:y<0and x or m(m,m(m,log(x+1),y-1),y-1)
print int(a(a,input(),1e99))

I can't even run this, I stack overflow even with a mere 99 layers of recursion.

The old method (below) returns (skipping the conversion to an integer):

>>>1.7976931348623157e+308
0.0909072713593

The new method returns, using only 9 layers of recursion (rather than the full googol of them):

>>>1.7976931348623157e+308
0.00196323936205

I think this works out to be of similar complexity to the Ackermann sequence, only small instead of big.

Also thanks to ETHproductions for a 3-byte savings in spaces I didn't realize could be removed.

Old answer:

The integer truncation of the function log(i+1) iterated 25 times (originally 20) (Python) using lambda'd lambdas.

PyRulez's answer can be compressed by introducing a second lambda and stacking it:

from math import *
x=lambda i:log(i+1)
y=lambda i:x(x(x(x(x(i)))))
print int(y(y(y(y(y(input()))))))

100 characters used (was 99).

This produces an iteration count of 25, over the original 12. Additionally it saves 2 characters by using int() instead of floor(), which allowed for an additional x() stack. If the spaces after the lambda can be removed (I cannot check at the moment) then a 5th y() can be added. Possible!
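A quick sanity check (Python 3 here) that the two-level nesting really yields 5 × 5 = 25 applications of log(i+1):

```python
from math import log

x = lambda i: log(i + 1)
y = lambda i: x(x(x(x(x(i)))))       # 5 applications
nested = lambda i: y(y(y(y(y(i)))))  # 5 * 5 = 25 applications

# The same 25 applications written as a loop, for comparison:
def looped(i):
    for _ in range(25):
        i = log(i + 1)
    return i
```

Both compute the identical sequence of float operations, and int(nested(1e308)) is already 0, since the value drops below 1 after about six applications.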

If there's a way to skip the from math import by using a fully qualified name (e.g. x=lambda i:math.log(i+1)), that would save even more characters and allow for another stack of x(), but I don't know if Python supports such things (I suspect not). Done!

This is essentially the same trick used in xkcd's blog post on large numbers; however, the overhead in declaring lambdas precludes a third stack:

from math import *
x=lambda i:log(i+1)
y=lambda i:x(x(x(i)))
z=lambda i:y(y(y(i)))
print int(z(z(z(input()))))

This is the smallest recursion possible with 3 lambdas that exceeds the computed stack height of 2 lambdas (reducing any lambda to two calls drops the stack height to 18, below that of the 2-lambda version), but unfortunately requires 110 characters.

Draco18s no longer trusts SE

Posted 2017-06-08T18:39:49.743

Reputation: 3 053

FYI, I count 103 bytes in the top program – ETHproductions – 2017-06-08T22:17:06.830

@ETHproductions oh oops. I probably did a count without the int conversion and thought I had some spares. – Draco18s no longer trusts SE – 2017-06-08T22:23:39.717

I think you can remove the space after import and the space after y<0. I don't know much Python though so I'm not certain – ETHproductions – 2017-06-08T22:24:41.463

Also, perhaps y<0and x or m(m,m(m,log(x+1),y-1),y-1) to save another byte (assuming x is never 0 when y<0) – ETHproductions – 2017-06-08T22:25:28.637

@ETHproductions That in fact does work. I'd looked at removing the space, but the editor I was using dropped the syntax highlighting so I thought it wasn't valid. Aaand... I can drop a space from the import. – Draco18s no longer trusts SE – 2017-06-08T22:27:37.357

Instead of math.log(x+1), how about x.bit_length()? It's built-in so you don't need to import math, and and it returns an int so there's no need for //1. – mathmandan – 2017-06-09T23:35:02.080

.bit_length() is really quite long and is essentially log2(x), which has comparable growth to the shorter x**.1 that I'm using now. – Draco18s no longer trusts SE – 2017-06-10T01:09:12.387

(Well, certainly any logarithm grows more slowly than the tenth root, which is what x**.1 is.) – mathmandan – 2017-06-10T01:35:11.333

@mathmandan Yes, but the difference isn't relevant when stacked a billion billion billion times. Heck, that may as well be a /2 for all the difference it makes. The power of the algorithm lies in its recursion depth. At the maximum recursion depth I can run on my PC a /2 results in stripping 93 decimal digits off the input. That is, the function recursed about 3.3*10^93 times on the inputs of just (x,9,4) and my answer here is pumping in (x, 9, 9^9^9^9). That's a stack of [four nines*](http://www.wolframalpha.com/input/?i=9%5E9%5E9%5E9). For reference, (x,9,3) strips fifteen digits.

– Draco18s no longer trusts SE – 2017-06-10T02:07:20.107

u=lambda n,p=3:p and u(n.bit_length(),p-1)or n is 46 bytes. That means, if you replace the 3 with any integer X that takes 55 or fewer bytes to evaluate, you can iterate the log2 function X times. (In that amount of space, you can get a lot more than 9**9**9**9 iterations.) Indeed, this approach basically transforms the question into "write the biggest number you can in 55 bytes or less." – mathmandan – 2017-06-10T02:26:50.897

@mathmandan I'm not enough of a mathemetician to show that it is slower, unfortunately. – Draco18s no longer trusts SE – 2017-06-10T03:39:25.160

@mathmandan you know what I can do in 55 bytes, though? eval("a("*p**p**p+"x**.1"+",y-1)"*p**p**p) plus 9**9**9**9**9 and that merely computes the size of the next call which is 10^10^10^10^10^10^8.57 deep, each one of which is going to create almost as many more. – Draco18s no longer trusts SE – 2017-06-10T04:06:22.880

Well...log(x) grows more slowly than ANY positive power of x (for large values of x), and this isn't hard to show using L'Hopital's rule. I am pretty sure your current version does (...(((x**.1)**.1)**.1)** ...) a whole bunch of times. But those powers just multiply, so it's effectively x**(.1** (whole bunch)), which is a (very small) positive power of x. That means that it actually grows faster than a SINGLE iteration of the log function (although, granted, you'd have to look at VERY large values of x before you'd notice...but that's what we mean by "going to infinity"). – mathmandan – 2017-06-10T07:49:52.803

@mathmandan As I've already said, I'm at the limits of my mathematical understanding. I cannot ascertain for myself that what you're saying is true nor can I provide a counter argument. – Draco18s no longer trusts SE – 2017-06-10T16:02:31.223

4

Haskell, 100 bytes

f 0 a b=a^b
f c a b=foldr(f$c-1)a$[0..b]>>[a]
i=length.show
0#x=i x
y#x=i$(y-1)#x
g=(f(f 9 9 9)9 9#)

Try it online!

This solution does not take the inverse of a quickly growing function; instead it takes a rather slowly growing function, in this case length.show, and applies it a bunch of times.

First we define a function f. f is a bastard version of Knuth's up-arrow notation that grows slightly faster (slightly is a bit of an understatement, but the numbers we are dealing with are so large that in the grand scheme of things...). We define the base case of f 0 a b to be a^b, or a to the power of b. We then define the general case to be (f$c-1) applied to b+2 instances of a. If we were defining a Knuth-up-arrow-like construct we would apply it to b instances of a, but b+2 is actually golfier and has the advantage of growing faster.
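For tiny arguments, f can be transliterated into Python (a sketch; anything much beyond c = 1 with small a and b explodes immediately):

```python
def f(c, a, b):
    # f 0 a b = a^b; otherwise fold f(c-1) over b+1 copies of a,
    # seeded with a (so b+2 instances of a are involved in total),
    # mirroring the Haskell foldr(f$c-1)a$[0..b]>>[a].
    if c == 0:
        return a ** b
    r = a
    for _ in range(b + 1):
        r = f(c - 1, a, r)
    return r
```

f(0, 2, 3) is 8, and f(1, 2, 2) is already 2^(2^(2^2)) = 65536; f(2, 2, 2) is a power tower of 2s tens of thousands of levels high.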

We then define the operator #. a#b is defined to be length.show applied to b, a times. Every application of length.show is approximately equal to log10, which is not a very swiftly growing function.

We then go about defining our function g, which takes an integer and applies length.show to it a bunch of times. To be specific, it applies length.show to the input f(f 9 9 9)9 9 times. Before we get into how large this is, let's look at f 9 9 9. f 9 9 9 is greater than 9↑↑↑↑↑↑↑↑↑9 (nine arrows), by a massive margin. I believe it is somewhere between 9↑↑↑↑↑↑↑↑↑9 (nine arrows) and 9↑↑↑↑↑↑↑↑↑↑9 (ten arrows). Now this is an unimaginably large number, far too big to be stored on any computer in existence (in binary notation). We then take that and put it as the first argument of our f; that means our value is larger than 9↑↑↑↑↑↑...↑↑↑↑↑↑9 with f 9 9 9 arrows in between. I'm not going to describe this number, because it is so large I don't think I would be able to do it justice.

Each length.show is approximately equal to taking the log base 10 of the integer. This means that most numbers will return 1 when g is applied to them. The smallest number to return something other than 1 is 10↑↑(f(f 9 9 9)9 9), which returns 2. Let's think about that for a moment. As abominably large as that number we defined earlier is, the smallest number that returns 2 is 10 to its own power that many times. That's 1 followed by 10↑(f(f 9 9 9)9 9) zeros.

For the general case of n the smallest input outputting any given n must be (10↑(n-1))↑↑(f(f 9 9 9)9 9).

Note that this program requires massive amounts of time and memory for even small n (more than there is in the universe, many times over). If you want to test this, I suggest replacing f(f 9 9 9)9 9 with a much smaller number; try 1 or 2 if you ever want to get any output other than 1.

Post Rock Garf Hunter

Posted 2017-06-08T18:39:49.743

Reputation: 55 382

Meh, I don't think anyone cares about how long it takes or how much memory is required for the program to run on these sorts of questions. – Simply Beautiful Art – 2017-07-20T23:11:09.093

3

APL: apply log(n + 1), e^9^9...^9 times, where the length of that chain is itself e^9^9...^9 with a chain one shorter, and so on.

⌊((⍟1+⊢)⍣((*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣(*⍣⊢)))))))))))))))))))))9))⊢

Uriel

Posted 2017-06-08T18:39:49.743

Reputation: 11 708

Is there a way I can run this? – Draco18s no longer trusts SE – 2017-06-08T22:13:47.757

@Draco18s get a quantum computer with virtually-infinite memory, install a decent APL distribution, and spend the time you wait for it to create an aging preventing serum, cos you'll have to sit tight for a couple centuries. – Uriel – 2017-06-08T22:19:49.423

Haha. Ok then. :p – Draco18s no longer trusts SE – 2017-06-08T22:24:25.127

Are you sure this approaches infinity? – PyRulez – 2017-06-08T23:07:31.780

@PyRulez it's just the same as the other solutions, only with a lot more iterations on the log. but more iteration is still the same closing - defied by exponenting that much as well. I was not sure about the e^n^n...^n part so I turned it into constant, but it might be true – Uriel – 2017-06-08T23:18:34.847

Yeah, I think that if you increase the iterations based on n, it will stay near 0. – PyRulez – 2017-06-08T23:19:43.917

"for a couple centuries" More like a giggolplex. – user75200 – 2018-01-13T17:37:49.540

3

MATL, 42 bytes

iXI:`2*.]X{oXH1H/16L+XKxI:`Yl.]K+XKXdXzXGx

Try it online!

This program is based on the harmonic series, together with the Euler–Mascheroni constant. As I was reading @LuisMendo's documentation on his MATL language (with capital letters, so it looks important) I noticed this constant. The slowly growing function is the harmonic partial sum

H_k = 1 + 1/2 + ... + 1/k = ln k + γ + ε_k

where ε_k ~ 1/(2k)

I tested up to 10000 iterations (in Matlab, since it is too large for TIO) and it scores below 10, so it is very slow.
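The underlying growth can be sketched directly in Python: the harmonic partial sum H_n tracks ln n + γ with a residual of roughly 1/(2n):

```python
from math import log

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def H(n):
    # Harmonic partial sum: 1 + 1/2 + ... + 1/n, grows like ln(n) + gamma
    return sum(1.0 / k for k in range(1, n + 1))
```

For n = 10000 the sum is about 9.79 (still below 10, matching the observation above), and H(1000) - ln(1000) - γ is about 5e-4, i.e. close to 1/(2·1000).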


Explanations:

 iXI      % ask user input (number of iterations)

:`2*.]    % do...while loop, multiply by 2

X{        % convert numeric array into cell array

o         % convert to double precision array 

XH1H/     % copy to clipboard H and divide by 1: now we have an array of 1/2k

16L       % Euler–Mascheroni constant 

+         % addition (element-wise, singleton expansion)

XKxI:`    % save, clear the stack, do...while loop again

  Yl      % logarithm 

  .]      % break, ends the loop

K+XK      % paste from clipboard K, sum all

Xd        % trim: keep the diagonal of the matrix 

Xz        % remove all zeros

XG        % plot (yes it plots on-line too!)

x         % clear the stack
          % (implicit) display

Empirical proof: (ln k) + 1 in red always above ln k + γ + ε_k in blue.


The program for (ln k) + 1 was made in

Matlab, 14 bytes (down from 47, then 18)

n=input('')
A=(1:n)
for k=1:n
A(k)=log(k)+1
end

Interesting to note that the elapsed time for n=100 is 0.208693s on my laptop, but only 0.121945s with d=rand(1,n);A=d*0; and even less, 0.112147s with A=zeros(1,n). If zeros is a waste of space, it saves speed! But I am diverging from the topic (probably very slowly).

Edit: thanks to Stewie for helping reduce this Matlab expression to, simply:

 @(n)log(1:n)+1

J Doe

Posted 2017-06-08T18:39:49.743

Reputation: 81

+1 for not just being the inverse of a fast function – PyRulez – 2017-06-09T19:26:45.533

An interesting SO-post about your interesting note. :) – Stewie Griffin – 2017-06-20T21:28:20.580

By the way, golfing the script in the bottom (since you included the byte count): The last MATLAB script is simply: n=input('');A=log(1:n)+1, or as an unnamed anonymous function (14 bytes): @(n)log(1:n)+1. I'm not sure about MATLAB, but A=log(1:input(''))+1 works in Octave... – Stewie Griffin – 2017-06-20T21:31:56.657

thank you @Stewie n=input('');A=log(1:n)+1 works, @(n)log(1:n)+1 don't (indeed a valid function with handle in Matlab, but input is not asked), A=log(1:input(''))+1 works and can be shortened log(1:input(''))+1 – J Doe – 2017-06-22T13:25:09.160

What I meant with the anonymous function was this. That's the "normal" way to save bytes (on this site at least) by requiring the input is given as function arguments (meta-post) instead of command line. Also, the f= doesn't need to be counted, since it's possible to just to: @(n)log(1:n)+1 followed by ans(10) to get the first 10 numbers.

– Stewie Griffin – 2017-06-22T13:50:02.200

2

The floor of the function log(i+1) iterated 14 times (Python)

import math
x=lambda i: math.log(i+1)
print int(x(x(x(x(x(x(x(x(x(x(x(x(x(x(input())))))))))))))))

I don't expect this to do very well, but I figured it's a good start.

Examples:

  • e^n^n^n^n^n^n^n^n^n^n^n^n^n^n -> ~n (approximately n)

PyRulez

Posted 2017-06-08T18:39:49.743

Reputation: 6 547

If you use int instead of floor, you can fit in another x( – Beta Decay – 2017-06-09T17:47:31.640

@BetaDecay Okay I updated it. – PyRulez – 2017-06-09T20:55:05.530

Shouldn't the expression be e^e^e^e...^n? Also, why is there a space after the :? – CalculatorFeline – 2017-06-11T17:10:28.917

@CalculatorFeline because this isn't code golf, it just needs to be under 100 bytes. – Cyoce – 2017-06-17T03:54:53.400

So? What's so bad about saving a byte so you can add another x() call? – CalculatorFeline – 2017-06-17T19:29:53.700

2

Python 3, 100 bytes

The floor of the function log(i+1) iterated 99999999999999999999999999999999999 times.

One can use exponents to make the above number even larger...

from math import *
s=input()
exec("s=log(s+1);"*99999999999999999999999999999999999)
print(floor(s))

Try it online!
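A scaled-down sketch of the same exec trick (999 iterations instead of 10^35) that runs without the OverflowError mentioned in the comments:

```python
from math import log, floor

# Same string-multiplication trick, with a repetition count small
# enough to actually build and execute.
s = 10 ** 9
exec("s=log(s+1);" * 999)

# x -> log(1 + x) converges toward its fixed point 0 roughly like
# 2/iterations, so after 999 iterations s is about 0.002 and the
# floor is 0.
```

This illustrates why the full-size version only needs the iteration count to exceed the input's "height": essentially every representable input collapses to floor 0.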

Leaky Nun

Posted 2017-06-08T18:39:49.743

Reputation: 45 011

Do solutions have to actually work? This throws an OverflowError. – ETHproductions – 2017-06-08T20:15:23.700

@ETHproductions in problems like this, it is commonly accepted that solutions only need to be theoretically viable, on a machine with infinite memory and cpu. If you want to try this, cut the 99999...999 down to just 999 or so – Sparr – 2017-06-08T23:24:48.433

So why not use 9**9**9**...**9**9e9? – CalculatorFeline – 2017-06-11T17:18:15.273

2

Ruby, 100 bytes, score⁻¹ = f_(ω^ω+1)(n²)

Basically borrowed from my Largest Number Printable, here's my program:

->k{n=0;n+=1 until(H=->z,a=[0]*z{b,*c=a;z.times{z+=b ?H[z,b==1?c:[b>1?b-1:z]*z+c]:z};z};H[n*n]>k);n}

Try it online

Basically computes the inverse of f_(ω^ω+1)(n²) in the fast growing hierarchy. The first few values are

x[0] = 1
x[1] = 1
x[2] = 1
x[3] = 1
x[4] = 2

And then it continues to output 2 for a very long time. Even x[G] = 2, where G is Graham's number.

Simply Beautiful Art

Posted 2017-06-08T18:39:49.743

Reputation: 2 140

But what about g(f_(ω_9001^CK)(3)) where f is FGH? – user75200 – 2018-01-06T19:58:02.847

@user75200 the fgh isn't well defined for uncomputable ordinals. – Simply Beautiful Art – 2018-01-07T13:09:23.947

FGH *is* well-defined for uncomputable ordinals, as they have fundamental sequences. It's just uncomputable. – user75200 – 2018-01-13T17:33:04.823

@user75200 No. Fundamental sequences are very arbitrary. I could define ω_9001^CK[x] = x to have a fundamental sequence of length ω_9001^CK, which is computable for finite x, but very likely not what you wanted. By "well-defined," I meant there isn't a standard fundamental sequence for uncomputable ordinals that everyone can agree on. – Simply Beautiful Art – 2018-01-13T18:48:20.127

While it’s true that fundamental sequences aren’t unique, a fundamental sequence for a countable ordinal is supposed to be of length ω. – Anders Kaseorg – 2018-02-18T02:22:16.833

@AndersKaseorg Usually yes, but not necessarily. I was merely making a point. – Simply Beautiful Art – 2018-02-18T02:36:05.197

@user75200 f_(ω_9001^CK) may be computable btw. For example, take (ω_9001^CK[n])[k] = k for all k ≤ n. Then f_(ω_9001^CK) = f_ω, nothing too special. – Simply Beautiful Art – 2018-02-18T02:37:49.670

0

Mathematica, 99 bytes

(assuming ± takes 1 byte)

0±x_=1±(x-1);y_±0=y+1;x_±y_:=(y-1)±x±(x-1);(i=0;NestWhile[(++i;#±#±#±#±#±#±#±#)&,1,#<k&/.k->#];i)&

The first 3 commands define x±y to evaluate Ackermann(y, x).

The result of the function is the number of times f(#)=#±#±#±#±#±#±#±# needs to be applied to 1 before the value reaches the value of the parameter. As f(#)=#±#±#±#±#±#±#±# (that is, f(#)=Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[Ackermann[#, #], #], #], #], #], #], #]) grows very fast, the function grows very slowly.

user202729

Posted 2017-06-08T18:39:49.743

Reputation: 14 620

0

Javascript (ES6), 94 bytes

(p=j=>i=>h=>g=>f=>x=>x<2?0:1+p(p(j))(j(i))(i(h))(h(g))(g(f))(f(x)))(_=x=>x)(_)(_)(_)(Math.log)

Explanation:

Id refers to x => x in the following.

Let's first take a look at:

p = f => x => x < 2 ? 0 : 1 + p(p(f))(f(x))

p(Math.log) is approximately equal to log*(x).

p(p(Math.log)) is approximately equal to log**(x) (number of times you can take log* until the value is at most 1).

p(p(p(Math.log))) is approximately equal to log***(x).

The inverse Ackermann function alpha(x) is approximately equal to the minimum number of times you need to compose p until the value is at most 1.
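The single-argument p combinator from above can be sketched in Python (a hypothetical port, not the answer's JS):

```python
import math

def p(f):
    # p(f)(x) = 0 if x < 2, else 1 + p(p(f))(f(x)); each recursive step
    # escalates the function being iterated, exactly as in the JS.
    return lambda x: 0 if x < 2 else 1 + p(p(f))(f(x))
```

The outputs collapse brutally fast: p(math.log)(100) is 2, and even p(math.log)(1e300) is only 3.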

If we then use:

p = g => f => x => x < 2 ? 0 : 1 + p(p(g))(g(f))(f(x))

then we can write alpha = p(Id)(Math.log).

That's pretty boring, however, so let's increase the number of levels:

p = h => g => f => x => x < 2 ? 0 : 1 + p(p(h))(h(g))(g(f))(f(x))

This is like how we constructed alpha(x), except instead of doing log**...**(x), we now do alpha**...**(x).

Why stop here though?

p = i => h => g => f => x => x < 2 ? 0 : 1 + p(p(i))(i(h))(h(g))(g(f))(f(x))

If the previous function is f(x)~alpha**...**(x), this one is now ~ f**...**(x). We do one more level of this to get our final solution.

es1024

Posted 2017-06-08T18:39:49.743

Reputation: 8 953

"p(p(x => x - 2)) is approximately equal to log**(x) (number of times you can take log* until the value is at most 1)". I don't understand this statement. It seems to me that p(x => x - 2) should be "the number of times you can subtract 2 until the value is at most 1". That is, p(x => x - 2)should be the "divide by 2" function. Thereforep(p(x => x - 2))should be "the number of times you can divide by 2 until the value is at most 1"...that is, it should be thelogfunction, notlogorlog*`. Perhaps this could be clarified? – mathmandan – 2017-06-16T19:32:16.197

@mathmandan looks like I made a typo on that line, it should be p = f => x => x < 2 ? 0 : 1 + p(p(f))(f(x)), where p is passed p(f) like in the other lines, not f. – es1024 – 2017-06-17T03:39:07.177

0

Clojure, 91 bytes

(defn f[%](apply +(for[n(range %)](/(loop[r 1 v n](if(< v 1)r(recur(* r v)(Math/log v))))))))

Kind of calculates the sum 1/(n * log(n) * log(log(n)) * ...), which I found here. But the function ended up 101 bytes long, so I had to drop the explicit number of iterations and instead iterate as long as the number is greater than one. Example outputs for inputs of 10^i:

0 1
1 3.3851305685279143
2 3.9960532565317575
3 4.232195089969394
4 4.370995106860574
5 4.466762285601703
6 4.53872567524327
7 4.595525574477128
8 4.640390570825608
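A Python re-implementation sketch of the same sum, which reproduces the outputs above:

```python
from math import log

def term(n):
    # 1 / (n * log n * log log n * ...): multiply in factors while
    # the running value is still at least 1, then take the reciprocal.
    r, v = 1.0, float(n)
    while v >= 1:
        r *= v
        v = log(v)
    return 1.0 / r

def f(n):
    # Partial sum over k = 0 .. n-1, matching the Clojure (range %)
    return sum(term(k) for k in range(n))
```

f(10) gives approximately 3.3851305685279143, matching the table's second row.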

I assume this modified series still diverges, but I do not know how to prove it.

The third series actually requires a googolplex number of terms before the partial sums exceed 10.

NikoNyrh

Posted 2017-06-08T18:39:49.743

Reputation: 2 361