Makers5x5

Click here or on the picture above to play the game. Via Cory Doctorow on BoingBoing:

As part of the ongoing serialization of my forthcoming novel MAKERS, Tor.com has commissioned Idiots’ Books to produce 81 CC-licensed, interlocking illustrations, one for each installment. Periodically, Tor is adding these to a little Flash-toy that lets you rotate and realign the images like tiles (each has edge-elements that match up with the others). They’ve just put up the 5×5 grid, which I’m finding addictively fun.

Makers Tile Game 5×5

The Idiots’ Books illustrations are the work of Robbi Behr and Matthew Swanson ’97. In a previous thread on this series, we attempted to use the collective brains of the Eph community to solve the question of how many total illustrations could be produced by the 81 interlocking tiles. You can read the whole discussion here. The best answer we received, from Nick, is presented below.

Nick says:

My girlfriend pointed me to this problem. When the 9×9 square restriction is removed, it becomes an instance of the polyomino counting problem:

http://en.wikipedia.org/wiki/Polyomino
http://mathworld.wolfram.com/Polyomino.html

Specifically, it’s the fixed polyomino counting problem, because we consider a rotation of a given arrangement to be a new arrangement. This problem is more or less unsolved. The only known way to figure it out is essentially to just enumerate them all and count. While there are fancy computer algorithms to speed this up a bit, the problem gets out of hand rather quickly and to my knowledge no one has ever managed to get up to 81.

This list shows some smaller results:
http://www.research.att.com/~njas/sequences/table?a=1168&fmt=4

The left column is the number of tiles, and the right is the number of arrangements.
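
For a feel for why brute force is the only option, here is a minimal enumerator in Python (my own sketch, nothing like the optimized algorithms Nick alludes to). It grows each shape by one cell in every possible way and deduplicates by translation, which reproduces the first few entries of that list:

```python
def count_fixed_polyominoes(n):
    """Return [P(1), ..., P(n)], where P(k) is the number of fixed
    polyominoes with k cells: translations are identified, but
    rotations and reflections count as distinct arrangements."""
    def canon(cells):
        # Normalize by sliding the shape against the origin, so two
        # shapes that differ only by translation compare equal.
        mx = min(x for x, y in cells)
        my = min(y for x, y in cells)
        return frozenset((x - mx, y - my) for x, y in cells)

    shapes = {canon({(0, 0)})}
    counts = [len(shapes)]
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y in shape:
                for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if nb not in shape:
                        grown.add(canon(shape | {nb}))
        shapes = grown
        counts.append(len(shapes))
    return counts

print(count_fixed_polyominoes(5))  # [1, 2, 6, 19, 63] -- the list's first entries
```

This works because every polyomino of k+1 cells can be built by adding one cell to some polyomino of k cells, but the number of shapes to store explodes quickly, which is exactly the problem Nick describes.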

Note that the polyomino counting problem does not take into account the question of rotating the individual tiles, or the fact that individual tiles are different from one another. It only counts the shapes into which they may be arranged. So, the number of arrangements of n tiles is n!*P(n)*4^n, where P(n) is the number of fixed polyominoes of that size: P(n) choices of shape, n! ways to assign the distinct tiles to the cells, and 4 orientations for each tile.
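
Plugging the first few values of P(n) into that formula is a one-liner in Python (a quick sketch; the P(n) values are the ones from the list above):

```python
from math import factorial

P = {1: 1, 2: 2, 3: 6, 4: 19, 5: 63}  # fixed-polyomino counts from the list above

def arrangements(n):
    # n! ways to order the distinct tiles, times P(n) shapes,
    # times 4 rotations for each of the n tiles.
    return factorial(n) * P[n] * 4 ** n

for n in sorted(P):
    print(n, arrangements(n))  # n = 5 already gives 7,741,440
```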

A quick jump over to Wolfram Alpha reveals this to be 26980519328800085374834342152755878331730212833210662912000000, or about 2.7×10^61, for n = 28, the highest value given in the above list. For comparison, the observable universe is estimated to contain 10^80 atoms. It seems quite reasonable to say, then, that the number of free-form tile arrangements when using all 81 tiles exceeds by many orders of magnitude the number of atoms in the universe.

In response to that mind-blowing analysis:

dm ’10 says:

Nick: thanks for putting a name to the problem. It’s good to hear that the reason we were having trouble is that it’s actually an open problem, rather than just a simple trick we were missing.

In comparison to atoms in the known universe, you can ignore the polyomino considerations completely (assume a 9×9 square) and you still have 4^81*(81!) configurations, which is roughly 10^170. So even if every atom in the universe had its own individual sub-universe with its own 10^80 atoms, the number of configurations within a 9×9 square would still be larger than the total number of atoms in that universe-of-universes. If you do multiply by the P(81) term (whatever that might be) to account for the number of 81-tile polyominoes, you’d probably be able to add a few more levels of universes-within-universes to that example.
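
Since Python integers have arbitrary precision, dm’s figure is easy to check exactly (a quick sketch):

```python
from math import factorial

# 4 orientations per tile, times the orderings of 81 distinct tiles,
# with the shape fixed as a 9x9 square (so no polyomino term needed).
configs = 4 ** 81 * factorial(81)

print(len(str(configs)))  # 170 digits, i.e. roughly 10^170
print(f"{configs:.2e}")   # about 3.39e+169
```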

And these numbers are still relatively small by mathematical standards, especially once you start playing around with Ackermann or busy beaver functions.
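
To give a flavor of the Ackermann half of that remark, here is the standard two-argument Ackermann–Péter function in Python (a sketch; the naive recursion is hopeless beyond tiny inputs, which is rather the point):

```python
import sys
sys.setrecursionlimit(10_000)  # the recursion gets deep fast

def ackermann(m, n):
    """Ackermann-Peter function: outgrows every primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))  # 61
print(ackermann(3, 5))  # 253, since A(3, n) = 2**(n + 3) - 3
# ackermann(4, 2) is 2**65536 - 3, a 19,729-digit number -- don't wait for it.
```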

Matthew Swanson says:

“Busy beaver functions?” The layman readership demands explanation.

Nick followed up:

The busy beaver function is a bit technical, but I’ll give it a go.

So, in order to understand the busy beaver function, we need to understand Turing Machines, which are hypothetical computing devices proposed by Alan Turing. The machine consists of a few parts: a read/write head, a memory bank with some rules, a memory bank maintaining the current ‘state’ of the machine, and an infinite ticker tape on which we can write symbols. For simplicity, we’ll use 0 and 1 as our alphabet of symbols. This means that every spot on the ticker tape can be either a 0 or a 1, or can be blank. We’ll say that everything starts out as a blank, but you can also use the starting contents of the tape as input to the machine. The neat thing about these is that they are capable of any computation of which any computing device is capable. They might take a bit longer, but if the super-computer down in Los Alamos can solve it, so can a humble Turing Machine.

The Turing machine process goes like this:

1. Read the symbol currently under the head.
2. Consult the rule table, and find the entry corresponding to the current state and the current symbol.
3. That rule will say to write a new symbol, move either left or right, and enter a new state. One of the potential new states is the HALT state, which means it’s done computing, and the answer is whatever’s left on the tape.

So, here’s an example rule:

If I’m in state 0, and read a BLANK, write a 1, move RIGHT, and enter state 0.

This machine will simply fill the entire tape with an endless string of 1’s. It effectively counts in unary. With a little more effort, we could make it count in binary.
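
Here is what that looks like as an actual program, a minimal simulator in Python (my own sketch): the rule table encodes exactly the one-rule example above, and a step cap stands in for watching it run forever:

```python
# A rule table maps (state, symbol_under_head) to
# (symbol_to_write, direction, next_state). None is a blank cell,
# and 'H' stands for the HALT state.
RULES = {
    (0, None): (1, 'RIGHT', 0),  # the example rule above
}

def run(rules, max_steps):
    """Run a Turing Machine on an initially blank tape.
    A dict indexed by position plays the role of the infinite tape."""
    tape, pos, state = {}, 0, 0
    for step in range(max_steps):
        if state == 'H':
            return True, step, tape
        symbol, direction, state = rules[(state, tape.get(pos))]
        tape[pos] = symbol
        pos += 1 if direction == 'RIGHT' else -1
    return state == 'H', max_steps, tape

halted, steps, tape = run(RULES, 20)
print(halted)                # False -- this machine never halts
print(sorted(tape.items()))  # a steadily growing string of 1's
```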

An interesting question to ask about Turing Machines is whether they eventually enter the HALT state. The one above never does. The general form of the question is, if I give you an arbitrary Turing Machine (meaning any set of rules), will it halt eventually? Suppose that, hoping to solve this, you came up with a way to translate the settings of a Turing Machine into 0’s and 1’s. You could then feed these translations into another Turing Machine. That Turing Machine’s set of rules would be such that it would read the other Machine, do a bunch of figuring, and then leave a 0 or a 1 on the tape, telling you whether the first Machine would have halted.

Is such a Turing Machine possible? As it turns out, no. It’s clearly possible to make one that always correctly identifies the ones that do halt. Just simulate the execution of the machine, and halt with the YES output when it halts. But what if the answer is no? That strategy would then mean that your second machine never halts either. This is called the Halting Problem, and Turing proved that it’s undecidable. The proof is a bit tricky, so I’ll leave it as an exercise for the reader.
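
(For anyone who wants the shape of that exercise, here it is rendered as Python rather than tape-and-states: assume a halts() oracle exists, then build a program that defeats it by asking the oracle about itself.)

```python
def halts(program, input_data):
    """Hypothetical oracle: True iff program(input_data) eventually halts.
    The contradiction below shows it cannot actually be implemented."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:
            pass  # loop forever
    else:
        return    # halt immediately

# contrary(contrary) halts if and only if halts() says it doesn't --
# a contradiction, so no such halts() can exist.
```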

So, this brings us at last to the busy beaver function. Suppose I take the set of all possible Turing Machines with N states (meaning that they can enter states 1…N and HALT). If I run them all, and take only the ones that eventually halt, what is the maximum number of steps that will be taken before a halt? (Some define it as the maximum number of 1’s, but I like this version better).

For example, with only 1 state (call it STATE 0), it’s not too hard. Compare these two rules:

If I read a BLANK, and I’m in STATE 0, write a 1, move RIGHT and enter STATE 0.

and

If I read a BLANK, and I’m in STATE 0, write a 1, move RIGHT and HALT.

The first rule goes on forever, and the second rule halts after 1 step. Every other 1-state machine is a variation on these two, except with moving LEFT or writing 0’s. They all either never halt, or halt after 1 move. Therefore BusyBeaver(1) = 1.
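
That case analysis is small enough to check by brute force. A sketch, reusing the simulator idea from above: enumerate every possible 1-state rule table, run each machine with a generous step cap, and keep the longest halting run.

```python
from itertools import product

def run(rules, max_steps):
    """Same dict-as-tape simulator as above, returning (halted, steps)."""
    tape, pos, state = {}, 0, 0
    for step in range(max_steps):
        if state == 'H':
            return True, step
        symbol, direction, state = rules[(state, tape.get(pos))]
        tape[pos] = symbol
        pos += 1 if direction == 'RIGHT' else -1
    return state == 'H', max_steps

# With one state, a machine is just its response to each readable
# symbol (BLANK, 0, or 1): what to write, which way to move, and
# whether to stay in STATE 0 or HALT.
responses = list(product((0, 1), ('LEFT', 'RIGHT'), (0, 'H')))
best = 0
for combo in product(responses, repeat=3):
    rules = dict(zip([(0, None), (0, 0), (0, 1)], combo))
    halted, steps = run(rules, 1000)
    if halted:
        best = max(best, steps)

print(best)  # 1 -- BusyBeaver(1) = 1, as argued above
```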

The Busy Beaver number BusyBeaver(N) is, in other words, the maximum number of moves an N-state Turing Machine that eventually halts can make before halting.

Now, suppose I had a handy way to figure out the Busy Beaver numbers. What if I made a Turing Machine to do this:

-Read in the 0’s and 1’s translations of another Turing Machine.
-Count how many states it uses, call this number N.
-Simulate the machine for BusyBeaver(N) steps.
-If it has halted, return YES; if it hasn’t halted yet, return NO.

This machine solves the Halting Problem. If BusyBeaver(N) is the maximum number of steps it could take before halting, and it hasn’t halted after that many steps, we can say for certain that it doesn’t halt ever.
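
In Python-flavored pseudocode the whole reduction is just a few lines (busy_beaver, number_of_states, and simulate here are the hypothetical pieces the argument is about, not real functions):

```python
def would_halt(machine):
    """Decide the Halting Problem -- IF busy_beaver were computable."""
    n = number_of_states(machine)   # read straight off the encoding
    bound = busy_beaver(n)          # the impossible step
    halted = simulate(machine, max_steps=bound)
    # No halting n-state machine runs longer than busy_beaver(n) steps,
    # so surviving past the bound means it never halts.
    return halted
```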

But, as we saw earlier, the Halting Problem can’t be solved by Turing Machines. So our assumption that we can figure out the value of BusyBeaver(N) in general must be false: BusyBeaver(N) is uncomputable. If we could compute it, we could solve the Halting Problem.

What if, instead of BusyBeaver(N), we tried to compute a function called ReallyReallyBig(N)? We pick a function that’s so astronomically huge that it’s bigger than BusyBeaver(N) for all N, and compute that instead. Well, this is no good either. Substitute ReallyReallyBig for BusyBeaver in the machine above, and run for that many steps instead. That would solve the Halting Problem too.

The result is that BusyBeaver(N) grows faster than any computable function. As we saw, the number of arrangements of the tiles is computable. That means that BusyBeaver(N) eventually gets so huge that it dwarfs even that.

For more on really big numbers, including what is probably a better explanation of BusyBeaver:

http://www.scottaaronson.com/writings/bignumbers.html

To which we can only say, with sophmom: Wow. Or as Matthew commented on his blog: Little did we know we were stumbling into a proposition of such cosmic proportions.
