
[Answers] Turing Completeness of System Which Randomly Fails to Complete Calculations

Problem Detail:

If one were to create a variant of a Turing-complete language which, upon completing a calculation, randomly changes the answer by one, would it still be Turing complete? For example, suppose I had a Python implementation which, upon every variable assignment, decreased the value by $1$ with probability $\frac13$, increased it by $1$ with probability $\frac13$, and otherwise kept it unchanged. Would it be Turing complete? It seems able to perform all of the calculations which Python can perform, just not reliably.
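To make the failure model concrete, here is a minimal Python sketch of the faulty assignment described above; the helper name `noisy` is my own, not part of the question:

```python
import random

def noisy(value):
    """Model of the faulty assignment: the stored value is one less than
    intended with probability 1/3, one more with probability 1/3, and
    unchanged otherwise. (Hypothetical helper for illustration.)"""
    return value + random.choice([-1, 0, 1])
```

Every "assignment" in the discussion below can be read as storing `noisy(expression)` rather than the expression itself.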

I've gone around and around on this one. It would seem that we've landed on "No, it would not be Turing complete, although we might be able to approximate it with a bounded probability of error in either accuracy or termination in finite time."

In your post, you said "just not reliably"... that is, in fact, a crucial part of the definition. To be Turing complete, we have to be able to exactly simulate a single-tape Turing machine -- getting it right "sometimes" isn't any better than flipping a coin to determine if it's raining outside with "50% accuracy".

In the given example, you seem to suggest that the random failures only occur on assignment, not on other operations. Does this affect only assignments written by the programmer, or also internal assignments performed as intermediate steps by the compiler? If it only affects assignments written by the programmer, many programs could be written without creating or assigning any variables at all (at horrendous complexity, and nearly impossible to debug). In that case, I suppose the result would be similar to a Turing tarpit.

If the programmer HAD to assign variables, they should be able to use something in the vein of the old Heinlein "Tell me three times": assign the value repeatedly into an array (say 100 times, or until 3 distinct values have been saved) and then take the median of the distinct values. Since every stored value is $x$, $x-1$, or $x+1$, the median of three distinct stored values (not the median of all results) must be the correct one. This would cut into performance HORRIBLY, but should, typically, arrive at correct values. (In fairness to @DavidRicherby, I concede that we cannot ensure both that we arrive at an answer and that it is correct with 100% certainty; for a look at how we could arrive at 3 distinct values without additional assignments, consider the following approach:

```
// function sum : x, y -> z
var a = x + y;   // first assignment; stored value may be x+y, x+y-1, or x+y+1
var b = x + y;
while (a == b)
  b = x + y;     // if this loop finishes, b is now distinct from a
var c = x + y;
while ((a == c) || (b == c))
  c = x + y;     // if this loop finishes, c is now distinct from a and b
if ((a < b) && (b < c))
  return b;
if ((b < a) && (a < c))
  return a;
// repeat for the other combinations; there are 6 such if statements in total.
```

)
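The approach above can be sketched in runnable Python against the simulated fault. The names `noisy` and `reliable_sum` are my own; the fault model matches the one in the question, and the comparisons are assumed reliable (the caveat discussed below):

```python
import random

def noisy(value):
    # Faulty assignment model from the question: the stored value is
    # perturbed by -1, 0, or +1, each with probability 1/3.
    return value + random.choice([-1, 0, 1])

def reliable_sum(x, y):
    # "Tell me three times": re-evaluate until three DISTINCT stored
    # values appear. Every stored value lies in {x+y-1, x+y, x+y+1},
    # so three distinct values must be exactly that set, and their
    # median is the true sum. Termination holds only with probability
    # 1 (almost surely), not with certainty.
    a = noisy(x + y)
    b = noisy(x + y)
    while b == a:
        b = noisy(x + y)
    c = noisy(x + y)
    while c == a or c == b:
        c = noisy(x + y)
    return sorted([a, b, c])[1]   # median of the three distinct values
```

Note that `sorted(...)[1]` collapses the six if-statements of the pseudocode into a single median selection, but the logic is the same.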

If the compiler can introduce intermediate assignments that the programmer did not write and that suffer from the same drawback (say, at the end of every elementary computation), or if other operations are affected by this glitch (the killer in my example above being the comparison operators), then we cannot simulate a Turing-complete language (or machine) at any speed: on average we will be right only 1 out of every $3^n$ times, where $n$ is the number of operations we cannot correct for.
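The $3^n$ figure follows from independence: each uncorrectable operation happens to leave the value unperturbed with probability $\frac13$, so an entire run of $n$ such operations is correct with probability $(\frac13)^n$. A trivial sketch (the function name is mine):

```python
def success_probability(n):
    # Probability that all n uncorrectable operations are unperturbed,
    # assuming each is independently correct with probability 1/3.
    return (1 / 3) ** n
```

Even at a modest $n = 10$, the success probability is already below $0.002\%$, which is why error correction at every step is essential.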

The Short Version of My Answer:

I can try to check for errors internally myself in the midst of computation, which may or may not work depending on the details of your system. If I can't have or emulate a verifying system, then I cannot be Turing complete -- I'm just guessing at solutions.

Given the popular consensus that such a verifying system, if implemented, could not guarantee both returning in finite time AND returning an accurate answer, it would appear the answer is no, although we might be able to approximate correctness a reasonable percentage of the time.