Proofs

To prove that an algorithm \(A\) is correct, we have to prove that its execution on every valid input is correct. Let \(f\) be the computational problem that \(A\) is supposed to solve. Then, we have to prove a statement such as:

\(\forall x. f(x) = A(x)\)

where \(x\) is chosen from the domain of all valid inputs. Here, we denote by \(f(x)\) the required outputs and effects of the problem on \(x\), and by \(A(x)\) the outputs and effects produced by \(A\) on \(x\).
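For instance, if \(A\) is an algorithm that is supposed to compute the gcd of two numbers, then \(f(n, m) = \gcd(n, m)\) and the statement to prove is \(\forall n, m.\ \gcd(n, m) = A(n, m)\), where \(n\) and \(m\) range over all valid inputs.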

Algorithms without loops

If an algorithm doesn’t have loops, then there are only finitely many distinct execution traces possible. For such algorithms, we can usually do a proof by cases to prove the correctness of the algorithm. For example, given the following algorithm for computing the maximum of three numbers:

int max3(int a, int b, int c)
{
    if (a >= b) {
        if (a >= c) { return a; }
        return c;
    }
    if (b >= c) { return b; }
    return c;
}

We can prove its correctness as follows:

The following cases are exhaustive:

  1. \(a \geq b\) and \(a \geq c\): the algorithm returns \(a\), and \(a\) is the maximum since it is at least as large as both \(b\) and \(c\).
  2. \(a \geq b\) and \(a < c\): the algorithm returns \(c\), and \(c\) is the maximum since \(c > a \geq b\).
  3. \(a < b\) and \(b \geq c\): the algorithm returns \(b\), and \(b\) is the maximum since \(b > a\) and \(b \geq c\).
  4. \(a < b\) and \(b < c\): the algorithm returns \(c\), and \(c\) is the maximum since \(c > b > a\).

The above proof is easily obtained by observing all possible execution traces.
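As a quick sanity check (a minimal test sketch, not part of the proof; it assumes the max3 function above is in the same file), each of the four cases can be exercised directly:

#include <assert.h>

int main(void)
{
    assert(max3(3, 2, 1) == 3); /* a >= b and a >= c */
    assert(max3(3, 2, 5) == 5); /* a >= b and a <  c */
    assert(max3(1, 4, 2) == 4); /* a <  b and b >= c */
    assert(max3(1, 2, 3) == 3); /* a <  b and b <  c */
    return 0;
}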

Algorithms with loops

However, when an algorithm has loops, it most often has infinitely many distinct execution traces, and a proof by cases cannot list them all. We introduce the following terminology to talk about loops. Every loop has a condition expression, a sequence of statements called the body, and a statement that immediately follows the loop. An iteration of the loop begins with the evaluation of the condition and may then include the execution of the body. In particular, the last iteration of a terminating loop evaluates only the condition, which is false. A loop invariant is a statement that is true before every iteration of the loop. To prove that a statement is an invariant, we typically use a proof by induction. This has two parts:

  1. Base case: the statement is true before the first iteration of the loop.
  2. Inductive step: if the statement is true before some iteration in which the body executes, then it is true before the next iteration.

We also have to prove that the loop achieves its goal. That is, we have to prove that if the invariant holds before the final iteration, the one in which the condition evaluates to false, then the goal holds immediately after the loop.

Typically, the condition is an expression that doesn’t have any effects on the storage. So we can also state the above as: if the invariant holds and the condition is false, then the goal holds.
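As a small illustration (a sketch using a hypothetical function sum_upto, not one of the algorithms in these notes), here is a loop whose invariant and goal are written as comments:

/* Computes 0 + 1 + ... + (n-1) for n >= 0. */
int sum_upto(int n)
{
    int s = 0;
    int i = 0;
    /* Invariant: before every iteration, s = 0 + 1 + ... + (i-1) and 0 <= i <= n. */
    while (i < n) {
        s = s + i;
        i = i + 1;
    }
    /* The condition is now false, so i = n, and the invariant gives
       the goal: s = 0 + 1 + ... + (n-1). */
    return s;
}

The base case holds because the sum is empty before the first iteration, and the inductive step holds because the body adds \(i\) to \(s\) and then increments \(i\).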

Euclid’s algorithm

Let us see a more advanced algorithm for computing the gcd. The algorithm is called Euclid’s algorithm and relies on non-trivial properties of the gcd.

int euclid(int n, int m)
{
    if (n < m) {
        SWAP(n, m);
    }

    /* Each iteration replaces the pair (n, m) by (m, n rem m). */
    while (m) {
        int t = n;
        n = m;
        m = t % m;
    }

    return n;
}
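The SWAP macro is assumed to be defined earlier in these notes. For completeness, one typical definition (an assumption, not taken from the notes) is:

/* Exchange two int variables through a temporary. */
#define SWAP(a, b) do { int _t = (a); (a) = (b); (b) = _t; } while (0)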

Before proving the correctness of Euclid’s algorithm, we need a proof of the following theorem. We use \((n, m)\) to denote the gcd of \(n\) and \(m\).

(GCD Theorem) If \(m > 0\), then \((n, m) = (m, n \text{ rem } m)\).

We name \(r = n \text{ rem } m\) for convenience. Let \(g = (n, m)\) and \(g' = (m, r)\). Our goal is to show that \(g = g'\). By the definition of gcd, \(g\) divides \(n\) and \(m\). Now, \(r = n - qm\) for some natural number \(q\). Therefore, \(g\) divides \(r\) as well. Since \(g\) is a common divisor of \(m\) and \(r\), we must have \(g' \geq g\). Again, by the definition of gcd, \(g'\) divides \(m\) and \(r\). Therefore, \(g'\) divides \(qm + r = n\) as well. Since \(g'\) is a common divisor of \(n\) and \(m\), we have \(g \geq g'\). So \(g = g'\).

Now, we prove Euclid’s algorithm correctly computes the gcd of any two numbers.

The loop invariant is:

Before every iteration, the gcd of the variables \(n\) and \(m\) is the same as the gcd of the inputs.

It is not obvious that this loop will always terminate. So we have to prove that the loop terminates and upon termination, the goal of the loop is achieved. The goal is:

The variable \(n\) contains the gcd of the inputs.

To see that the loop terminates, observe that \(r = n \text{ rem } m < m\) whenever \(m > 0\). So \(m\) strictly decreases in every iteration and must eventually become \(0\).

When the loop terminates, \(m = 0\). We know that \((n, 0) = n\) for any \(n\), and the loop invariant guarantees that the value in the variable \(n\) is also the gcd of the inputs.
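For example, on the inputs \(252\) and \(105\), the pairs \((n, m)\) at the beginning of successive iterations are \((252, 105)\), \((105, 42)\), \((42, 21)\), and \((21, 0)\). The gcd of every pair is \(21\), which is exactly what the algorithm returns.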

The most interesting fact about Euclid’s algorithm is that not only does it correctly compute the gcd, it does so fast. Again, to establish this fact, we provide a proof: we show that Euclid’s algorithm runs in time logarithmic in the sum of its inputs. First, we prove that the naive algorithm is inefficient.

The worst-case time complexity of gcd(n,m) is \(\Omega(n)\).

Let \(m = n-1\) and \(n > 3\). The for loop in the naive algorithm executes at least \(n-2\) times.

Say the inputs are \(n = 2^{100}\) and \(m = n-1\), and suppose we can execute \(2^{34}\) iterations per second (faster than a 16 GHz processor). Such a machine would need \(2^{66}\) seconds, which is more than 2000 billion years, to compute the gcd using the naive algorithm!
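To check the arithmetic: \(2^{100}/2^{34} = 2^{66} \approx 7.4 \times 10^{19}\) seconds, and a year has about \(3.15 \times 10^{7}\) seconds, so this is roughly \(2.3 \times 10^{12}\) years.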

The worst-case time complexity of euclid(n,m) is \(O(\log(n+m))\).

For values \(n\) and \(m\), we define the weight as \(n+m\). We define \(r = n \text{ rem } m\) and observe that \(m+r < 2m\), since \(r < m\). One iteration of the loop replaces the pair \((n, m)\) by \((m, r)\), so an iteration changes the weight from \(n+m\) to \(m+r\). (Inside the loop we always have \(n \geq m\): the inputs are swapped if necessary, and afterwards the new \(m\) is always a remainder modulo the new \(n\).)

Observe that in all cases \(m+r \leq \lceil \frac{4}{5}(n+m) \rceil\). Indeed, if \(m \leq \frac{2}{5}(n+m)\), then \(m+r < 2m \leq \frac{4}{5}(n+m)\). Otherwise, \(m > \frac{2}{5}(n+m)\) gives \(n < \frac{3}{2}m\), so the quotient of \(n\) by \(m\) is \(1\) and \(r = n-m\); then \(m+r = n = (n+m) - m < \frac{3}{5}(n+m)\). Suppose \(w\) is the weight of the input. Then, the maximum time \(t(w)\) taken by Euclid’s algorithm on inputs of weight \(w\) satisfies:

\(t(w) \leq t\left(\left\lceil \tfrac{4}{5}w \right\rceil\right) + O(1)\)

Solving the recurrence, we obtain \(t(w) = O(\log(w)) = O(\log(n+m))\).

An interesting sorting algorithm

We now turn our attention to an unusual sorting algorithm. Usually, comparison-based sorting algorithms work by swapping elements to remove inversions. An inversion is a pair \((i, j)\) such that \(i < j\) and \(a[i] > a[j]\). The following algorithm may swap elements even when they are not inverted, but it still correctly sorts the input.

void simple_sort(int a[], int n)
{
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
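            /* The swap happens whenever a[i] < a[j]; when i < j, this swaps a
               pair that is already in order. */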
            if (a[i] < a[j]) {
                int t = a[i];
                a[i] = a[j];
                a[j] = t;
            }
        }
    }
}

It is easy to see that the running time is \(O(n^2)\). The tricky part is to understand why this algorithm works correctly.

Algorithm simple_sort sorts the input array in ascending order.

The outer loop maintains the invariant:

Just before any iteration, the subarray \(a[0\dotsc i-1]\) is in ascending order.

Here, since the outer loop is the last statement of the function, the goal is that the array is in ascending order. When the loop terminates, \(i = n\), so the invariant tells us that \(a[0\dotsc n-1]\), that is, the whole array, is in ascending order.
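A minimal driver (a sketch, not part of the notes; it assumes the simple_sort function above is in the same file) to try the algorithm:

#include <stdio.h>

int main(void)
{
    int a[] = { 5, 1, 4, 2, 3 };
    int n = (int)(sizeof(a) / sizeof(a[0]));

    simple_sort(a, n);
    for (int i = 0; i < n; ++i) {
        printf("%d ", a[i]); /* prints: 1 2 3 4 5 */
    }
    printf("\n");
    return 0;
}

Even though the comparison a[i] < a[j] looks like it should produce a descending order, the output is ascending, exactly as the invariant guarantees.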

Exercise

  1. Write an algorithm to compute the minimum element in an array. Prove that your algorithm is correct by stating the correct invariant.