Algorithms
An algorithm describes to a computational device the steps needed to solve a computational problem. The set of instructions that a machine understands is called its instruction set. An algorithm for this machine is then a sequence of instructions, each drawn from this set. Even in modern computers, the exact instruction set varies from one manufacturer to another. To avoid tying algorithms to a specific type of device, high-level languages such as C or Python may be used to describe them. We will describe our algorithms in the C programming language to avoid any dependency on a specific machine.
The first algorithm
Consider the problem of computing the greatest common divisor of two natural numbers.
- Name: GCD
- Input: Two numbers \(n\) and \(m\).
- Output: The largest number \(d\) that divides both \(n\) and \(m\) evenly.
The first algorithm for this problem, which we call the naive gcd algorithm, tests each candidate divisor in decreasing order, starting from the largest possible one: the smaller of the two numbers.
int gcd(int n, int m)
{
    /* Ensure that n is the larger of the two numbers. */
    if (n < m) {
        int t = n;
        n = m;
        m = t;
    }
    /* Try every candidate divisor, from the largest possible one down. */
    for (int d = m; d > 0; --d) {
        if (n % d == 0 && m % d == 0) return d;
    }
    /* This point is reached only when m is zero; then n is the gcd. */
    return n;
}
We will test this algorithm for some sample inputs using the following code:
#include <assert.h>

#define LEN(a) (sizeof(a)/sizeof(a[0]))

int main()
{
    /* Each row lists the two inputs followed by the expected gcd. */
    int inputs[][3] = {
        { 298, 34, 2 },
        { 97, 25, 1 },
        { 303, 39, 3 },
        { 999, 111, 111 },
        { 990, 72, 18 },
    };
    for (int i = 0; i < (int) LEN(inputs); ++i) {
        int n = inputs[i][0];
        int m = inputs[i][1];
        int e = inputs[i][2];
        int g = gcd(n, m);
        assert(g == e);
    }
    return 0;
}
But testing is not enough; we have to be sure that the code works for all inputs. The best way to do this is to write a mathematical proof. We can prove that the algorithm is correct by applying the definition of gcd. First, observe that gcd is symmetric, so we may swap the arguments to ensure that \(n\) is at least \(m\). If the smaller number \(m\) is zero, then the other number is the gcd; in this case the condition d > 0 is false initially and we simply return \(n\). Otherwise, the gcd lies in the range \([1, m]\). We search this range starting from the greatest candidate, so the first common divisor we find is the gcd.
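The proof can also be spot-checked mechanically. The sketch below defines a hypothetical helper, is_gcd, which is not part of the algorithm above; it verifies the defining property directly: the candidate divides both inputs and no larger number does.

#include <assert.h>

/* Hypothetical checker (not part of the text above): returns 1 if g
 * satisfies the definition of the gcd of n and m, i.e. g divides both
 * numbers and no larger number does. */
int is_gcd(int n, int m, int g)
{
    if (g <= 0 || n % g != 0 || m % g != 0) return 0;
    for (int d = g + 1; d <= n && d <= m; ++d) {
        if (n % d == 0 && m % d == 0) return 0;
    }
    return 1;
}

int main()
{
    /* Spot-check the reasoning on one of the sample inputs. */
    assert(is_gcd(990, 72, 18));
    assert(!is_gcd(990, 72, 9));   /* 9 divides both, but 18 is larger. */
    return 0;
}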
The above analysis does not quite match reality. The C program above can behave differently on different machines, because the definition of an int in C depends on the machine. This issue could be avoided by writing our algorithms in a truly machine-independent language such as Python. However, such languages hide too many details of the underlying machine. The primary purpose of learning algorithms is to understand algorithms that execute efficiently in practice. So we want to write our algorithms in an efficient, machine-dependent language and analyze them as if they were machine-independent.
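To see this machine dependence concretely, the following small program (separate from the gcd code) prints two implementation-defined quantities. The C standard only guarantees that an int can represent at least the range \([-32767, 32767]\); the actual size and range vary across machines and compilers.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Both values are implementation-defined and may differ across
     * machines and compilers. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("INT_MAX     = %d\n", INT_MAX);
    return 0;
}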
This “trick” of analyzing an idealized version of reality is common in science. In classical physics, we analyze the motion of bodies assuming they are point particles: objects with non-zero mass that take up no space. Such objects cannot exist in reality, but the analysis of the motion of point particles is good enough for many applications. Similarly, the analysis of algorithms on our idealized machine will be good enough for many applications.
We idealize by stating that the machine is capable of storing a small integer in unit space and operating on small integers in unit time. The definition of small depends on the input size. For GCD, we have to assume that the input numbers are small enough to be processed in unit time. The only other variable, \(d\), holds integers that are no larger than the input numbers, so \(d\) must also be small and we can store and operate on it as well.
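As an illustration of how this idealization is used (the bound below is a sketch, not a claim made in the text above): each iteration of the loop in the naive gcd performs a constant number of unit-time operations, and the loop runs at most \(\min(n, m)\) times, so the running time is bounded by
\[
T(n, m) \le c \cdot \min(n, m) + c'
\]
for some constants \(c\) and \(c'\) that do not depend on the input.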