The simplest form of recursion is linear recursion.
It occurs where an action has a simple repetitive structure
consisting of some basic step followed by the action again.
procedure ODSN
begin
   One dark and stormy night,
   Three men sat in a cave,
   And one said to another,
   Dick, tell us a tale,
   And this is how the tale began:
   ODSN  -- again!
end
This simple tale has amused generations of small children
precisely because it is so simple but never ends.
It seems somehow paradoxical.
Generally we want the recursion to terminate on some condition.
The song `Ten Green Bottles' does not go on forever:
its verses count down from ten and stop.
It can be generated by the program below:
function s(n)  // plural suffix; happens to be a JavaScript program
{ if( n != 1 ) return 's';
  else return '';
}

function verse(n)
{ if( 0 < n )
  { document.write(n + ' green bottle' + s(n)
      + ' hanging on the wall,<BR>\n');
    document.write(n + ' green bottle' + s(n)
      + ' hanging on the wall,<BR>\n');
    document.write('And if 1 green bottle'
      + ' should accidentally fall,<BR>\n');
    document.write('There\'ll be ' + (n-1)
      + ' green bottle' + s(n-1) + ' hanging on the wall.<BR><BR>\n');
    verse(n-1);  // the rest of the song
  }
}

verse(10);  // or however many bottles it is
This illustrates the power of recursion nicely.
The routine worries not so much about the whole song
as about the basic step, one verse.
It calls itself to generate the rest of the song.
Note the terminating condition, i.e. the base case
(n<=0), and the test for it.
The parameter `n' is decreasing in the recursive calls `verse(n-1)'
so eventually n reaches zero, and the program terminates.
Factorial
The factorial function is a popular mathematical example:
f(0) = 1                      --base case
f(n) = n*f(n-1), if n >= 1    --general case
If the base case of the recursion takes time `a'
and the recursive step takes time `b'
then a linear recursive routine making `n'
recursive calls takes time a+b*(n-1) in total; this is O(n).
For the factorial function, the number of calls is the value of its parameter.
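As a sketch (the function name `factorial' is our choice, not from the text), the definition above translates directly into JavaScript:

```javascript
// Linear recursion: one recursive call per step, n calls in total.
function factorial(n)
{ if( n <= 0 ) return 1;           // base case: f(0) = 1
  else return n * factorial(n-1);  // general case: f(n) = n*f(n-1)
}
```

For example, factorial(5) returns 120 after five recursive calls.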
Exponentiation
In the case of the following exponentiation function
which calculates x^{n}(i.e. x**n),
the number of calls is not equal to the parameter n but proportional to its logarithm.
Pre: x > 0
expn(x, 0) = 1
expn(x, n) = 1 / expn(x, -n), if n<0
expn(x, n) = (expn(x, n div 2))^{2}, if n>0 and n is even
expn(x, n) = x * expn(x, n-1), if n>0 and n is odd
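A direct JavaScript transcription of the definition might read as follows (a sketch; the name `expn' follows the definition above, and `Math.floor' is our choice for integer division):

```javascript
// Exponentiation by repeated squaring; O(log n) recursive calls.
// Pre: x > 0 (avoids division by zero when n < 0).
function expn(x, n)
{ if( n == 0 ) return 1;                 // base case
  else if( n < 0 ) return 1 / expn(x, -n);
  else if( n % 2 == 0 )                  // n > 0 and even
  { var t = expn(x, Math.floor(n/2));
    return t * t;                        // square the half-power
  }
  else return x * expn(x, n-1);          // n > 0 and odd
}
```

For example, expn(2, 10) returns 1024 in far fewer than ten recursive calls.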
Note the precondition,
for if x is zero and n is negative then x^{n} implies division by zero.
It is fairly easy to see that this routine is correct
but we prove it here to illustrate general principles.
Firstly we show that
the routine terminates for all values of n.
If n equals zero it certainly terminates.
Otherwise, if n is greater than zero then a recursive call is made with
a new parameter, n div 2 or n-1,
both of which are strictly less than
the old value of n and still greater
than or equal to zero.
Therefore the successive values of parameter n remain
greater than or equal to zero and are decreasing.
Such an integer quantity cannot decrease forever without reaching zero,
at which point the algorithm terminates.
Note that a positive real-valued quantity can
be reduced repeatedly without
ever reaching zero, e.g. 1, 1/2, 1/4, 1/8, ...,
so such a proof cannot be based on a real quantity.
If n is negative the routine calls itself with a new, positive value,
-n, and subsequently terminates by the above.
This exhausts all cases.
Secondly, we show that the algorithm calculates the correct result
for all n; each case of the definition corresponds to one of the identities:
(i) x^{-n} = 1/x^{n}, if x ~= 0
(ii) x^{0} = 1
(iii) x^{2n} = (x^{n})^{2}
(iv) x^{2n+1} = x*x^{2n}
Finally the time complexity of the routine fits
the form for logarithmic complexity:
T_{0} = b
T_{n} = T_{n div 2} + a, if n>0
which has solution
T_{n} = a*log_{2}(n) + a + b, n>=1
Many languages provide
a numerical exponentiation operator `**'
in which case the above algorithm is not required for numbers.
However some languages do not provide this operator as standard.
In addition other objects, such as matrices,
can also be multiplied and therefore exponentiated.
With modification, this algorithm can also be used for the latter purpose.
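As an illustration (a sketch; the 2x2 representation and the helper `matMul' are our assumptions, not from the text), the same squaring scheme raises a matrix to a non-negative integer power:

```javascript
// Multiply two 2x2 matrices, each represented as [[a,b],[c,d]].
function matMul(A, B)
{ return [ [ A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1] ],
           [ A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1] ] ];
}

// Matrix exponentiation by squaring, n >= 0; O(log n) multiplications.
function matExpn(A, n)
{ if( n == 0 ) return [[1,0],[0,1]];   // base case: identity matrix
  else if( n % 2 == 0 )                // n > 0 and even
  { var T = matExpn(A, Math.floor(n/2));
    return matMul(T, T);
  }
  else return matMul(A, matExpn(A, n-1));  // n > 0 and odd
}
```

For example, matExpn([[1,1],[1,0]], 5) yields [[8,5],[5,3]], whose entries are Fibonacci numbers.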
The chapter on lists
contains more examples of linear recursion.
Note that a linear recursive routine usually has a
simple iterative equivalent.
The recursive step is placed within a loop
and the terminating condition is used to exit from the loop.
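For instance (a sketch, using the factorial definition given earlier), the recursive step moves into the loop body and the base-case test becomes the loop's exit condition:

```javascript
// Iterative equivalent of the linearly recursive factorial:
// the recursive step f(n) = n*f(n-1) becomes the loop body,
// and the terminating condition (n <= 0) exits the loop.
function factorialIter(n)
{ var result = 1;
  while( n > 0 )         // loop while the general case applies
  { result = result * n;
    n = n - 1;
  }
  return result;         // n has reached 0, the base case
}
```

The loop performs the same multiplications as the recursion, but without the overhead of n nested calls.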