Algorithms and Data Structures 2004: 1st Term-time Test Results

The first term-time test was sat on February 16th. A copy of the test can be found here. This is in postscript format. The machines in the ITL are set up to display files in this format. If you want to view postscript files from elsewhere, an application called Ghostscript is available for free to do it.

Each question was marked out of 20, giving a maximum of 80 marks for the paper. However, for the purpose of the final mark for the whole course, the paper will be treated as if it were marked out of 75. Given that the test counts for 15% of the final marks, this means that every five marks on the test count 1% towards the final assessment of the course.

A table giving the marks for each student, but with students given only by examination number, can be found here.

Below is some discussion on the test together with solutions.

Question 1

Explain how the "merge sort" algorithm works. You do not need to give any Java code for it.

A complete and correct answer to this question would be something like:

To sort an array using merge sort, if the array is of length 1 it is sorted. Otherwise, find the mid-point of the array, and divide it into two parts: all elements up to and including the mid-point, and all elements beyond the mid-point, copying those parts into separate arrays. Sort both of those arrays using recursion. Then merge the two sorted arrays back into the original array. This is done by repeatedly taking whichever is the smaller of the two leftmost items not yet copied from the sorted arrays and copying it into the next free position in the main array, filling up the main array one element at a time, left to right.
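Although the question asked for no Java code, the description above translates fairly directly into code along these lines. This is only an illustrative sketch (the method and variable names are my own, not taken from the course code):

```java
// A sketch of merge sort following the description above.
static void mergeSort(int[] a)
{
   if(a.length<=1)
      return;                           // an array of length 0 or 1 is sorted
   int mid = a.length/2;
   int[] left = new int[mid];           // elements up to the mid-point
   int[] right = new int[a.length-mid]; // elements beyond the mid-point
   for(int i=0; i<mid; i++)
      left[i] = a[i];
   for(int i=mid; i<a.length; i++)
      right[i-mid] = a[i];
   mergeSort(left);                     // sort both parts using recursion
   mergeSort(right);
   int i=0, j=0, k=0;
   // Merge: repeatedly take the smaller of the two leftmost items
   // not yet copied, filling the main array left to right.
   while(i<left.length && j<right.length)
      if(left[i]<=right[j])
         a[k++] = left[i++];
      else
         a[k++] = right[j++];
   while(i<left.length)                 // copy any remaining items
      a[k++] = left[i++];
   while(j<right.length)
      a[k++] = right[j++];
}
```

Note how most of the work happens after the two recursive calls return, in the merging loops at the end.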

The main mistake most people made in answering this question was to be far too vague in describing it. One of the most important things in Computer Science is precision, and in describing an algorithm even though we are using English rather than a programming language we must be precise so that anyone who reads the description would have no trouble changing that description to computer code.

Many people did not specify that the sorting of the two arrays given by the division should be done recursively. A few people actually went as far as to say it should be done by another sort algorithm such as insertion sort, which is wrong. A common cause of error was to attempt to describe this recursive algorithm as if it were iterative using language like "You keep on dividing the array into half until it is of length 1". This makes it sound as if the array is divided into half, with the other half being thrown away, which is wrong.

A lot of people got the bit about dividing the array into two correct, but then said nothing or nearly nothing about how the parts were merged together. They gave the impression that all you had to do was divide the array up into single elements and then somehow magically it would all be sorted. In fact with merge sort, most of the work in the sorting is done on the way back up from the recursive calls as the two sorted arrays are merged into one.

Some people gave extra information, such as stating that merge sort is O(N log N). Since this was not asked for in the question, there was no point in writing it. There are no bonus points in this test or any other test or exam I set for writing things you have memorised but which aren't asked for in the question.

Question 2

Write a static method which takes as its arguments an integer and an array of integers, and changes the array destructively so that every integer in the array is multiplied by the integer argument. Then write a static method which performs the same operation but does it constructively.

Correct answers to this question would look very much like:


static void dmult(int[] a, int n)
{
   for(int i=0; i<a.length; i++)
      a[i] = a[i]*n;
}

static int[] cmult(int[] a, int n)
{
   int[] b = new int[a.length];
   for(int i=0; i<a.length; i++)
      b[i] = a[i]*n;
   return b;
}
About a third of the class gave completely correct or near correct answers to this question, but about another third gave answers which suggested they were completely unfamiliar with Java programming.

Anyone who has reached the mid-point of the second semester of our degree programme in Computer Science ought to have done a lot of Java programming. You have had a complete course on introductory Java programming in the first semester, and this question uses no material that wasn't in that course. That course was lab-based and involved reading and writing a lot of code, as does this course and the Object Oriented Programming course you are doing alongside it. The rather large number of people who have reached this point and yet submitted answers which suggest they do not know how to declare and initialise an array variable, or do not know what is meant by "a method which takes as its arguments an integer and an array of integers", or do not know which way round the variable and expression go in an assignment statement, is a matter of great concern.

The question as given on the paper was accompanied by an example of the effect the method should have if it is given one particular array and one particular integer. This did not mean, as quite a few people supposed, that you should write code which multiplies everything in an array by 5 because 5 happened to be used in the example, or that you should write code that only deals with arrays of length 6 because the array given as an example was of length 6. If precision is the most important thing in Computer Science, the second most important thing is abstraction, which means moving from examples to the general case. Being able to distinguish between a problem and an example given to illustrate the problem is a very basic form of abstraction, and again it is a matter of great concern that there are so many students having trouble with this.

There were quite a few people who, while having got the basics of this problem, had not grasped the following distinction. A destructive method has its effect by changing a data structure passed to it as an argument, so it should not return a data structure; generally it will have return type void. A constructive method must return a new data structure, so its return type will be the type of that data structure (int[] in this case), and it should make no changes to the data structure passed as an argument.

A number of people seem not to have grasped how a return statement works. A return statement always means: terminate the method being executed, and return the value following the word return as the value of the method when execution resumes in the code that called the method. That is so even if the return statement is in a loop, so

// THIS IS BUGGY CODE!!!
for(int i=0; i<a.length; i++) {
   b[i] = a[i]*n;
   return b[i];
}
will not return the values of array b piecemeal; it will always halt the method on the first time round the loop and return the value in b[0] (actually, since this is of type int, it will give a compiler error unless the return type of the method is wrongly given as int as well).

Quite a few people seem to think that the symbol [] must be attached to an array variable whenever you use it to refer to the whole array, so wrongly wrote return b[] rather than just return b in the constructive method. The symbol [] is attached to a type name, so that if Thing is a type name, then Thing[] is the name of the type "array of Thing"; it is not used for any other purpose. Similarly, a type name such as int is only used when you first introduce a variable; it is wrong to attach it to the variable every time you use it.

These are very basic things which ought not to be causing bother to any students at this stage, but I mention them because they obviously are. Like any other practical skill, programming requires practice, and once you have had practice writing lots of simple code dealing with arrays and methods, getting these things right will come as second nature to you.

As a minor point, Java uses * for multiplication, and <= for "less than or equals". There isn't even a × (multiplication symbol) or ≤ (less than or equal symbol) on your keyboard, so these symbols don't appear in Java code.

Poor layout was also a feature of quite a few answers. A for loop with some statements inside it, followed by a statement outside the for loop may be laid out as:

   for(...;...;...) {
      statement1;
      ...
      statementn;
   }
   following statement;
or as:
   for(...;...;...) 
   {
      statement1;
      ...
      statementn;
   }
   following statement;
or as
   for(...;...;...) 
      {
         statement1;
         ...
         statementn;
      }
   following statement;
Different authors suggest different styles, but they all have common features. Note that in all these cases, the following statement is indented the same number of spaces as the for, and the statements inside the for loop are indented all the same number of spaces which is more than the surrounding code. The closing bracket } always appears on a line of its own to the left of the code block which it finishes off. NO author recommends a style in which a closing bracket occurs after a statement at the right of a line:
// THIS IS AN EXAMPLE OF POOR LAYOUT!!
   for(...;...;...) {
      statement1;
      ...
      statementn; }
   following statement;
and it is even worse if the following statement is wrongly indented making it look at first glance that it is supposed to be inside the loop:
// THIS IS AN EXAMPLE OF VERY POOR LAYOUT!!
   for(...;...;...) {
      statement1;
      ...
      statementn; }
      following statement;
so it is surprising that so many people gave answers in which the code was laid out like this. If poor layout is done consistently, it will result in a deduction of marks, since it suggests you are not properly familiar with the structure of the code you write. It may even mean that the marker misses what is actually correct code because the poor layout makes it look incorrect.

Question 3

In each case below, write in the box the order of computation in time of a call p(n) using the big-O notation, and assuming that the calls dosomething() and saysomething() always take one time unit.

Some people in answering this question clearly did not know what was meant by "big-O notation" or "the order of computation". The question was not asking you, as some supposed, to put the various parts of it in some order amongst themselves. The order of some computation is an expression giving an approximation of the time it will take to complete in terms of the size of the data given to it. For example, if a computation on an array of length N will take a time approximately proportional to the square of N, we will say its computation time is "of the order of N squared", which is written O(N²). Since a large part of one lecture was given over to the big-O notation, and it is written up in the notes and lecture summary, there really was no excuse for anyone to take this test and not know at least that the answers to the questions should be of the form O(...), with the ... being a function of N. People should also have got as far as realising that the function of N should contain no additions or subtractions (since in the big-O notation, only the most significant terms are considered) and no constant multiplying factors (as these are ignored in the big-O notation).

Part a) of the question gave the following code for p:

void p(int n)
{
   for(int i=n; i>0; i--)
      dosomething();
}
Most people who understood the question gave the correct answer, which was O(N). The method involves a single loop causing the method dosomething() to be called n times. The fact that the index variable starts at n and goes downwards to 0 rather than the other way round makes no difference to the order of the computation.

Part b) of the question gave the following code for p:

void p(int n)
{
   for(int i=0; i<n; i++)
      dosomething();
   for(int i=0; i<n; i++)
      saysomething();
}
Here the correct answer is also O(N). The reason is that there are two separate loops: the first causes dosomething() to be executed n times, and the second, executed when the first has finished, causes saysomething() to be executed n times. So, ignoring the time taken for updating the loop variable, a total of 2n time units are taken. However, the order of the computation is not O(2N), because the 2 is a constant multiplying factor, and as such is ignored. Neither is the order of the computation O(N²), as many people put, because it is not the case that one loop is inside the other. If the method were
void p(int n)
{
   for(int i=0; i<n; i++)
   {
      dosomething();
      for(int j=0; j<n; j++)
         saysomething();
   }
}
then it would be the case that one loop is inside the other. In this case, each time round the outer loop, dosomething() is called and the inner loop is executed, and the inner loop executes saysomething() n times, so saysomething() is executed a total of n² times. So, ignoring the time taken to check and update the loop variables, a total of n²+n time units are taken. Since in the big-O notation we ignore all but the most significant term, this would make it O(N²).
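As a quick check of the counting argument above, the nested-loop version can be instrumented with a counter. This instrumented version is my own addition, not part of the test paper:

```java
// Counter standing in for the calls, so we can see how many are made.
static int count = 0;

static void p(int n)
{
   for(int i=0; i<n; i++)
   {
      count++;              // stands in for dosomething()
      for(int j=0; j<n; j++)
         count++;           // stands in for saysomething()
   }
}
```

Calling p(100) leaves count at 10100, that is n²+n with n=100: the n² term dominates, giving O(N²).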

Part c) of the question gave the following code for p:

void p(int n)
{
   if(n>0)
   {
      for(int i=n; i>0; i--)
      {
         dosomething();
         saysomething();
      }
      p(n-1);
   }
}
This is O(N²). You can see that p(n) calls dosomething() and then saysomething() n times, and then makes a recursive call with argument n-1. So that's 2n units of time, plus the time for the recursive call. The recursive call will call dosomething() and then saysomething() n-1 times, so that's 2(n-1) time units, then it will make its own recursive call. This will continue until the call of p(0) is made. So in total 2(n+(n-1)+(n-2)+...+1) time units are taken. In big-O notation, we ignore constant multiplying factors, so the initial 2 is irrelevant. The sum n+(n-1)+...+1 is equal to n(n+1)/2, which is n²/2 plus a smaller term; ignoring the constant factor and the smaller term, we call this O(N²). You might also just have remembered that this is very similar to the argument that selection sort is O(N²). Another way of thinking of it is that it's tail recursive, and is equivalent to:
void p(int n)
{
   while(n>0)
   {
      for(int i=n; i>0; i--)
      {
         dosomething();
         saysomething();
      }
      n = n-1;
   }
}
which is a loop within a loop, with both loops controlled by a count decreasing by 1, hence O(N²).

There were quite a few strange answers to this question. Many people seemed to think the answer must involve a log somewhere. This is not the case, because nowhere in it is there anything which divides the size of the problem or a loop control variable by 2 (or by any other number). Note that answers like O(log N²) will never be correct. That is because the definition of logarithms means that log(a×b) is equal to log(a)+log(b), so log(N²) is equal to log(N)+log(N), or 2×log(N). But in the big-O notation we ignore constant multiplying factors, so this would be O(log N). Similarly, we can't have O(log (N/2)), since log(N/2) is equal to log(N)-log(2), but in the big-O notation we ignore addition and subtraction of constant terms, so that's still O(log N).

Part d) of the question gave the following code for p:

void p(int n)
{
   if(n>0)
   {
      dosomething();
      p(n/2);
   }
}
This is O(log N). Here there is a single call to dosomething(), and then we halve the size of the problem and repeat, so the total number of calls to dosomething() is the number of times we can halve n. We know this is log₂n, and this gives us O(log N) as the order of computation.

Again, we could view the code as tail recursion, and convert it to an equivalent loop:

void p(int n)
{
   while(n>0)
   {
      dosomething();
      n = n/2;
   }
}
and view it as O(log N) because it has a loop where the loop control variable is halved each time round the loop. We could also see that it has the same pattern as binary search, which we know to be O(log N).
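To convince yourself of the count, the loop version can be instrumented to return the number of halvings instead of calling dosomething(). This helper is my own addition for illustration:

```java
// Counts how many times p(n) would call dosomething():
// one call per halving of n, down to 0.
static int halvings(int n)
{
   int count = 0;
   while(n>0)
   {
      count++;       // one call to dosomething()
      n = n/2;       // then the problem size is halved
   }
   return count;
}
```

For example, halvings(1024) returns 11, since the values of n are 1024, 512, 256, ..., 2, 1: one more than log₂1024 = 10, and the "plus one" disappears in the big-O notation.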

Part e) of the question gave the following code for p:

void p(int n)
{
   if(n>0)
   {
      p(n/2);
      for(int i=0; i<n; i++)
         dosomething();
   }
}
This is O(N). This may have been a little more tricky to work out, because we haven't considered before the order of a computation of this pattern. Many people wrongly thought it would be O(N log N), perhaps because it seems to be similar to algorithms like merge sort. However, merge sort makes two recursive calls, each on a problem half the size of the original. So you can think of it as halving the size of the problem but doubling the number of problems at each level of recursion, and this was what we used to argue that there would be log(n) levels of recursion, each doing n computations, making the order of computation O(N log N). If the code in this part of the question involved two recursive calls:
void p(int n)
{
   if(n>0)
   {
      p(n/2);
      p(n/2);
      for(int i=0; i<n; i++)
         dosomething();
   }
}
then it would be O(N log N), but it doesn't and it isn't.

With a single recursive call which halves the size of the problem, we can see that p(n) makes n calls to dosomething() and then calls itself with size n/2. That call will make n/2 calls to dosomething() as well as calling the method recursively, this time with size n/4. So it can be seen that the total number of calls to dosomething() is n+n/2+n/4+n/8+... down to the point where dividing n by the next power of two gives zero (using integer division). We know that if we add a half to a quarter to an eighth to ..., as we keep on adding smaller portions we come closer to one. So as n becomes larger, the number of calls to dosomething() becomes approximately 2n. As we ignore constant multiplying factors in the big-O notation, this gives us our O(N). Very few people got this right, and I wonder if some of those who did get it right did so more through guessing than actually working it out.
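The approximation 2n can be checked by instrumenting the code from this part of the question with a counter. This instrumented version is my own addition:

```java
// Counts the calls that would be made to dosomething().
static int count = 0;

static void p(int n)
{
   if(n>0)
   {
      p(n/2);              // one recursive call on half the size
      for(int i=0; i<n; i++)
         count++;          // stands in for dosomething()
   }
}
```

Calling p(1024) leaves count at 1024+512+256+...+2+1 = 2047, just under 2n, confirming the geometric-series argument and hence O(N).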

Question 4

Question 4 involved some short descriptions. This ought to have been a very easy question. It was a straight recollection of the material in the lecture of 3rd February, less than two weeks before you took the test. This lecture was written up and put in the lecture summary here. This is not exactly a huge amount of material to master; all that is needed for this question is there in 23 lines. I would have expected that the very minimum preparation for this test would have been to read the lecture summaries carefully. Yet the question was done quite badly, particularly parts c) and d), which hardly anyone answered satisfactorily.

Part of the problem was, as in question 1, precision. In many cases it was impossible to tell whether the person answering the question really understood it or not. When you are asked to write a description of something, it might help to imagine you are a lecturer trying to explain what it is you have to describe to students who are on the course and have not yet covered that topic. Now turn round, and think of yourself as a student again, and consider whether what you wrote in answer to the question would make any sense. Maybe if you do this, you will see that writing notes for teaching isn't such an easy task as it looks!

Part a) of the question was

Describe briefly the difference between a data structure and an abstract data type .

The answer should have looked something like this:

A data structure is seen in terms of how it is actually stored on the computer, while an abstract data type is seen only in terms of the operations which may be done on it, which are given as methods.

Part b) of this question was

Describe briefly the difference between a constructive and destructive implementation of an abstract data type

The answer should have looked something like this:

An abstract data type has operations which when implemented constructively mean a new object is created to represent the change, but the object the change method was called on remains unchanged. When implemented destructively, the method changes the object it is called on.

A lot of people when answering this part of the question confused "object" with "abstract data type". A type is a description of things, but an object is the actual thing. Therefore it was wrong to write things like "when a destructive method is called, the abstract data type is changed". Objects are created and changed when a program is run, but abstract data types are created and changed when the program is being designed.

Part c) of the question was

The class giving a constructive implementation of an abstract data type may need to have a private constructor. Say why this is the case.

The answer should have looked something like this:

In a constructive implementation, a new data structure representing the changed object is constructed, and a new object holding that new data structure is returned as the result of the method representing the operation. Meanwhile, the data structure in the original object is unchanged. In order to create the new object, the class for the constructive implementation of an abstract data type needs a private constructor which takes as its argument(s) the new data structure.

This is exactly what is written in the lecture summary for February 3rd, which everyone should have read. However, nearly everyone who answered this part of the question missed the point, and instead described how Java's private access modifier works. They then went on to write further material which in most cases was completely wrong. For example, many stated that an object created using a private constructor would have different properties from one created using a public constructor. It would not.

Part d) of the question was

A method in a class giving a constructive implementation of an abstract data type may return a value written in Java as this. Explain what this means.

The answer should have looked something like this:

If an operation made no change to an object (for example, adding an element to a set when that element is already in the set), the constructive method called on the object could return a reference to the same object. In Java this is done through the keyword this. At any point in Java where a method call is made on an object, say obj.meth(...), when the code for the method meth(...) (which cannot be a static method) is executed, the word this in that code refers to the object obj.

Again, this is taken directly from the lecture summary on the web. Despite this, very few people got this part correct. Many people thought that this referred to "the method", sometimes without saying what method they meant. Many people wrote explanations in terms of using this to distinguish between fields of the object and arguments of the same name in methods or constructors. But that did not answer the question that was set.
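Putting parts c) and d) together, a constructive implementation might look like the following sketch. The class name IntSet and its array representation are my own invention for illustration, not the course's code:

```java
// Sketch of a constructive set of integers, represented as an array.
class IntSet
{
   private int[] elements;

   public IntSet()                  // public constructor: the empty set
   {
      elements = new int[0];
   }

   private IntSet(int[] elements)   // private constructor: wraps a new
   {                                // data structure built by add below
      this.elements = elements;
   }

   public IntSet add(int n)         // constructive add
   {
      for(int i=0; i<elements.length; i++)
         if(elements[i]==n)
            return this;            // n already present: no change, so
                                    // return a reference to this object
      int[] newElements = new int[elements.length+1];
      for(int i=0; i<elements.length; i++)
         newElements[i] = elements[i];
      newElements[elements.length] = n;
      return new IntSet(newElements); // new object via private constructor
   }

   public boolean contains(int n)
   {
      for(int i=0; i<elements.length; i++)
         if(elements[i]==n)
            return true;
      return false;
   }
}
```

Note that add never changes the object it is called on: it either returns this (when there is nothing to change) or builds a new array and passes it to the private constructor, which exists only so that add can wrap the new data structure in a new object.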


Matthew Huntbach

Last modified: 18 February 2005