is the worst-case running time, but it does not occur very often. The behavior of this algorithm is very sensitive to the particular input instance. Because of the amount of storage space the heap can consume before the optimal schedule is found, the program often runs out of memory for large lists of tasks.
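The program itself is not reproduced here, but the memory behaviour described above is characteristic of a best-first (branch-and-bound style) search that keeps partial schedules in a heap. The sketch below is only a minimal illustration of that idea under stated assumptions: the names best_first_schedule, lower_bound, and children, and the sum-of-completion-times objective in the demo, are hypothetical and not taken from the actual program. It shows why the heap grows: every expansion pushes several children, and none of them can be discarded until an optimal schedule reaches the top of the heap.

    import heapq
    import itertools

    def best_first_schedule(durations, lower_bound, children):
        """Hypothetical best-first search over partial schedules stored in a heap."""
        order = itertools.count()                    # tie-breaker for equal bounds
        heap = [(lower_bound(()), next(order), ())]  # start from the empty schedule
        pushed = 1
        while heap:
            bound, _, partial = heapq.heappop(heap)
            if len(partial) == len(durations):       # with an admissible bound, the first
                return list(partial), pushed         # complete schedule popped is optimal
            for child in children(partial, durations):
                heapq.heappush(heap, (lower_bound(child), next(order), child))
                pushed += 1                          # nodes stay resident until popped
        return None, pushed

    if __name__ == "__main__":
        durations = [3, 1, 4, 1, 5]                  # made-up task lengths

        def children(partial, tasks):
            # Extend the partial schedule by each task that is not yet scheduled.
            return (partial + (i,) for i in range(len(tasks)) if i not in partial)

        def lower_bound(partial):
            # True cost so far (sum of completion times) plus an optimistic
            # estimate for the unscheduled tasks, so the bound never overshoots.
            elapsed = cost = 0
            for i in partial:
                elapsed += durations[i]
                cost += elapsed
            rest = (durations[i] for i in range(len(durations)) if i not in partial)
            return cost + sum(elapsed + d for d in rest)

        schedule, pushed = best_first_schedule(durations, lower_bound, children)
        print("best order:", schedule, "- nodes pushed onto the heap:", pushed)

The pushed counter tracks how many nodes have ever entered the heap; since nodes are removed only one at a time as they are expanded, the frontier can hold most of them at once, which is where the memory goes.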
The plotted points above are the running times in seconds of my program for various numbers of tasks. The vertical axis uses a logarithmic scale, which makes the exponential running time of the program easy to see. With 160 tasks completing in about 20 seconds the algorithm looks very fast, but what the plot does not show is the amount of storage needed for the heap.
Storing the 100,000 nodes that run required took 25MB of memory, and half of the 20-second running time was spent not on the CPU but waiting for virtual memory to be paged in from disk. This becomes a serious problem for larger and larger lists of tasks: more and more memory is needed, so although it does not take long to find a schedule, there may not be enough memory available to find it.
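Those two figures give a rough per-node cost, and a back-of-the-envelope projection (assuming, purely as an illustration, that memory use is dominated by the heap nodes and scales roughly linearly with their number) shows how quickly the frontier outgrows real memory:

    nodes_measured = 100_000
    bytes_measured = 25 * 2**20                       # the 25MB figure from the run above
    bytes_per_node = bytes_measured / nodes_measured  # roughly 260 bytes per heap node

    for frontier in (10**6, 10**7, 10**8):
        gib = frontier * bytes_per_node / 2**30
        print(f"{frontier:>11,} nodes -> about {gib:.1f} GiB")

A hundredfold increase in stored nodes, which an exponentially growing search can produce from a fairly small increase in the number of tasks, already needs tens of gigabytes, far beyond a machine that was paging at 25MB.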
In the worst-case analysis, the storage space required grows at a rate that is entirely unreasonable: the amount of space needed to store the heap for 55 tasks exceeds the number of particles in the universe.
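The worst-case formula itself is not reproduced in this excerpt, so purely for illustration the calculation below assumes the heap could in the worst case hold on the order of one node per permutation of the tasks (an assumption, not the report's derivation), combined with the rough per-node cost estimated above:

    import math

    BYTES_PER_NODE = 262                  # rough figure implied by 25MB / 100,000 nodes
    for n in (20, 30, 40, 55):
        nodes = math.factorial(n)         # assumed worst case: one node per permutation
        print(f"n = {n:2d}: ~{nodes:.2e} nodes, ~{nodes * BYTES_PER_NODE:.2e} bytes")

Already at 20 tasks the assumed worst case is around 10^18 nodes; by 55 tasks the numbers have left the realm of physically storable data entirely, which is the point the worst-case analysis makes.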