The time required for `ode` to solve a system of ordinary
differential equations numerically depends on a great many factors. A
few of them are: number of equations, complexity of equations (number
of operators and nature of the operators), and number of steps taken
(a very complicated function of the difficulty of solution, unless
constant stepsizes are used). The most effective way to gauge the time
required for solution of a system is to clock a short or imprecise run
of the problem, and reason as follows: the time required to take two
steps is roughly twice that required for one; and there is a
relationship between the number of steps required and the relative error
ceiling chosen. That relationship depends on the numerical scheme being
used, the difficulty of solution, and perhaps on the magnitude of the
error ceiling itself. A few carefully planned short runs can be used
to determine this relationship, so that a long but imprecise run can
serve as an aid in projecting the cost of a more precise run over
the same region. Lastly, if a great deal of data is printed, it is
likely that more time is spent in printing the results than in computing
the numerical solution.
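For many schemes the step-count versus error-ceiling relationship is roughly a power law, with runtime growing like a constant times the error ceiling raised to a negative power. Assuming that form holds over the range of interest (an assumption, not something `ode` guarantees), two timed runs suffice to fit the exponent and project the cost of a third, tighter run. The function below is an illustrative sketch, not part of `ode`:

```python
import math

def project_runtime(eps1, t1, eps2, t2, eps_target):
    """Fit a power law t = C * eps**(-q) through two timed runs,
    (eps1, t1) and (eps2, t2), then extrapolate the runtime for a
    run with the tighter relative error ceiling eps_target.

    This assumes the power-law form holds across the whole range,
    which is only approximately true in practice."""
    # Exponent q from the two measured points:
    # t2 / t1 = (eps2 / eps1) ** (-q)
    q = -math.log(t2 / t1) / math.log(eps2 / eps1)
    # Extrapolate from the first measured point.
    return t1 * (eps_target / eps1) ** (-q)

# Hypothetical timings: 0.5 s at a ceiling of 1e-3, about 2.8 s at 1e-6.
estimate = project_runtime(1e-3, 0.5, 1e-6, 2.8, 1e-9)
```

A sanity check on the fitted exponent (printing it alongside the estimate) is worthwhile: if the two calibration runs were too short, measurement noise dominates and the extrapolation is meaningless.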