This course is a one-semester general introduction to modern methods for numerical computation. Numerical computation is an essential part of both computer science and applied mathematics, and it has been a driving force behind advances in computer architecture and in mathematical theory. As computing power increases, guesswork and experiment in engineering and science are being replaced by numerical modelling and control, a trend that is making these techniques more and more widely used. In the course we will first consider the lowest level: how a computer represents numbers and carries out arithmetic, and the implications of this for practical computation. We will then cover the implementation and analysis of a variety of computational techniques for solving important mathematical problems with wide applications. In some cases students will implement algorithms from scratch; in others they will use available numerical software.
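As a quick illustration of the finite precision issues mentioned above, the short snippet below (in Python, used here purely for illustration; course work may be done in Matlab or C) shows that simple decimal fractions are not exactly representable in binary, and estimates machine epsilon by repeated halving:

```python
# In IEEE double precision, 0.1 and 0.2 have no exact binary
# representation, so roundoff shows up even in trivial arithmetic.
a = 0.1 + 0.2
print(a == 0.3)        # False: a is actually 0.30000000000000004

# Machine epsilon: the gap between 1.0 and the next larger double.
# Halve eps until adding eps/2 to 1.0 no longer changes it.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)             # 2.220446049250313e-16, i.e. 2**-52
```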

There will be homework assignments about once a week, usually consisting of problems to solve or programming. You may discuss homework orally among yourselves, but what you hand in should represent your own work. I will accept homework up to 5 days late, but will deduct about 20% for each day late. If programming is assigned, you should turn in a listing of the program and the output it produces. For most programming assignments you may use any reasonable computer to which you have access, or we will provide you with a login on a university machine.

There will be a midterm and a final exam. The course grade will be weighted approximately 20-30% on weekly assignments and 70-80% on exams. Assignments will be handed out in class and will be available on this web page. Supplemental information on assignments may be sent by email.

The essential prerequisites are programming ability in a language such as C or C++, linear algebra, and two semesters of calculus. The main topics to be covered are:

- Finite precision arithmetic and error, ill conditioning, numerical stability.
- Solving systems of linear equations: Gaussian elimination, matrix factorization, methods for special matrices, conditioning of linear systems, error analysis, methods for large sparse systems.
- Least squares approximation: rationale for least squares, normal equations, conditioning.
- Solving equations in one variable: bisection, fixed point iteration, Newton's and secant methods, effects of rounding error.
- Interpolation and approximation: polynomial interpolation, spline interpolation.
- Differentiation and integration: trapezoidal and Simpson's rules, accuracy, extrapolation methods.
- Basic methods for ordinary differential equations.
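To give a flavor of the algorithms on this list, here is a minimal sketch of Newton's method for a single nonlinear equation (in Python, for illustration only; this is not code from the course, and the names are my own):

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f(x) = 0 by Newton's method: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # stop when the Newton step is tiny
            return x
    return x

# Example: solve x^2 - 2 = 0, whose positive root is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root, math.sqrt(2))
```

Near a simple root the iteration converges quadratically, roughly doubling the number of correct digits per step; analyzing that behavior, and what rounding error does to it, is part of the "solving equations in one variable" topic above.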

The text for the course is "Numerical Analysis" by Timothy Sauer.

Mid-term exam: March 8, in class

Final Exam: April 30, 4:30-7:00

Lectures: Tuesday and Thursday, 2:00-3:15 in ECCR 155

Instructor: Richard Byrd ECOT 620, richard.byrd@colorado.edu

Office hours: Tuesday 10-11, Wednesday 2-4

Matlab Tutorial Information is in:
Elements of Matlab.pdf
**Assignments** (pdf)

Assignment 1, due January 20

Assignment 2, due January 27

Assignment 3, due February 3

Assignment 4, due February 10

Assignment 5, due February 17

Assignment 6, due February 23

Assignment 7, due March 3

Assignment 8, due March 17

Assignment 9, due March 31

Assignment 10, due April 7

Assignment 11, due April 14

Assignment 12, due April 21

Assignment 13, due April 28