As I mentioned in class, the lighting effects should ideally be computed in world coords. Thus you would use the original positions of each polygon, the light source, and the viewer. Assuming you have added a routine to your lib3d that lets a user place a light source at a point S, your code knows the original location of each polygon (the arguments to polygon_3d() are world coords) and can therefore compute the original normal; in fact, it can compute all of the quantities needed for lighting BEFORE transforming the coords by the viewing transform. You can then store the computed shading information in the polygon data structure, and from then on forget all about world coords. When depth sort splits a polygon into pieces, the stored shading information can simply be copied to each piece.
Remember that the normal you currently store in your polygon data structure (like the coords you store) is probably the transformed one, since that is what depth sort uses. You could store both normals and compute the lighting much later, when displaying a polygon after depth sort, but you would need to remember the COP and DOP in world coords to make that work. It is more efficient to simply compute the correct shading information at the start and store only that. Note, however, that you cannot compute the scaled shading until you have processed ALL polygons, since you need Imax and Imin. But as each polygon_3d is called, you can a) compute its I (or Ired, Igreen, Iblue) and b) simultaneously update Imin and Imax, so that when the last polygon_3d is called you know Imax and Imin. Then you can compute the scaled I for each polygon in a special loop over all polygons that does nothing but rescale I, or you can do that scaling at display time when calling polygon_2d.
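A sketch of that running Imin/Imax bookkeeping might look like the following. The names and the fixed-size array are illustrative only; you would hook record_intensity into your own polygon_3d and call scaled_intensity at display time:

```c
#include <float.h>

/* Hypothetical scene-wide state, updated as polygon_3d() is called.
   Reset these at the start of each scene. */
static double Imin = DBL_MAX, Imax = -DBL_MAX;

/* Call once per polygon_3d() with that polygon's unscaled intensity I,
   so that after the LAST call Imin and Imax are already known. */
void record_intensity(double I)
{
    if (I < Imin) Imin = I;
    if (I > Imax) Imax = I;
}

/* Map an unscaled intensity into [0,1] using the scene-wide range.
   Valid only after every polygon has been recorded. */
double scaled_intensity(double I)
{
    if (Imax == Imin) return 1.0;   /* flat lighting: avoid divide by zero */
    return (I - Imin) / (Imax - Imin);
}
```

Whether you then rescale in one dedicated loop or lazily inside polygon_2d is purely a matter of convenience; the arithmetic is the same.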
One can actually do the scaling in two ways. One could argue that polygons that are not visible really should not participate in scaling: they may be brightly lit, but if they are invisible to the viewer, who cares? Including them can therefore distort the lighting (or maybe I should say that excluding them is the distortion you actually want!). Scaling is an artificial step needed to accommodate the limited dynamic range of a screen, so you probably want the brightest spot on the screen to be 100% bright and the darkest to be 0% bright. This is a distortion of the real view, where the brightest part might be hidden at the back.
If you want to implement this distortion, then you should compute Imax and Imin only over the final polygons that are to be displayed after hidden-surface removal. Just add a loop at that point that runs through the display list computing Imax and Imin.
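That extra loop is trivial. Here is a sketch, assuming the polygons that survived hidden-surface removal sit in a simple linked list; the Poly type and field names are my own, not lib3d's:

```c
#include <stddef.h>

/* Hypothetical node in the final display list (post hidden-surface removal). */
typedef struct Poly {
    double I;            /* unscaled intensity stored earlier */
    struct Poly *next;
} Poly;

/* Scan ONLY the final display list, so polygons that were culled or
   completely occluded cannot stretch the intensity range. */
void visible_range(const Poly *head, double *Imin, double *Imax)
{
    *Imin =  1e30;
    *Imax = -1e30;
    for (const Poly *p = head; p != NULL; p = p->next) {
        if (p->I < *Imin) *Imin = p->I;
        if (p->I > *Imax) *Imax = p->I;
    }
}
```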
You could allow your code to do it either way (compute Imax,Imin before depth sort or after depth sort) and have a user-controllable flag that decides which way to do it. That way you can compare the effects. There are scenes that could come out completely wrong with the second approach - for example, if all of the light falls on the back of the teapot and there is no ambient light! There are other scenes that will look very dull (very little variation in brightness) unless you apply the distortion.
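The flag itself can be as simple as an enum plus a setter a user calls before rendering; again, these names are just an illustrative sketch:

```c
/* Hypothetical user-controllable scaling mode for lib3d. */
typedef enum {
    SCALE_ALL_POLYGONS,   /* Imin/Imax over every polygon, before depth sort  */
    SCALE_VISIBLE_ONLY    /* Imin/Imax over the final display list only       */
} ScaleMode;

static ScaleMode scale_mode = SCALE_ALL_POLYGONS;

/* Let the user pick which scaling behavior to use, so the two
   approaches can be compared on the same scene. */
void set_scale_mode(ScaleMode m) { scale_mode = m; }
ScaleMode get_scale_mode(void)   { return scale_mode; }
```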