
Thesis Defense - Mytkowicz

Supporting Experiments in Computer Systems Research
Computer Science PhD Candidate
11/20/2009
9:00am-11:00am

Systems research is an experimental science. Most research in computer systems follows a cycle of innovation (e.g., building a novel garbage collector) and evaluation (e.g., determining whether it significantly speeds up our programs). Researchers use experiments to drive their work: first to identify performance bottlenecks, and then to determine whether their ideas for addressing those bottlenecks are effective. If these experiments are not carried out properly, researchers may draw incorrect conclusions: they may waste time on something that is not really a problem, or conclude that an idea is beneficial even when it is not.

Experimentation in computer systems is complicated by the fact that computer systems are nonlinear dynamical systems, capable of complex and even chaotic behavior. A hallmark of chaos is sensitive dependence on initial conditions: small changes to a system can have a large effect on its overall behavior. This sensitivity complicates both our observations of systems and our evaluations of innovations. It complicates observation because our measurement tools operate from within the system they measure and cannot help but perturb what they observe. It complicates evaluation because small changes in seemingly innocuous aspects of an experimental setup, or of the environment in which we carry out our experiments, can cause large and dramatic changes in overall system behavior. Indeed, as we demonstrate in this dissertation, our measurement tools are often inaccurate and our state-of-the-art evaluation methodologies often lead us astray. As a consequence, progress in our domain suffers.
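As a minimal sketch of this kind of sensitivity (an illustration, not code from the dissertation): the Python script below times the same hypothetical benchmark binary, ./benchmark, while varying only the size of the UNIX environment. On typical UNIX systems the environment is copied onto the stack at process startup, so growing it shifts stack addresses and can change memory alignment, and with it the measured running time, even though nothing "real" about the setup has changed.

    import os
    import subprocess
    import time

    # Hypothetical benchmark binary; substitute any deterministic,
    # CPU-bound program of your choosing.
    BENCHMARK = "./benchmark"

    # Grow the environment a few hundred bytes at a time. The variable
    # name and padding sizes are arbitrary choices for illustration.
    for pad in range(0, 4096, 256):
        env = dict(os.environ)
        env["EXPERIMENT_PADDING"] = "x" * pad  # innocuous-looking change

        # Wall-clock timing around the child process; this includes
        # fork/exec overhead, which is acceptable for a rough probe.
        start = time.perf_counter()
        subprocess.run([BENCHMARK], env=env, check=True,
                       stdout=subprocess.DEVNULL)
        elapsed = time.perf_counter() - start
        print(f"env padding {pad:4d} bytes: {elapsed:.3f}s")

If the reported times vary noticeably across padding sizes, the experimental setup itself, not the program under study, is influencing the measurement.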

In this dissertation, we argue that the systems research community needs to support experimentation with tools that allow researchers to accurately observe their systems and with methodologies that allow them to accurately evaluate the impact of their innovations. To support this argument, we introduce two tools that help researchers accurately observe their application's behavior and one methodology that helps them accurately evaluate the impact of their innovations.

Committee:
Amer Diwan, Associate Professor (Chair)
Elizabeth Bradley, Professor
Manish Vachharajani, Assistant Professor
Dirk Grunwald, Professor
Matthias Hauswirth, Università della Svizzera Italiana
Peter Sweeney, IBM Research
Department of Computer Science
University of Colorado Boulder
Boulder, CO 80309-0430 USA