
Thesis Defense - Siewert

A Real-Time Execution Performance Agent Interface for Confidence-Based Scheduling
Computer Science PhD Candidate
7/3/2000
9:00am-11:00am

The use of microprocessors and software to build real-time applications is expanding from traditional domains such as digital control, data acquisition, robotics, and digital switching to emerging domains such as multimedia, virtual reality, optical navigation, and audio processing. These emerging domains require much more bandwidth and processing capability than traditional real-time applications. At the same time, the potential performance and complexity of microprocessor and I/O architectures are rapidly evolving to meet these new demands (e.g., super-scalar, pipelined architectures with multilevel caches and burst-transmission I/O buses). Finally, the complexity of typical real-time algorithms is increasing, given functions such as image processing, rule-based fault protection, and intelligent sensor processing.

The foundation of real-time systems theory is the recognition that bandwidth and processing resources will always be constrained (a more demanding application always exists that can make use of increased resources as they become available). Given this reality, the question is how an engineer can formally ensure, under resource constraints, that the system will not only function correctly but also meet its timing deadlines.

Since the introduction of Liu and Layland's rate-monotonic analysis and the development of the formal theory of hard real-time systems, significant progress has been made on extending this theory and developing an engineering process around it. The problem is that current hard real-time theory and process assume full reliability and overly constrain systems by requiring either deterministic use of resources or worst-case models of such usage. Real-time systems engineering requires translating requirements into a system that meets cost, performance, and reliability objectives.
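For readers unfamiliar with Liu and Layland's result, the following sketch (not part of the thesis itself) shows their well-known sufficient utilization-bound test for rate-monotonic scheduling; the task set used here is hypothetical:

```python
import math

def rm_utilization_bound(n: int) -> float:
    """Liu & Layland's least upper bound on CPU utilization
    for n periodic tasks under rate-monotonic scheduling."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks) -> bool:
    """Sufficient (but not necessary) schedulability test.
    tasks: list of (compute_time, period) pairs."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_utilization_bound(len(tasks))

# Hypothetical task set: (worst-case compute time, period)
tasks = [(1, 4), (1, 5), (2, 10)]
print(rm_schedulable(tasks))  # True: U = 0.65 <= bound of ~0.7798
```

A task set that fails this test may still be schedulable; the bound is only sufficient, which is one source of the pessimism the abstract attributes to worst-case hard real-time analysis.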

If deadline performance were the only consideration in the engineering process, and there were no cost or reliability requirements, then current hard real-time theory would be mostly sufficient. In reality, cost and reliability must be considered, especially since emerging application domains may be more cost- and reliability-sensitive than traditional hard real-time domains.

Typically, a direct trade can be made between cost and reliability for a given performance level. Three main problems arise in applying current hard real-time theory to systems requiring a balance of cost, reliability, and performance. First, there is no formal approach to designing systems for less than full reliability. Second, the assumptions and constraints of hard real-time theory severely limit performance. Finally, safe mixing of hard and soft real-time execution is not supported. Without a better framework for implementing mixed hard and soft real-time requirements, the engineer must either adapt hard real-time theory on a case-by-case basis or risk implementing a best-effort system that provides no formal assurance of performance.

Soft real-time quality-of-service frameworks are also an option. However, not only are these approaches not fully mature, more fundamentally, they do not address the concept of mixed hard and soft real-time processing, nor is it clear that any of these approaches provide concretely measurable reliability.

In this thesis we present an alternative framework for the implementation of real-time systems which accommodates mixed hard and soft real-time processing with measurable reliability by providing a confidence-based scheduling and execution fault handling framework. This framework, called the RT EPA (real-time execution performance agent), provides a more natural and less constraining approach to translating both timing and functional requirements into a working system.

The RT EPA framework is based on an extension of deadline monotonic theory. The RT EPA has been evaluated with simulated loading and an optical navigation test-bed, and its monitoring module will be flown on an upcoming NASA space telescope in late 2001. The significance of this work is that it directly addresses the shortcomings of the current process for handling reliability and provides measurable reliability and performance feedback during the implementation, systems integration, and maintenance phases.
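As background on the deadline monotonic theory the RT EPA extends, a minimal sketch of standard response-time analysis under deadline-monotonic priority assignment (shorter deadline, higher priority). This is the textbook test, not the thesis's extension, and the task set is hypothetical:

```python
import math

def response_times(tasks):
    """Deadline-monotonic response-time analysis.
    tasks: list of (C, T, D) tuples -- worst-case compute time,
    period, and relative deadline (D <= T).
    Returns the worst-case response time for each task in
    deadline order, or None where the task misses its deadline."""
    # Deadline-monotonic priority: shorter deadline = higher priority.
    ordered = sorted(tasks, key=lambda t: t[2])
    results = []
    for i, (c_i, _, d_i) in enumerate(ordered):
        r = c_i
        while True:
            # Interference from all higher-priority tasks.
            interference = sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j, _ in ordered[:i])
            r_next = c_i + interference
            if r_next > d_i:
                results.append(None)   # deadline miss
                break
            if r_next == r:
                results.append(r)      # fixed point reached
                break
            r = r_next
    return results

# Hypothetical task set: (compute time, period, deadline)
tasks = [(1, 4, 3), (1, 5, 4), (2, 10, 9)]
print(response_times(tasks))  # [1, 2, 4] -- all deadlines met
```

This test is exact but assumes worst-case compute times and full reliability; relaxing those assumptions with measurable confidence is precisely the gap the thesis targets.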

Committee: Gary Nutt, Professor (Chair)
Kenneth Anderson, Assistant Professor
Elizabeth Bradley, Associate Professor
Elaine Hansen, Colorado Space Grant College
Renjeng Su, Department of Electrical Engineering
Department of Computer Science
University of Colorado Boulder
Boulder, CO 80309-0430 USA