Department of Computer Science, University of Colorado Boulder
 

Graduate Travel Awards

 

There are several sources of travel support for graduate students presenting a technical paper at a computer science conference. These include the Computer Science Department itself, as well as programs sponsored by the Graduate School.

Computer Science Department Travel Support

Each year the Department allocates funds to support graduate student presentations at technical conferences. The Graduate Committee makes the awards. In recent years most awards have been $300–$600, depending on the quality of the conference. You should first apply for travel support through the Graduate School; see Graduate School Student Travel Grant below.

Beginning in AY 2009–2010, the Committee considers requests for travel support on a rolling basis throughout the year and funds future events only. Once a paper has been accepted, students should fill out the Application for Department Conference Travel Stipend and submit it to Jacqueline (Jackie) DeBoard. Awards will be made as long as funds are available; students will be notified when funds are exhausted. Graduate students should also keep in mind that a Travel Authorization must be issued for any university-related trip, regardless of whether University funds are used. Students should therefore inform either Stephanie Morris or Bobbie Atkinson whenever they have an approved trip.

If you are not awarded any support from the department, other awards may be available:

  • You can apply for graduate school travel support. For more information on this award see Graduate School Student Travel Grant.

  • The United Government of Graduate Students (UGGS) also offers travel grants for students attending conferences, whether or not they are presenting. Everyone is invited to apply, but preference is given to those attending a conference for the first time. Graduate-student-sponsored events that promote university-wide community building are also eligible for grants. See UGGS Sponsored Funding and Awards for more information and forms.

Clive Fraser Baillie Memorial Travel Award

Clive Baillie was a Postdoc and Assistant Professor in the Computer Science Department from 1990 to 1996. After his untimely death, a travel fund was established in his honor. Each semester the Computer Science Department awards $300–$600 from the fund to help a student attend a conference or workshop in High Performance Computing or related areas. The award winner is chosen from the students who applied for Departmental travel support (see above). Past award winners are:

Guy Cobb (Fall 2011)
Janus: Co-Designing HPC Systems and Facilities
2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) -- Seattle, Washington

The design and procurement of supercomputers may require months, but the construction of a facility to house a supercomputer can extend to years. This paper describes the design and construction of a Top-50 supercomputer system and a fully-customized pre-fabricated facility to house it. The use of a co-design process reduced the time from conception to delivery to three months, commensurate with the amount of time it currently takes to deliver the computer system alone. Moreover, the facility was designed to provide efficient datacenter space for a 15-year lifespan. The design targets an expected yearly average power usage effectiveness (PUE) of 1.2, with a measured PUE of 1.1 to date. Leveraging the rapid deployment technologies in use by industry allowed the procurement of the complete environment, including the facility and the resource, in significantly less time than a machine room renovation and years less than a new building.
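
(For reference, power usage effectiveness is the ratio of total facility power to the power delivered to the IT equipment, PUE = total facility power / IT equipment power, so the measured PUE of 1.1 corresponds to roughly 10% overhead for cooling and power distribution on top of the computing load.)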

Dmitry Duplyakin (Fall 2011)
2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) -- Seattle, Washington

This conference will give me a chance to see the current trends in the area of High Performance Computing, as well as potentially find a topic for my future research and find collaborators from other universities and national labs.

Andrew Kessel (Fall 2011)
2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) -- Seattle, Washington

My intent in attending SC11 is to form an understanding of the current issues surrounding the application of general-purpose computing on GPUs (GPGPU) to the domain of High Performance Computing. My upcoming work with Professor Elizabeth Jessup will apply these lessons to exploring methods and creating tools that allow the HPC community to adopt and use GPGPU and heterogeneous architectures more easily and effectively, something that is likely needed if we are to reach exascale computing in the foreseeable future. This conference will be an essential stepping stone for my research, as it will give me invaluable exposure to the HPC community and a clearer picture of the work that needs to be done in this field.

Paul Marshall (Fall 2011)
Using and Building Infrastructure Clouds for Science
2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) -- Seattle, Washington

Infrastructure-as-a-service (IaaS) cloud computing (sometimes also called "infrastructure cloud computing") has recently emerged as a promising outsourcing paradigm: it has been widely embraced commercially and is also beginning to make inroads in scientific communities. Although popular, the understanding of its benefits, challenges, modes of use, and general applicability as an outsourcing paradigm for science is still in its infancy, which gives rise to many myths and misconceptions. Without specific and accurate information it is hard for scientific communities to understand whether this new paradigm is worthwhile -- and if so, how best to develop, leverage, and invest in it. Our objective in this tutorial is to facilitate the introduction of infrastructure cloud computing to scientific communities and to provide accurate and up-to-date information about features that could affect its use in science: to conquer myths, highlight opportunities, and equip the attendees with a better understanding of the relevance of cloud computing to their scientific domain. To this end, we have developed a tutorial that mixes discussion of various aspects of cloud computing for science, such as performance, privacy, and standards, with practical exercises using infrastructure clouds and state-of-the-art tools.

Theron Voran (Fall 2011)
2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) -- Seattle, Washington

I will be meeting with collaborators from Intel, giving updates on our work with their prototype Many-Integrated Core system, as well as getting information on the plans for the next generations of this system. The rest of the conference will provide insight and updates on the current state of High Performance Computing software, hardware, and other technology, particularly with respect to my dissertation.

Paul Marshall (Spring 2010)
Elastic Site: Using Clouds to Elastically Extend Site Resources
10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2010) -- Melbourne, Australia

In this paper we develop a model of an "elastic site" that efficiently adapts services provided within a site, such as batch schedulers, storage archives, or Web services, to take advantage of elastically provisioned resources in the Cloud. We developed a resource manager, built on the Nimbus toolkit and Torque, to dynamically and securely extend existing physical clusters into the Cloud.
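
A rough sketch of what "elastically extending" a site can mean in practice is shown below; the thresholds and the cloud/scheduler interface names are illustrative assumptions, not the Nimbus/Torque resource manager described in the paper.

```
# Minimal sketch of a threshold-based elasticity policy (illustrative only).
# The cloud object and its provision_instance/terminate_instance methods are
# hypothetical stand-ins for a real IaaS API.

def elastic_step(queued_jobs, cloud_nodes, cloud,
                 scale_up_threshold=10, max_cloud_nodes=32):
    """Adjust the number of cloud worker nodes for one scheduling interval."""
    if queued_jobs > scale_up_threshold and len(cloud_nodes) < max_cloud_nodes:
        # Backlog is growing: boot one more cloud instance and register it
        # with the site's batch scheduler as an extra worker node.
        cloud_nodes.append(cloud.provision_instance())
    elif queued_jobs == 0 and cloud_nodes:
        # Queue has drained: release the most recently added cloud node.
        cloud.terminate_instance(cloud_nodes.pop())
    return cloud_nodes
```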

John Giacomoni (Spring 2008)
FastForward for Efficient Pipeline Parallelism: A Cache-Optimized Concurrent Lock-Free Queue
13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP'08) -- Salt Lake City, Utah

Low overhead core-to-core communication is critical for efficient pipeline-parallel software applications. This paper presents FastForward, a cache-optimized single-producer/single-consumer concurrent lock-free queue for pipeline parallelism on multicore architectures, with weakly to strongly ordered consistency models. Enqueue and dequeue times on a 2.66 GHz Opteron 2218 based system are as low as 28.5 ns, up to 5x faster than the next best solution. FastForward's effectiveness is demonstrated for real applications by applying it to line-rate soft network processing on Gigabit Ethernet with general purpose commodity hardware.
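
Below is a conceptual sketch of the queue's slot-based signalling, written in Python purely for readability; it ignores the cache-line padding and the producer/consumer "temporal slipping" that give the real FastForward queue its speed.

```
# Conceptual single-producer/single-consumer queue in the FastForward style:
# emptiness is signalled by slot contents (None) rather than by shared
# head/tail indices, so the producer and consumer never read each other's
# index. Items must therefore not be None. Illustrative sketch only.

class SpscQueue:
    def __init__(self, capacity=1024):
        self.slots = [None] * capacity
        self.head = 0   # touched only by the producer thread
        self.tail = 0   # touched only by the consumer thread

    def enqueue(self, item):            # producer side
        if self.slots[self.head] is not None:
            return False                # queue full; caller may retry
        self.slots[self.head] = item
        self.head = (self.head + 1) % len(self.slots)
        return True

    def dequeue(self):                  # consumer side
        item = self.slots[self.tail]
        if item is None:
            return None                 # queue empty
        self.slots[self.tail] = None
        self.tail = (self.tail + 1) % len(self.slots)
        return item
```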

Jason Cope (Fall 2006)
An Extensible Service Development Toolkit to Support Earth Science Grids
2nd IEEE International Conference on e-Science and Grid Computing (e-Science 2006) -- Amsterdam, The Netherlands

This paper describes the development and use of an extensible service provider toolkit (ESP) for an Earth Science service-oriented architecture (SOA). Grid-enabled Earth Science applications and tools are beginning to use web services to integrate distributed resources and legacy applications. Unfortunately, each application requires substantial, often repeated effort to implement base Grid functionality before addressing application-specific requirements. In an effort to help Earth Science application developers more rapidly develop these web services, we have created an extensible service provider toolkit. The toolkit provides the foundation to develop specialized services for Earth Science Grids, including legacy application and computational resource services. To demonstrate the functionality of ESP, we redeveloped several existing web services and illustrate ESP's benefits, including reduced software development time and software reuse.

Serguei Ovtchinnikov (Spring 2006)
Algorithms for Solving a Model Magnetohydrodynamics Problem in Two-Dimensional Space
SIAM Conference on Parallel Processing for Scientific Computing 2006 -- San Francisco, California

In this presentation we discuss parallel fully implicit Newton-Krylov-Schwarz algorithms for solving a model magnetohydrodynamics problem in two-dimensional space. Current density sheets become nearly singular in the process of magnetic reconnection. This behavior of the solution limits time step sizes in explicit schemes. Our approach is a fully implicit time integration using Newton-Krylov techniques with one- and two-level additive Schwarz preconditioning. We study parallel convergence of the implicit algorithms on fine meshes as implemented and run on an IBM BG/L supercomputer with one to nine hundred processors.
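
For readers unfamiliar with the approach, the sketch below shows the generic Newton-Krylov idea on a toy one-dimensional problem using SciPy; it is not the presenters' MHD solver and omits the Schwarz preconditioning.

```
# Generic Newton-Krylov illustration on a toy 1-D nonlinear boundary value
# problem u'' = exp(u), u(0) = u(1) = 0 (purely illustrative).
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))        # homogeneous boundary values
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

u0 = np.zeros(50)                                   # initial guess
u = newton_krylov(residual, u0, method='lgmres')    # Newton outer, Krylov inner
print(u.min(), u.max())
```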

Jason Cope (Fall 2005)
Grid-BGC: A Grid Enabled Terrestrial Carbon Cycle Modeling System
European Conference on Parallel Processing (Euro-Par) 2005 -- Lisbon, Portugal

Grid-BGC is a Grid-enabled terrestrial biogeochemical cycle simulator collaboratively developed by the National Center for Atmospheric Research (NCAR) and the University of Colorado (CU) with funding from NASA. The primary objective of the project is to utilize Globus Grid technology to integrate inexpensive commodity cluster computational resources at CU with the mass storage system at NCAR while hiding the logistics of data transfer and job submission from the scientists. We describe a typical process for simulating the terrestrial carbon cycle, present our solution architecture and software design, and describe our implementation experiences with Grid technology on our systems. By design the Grid-BGC software framework is extensible in that it can utilize other grid-accessible computational resources and can be readily applied to other climate simulation problems which have similar workflows. Overall, this project demonstrates an end-to-end system which leverages Grid technologies to harness distributed resources across organizational boundaries to achieve a cost-effective solution to a compute-intensive problem.

Tipp Moseley (Spring 2005)
Dynamic Runtime Architecture Techniques for Enabling Continuous Optimization
Computing Frontiers 2005 -- Ischia, Italy

This paper describes techniques for runtime optimization that leverage performance counters on modern architectures. The primary discussion centers around using performance counters to do dynamic phase analysis of programs to dynamically adjust the operating system scheduling policy to optimize contention between threads on a multithreaded (specifically, HyperThreading) processor.
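
As a toy illustration of the phase-analysis step only (the window size, threshold, and IPC-like metric below are assumptions for illustration, not the paper's mechanism):

```
# Toy phase-detection sketch: flag a phase change when the windowed mean of a
# counter-derived metric (an IPC-like value here) shifts by more than a set
# fraction.

def detect_phases(samples, window=16, threshold=0.25):
    """Return sample indices at which a new program phase appears to begin."""
    phase_starts = [0]
    baseline = sum(samples[:window]) / window
    for i in range(window, len(samples) - window, window):
        current = sum(samples[i:i + window]) / window
        if abs(current - baseline) / max(baseline, 1e-9) > threshold:
            phase_starts.append(i)      # metric moved enough: new phase
            baseline = current          # re-baseline on the new phase
    return phase_starts

print(detect_phases([1.0] * 64 + [0.4] * 64 + [1.1] * 64))   # -> [0, 64, 128]
```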

Matthew Woitaszek (Spring 2004)
NCAR Community Climate System Model Performance
5th International Conference on Linux Clusters: The HPC Revolution 2004 -- Austin, Texas

In this paper, we examine the performance of two components of the NCAR Community Climate System Model (CCSM) executing on clusters with a variety of microprocessor architectures and interconnects. Specifically, we examine the execution time and scalability of the Community Atmospheric Model (CAM) and the Parallel Ocean Program (POP) on Linux clusters with Intel Xeon and AMD Opteron processors, using Dolphin, Myrinet, and Infiniband interconnects, and compare the performance of the cluster systems to an SGI Altix and an IBM p690 supercomputer.

Martin Hirzel (Fall 2003)
Connectivity-Based Garbage Collection
18th Annual ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 2003) -- Anaheim, California

We introduce a new family of connectivity-based garbage collectors (CBGC) that are based on potential object-connectivity properties. The key feature of these collectors is that the placement of objects into partitions is determined by performing one of several forms of connectivity analyses on the program. This enables partial garbage collections, as in generational collectors, but without the need for any write barrier.

The contributions of this paper are 1) a novel family of garbage collection algorithms based on object connectivity; 2) a detailed description of an instance of this family; and 3) an empirical evaluation of CBGC using simulations. Simulations help explore a broad range of possibilities for CBGC, ranging from simplistic ones that determine connectivity based on type information to oracular ones that use run-time information to determine connectivity. Our experiments with the oracular CBGC configurations give an indication of the potential for CBGC and also identify weaknesses in the realistic configurations. We found that even the simplistic implementations beat state-of-the-art generational collectors with respect to some metrics (pause times and memory footprint).

Ernesto Prudencio (Spring 2003)
A Parallel Full Space SQP Lagrange-Newton-Krylov-Schwarz Method for Flow Control Problems
Seventh U.S. National Congress on Computational Mechanics (USNCCM7) -- Albuquerque, New Mexico

Optimization problems with nonlinear equality constraints given by partial differential equations have recently been the focus of intense research in scientific computation. The state-of-the-art methods for the parallel numerical solution of such problems involve sequential quadratic programming (SQP), with either reduced or full space approaches.

In this talk we propose a class of parallel full space SQP Lagrange-Newton-Krylov-Schwarz (LNKSz) algorithms. In LNKSz, a Lagrangian functional is formed and differentiated to obtain an optimality system of nonlinear equations. An inexact Newton method with line search is then applied, and at each Newton iteration the Karush-Kuhn-Tucker system is solved with a Krylov subspace method preconditioned with overlapping additive Schwarz.

We apply LNKSz to some boundary control problems of steady-state flows of viscous incompressible fluids described by Navier-Stokes equations in velocity-vorticity formulation. We propose the application of LNKSz to flow control problems as a natural extension to the successful application of NKSz to flow simulations. We report the results of a PETSc based implementation of LNKSz for different combinations of Reynolds numbers, grid sizes and number of processors.
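
For readers less familiar with the full-space approach, the generic setup (standard SQP notation, not the specific formulation of this talk) is: minimize an objective J(x) subject to the discretized PDE constraint c(x) = 0, form the Lagrangian L(x, λ) = J(x) + λ^T c(x), and drive its gradient to zero with Newton's method. With H = ∇²_xx L and A = ∇c(x), each Newton step solves the Karush-Kuhn-Tucker system H δx + A^T δλ = -∇_x L, A δx = -c(x), using a Krylov method, here preconditioned with overlapping additive Schwarz.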

Soraya Ghiasi (Fall 2002)
Microarchitectural Denial of Service Attacks - Insuring Microarchitectural Fairness
35th International Symposium on Microarchitecture (MICRO-35) -- Istanbul, Turkey

Simultaneous multithreading processors are appearing on the market and hold great potential for improving the performance of modern processors, as measured by throughput, with a small increase in chip real estate. They accomplish this by sharing resources that are not typically shared in current processors including the execution engine. Resources that are shared can be attacked. We investigated the types of attacks that can occur, demonstrated that they do occur on real hardware (an Intel Pentium 4 Xeon), will continue to get worse in future generations, and identified microarchitectural mechanisms that can be employed to detect the attacks and prevent them from effectively stalling legitimate use of the processor. Attack detection and prevention do not improve processor performance, but they do allow the full utilization of the processor for its intended processing purposes.

San Skulrattanakulchai (Spring 2002)
Δ-List Vertex Coloring in Linear Time
Eighth Scandinavian Workshop on Algorithm Theory (SWAT 2002) -- Turku, Finland

We present a new proof of a theorem of Erdős, Rubin and Taylor, which states that the list chromatic number (or choice number) of a connected, simple graph that is neither complete nor an odd cycle does not exceed its maximum degree Δ. Our proof yields the first known linear-time algorithm to Δ-list-color graphs satisfying the hypothesis of the theorem. Without change, our algorithm can also be used to Δ-color such graphs. It has the same running time as, but seems to be much simpler than, the currently known algorithm, due to Lovász, for Δ-coloring such graphs.

We also give a specialized version of our algorithm that works on subcubic graphs (ones with maximum degree three) by exploiting a simple decomposition principle for them.
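
As a rough illustration only (not the paper's linear-time algorithm): a naive greedy pass over an adjacency-list graph always succeeds with Δ + 1 colors, and the odd-cycle example below shows why the complete-graph and odd-cycle exclusions in the theorem are needed to get down to Δ.

```
# Naive greedy (Δ+1)-coloring sketch for a graph given as an adjacency list.
# Greedy always succeeds with max degree + 1 colors; reaching Δ colors, as in
# the paper, needs additional structural arguments and is not shown here.

def greedy_coloring(adj):
    """adj maps each vertex to a list of its neighbours."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # at most deg(v) colors are blocked,
            c += 1                # so some c <= deg(v) is always free
        color[v] = c
    return color

# A 5-cycle has maximum degree 2 but needs 3 colors, illustrating the
# odd-cycle exception in the Erdős-Rubin-Taylor theorem.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(cycle5))
```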

Coloring Algorithms on Subcubic Graphs
Eighth Annual International Computing and Combinatorics Conference (COCOON 2002), Singapore

We present efficient algorithms for three coloring problems on subcubic graphs (ones with maximum degree 3). These algorithms are based on a simple decomposition principle for subcubic graphs. The first algorithm is for 4-edge coloring, or more generally, 4-list-edge coloring. Our algorithm runs in linear time, and appears to be simpler than previous ones. As evidence we give the first randomized EREW PRAM algorithm that uses O(n/log n) processors and runs in O(log n) time with high probability, where n is the number of vertices of the input graph. The second algorithm is the first linear-time algorithm to 5-total-color subcubic graphs. The third algorithm generalizes this to the first linear-time algorithm to 5-list-total-color subcubic graphs.

Maria Murillo (Spring 2001)
Parallel Algorithm and Software for Solving Time-Dependent Nonlinear Bidomain Equations
First SIAM Conference on Computational Science and Engineering -- Washington, DC

We present our preliminary results from applying the Newton-Krylov-Schwarz method to the simulation of the electrical activity of the heart in two dimensions. We compare both the monodomain and the bidomain nonlinear equations, using a fully implicit time discretization scheme, and solving the resulting large system of equations with a Newton-based algorithm at each step. The results are obtained on a cluster of workstations, using PETSc from Argonne National Laboratory.

Robert Cooksey (Fall 2000)
Content-based Prefetching: Initial Results
2nd Workshop on Intelligent Memory Systems -- Boston, Massachusetts

Memory prefetching attempts to reduce the memory latency by moving data from memory closer to the processor. Different prefetching mechanisms attempt to model access patterns that may be used by programs. For example, a stride or stream prefetcher assumes that programs will access memory in a linear pattern.

This paper explores content-based prefetching, which is an attempt to prefetch "pointer chasing" references. Content-based prefetching works by examining the content of data as it is moved from memory to the caches. Data values that are likely to be addresses are then translated and pushed to a prefetch buffer. Content-based prefetching should be able to prefetch sparse data structures, including graphs, lists and trees. This paper records our early experience with content-based prefetching and the problems that must be overcome for it to be useful.
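
A rough sketch of the core scanning step is shown below, assuming word-aligned 64-bit pointers and known heap bounds; the filtering rules and constants are illustrative assumptions, not the paper's hardware heuristics.

```
# Illustrative sketch of the content scan: examine the 64-bit words of a cache
# line as it is filled and treat aligned values that fall inside the heap's
# address range as candidate pointers to push to a prefetch buffer.

WORD_BYTES = 8

def candidate_pointers(line_words, heap_start, heap_end):
    """Return word values from one cache line that look like heap addresses."""
    candidates = []
    for value in line_words:
        aligned = value % WORD_BYTES == 0            # pointers are usually aligned
        in_heap = heap_start <= value < heap_end     # and land inside the heap
        if aligned and in_heap:
            candidates.append(value)
    return candidates

line = [0x7F3A10000040, 42, 0x7F3A10000480, 7]
print([hex(p) for p in candidate_pointers(line, 0x7F3A10000000, 0x7F3A20000000)])
```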

Roderick Bloem (Spring 2000)
Efficient Büchi Automata from LTL Formulae
12th International Conference on Computer-Aided Verification (CAV'00) -- Chicago, Illinois

To formally and automatically verify the correctness of a program, specifications need to be written in a formal specification language. LTL is an important, intuitive temporal logic that is often used to specify properties of nonterminating programs. Such a specification could be "Whenever a request arrives, it is eventually acknowledged".
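
(In LTL notation, that example property is written G(request → F acknowledged): globally, every request is eventually followed by an acknowledgement.)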

Correctness of LTL specifications cannot be checked directly. Rather, the specification must first be translated into a Büchi automaton. The most practical algorithm to check correctness of specifications given as a Büchi automaton has a complexity that is quadratic in the size of the automaton. Therefore, it is important to have small automata. Unfortunately, finding the smallest automaton is a PSPACE-hard problem.

This paper presents a heuristic procedure to translate LTL specifications into automata. It relies on rewriting the LTL specification, employing boolean optimization techniques to yield a small automaton, and applying simulation-based techniques to reduce the size of the automaton. The boolean translation routine is optimal in a class of translation algorithms that contains all previously published ones. We evaluated the algorithm both on specifications from practice and on random specifications. On both groups, it yields automata that are about one third the size of those produced by the most recently published algorithm (presented at CAV'99).

Martin Burtscher (Fall 1999)
Exploring Last n Value Prediction
International Conference on Parallel Architectures and Compilation Techniques (PACT '99) -- Newport Beach, California

We evaluate the trade-off between tall and slim versus short and wide load value predictors of the same total size, i.e., between retaining a few values for a large number of load instructions and many values for a proportionately small number of loads. The results enabled us to implement a last-four-value predictor that significantly outperforms other predictors from the literature.
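
As a minimal sketch of the idea (the table size, indexing, and pick-most-recent policy below are assumptions for illustration; a real last-n-value predictor selects among the stored values more carefully):

```
# Toy last-four-value load value predictor: a small table indexed by the low
# bits of the load's PC keeps the four most recent values that load returned;
# here we simply predict the most recent one.

TABLE_ENTRIES = 256
HISTORY_DEPTH = 4
table = [[] for _ in range(TABLE_ENTRIES)]

def predict(pc):
    """Return a predicted value for this load, or None if no history yet."""
    history = table[pc % TABLE_ENTRIES]
    return history[-1] if history else None

def update(pc, actual_value):
    """Record the value the load actually produced."""
    history = table[pc % TABLE_ENTRIES]
    history.append(actual_value)
    if len(history) > HISTORY_DEPTH:
        history.pop(0)                  # keep only the last four values

update(0x400123, 10)
update(0x400123, 20)
print(predict(0x400123))                # -> 20
```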

Matthew Seidl (Spring 1999)
Segregating Heap Objects by Lifetime and Reference Behavior
ASPLOS VIII - Architectural Support for Programming Languages and Operating Systems -- San Jose, California

This paper covers our work in profiling programs to gather information about how dynamic memory is allocated and used. We then automatically build a custom allocator with the gathered information in order to improve the virtual memory and TLB cache performance of the program.

Scott Brandt (Fall 1998)
Resource Management for a Virtual Planning Room
3rd International Workshop on Multimedia Information Systems -- Como, Italy

This paper discusses the research we have been doing on the Virtual Planning Room. It includes a discussion of the VPR and the resulting demands that the application places on the operating system, as well as several research efforts we have begun to address those requirements, including my research in Soft Real-Time Scheduling and Sam Siewert's research in In-Kernel Pipes. The presentation itself will also include some information about Adam Griff's work in Distributed Object Management.

Artur Klauser (Spring 1998)
Selective Eager Execution on the PolyPath Architecture
25th Annual International Symposium on Computer Architecture (ISCA'98) -- Barcelona, Spain

Control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. In this paper we present Selective Eager Execution (SEE), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. We present the micro-architecture of the PolyPath processor, which is an extension of an aggressive superscalar, out-of-order architecture. The PolyPath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences.

Performance results of our detailed execution-driven, pipeline-level simulations show that the SEE concept achieves a potential average performance improvement of 48% on the SPECint95 benchmarks. A realistic implementation with a dynamic branch confidence estimator can improve performance by as much as 36% for the go benchmark, and an average of 14% on SPECint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor. Moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic.

Graduate School Student Travel Grant

The Graduate School offers partial funding for graduate students to present findings at meetings or conferences. The Graduate School provides a travel grant of $200 for domestic conferences and $300 for international conferences. Funds will be applied directly to the student's tuition account. If the account balance is zero, a refund check will be disbursed by the Bursar's office. The grant is treated like a fellowship and reported to the Office of Financial Aid.

Beverly Sears Graduate Student Grants

Beverly Sears Graduate Student Grants are competitive awards sponsored by the Graduate School that support the research, scholarship and creative work of graduate students from all departments. All funding is provided by alumni donations. Grants range from $100 to a maximum of $1,000. The Beverly Sears Graduate Student Grants competition is held once each year in the spring semester.

 