## Probabilistic Program Analysis Project

Uncertainty is an integral part of many different types of systems. This project studies programs that manipulate uncertain variables using random value generators. The goal is to develop static and dynamic analysis techniques that can reason about properties of probabilistic programs as stochastic processes. There are many applications for such a study:

- Analysis of uncertainty in decision-making procedures used in medicine. We are especially interested in how the presence of uncertainty in patient data can affect the results of the study.
- Analysis of models used in machine learning: how do the resulting classifiers fare under the presence of noise in the inputs?
- Analysis of randomized computation: can we extend program analysis to systematically establish properties of randomized algorithms?
## Reasoning about Probabilistic Programs

The main problems that we are focusing on include:

**Stochastic Safety:** Given a probabilistic program P and a set of states U, what is the probability that an execution of the program reaches U within N steps?

**Temporal Properties:** We are also interested in temporal logic properties that hold *almost surely*, i.e., with probability 1. These include almost sure termination (the program terminates with probability 1), almost sure recurrence (the program's execution visits a set of states infinitely often), almost sure persistence (the program's execution eventually reaches a target set and stays inside it forever), and an entire hierarchy of temporal properties.
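As a concrete illustration of the stochastic safety question, the sketch below estimates the reachability probability of a toy probabilistic program by plain Monte Carlo simulation. The program, the unsafe set, and all constants are invented for illustration; this is a simulation baseline, not one of the symbolic methods described on this page.

```python
import random

def reaches_unsafe(n_steps, threshold=10.0):
    """One execution of a toy probabilistic program: a random walk x with
    a slight upward drift.  Returns True if the unsafe set
    U = { x >= threshold } is reached within n_steps."""
    x = 0.0
    for _ in range(n_steps):
        x += random.uniform(-1.0, 1.5)   # random step, mean drift +0.25
        if x >= threshold:
            return True
    return False

def estimate_reach_probability(n_steps, trials=10_000, seed=0):
    """Monte Carlo estimate of P[program reaches U within n_steps]."""
    random.seed(seed)
    hits = sum(reaches_unsafe(n_steps) for _ in range(trials))
    return hits / trials
```

Such an estimate comes with no guarantees: rare events with probability far below 1/trials are simply never observed, which is one motivation for the symbolic approaches below.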
## Symbolic Reasoning about Probabilistic Programs

**Symbolic Approach:** Symbolic approaches reason about the *abstract* semantics of probabilistic programs using symbolic decision procedures. Examples of symbolic approaches considered thus far include:

- Our PLDI 2013 paper shows how to use symbolic execution and volume computation to examine finitely many program paths yet conclude properties of the whole program. This is made possible by an interval-based volume computation approach that provides guaranteed upper and lower bounds on the probability of each path. These, in turn, yield both upper and lower bounds on the probability that a particular set of states will be reached.
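The interval-based idea can be sketched in miniature: subdivide the input domain into boxes and classify each box against a path condition, so that fully contained boxes contribute to a guaranteed lower bound while boxes straddling the boundary widen only the upper bound. The fixed condition, uniform inputs, and uniform grid below are a simplified stand-in, not the actual volComp algorithm.

```python
def bound_path_probability(depth=6):
    """Interval-style lower/upper bounds on the probability of the path
    condition x + y <= 1 with x, y ~ Uniform(0, 1) (true value: 0.5).
    The unit square is split into a 2**depth x 2**depth grid; each cell
    is fully inside, fully outside, or straddles the condition boundary.
    Because the condition is monotone in x and y, checking the two
    extreme corners of a cell is sound."""
    n = 2 ** depth
    cell = 1.0 / n
    area = cell * cell
    lo = hi = 0.0
    for i in range(n):
        for j in range(n):
            x0, y0 = i * cell, j * cell          # lower-left corner
            x1, y1 = x0 + cell, y0 + cell        # upper-right corner
            if x1 + y1 <= 1.0:                   # whole cell satisfies
                lo += area
                hi += area
            elif x0 + y0 <= 1.0:                 # cell straddles the boundary
                hi += area
    return lo, hi
```

Increasing `depth` shrinks the gap between the two bounds at the cost of more cells, mirroring the precision/cost trade-off in the real tool.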
- A key drawback of our PLDI 2013 approach involves reasoning about loops: in some probabilistic models, loops model recurring computation that seldom stops. For instance, as we run a controller, we would like to study the probability that the system state is within given bounds after N steps of execution, as a function of N. Our CAV 2013 paper shows how inferring supermartingales for loops, combined with well-known concentration of measure inequalities, can provide useful bounds. These bounds are conservative, often orders of magnitude larger than the actual value. But in some cases, we can establish upper bounds as small as $latex 10^{-20}$: such bounds are hard to deduce through our earlier symbolic execution approach, for instance.
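A minimal sketch of the concentration-of-measure step (not the CAV 2013 synthesis procedure itself): for a loop whose update has bounded increments, subtracting the per-step drift yields a martingale, and the Azuma-Hoeffding inequality turns the bounded differences into an exponentially small tail bound. All constants here are illustrative.

```python
import math

def azuma_upper_bound(x0, drift, step_range, n_steps, threshold):
    """Azuma-Hoeffding tail bound on P[x_N >= threshold] for the loop
    x <- x + u, with u uniform on step_range = (a, b) and mean `drift`,
    run n_steps times.  M_k = x_k - k * drift is a martingale whose
    increments are bounded by c, so
        P[M_N - M_0 >= t] <= exp(-t**2 / (2 * n_steps * c**2))."""
    a, b = step_range
    c = max(abs(a - drift), abs(b - drift))  # bound on |M_{k+1} - M_k|
    t = threshold - x0 - n_steps * drift     # deviation beyond the mean path
    if t <= 0:
        return 1.0                           # the bound is vacuous here
    return math.exp(-t * t / (2.0 * n_steps * c * c))
```

For the loop x ← x + Uniform(−1, 0.5) (drift −0.25 per step) started at 0, the bound on the probability of reaching 10 within 100 steps works out to roughly 2 × 10⁻⁵, the kind of tiny bound that plain simulation cannot certify.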
- Our SAS 2014 paper explores the relation between martingales and so-called "inductive expectation invariants". Traditionally, invariants in programs are used to characterize sets of states. For instance, a loop invariant $latex i < 10$ at a program point says that in any execution of the program reaching that point, the value of i is less than 10. On the other hand, we can imagine assertions like "the average value of i reaching a program point is less than 10". In our SAS 2014 paper, we provide a means to generate such invariants.
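To make the notion concrete, here is a small hypothetical loop together with a simulation check of its expectation invariant: the variable i itself can take any value in [0, k] after k iterations, so no useful set-based invariant bounds it, yet its average obeys E[i] = p · k.

```python
import random

def run_loop(n_iters, p=0.5, rng=None):
    """Toy probabilistic loop: i is incremented with probability p in each
    iteration.  The expectation invariant E[i] = p * k holds after k
    iterations, even though i itself can be anywhere in [0, k]."""
    rng = rng or random.Random()
    i = 0
    for _ in range(n_iters):
        if rng.random() < p:
            i += 1
    return i

def empirical_mean(n_iters, trials=20_000, p=0.5, seed=1):
    """Average final value of i over many executions; should approach
    the expectation invariant p * n_iters."""
    rng = random.Random(seed)
    return sum(run_loop(n_iters, p, rng) for _ in range(trials)) / trials
```

The point of the SAS 2014 work is to *prove* such facts inductively rather than observe them empirically; the simulation only illustrates what the invariant asserts.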
- Our ongoing work extends our work on martingales with classical ideas from Markov process theory to prove recurrence and persistence properties of probabilistic programs. We wish to extend this to a proof system for establishing temporal properties that hold almost surely.
## Software

We are releasing the code for our PLDI 2013 paper, along with its benchmarks, as a single tarball. The compilation instructions are available as part of the release. The code bundles two tools using a Perl script:

**Volume Computation Tool (volComp):** We use it to compute polyhedral volumes. The version used in our PLDI paper has some known limitations that are being addressed for release as a standalone tool. If you are interested in using volComp as a standalone tool for your research, please contact us; we are happy to provide a preview of the standalone tool and a user manual.

**Simple Front-End for an imperative language:** The current front-end is implemented in OCaml and is used for our benchmarks and experiments. The language is rather minimal. The front-end implements the symbolic execution and incorporates property-directed backward slicing, along with other improvements that make a big difference to the overall performance.
## Benchmark Programs

All the benchmarks as a single tarball: tar.gz We provide a detailed description of a few benchmarks below.

## (A) Kidney disease EGFR score calculation

EGFR stands for estimated glomerular filtration rate. It is a standard score that indicates the possibility of kidney disease. More information about this calculation is available here. Here is an online calculator. The CKD-EPI formula is a standard formula for estimating kidney disease risk based on the patient's age, body weight, gender (male/female), ethnicity (isAfricanAmerican?) and their serum creatinine levels (ccr). Of course, there is some randomness in the lab tests that measure serum creatinine levels, and mistakes in electronic medical records can change the other patient data in a stochastic manner. The question is: what is the risk of a mistake causing a major variation in the EGFR calculation? The benchmark program example-ckd-epi.imp models the population-wide distribution of the inputs to the CKD-EPI formula and the possible noise that can affect those inputs. It then asks for the probability that the difference between the actual and estimated values exceeds a threshold. Since the CKD-EPI formula for EGFR calculation is log-linear, our tool models the log of the EGFR rather than the EGFR itself.

## (B) Filling a Container

Consider a manipulator that fills containers in an assembly line with a liquid. Our goal is to fill each container with some fixed volume. However, sensor errors mean that our estimate of the volume to be filled can be off, and actuator errors mean that the rate at which the container is filled can differ slightly from the commanded rate. The actuator can be commanded to fill the container in one of three modes: "fast", "medium" or "slow". The benchmark program example-vol.imp models this as a probabilistic program.
We ask questions such as: what is the overall error in filling the container (actual volume of liquid minus desired volume)? How fast can the container be filled? And so on.

## (C) Framingham study heart disease risk calculator

The Framingham heart study is a large medical study involving many participants over a period of decades. As a part of this study, there have been many published calculators for the risk of developing various diseases related to the cardiovascular system, including risk calculators for cardiac failure and chronic hypertension. We wish to study the robustness of these risk calculators to noise in the input data. The benchmarks framingham.imp and framingham-hypten.imp model these risks and ask questions that relate to robustness in the presence of input noise.

## (D) Inverted Pendulum Controller under Stochastic Disturbances

The study of control systems under stochastic disturbances is an important aspect of control design. We constructed an example based on an LQR (Linear Quadratic Regulator) controller for a simple inverted pendulum and injected stochastic disturbances into the dynamics. The benchmark program example-invPend.imp asks whether the presence of disturbances can cause the controller to violate a safety specification. Besides the examples described above, we have collected a set of interesting benchmarks: the reader can browse them or download them as a single tarball.
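The robustness question behind benchmark (A) can be mimicked in a few lines of simulation. The score below is an illustrative log-linear surrogate, *not* the actual CKD-EPI formula, and the noise model, ranges, and coefficients are all invented for the sketch.

```python
import math
import random

def log_score(ccr, age, coeff_ccr=-0.6, coeff_age=-0.003):
    """Illustrative log-linear risk score (NOT the real CKD-EPI formula):
    the log of the score is linear in log(ccr) and age."""
    return coeff_ccr * math.log(ccr) + coeff_age * age

def deviation_probability(threshold=0.1, trials=20_000, seed=2):
    """Estimate P[ |noisy score - true score| > threshold ] when the lab
    measurement of ccr carries multiplicative noise and the recorded age
    is occasionally wrong.  All distributions are invented for this sketch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ccr = rng.uniform(0.5, 2.0)                  # true creatinine (mg/dL)
        age = rng.uniform(20.0, 80.0)                # true age (years)
        noisy_ccr = max(ccr * rng.gauss(1.0, 0.05), 1e-6)  # ~5% lab noise
        noisy_age = age + rng.choice([0, 0, 0, 10])  # occasional record error
        d = abs(log_score(noisy_ccr, noisy_age) - log_score(ccr, age))
        hits += d > threshold
    return hits / trials
```

Our tools answer the same kind of question symbolically, with guaranteed bounds rather than a sampling estimate.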
## Downloads

We are in the process of writing instruction manuals and updating our tools to enhance usability. In the meantime, we are happy to share our tools upon request.

1. Probabilistic Program Analyzer for our PLDI 2013 paper (please email Sriram Sankaranarayanan or Aleksandar Chakarov if the release does not compile or work on your setup). Our release includes the tool volComp, a polyhedral volume estimator. volComp is being improved for a future release as a standalone tool.
2. Martingale and Super Martingale Synthesis tool for our CAV 2013 paper (please email Sriram Sankaranarayanan if you are interested in the tool).

## Publications
## People

Principal Investigator: Sriram Sankaranarayanan (Asst. Professor, University of Colorado Boulder)

Graduate Students:

- Aleksandar Chakarov (CSCI PhD Student).
- Dimitrios Economou (CSCI MS Student).
- Dr. Yuen-Lam (Vris) Voronin.
- Dr. Yan Zhang (Postdoctoral Researcher, National University of Singapore).
- Dr. Amin Ben Sassi (TU Vienna).
## Funding

We acknowledge support from the US National Science Foundation under awards CCF-1320069 (primary source of support), CNS-0953941, CNS-1016994 and CPS-1035845. All opinions expressed are those of the authors and not necessarily those of the NSF.