Publications of George Avrunin

Roger D. Dias, Marco A. Zenati, Heather M. Conboy, Lori A. Clarke, Leon J. Osterweil, George S. Avrunin, and Steven J. Yule. Dissecting cardiac surgery: A video-based recall protocol to elucidate team cognitive processes in the operating room. Annals of Surgery, 2019. To appear.

Leon J. Osterweil, Heather M. Conboy, Lori A. Clarke, and George S. Avrunin. Process-model-driven guidance to reduce surgical procedure errors: An expert opinion. Seminars in Thoracic and Cardiovascular Surgery, 31:453–457, 2019. [ .pdf ]

This paper explains how a detailed, precise surgical process model can help reduce errors by fostering better understanding, providing guidance during surgery, helping train newcomers, and supporting process improvement. It describes the features that a process-modeling language should have in order to support the precise specification of such models.

George S. Avrunin, Stefan C. Christov, Lori A. Clarke, Heather M. Conboy, Leon J. Osterweil, and Marco A. Zenati. Process driven guidance for complex surgical procedures. In American Medical Informatics Association Annual Symposium, pages 175–184, November 2018. [ .pdf ]

Surgical team processes are known to be complex and error prone. This paper describes an approach that uses a detailed, validated model of a medical process to provide the clinicians who carry out that complex process with offline and online guidance to help reduce errors. Offline guidance is in the form of a hypertext document describing all the ways the process can be carried out. Online guidance is in the form of a context-sensitive and continually updated electronic “checklist” that lists next steps and needed resources, as well as completed steps. In earlier work, we focused on providing such guidance for single-clinician or single-team processes. This paper describes guiding the collaboration of multiple teams of clinicians through complex processes with significant concurrency, complicated exception handling, and precise and timely communication. We illustrate this approach by applying it to a highly complex, high-risk subprocess of cardiac surgery.
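
For illustration, the online guidance described above can be thought of as recomputing a checklist view from the process model each time a step-completion event arrives. The following is a minimal sketch in Python, not the paper's Little-JIL-based implementation; the step names and the flat prerequisite encoding are invented for the example.

# Minimal sketch of a context-sensitive "smart checklist": the process model
# maps each step to its prerequisites, and the checklist view is recomputed
# as step-completion events arrive. All step names are hypothetical.
PROCESS_MODEL = {
    "prepare heparin dose": set(),
    "confirm dose with surgeon": {"prepare heparin dose"},
    "administer heparin": {"confirm dose with surgeon"},
    "draw blood sample": {"administer heparin"},
    "verify activated clotting time": {"draw blood sample"},
}

def checklist_view(completed):
    """Return (completed, pending) given the set of completed steps."""
    pending = [s for s, prereqs in PROCESS_MODEL.items()
               if s not in completed and prereqs <= completed]
    return sorted(completed), pending

done = set()
for event in ["prepare heparin dose", "confirm dose with surgeon"]:
    done.add(event)
    finished, next_steps = checklist_view(done)
    print(f"after {event!r}: next steps = {next_steps}")

A real model would also capture concurrency, exception handling, and resource needs; the sketch shows only the context-sensitive recomputation of pending steps.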

Roger D. Dias, Heather M. Conboy, Jennifer M. Gabany, Lori A. Clarke, Leon J. Osterweil, David Arney, Julian M. Goldman, Giuseppe Riccardi, George S. Avrunin, Steven J. Yule, and Marco A. Zenati. Intelligent interruption management system to enhance safety and performance in complex surgical and robotic procedures. In D. Stoyanov, Z. Taylor, D. Sarikaya, et al., editors, OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, volume 11041 of LNCS, pages 62–68. Springer Nature International, September 2018. (Best Paper award).

Heather M. Conboy, George S. Avrunin, Stefan C. Christov, Lori A. Clarke, Leon J. Osterweil, and Marco A. Zenati. Process-model-driven guidance to reduce surgical procedural errors. Abstract presented at American Association for Thoracic Surgery Surgical Patient Safety Course, June 2018.

Roger Daglius Dias, Heather Conboy, Jennifer Gabany, Lori Clarke, Leon Osterweil, George S. Avrunin, David Arney, Julian M. Goldman, Giuseppe Riccardi, Steven Yule, and Marco A. Zenati. Development of an interactive dashboard to analyze cognitive workload of surgical teams during complex procedural care. In 2018 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), pages 77–82, 2018. [ .pdf ]

In the surgical setting, team members constantly deal with a high-demand operative environment that requires simultaneously processing a large amount of information. In certain situations, high demands imposed by surgical tasks and other sources may exceed team members' cognitive capacity, leading to cognitive overload which may place patient safety at risk. In the present study, we describe a novel approach to integrate an objective measure of team members' cognitive load with procedural, behavioral, and contextual data from real-life cardiac surgeries. We used heart rate variability analysis, capturing data simultaneously from multiple team members (surgeon, anesthesiologist, and perfusionist) in a real-time and unobtrusive manner. Using audio-video recordings, behavioral coding, and a hierarchical surgical process model, we integrated multiple data sources to create an interactive surgical dashboard, enabling the analysis of the cognitive load imposed by specific steps, substeps, and/or tasks. The described approach enables us to detect cognitive load fluctuations over time, under specific conditions (e.g., emergencies, teaching), and in situations that are prone to errors. This in-depth understanding of the relationship between cognitive load, task demands, and error occurrence is essential for the development of cognitive support systems to recognize and mitigate errors during complex surgical care in the operating room.
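
The heart rate variability analysis mentioned above is typically computed over interbeat (RR) intervals; one standard time-domain metric is RMSSD. The sketch below assumes that metric and a fixed sliding window, neither of which is necessarily what the study used, and the interval values are invented.

import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a standard time-domain HRV measure; depressed HRV is commonly read
    as elevated cognitive load."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Sliding-window HRV over a stream of RR intervals (values are made up).
rr = [812, 798, 805, 640, 655, 630, 790, 801]
window = 4
for i in range(len(rr) - window + 1):
    print(f"t{i}: RMSSD = {rmssd(rr[i:i + window]):.1f} ms")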

George S. Avrunin, Lori A. Clarke, Heather M. Conboy, Leon J. Osterweil, Roger D. Dias, Steven J. Yule, Julian M. Goldman, and Marco A. Zenati. Toward improving surgical outcomes by incorporating cognitive load measurement into process-driven guidance. In Proceedings of the 2018 IEEE/ACM International Workshop on Software Engineering in Healthcare Systems, pages 2–9, 2018. [ .pdf ]

This paper summarizes the accomplishments and recent directions of our medical safety project. Our process-based approach uses a detailed, rigorously defined, and carefully validated process model to provide a dynamically updated, context-aware and thus “Smart” Checklist to help process performers understand and manage their pending tasks [7]. This paper focuses on support for teams of performers, working independently as well as in close collaboration, in stressful situations that are life critical. Our recent work has three main thrusts: provide effective real-time guidance for closely collaborating teams; develop and evaluate techniques for measuring cognitive load based on biometric observations and human surveys; and, using these measurements plus analysis and discrete-event process simulation, predict cognitive load throughout the process model and propose process modifications to help performers better manage high cognitive load situations.

This project is a collaboration among software engineers, surgical team members, human factors researchers, and medical equipment instrumentation experts. Experimental prototype capabilities are being built and evaluated based upon process models of two cardiovascular surgery processes, Aortic Valve Replacement (AVR) and Coronary Artery Bypass Grafting (CABG). In this paper we describe our approach for each of the three research thrusts by illustrating our work for heparinization, a common subprocess of both AVR and CABG. Heparinization is a high-risk, error-prone procedure that involves complex team interactions and thus highlights the importance of this work for improving patient outcomes.
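
As a toy illustration of the third thrust, the sketch below uses the simpy discrete-event simulation library to replay a hypothetical four-step subprocess and report a predicted load at each step. The step names, durations, and load weights are invented; the project's simulations are driven by the full process model.

import random
import simpy

# Hypothetical (name, duration in minutes, predicted load) per step.
STEPS = [("prepare dose", 5, 2), ("confirm dose", 2, 5),
         ("administer", 3, 4), ("verify ACT", 4, 3)]

def performer(env, trace):
    for name, minutes, load in STEPS:
        trace.append((env.now, name, load))
        # Duration jitter stands in for performance variability.
        yield env.timeout(random.uniform(0.8, 1.2) * minutes)

trace = []
env = simpy.Environment()
env.process(performer(env, trace))
env.run()
for t, name, load in trace:
    print(f"t={t:5.1f} min  step={name:14s}  predicted load={load}")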

Jake Cyr, Georgios Karagkiaouris, Stefan C. Christov, Heather M. Conboy, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Elizabeth A. Henneman, Jenna L. Marquard, and Nancy Famigletti. Web-based smart checklists for guiding performers of safety-critical human-intensive processes. Poster at American Society for Engineering Education Northeast Section (ASEE-NE) Annual Conference, April 2017.

Leon J. Osterweil, Matt Bishop, Heather M. Conboy, Huong Phan, Borislava I. Simidchieva, George S. Avrunin, Lori A. Clarke, and Sean Peisert. Iterative analysis to improve key properties of critical human-intensive processes: An election security example. ACM Transactions on Privacy and Security, 20(2):Article 5, 31 pages, March 2017. [ DOI | .pdf ]

In this article, we present an approach for systematically improving complex processes, especially those involving human agents, hardware devices, and software systems. We illustrate the utility of this approach by applying it to part of an election process and show how it can improve the security and correctness of that subprocess. We use the Little-JIL process definition language to create a precise and detailed definition of the process. Given this process definition, we use two forms of automated analysis to explore whether specified key properties, such as security and safety policies, can be undermined. First, we use model checking to identify process execution sequences that fail to conform to event-sequence properties. After these are addressed, we apply fault tree analysis to identify when the misperformance of steps might allow undesirable outcomes, such as security breaches. The results of these analyses can provide assurance about the process; suggest areas for improvement; and, when applied to a modified process definition, evaluate proposed changes.

Heather M. Conboy, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Julian M. Goldman, Steven J. Yule, Marco A. Zenati, and Stefan C. Christov. Cognitive support during high-consequence episodes of care in cardiovascular surgery. In Proceedings of the 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, 2017. IEEE. [ .pdf ]

Despite significant efforts to reduce preventable adverse events in medical processes, such events continue to occur at unacceptable rates. This paper describes a computer science approach that uses formal process modeling to provide situationally aware monitoring and management support to medical professionals performing complex processes. These process models represent both normative and non-normative situations, and are validated by rigorous automated techniques such as model checking and fault tree analysis, in addition to careful review by experts. Context-aware Smart Checklists are then generated from the models, providing cognitive support during high-consequence surgical episodes. The approach is illustrated with a case study in cardiovascular surgery.

Stefan C. Christov, Jenna L. Marquard, George S. Avrunin, and Lori A. Clarke. Assessing the effectiveness of five process elicitation methods: A case study of chemotherapy treatment plan review. Applied Ergonomics, 59:364–376, 2017. [ DOI | .pdf ]

To reduce the probability of failures and to improve outcomes of safety-critical human-intensive processes, such as health care processes, it is important to be able to rigorously analyze such processes. The quality of that analysis often depends on having an accurate, detailed, and sufficiently complete understanding of the process being analyzed, where this understanding is typically represented as a formal process model that could then drive various rigorous analysis approaches. Developing this understanding and the corresponding formal process model may be difficult and, thus, a variety of process elicitation methods are often used. The work presented in this paper evaluates the effectiveness of five common elicitation methods in terms of their ability to elicit detailed process information necessary to support rigorous process analysis. These methods are employed to elicit typical steps and steps for responding to exceptional situations in a safety-critical health care process, the chemotherapy treatment plan review process. The results indicate strengths and weaknesses of each of the elicitation methods and suggest that it is preferable to apply multiple elicitation methods.

Stefan C. Christov, Heather M. Conboy, Nancy Famigletti, George S. Avrunin, Lori A. Clarke, and Leon J. Osterweil. Smart checklists to improve healthcare outcomes. In Proceedings of the 2016 International Workshop on Software Engineering in Healthcare Systems, pages 54–57, Austin, TX, 2016. [ .pdf ]

This paper presents an approach for automatically generating Smart Checklists—context-dependent, dynamically updated views of on-going medical processes based on current activities and previously validated process models of best practices. This approach addresses not only nominal scenarios but also provides guidance when exceptional situations arise. The framework for creating these checklists is described, along with an example and discussion of issues.

Huong Phan, George S. Avrunin, Lori A. Clarke, and Leon J. Osterweil. Improving process robustness by identifying vulnerabilities using fault tree analysis. In Workshop on Methodologies for Robustness Injection into Business Processes, 6 pages, Montreal, 2015. [ .pdf ]

This paper presents a systematic, incremental approach to identifying vulnerabilities in models of complex processes. The approach builds upon Fault Tree Analysis (FTA), but enhances it to deliver superior results. FTA has been applied to process models to determine what combinations of events can lead to a specified hazard—an undesired process outcome. Such combinations, called cut sets, provide valuable insights into process vulnerabilities, and thus are useful for improving process robustness. Manual FTA can be tedious and error-prone. Results of previous automated FTA work, however, are incomplete as they fail to recognize some vulnerabilities. Also, they often do not provide enough information to understand the real vulnerabilities, or they provide too many cut sets to be helpful, especially when processes are large and complex. Our approach takes process data and control dependences into account to generate more thorough results. It supports selective vulnerability exploration by initially presenting users with cut sets for a high-level, and thus smaller, fault tree; users can then select a cut set for more detailed analysis, culminating in concrete scenarios which show how events in the cut set could lead to the hazard. Our approach also produces more precise results by automatically eliminating inconsistent and spurious cut sets.
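
To make the cut-set vocabulary concrete, here is a minimal sketch that computes minimal cut sets from an AND/OR fault tree by bottom-up expansion. The event names are hypothetical, and the paper's algorithm, which exploits data and control dependences and prunes inconsistent cut sets, goes well beyond this.

from itertools import product

# A fault tree as nested tuples: ("AND", ...), ("OR", ...), or a basic event.
def cut_sets(gate):
    """Return the cut sets of a fault tree as a set of frozensets."""
    if isinstance(gate, str):                 # basic event
        return {frozenset([gate])}
    op, *children = gate
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                            # union of the children's cut sets
        return set().union(*child_sets)
    # AND: merge every combination of one cut set per child.
    return {frozenset().union(*combo) for combo in product(*child_sets)}

def minimize(sets):
    """Keep only minimal cut sets (drop proper supersets)."""
    return {s for s in sets if not any(t < s for t in sets)}

tree = ("OR",
        ("AND", "step A misperformed", "check X skipped"),
        ("AND", "step A misperformed", "check X skipped", "alarm ignored"),
        "wrong input supplied")
for cs in sorted(minimize(cut_sets(tree)), key=len):
    print(set(cs))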

Stefan C. Christov, George S. Avrunin, and Lori A. Clarke. Online deviation detection for medical processes. In American Medical Informatics Association Annual Symposium, pages 395–404, November 2014.

Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done.
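
A minimal sketch of the idea, with the process model reduced to the set of acceptable next events in each state; the model and event names below are invented, and the paper's models are hierarchical and concurrent rather than flat.

# Online deviation detection against a (toy) process model.
ACCEPTABLE_NEXT = {
    "start":              {"verify patient id"},
    "verify patient id":  {"check order", "handle id mismatch"},
    "handle id mismatch": {"verify patient id"},
    "check order":        {"administer"},
    "administer":         set(),
}

def detect_deviations(event_stream):
    state, deviations = "start", []
    for event in event_stream:
        if event in ACCEPTABLE_NEXT[state]:
            state = event                      # conforming step
        else:
            deviations.append((state, event))  # report as it occurs...
            if event in ACCEPTABLE_NEXT:       # ...then resynchronize
                state = event
    return deviations

# A synthetic execution with a seeded error (administering before the check).
print(detect_deviations(
    ["verify patient id", "administer", "check order", "administer"]))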

Heather M. Conboy, Jason K. Maron, Stefan C. Christov, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, and Marco A. Zenati. Process modelling of aortic cannulation in cardiac surgery: Toward a smart checklist to mitigate the risk of stroke. In Proceedings of the Fifth Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI '14), 10 pages, Cambridge, MA, September 2014. [ .pdf ]

Preventable adverse events related to surgery account for two thirds of hospital complications. Adherence to recommended processes of care has been suggested as a strategy to improve patient safety in surgery. This paper presents preliminary work that is exploring the use of a semantically rich process-modelling notation to describe and inform critical phases of common procedures in cardiac surgery. This work focuses on reducing strokes, a catastrophic and often preventable adverse event. The well-defined semantics of the process-modelling notation allow rigorous analysis techniques to be applied. In our work, model checking is applied to determine if the process as defined by the process model always adheres to event sequence requirements, and fault-tree analysis is applied to determine where the process is vulnerable to performance failures. The results from these analyses lead to validated and improved process models that are then used to generate context-sensitive, dynamic “smart” checklists. Future work will evaluate whether the introduction of dynamic checklists based on these models will reduce the number and severity of errors in cardiac surgery.

Matt Bishop, Heather M. Conboy, Huong Phan, Borislava I. Simidchieva, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, and Sean Peisert. Insider threat identification by process analysis. In Workshop on Research for Insider Threat, IEEE Computer Society Security and Privacy Workshops (SPW14), San Jose, CA, 2014. [ .pdf ]

The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential “insider”). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.

Heather M. Conboy, George S. Avrunin, and Lori A. Clarke. Modal abstraction view of requirements for medical devices used in healthcare processes. In SEHC '13: Proceedings of the 2013 ICSE Workshop on Software Engineering in Health Care, pages 24–27, San Francisco, May 2013. [ .pdf ]

Medical device requirements often depend on the healthcare processes in which the device is to be used. Since such processes may be complex, critical requirements may be specified inaccurately, or even missed altogether. We are investigating an automated requirement derivation approach that takes as input a model of the healthcare process along with a model of the device and tries to derive the requirements for that device. Our initial experience with this approach has shown that when the process and device involve complex behaviors, the derived requirements are also often complex and difficult to understand. In this paper, we describe an approach for creating a modal abstraction view of the derived requirements that decomposes each requirement based on the device's modes.

Stefan C. Christov, George S. Avrunin, and Lori A. Clarke. Considerations for online deviation detection in medical processes. In SEHC '13: Proceedings of the 2013 ICSE Workshop on Software Engineering in Health Care, pages 50–56, San Francisco, May 2013. [ .pdf ]

Medical errors are a major cause of unnecessary suffering and even death. To address this problem, we are investigating an approach for automatically detecting when an executing process deviates from a set of recommended ways to perform that process. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before something bad happens. This paper presents the proposed deviation detection approach, identifies some of the major research issues that arise, and discusses strategies to address these issues. A preliminary evaluation is performed by applying the approach to a part of a detailed process model. This model has been developed in an in-depth case study on modeling and analyzing a blood transfusion process.

Wilson C. Mertens, Stefan C. Christov, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Lucinda J. Cassells, and Jenna L. Marquard. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery. The Joint Commission Journal on Quality and Patient Safety, 38(11):497–505, November 2012.

Background: Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process.

Methods: The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process).

Results: The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008.

Discussion: Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

Huong Phan, George S. Avrunin, Matt Bishop, Lori A. Clarke, and Leon J. Osterweil. A systematic process-model-based approach for synthesizing attacks and evaluating them. In Proceedings of the 2012 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections, Bellevue, WA, August 2012. [ .pdf ]

George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Julian M. Goldman, and Tracy Rausch. Smart checklists for human-intensive medical systems. In Proceedings of the Workshop on Open Resilient human-aware Cyberphysical Systems (WORCS-2012), June 2012. [ DOI | .pdf ]

Human-intensive cyber-physical systems involve software applications and hardware devices, but also depend upon the expertise of human participants to achieve their goal. In this paper, we describe a project we have started to improve the effectiveness of such systems by providing Smart Checklists to support and guide human participants in carrying out their tasks, including their interactions with the devices and software applications.

Heather M. Conboy, George S. Avrunin, and Lori A. Clarke. Process-based derivation of requirements for medical devices. In 1st ACM International Health Informatics Symposium, pages 656–665, Arlington, VA, November 2010. [ DOI | .pdf ]

One goal of medical device certification is to show that a given medical device satisfies its requirements. But the requirements that should be met by such a device depend on the clinical processes in which the device is to be used, and such processes are increasingly large and complex. Critical requirements may thus be specified inaccurately or incompletely, or even missed altogether, and the use of the devices may lead to harm. Thus, we investigated a process-based requirement derivation approach that inputs a model that captures a particular medical process and a requirement that should be satisfied by that process, and outputs a derived requirement of the medical device that is sufficient to prevent any violations of the process requirement. Our approach combines an approach for generating assumptions for assume-guarantee reasoning with one for interface synthesis to automate the derivation of the medical device requirements. The proposed approach iteratively performs the requirement derivation by employing a model checker and a learning algorithm. We implemented this approach and evaluated it by applying it to two small case studies. Our experiences showed that the proposed approach could be successfully applied to abstract models of portions of real medical processes and that the derived requirements of the medical devices appeared useful and understandable.

Lori A. Clarke, Leon J. Osterweil, and George S. Avrunin. Supporting human-intensive systems. In Proceedings of 2010 FSE/SDP Workshop on the Future of Software Engineering Research, pages 87–91, Santa Fe, NM, November 2010. ACM.

George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Stefan C. Christov, Bin Chen, Elizabeth A. Henneman, Philip L. Henneman, Lucinda Cassells, and Wilson Mertens. Experience modeling and analyzing medical processes: UMass/Baystate medical safety project overview. In 1st ACM International Health Informatics Symposium, pages 316–325, Arlington, VA, November 2010. [ DOI | .pdf ]

This paper provides an overview of the UMass/Baystate Medical Safety project, which has been developing and evaluating tools and technology for modeling and analyzing medical processes. We describe the tools that currently comprise the Process Improvement Environment, PIE. For each tool, we illustrate the kinds of information that it provides and discuss how that information can be used to improve the modeled process as well as provide useful information that other tools in the environment can leverage. Because the process modeling notation that we use has rigorously defined semantics and supports creating relatively detailed process models (for example, our models can specify alternative ways of dealing with exceptional behavior and concurrency), a number of powerful analysis techniques can be applied. The cost of eliciting and maintaining such a detailed model is amortized over the range of analyses that can be applied to detect errors, vulnerabilities, and inefficiencies in an existing process or in proposed process modifications before they are deployed.

Danhua Wang, Jingui Pan, George S. Avrunin, Lori A. Clarke, and Bin Chen. An automatic failure mode and effect analysis technique for processes defined in the Little-JIL process definition language. In 22nd International Conference on Software Engineering and Knowledge Engineering, pages 765–770, July 2010. [ .pdf ]

Many processes are safety critical and therefore could benefit from proactive safety analysis techniques that attempt to identify weaknesses of such processes before they are put into use. In this paper, we propose an approach that automatically derives Failure Mode and Effect Analysis (FMEA) information from processes modeled in the Little-JIL process definition language. Typically FMEA information is created manually by skilled experts, an approach that is usually considered to be time-consuming, error-prone, and tedious when applied to complex processes. Although great care must be taken in creating an accurate process definition, with our approach this definition can then be used to create FMEA representations for a wide range of potential failures. In addition, our approach provides a complementary Fault Tree Analysis (FTA), thereby supporting two of the most widely used safety analysis techniques.
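
As a toy version of the derivation, the sketch below walks a process definition and emits one FMEA row per step and failure mode, locating effects by following artifact flow to downstream consumers. All names are hypothetical, and the Little-JIL-based derivation in the paper is substantially richer.

# Deriving FMEA rows from a (toy) process definition.
STEPS = {
    "measure dose": {"produces": "dose", "failure_modes": ["wrong value"]},
    "record dose":  {"consumes": "dose", "produces": "chart entry",
                     "failure_modes": ["entry omitted"]},
    "verify chart": {"consumes": "chart entry", "failure_modes": ["skipped"]},
}

def fmea_rows():
    for step, info in STEPS.items():
        produced = info.get("produces")
        effects = [s for s, i in STEPS.items()
                   if produced is not None and i.get("consumes") == produced]
        for mode in info["failure_modes"]:
            yield step, mode, effects or ["(process output)"]

print(f"{'Step':14s} {'Failure mode':14s} Effect on")
for step, mode, effects in fmea_rows():
    print(f"{step:14s} {mode:14s} {', '.join(effects)}")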

Stefan Christov, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, and Elizabeth A. Henneman. A benchmark for evaluating software engineering techniques for improving medical processes. In Lori A. Clarke and Jens Weber-Jahnke, editors, SEHC '10: Proceedings of the 2010 ICSE Workshop on Software Engineering in Health Care, pages 50–56, Cape Town, South Africa, May 2010. [ DOI | .pdf ]

The software engineering and medical informatics communities have been developing a range of approaches for reasoning about medical processes. To facilitate the comparison of such approaches, it would be desirable to have a set of medical examples, or benchmarks, that are easily available, described in considerable detail, and characterized in terms of the real-world complexities they capture. This paper presents one such benchmark and discusses a list of desiderata that medical benchmarks can be evaluated against.

Leon J. Osterweil, Lori A. Clarke, and George S. Avrunin. An integrated collection of tools for continuously improving the processes by which health care is delivered: A tool report. In Third International Workshop on Process-Oriented Information Systems in Healthcare (ProHealth '09), Ulm, Germany, September 2009. [ DOI | .pdf ]

This report will present a collection of tools that supports the precise definition, careful analysis, and execution of processes that coordinate the actions of humans, automated devices, and software systems for the delivery of health care. The tools have been developed over the past several years and are currently being evaluated through their application to four health care processes, blood transfusion, chemotherapy, emergency department operations, and identity verification. The tools are integrated with each other using the Eclipse framework or through the sharing of artifacts so that the internal representations generated by one might be used to expedite the actions of others. This integrated collection of tools is intended to support the continuous improvement of health care delivery processes. The process definitions developed through this framework are executable and are intended for eventual use in helping to guide actual health care workers in the performance of their activities, including the utilization of medical devices and information systems.

Jenna L. Marquard, Stefan Christov, Philip L. Henneman, Lori A. Clarke, Leon J. Osterweil, George S. Avrunin, Donald L. Fisher, Elizabeth A. Henneman, Megan M. Campbell, Tuan A. Pham, and Qi Ming Lin. Studying rigorously defined health care processes using a formal process modeling language, clinical simulation, observation, and eye tracking. In Proceedings of NDM9, the 9th International Conference on Naturalistic Decision Making, pages 239–240, London, June 2009. [ .pdf ]

Motivation: The complex nature of health care processes requires new methods for describing, capturing, and improving these processes. Research approach: We deployed a novel combination of methods—formal process modeling using a language called Little-JIL, simulations with embedded errors, observations, and eye tracking technology—to gauge how health care providers complete one complex process, patient identification. Findings/Design: These methods allowed us to thoroughly analyze how health care providers completed the patient identification process with and without embedded errors, and to record exactly what participants looked at during the simulations. Research limitations/Implications: We have used this set of methods to analyze only one type of health care process to date. Originality/Value: We can use these approaches to inform health care provider training, process redesign, and the design of technologies to support health care providers as they verify patients' identities.

Stefan Christov, George S. Avrunin, Lori A. Clarke, Philip L. Henneman, Jenna L. Marquard, and Leon J. Osterweil. Using event streams to validate process definitions. Technical Report UM-CS-2009-004, Department of Computer Science, University of Massachusetts, January 2009. [ .pdf ]

This paper describes preliminary work on validating process definitions by comparing a process definition to event streams derived from an actual execution of that process. The goal of this work is to find and remove discrepancies in the process definition before using that definition as the basis for various forms of analysis and decision making. The paper outlines important issues that need to be addressed and suggests possible approaches. The example used in this paper is based on a process from the medical domain.

Huong Phan, George S. Avrunin, and Lori A. Clarke. Considering the exceptional: Incorporating exceptions into property specifications. Technical Report UM-CS-2008-32, Department of Computer Science, University of Massachusetts, September 2008. [ .pdf ]

Property specifications concisely describe aspects of what a system is supposed to do. It is important that these specifications be correct, describing all the desired behavior and ruling out undesired behavior. Our experience shows that properties are sometimes specified incorrectly because specifiers fail to take into account some exceptional behaviors of the system. In previous work we presented Propel, a tool that guides specifiers through the process of creating understandable yet precise property specifications. Here we describe extensions to Propel that allow specifiers of a property to indicate what exceptions should be considered and what impact those exceptions should have on the acceptability of system behavior. Although the description is given in terms of the framework provided by Propel, the issues specifiers must consider would apply to other specification formalisms.

Elizabeth A. Henneman, Rachel Cobleigh, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, and Philip L. Henneman. Designing property specifications to improve the safety of the blood transfusion process. Transfusion Medicine Reviews, pages 1–9, June 2008. [ DOI ]

Computer scientists use a number of well-established techniques that have the potential to improve the safety of patient care processes. One is the formal definition of the properties of a process. Even highly regulated processes, such as laboratory specimen acquisition and transfusion therapy, use guidelines that may be vague, misunderstood, and hence erratically implemented. Examining processes in a systematic way has led us to appreciate the potential variability in routine health care practice and the impact of this variability on patient safety in the clinical setting. The purpose of this article is to discuss the use of innovative computer science techniques as a means of formally defining and specifying certain desirable goals of common, high-risk patient care processes. Our focus is on describing the specification of process properties, that is, the high-level goals of a process that ultimately dictate why a process should be performed in a given manner.

Lori A. Clarke, George S. Avrunin, and Leon J. Osterweil. Using software engineering technology to improve the quality of medical processes. In ICSE Companion '08: Companion of the 30th International Conference on Software Engineering, pages 889–898. ACM, May 2008. (Invited keynote address.). [ DOI | .pdf ]

In this paper, we describe some of the key observations resulting from our work on using software engineering technologies to help detect errors in medical processes. In many ways, medical processes are similar to distributed systems in their complexity and proneness to contain errors. We have been investigating the application of a continuous process improvement approach to medical processes in which detailed and semantically rich models of the medical processes are created and then subjected to rigorous analyses. The technologies we applied helped improve understanding about the processes and led to the detection of errors and subsequent improvements to those processes. This work is still preliminary, but is suggesting new research directions for medical process improvement, software engineering technologies, and the applicability of these technologies to other domains involving human-intensive processes.

Bin Chen, George S. Avrunin, Elizabeth A. Henneman, Lori A. Clarke, Leon J. Osterweil, and Philip L. Henneman. Analyzing medical processes. In ICSE '08: Proceedings of the 30th International Conference on Software Engineering, pages 623–632. ACM, May 2008. [ DOI | .pdf ]

This paper shows how software engineering technologies used to define and analyze complex software systems can also be effective in detecting defects in human-intensive processes used to administer healthcare. The work described here builds upon earlier work demonstrating that healthcare processes can be defined precisely. This paper describes how finite-state verification can be used to help find defects in such processes as well as find errors in the process definitions and property specifications. The paper includes a detailed example, based upon a real-world process for transfusing blood, where the process defects that were found led to improvements in the process.

Shangzhu Wang, George S. Avrunin, and Lori A. Clarke. Plug-and-play architectural design and verification. In Rogério de Lemos, Felicita Di Giandomenico, Cristina Gacek, Henry Muccini, and Marlon Vieira, editors, Architecting Dependable Systems V, number 5135 in LNCS, pages 273–297. Springer, 2008. [ DOI | .pdf ]

In software architecture, components represent the computational units of a system and connectors represent the interactions among those units. Making decisions about the semantics of these interactions is a key part of the design process. It is often difficult, however, to choose the appropriate interaction semantics due to the wide range of alternatives and the complexity of the system behavior affected by those choices. Techniques such as finite-state verification can be used to evaluate the impact of these design choices on the overall system behavior.

This paper presents the Plug-and-Play approach that allows designers to experiment with alternative design choices of component interactions in a plug-and-play manner. With this approach, connectors representing specific interaction semantics are composed from a library of predefined, reusable building blocks. In addition, standard interfaces for components are defined that reduce the impact of interaction changes on the components’ computations. This approach facilitates design-time verification by improving the reusability of component models and by providing reusable formal models for the connector building blocks, thereby reducing model-construction time for finite-state verification.
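
A minimal sketch of the plug-and-play idea: a connector is assembled from reusable building blocks, and components are written against a fixed put/get interface, so swapping interaction semantics leaves component code untouched. The block names are illustrative, not the library from the paper.

from collections import deque

class Buffered:
    """Asynchronous interaction: messages queue between the parties."""
    def __init__(self):
        self.q = deque()
    def put(self, m):
        self.q.append(m)
    def get(self):
        return self.q.popleft() if self.q else None

class Logged:
    """A wrapper block adding observation without changing the semantics."""
    def __init__(self, inner):
        self.inner = inner
    def put(self, m):
        print("send:", m)
        self.inner.put(m)
    def get(self):
        return self.inner.get()

def producer(conn):            # components see only put/get
    for i in range(3):
        conn.put(f"msg{i}")

def consumer(conn, n):
    for _ in range(n):
        print("recv:", conn.get())

conn = Logged(Buffered())      # recompose the blocks to change semantics
producer(conn)
consumer(conn, 3)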

Stephen F. Siegel, Anastasia Mironova, George S. Avrunin, and Lori A. Clarke. Combining symbolic execution with model checking to verify parallel numerical programs. ACM Trans. Softw. Eng. Method., 17(2):1–34, 2008. (This is a revised and expanded version of a paper that appeared in ISSTA '06.). [ DOI | .pdf ]

We present a method to verify the correctness of parallel programs that perform complex numerical computations, including computations involving floating-point arithmetic. This method requires that a sequential version of the program be provided, to serve as the specification for the parallel one. The key idea is to use model checking, together with symbolic execution, to establish the equivalence of the two programs. In this approach the path condition from symbolic execution of the sequential program is used to constrain the search through the parallel program. To handle floating-point operations, three different types of equivalence are supported. Several examples are presented, demonstrating the approach and actual errors that were found. Limitations and directions for future research are also described.
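
The flavor of the method can be seen with a symbolic-algebra stand-in: run both versions on symbolic inputs and compare the resulting expressions. The sketch below uses sympy and checks only real-arithmetic equivalence of a sequential sum against a reassociated "parallel" one; the actual approach additionally uses the sequential path condition to constrain the search through the parallel program and supports other equivalence notions for floating point.

import sympy as sp

x = sp.symbols("x0:4")          # symbolic inputs stand in for arbitrary data

def sequential_sum(v):          # the sequential "specification"
    acc = sp.Integer(0)
    for e in v:
        acc = acc + e
    return acc

def parallel_sum(v):            # reassociated, as a two-process reduction would
    return (v[0] + v[1]) + (v[2] + v[3])

# Equivalent as real arithmetic, even though the two evaluation orders need
# not produce bitwise-identical floating-point results.
print(sp.simplify(sequential_sum(x) - parallel_sum(x)) == 0)   # True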

Jamieson M. Cobleigh, George S. Avrunin, and Lori A. Clarke. Breaking up is hard to do: An evaluation of automated assume-guarantee reasoning. ACM Trans. Softw. Eng. Method., 17(2):1–52, 2008. (This is a revised and expanded version of a paper that appeared in ISSTA '06.). [ DOI | .pdf ]

Finite-state verification techniques are often hampered by the state-explosion problem. One proposed approach for addressing this problem is assume-guarantee reasoning, where a system under analysis is partitioned into subsystems and these subsystems are analyzed individually. By composing the results of these analyses, it can be determined whether or not the system satisfies a property. Because each subsystem is smaller than the whole system, analyzing each subsystem individually may reduce the overall cost of verification. Often the behavior of a subsystem is dependent on the subsystems with which it interacts, and thus it is usually necessary to provide assumptions about the environment in which a subsystem executes. Because developing assumptions has been a difficult manual task, the evaluation of assume-guarantee reasoning has been limited. Using recent advances for automatically generating assumptions, we undertook a study to determine if assume-guarantee reasoning provides an advantage over monolithic verification. In this study, we considered all two-way decompositions for a set of systems and properties, using two different verifiers, FLAVERS and LTSA. By increasing the number of repeated tasks in these systems, we evaluated the decompositions as they were scaled. We found that in only a few cases can assume-guarantee reasoning verify properties on larger systems than monolithic verification can, and in these cases the systems that can be analyzed are only a few sizes larger. Although these results are discouraging, they provide insight about research directions that should be pursued and highlight the importance of experimental evaluation in this area.
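
The study automates the standard asymmetric assume-guarantee rule, in which a generated assumption A mediates between the two halves of a decomposition:

\[
\frac{\langle A \rangle\, M_1\, \langle P \rangle \qquad
      \langle \mathit{true} \rangle\, M_2\, \langle A \rangle}
     {\langle \mathit{true} \rangle\, M_1 \parallel M_2\, \langle P \rangle}
\]

The first premise checks M1 against P in any environment satisfying A; the second discharges A against M2; the conclusion lifts P to the composed system. The study's question is whether the two premise checks are ever cheaper than checking the conclusion monolithically.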

Stefan Christov, Bin Chen, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, David Brown, Lucinda Cassells, and Wilson Mertens. Formally defining medical processes. Methods of Information in Medicine, 47(5):392–398, 2008. [ DOI | .pdf ]

Stephen F. Siegel and George S. Avrunin. Verification of halting properties for MPI programs using nonblocking operations. In Franck Cappello, Thomas Herault, and Jack Dongarra, editors, Recent Advances in Parallel Virtual Machine and Message Passing Interface: 14th European PVM/MPI Users' Group Meeting, volume 4757 of Lecture Notes in Computer Science, pages 326–334, Paris, September 2007. Springer-Verlag. [ DOI | .pdf ]

We show that many important properties of certain MPI programs can be verified by considering only a class of executions in which all communication takes place synchronously. In previous work, we showed that similar results hold for MPI programs that use only blocking communication (and avoid certain other constructs, such as MPI_ANY_SOURCE); in this paper we show that the same conclusions hold for programs that also use the nonblocking functions MPI_ISEND, MPI_IRECV, and MPI_WAIT. These facts can be used to dramatically reduce the number of states explored when using model checking techniques to verify properties such as freedom from deadlock in such programs.
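
For concreteness, here is a tiny two-process program in the fragment the theorem covers, written with the mpi4py bindings (run with mpiexec -n 2); it uses only the nonblocking operations and no wildcard receives, so, by the result above, halting properties can be checked over its synchronous executions alone. This is an invented example, not one from the paper.

from mpi4py import MPI

comm = MPI.COMM_WORLD
if comm.Get_rank() == 0:
    req = comm.isend("token", dest=1, tag=0)   # MPI_ISEND
    req.wait()                                 # MPI_WAIT
else:
    req = comm.irecv(source=0, tag=0)          # MPI_IRECV, fixed source
    print("received:", req.wait())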

Leon J. Osterweil, George S. Avrunin, Bin Chen, Lori A. Clarke, Rachel L. Cobleigh, Elizabeth A. Henneman, and Philip L. Henneman. Engineering medical processes to improve their safety: An experience report. In J. Ralyté, S. Brinkkemper, and B. Henderson-Sellers, editors, Situational Method Engineering: Fundamentals and Experiences, pages 267–282, Geneva, September 2007. Springer. [ DOI | .pdf ]

This paper describes experiences in using precise definitions of medical processes as the basis for analyses aimed at finding and correcting defects leading to improvements in patient safety. The work entails the use of the Little-JIL process definition language for creating the precise definitions, the Propel system for creating precise specifications of process requirements, and the FLAVERS system for analyzing process definitions. The paper describes the details of using these technologies, employing a blood transfusion process as an example. Although this work is still ongoing, early experiences suggest that our approach is viable and promising. The work has also helped us to learn about the desiderata for process definition and analysis technologies that are intended to be used to engineer methods.

Stefan Christov, Bin Chen, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, David Brown, Lucinda Cassells, and Wilson Mertens. Rigorously defining and analyzing medical processes: An experience report. In Models in Software Engineering: Workshops and Symposia at MoDELS 2007, Reports and Revised Selected Papers, number 5002 in LNCS, pages 118–131, Nashville, TN, September 2007. [ DOI | .pdf ]

This paper describes experiences in using the precise definition of a process for chemotherapy administration as the basis for analyses aimed at finding and correcting defects, leading to improvements in efficiency and patient safety. The work is a collaboration between Computer Science researchers and members of the professional staff of a major regional cancer center. The work entails the use of the Little-JIL process definition language for creating the precise definitions, the PROPEL system for creating precise specifications of process requirements, and the FLAVERS system for analyzing process definitions. The paper describes the details of using these technologies, by demonstrating how they have been applied to successfully identify defects in the chemotherapy process. Although this work is still ongoing, early experiences suggest that our approach is viable and promising. The work has also helped us to learn about the desiderata for process definition and analysis technologies that are expected to be more broadly applicable to other domains.

Elizabeth A. Henneman, Rachel Cobleigh, Kimberly Frederick, Ethan Katz-Bassett, George S. Avrunin, Lori A. Clarke, Leon J. Osterweil, Chester Andrzejewski, Jr., Karen Merrigan, and Philip L. Henneman. Increasing patient safety and efficiency in transfusion therapy using formal process definitions. Transfusion Medicine Reviews, 21(1):49–57, January 2007. [ DOI ]

The administration of blood products is a common, resource intensive, potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted towards quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (i.e., at the point of care) has essentially remained unchanged over the past several decades.

Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, while useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions.

One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential errors and improve the transfusion process has not previously been reported.

The purpose of this paper is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.

Rachel L. Cobleigh, George S. Avrunin, and Lori A. Clarke. User guidance for creating precise and accessible property specifications. In Proceedings of the 14th ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 208–218, Portland, OR, November 2006. [ DOI | .pdf ]

Property specifications concisely describe what a system is supposed to do. No matter what notation is used to describe them, however, it is difficult to represent these properties correctly, since there are often subtle, but important, details that need to be considered. Propel aims to guide users through the process of creating properties that are both accessible and mathematically precise, by providing templates for commonly occurring property patterns. These templates explicitly represent these subtle details as options. In this paper, we present a new representation of these templates, a Question Tree notation that asks users a hierarchical sequence of questions about their intended properties. The Question Tree notation is particularly useful for helping to select the appropriate template, but it also complements the finite-state automaton and disciplined natural language representations provided by Propel. We also report on some case studies that are using Propel and on an experimental evaluation of the understandability of the disciplined natural language representation.

Stephen F. Siegel, Anastasia Mironova, George S. Avrunin, and Lori A. Clarke. Using model checking with symbolic execution to verify parallel numerical programs. In Mauro Pezzé, editor, Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 157–168, July 2006. [ DOI | .pdf ]

We present a method to verify the correctness of parallel programs that perform complex numerical computations, including computations involving floating-point arithmetic. The method requires that a sequential version of the program be provided, to serve as the specification for the parallel one. The key idea is to use model checking, together with symbolic computation, to establish the equivalence of the two programs.

Jamieson M. Cobleigh, George S. Avrunin, and Lori A. Clarke. Breaking up is hard to do: An investigation of decomposition for assume-guarantee reasoning. In Mauro Pezzé, editor, Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 97–108, Portland, ME, July 2006. [ DOI | .pdf ]

Finite-state verification techniques are often hampered by the state-explosion problem. One proposed approach for addressing this problem is assume-guarantee reasoning. Using recent advances in assume-guarantee reasoning that automatically generate assumptions, we undertook a study to determine if assume-guarantee reasoning provides an advantage over monolithic verification. In this study, we considered all two-way decompositions for a set of systems and properties, using two different verifiers, FLAVERS and LTSA. By increasing the number of repeated tasks, we evaluated the decompositions as the systems were scaled. In only a few cases could assume-guarantee reasoning verify properties on larger systems than monolithic verification and, in these cases, assume-guarantee reasoning could only verify these properties on systems a few sizes larger than monolithic verification. This discouraging result, although preliminary, raises doubts about the usefulness of assume-guarantee reasoning.

Shangzhu Wang, George S. Avrunin, and Lori A. Clarke. Architectural building blocks for plug-and-play design. In Ian Gorton, George T. Heineman, Ivica Crnkovic, Heinz W. Schmidt, Judith A. Stafford, Clemens A. Szyperski, and Kurt Wallnau, editors, Proceedings of the 9th International SIGSOFT Symposium on Component-Based Software Engineering (CBSE 2006), number 4063 in LNCS, pages 98–113, Västerås, Sweden, June 2006. [ DOI | .pdf ]

One of the distinguishing features of distributed systems is the importance of the interaction mechanisms that are used to define how the sequential components interact with each other. Given the complexity of the behavior that is being described and the large design space of various alternatives, choosing appropriate interaction mechanisms is difficult. In this paper, we propose a component-based specification approach that allows designers to experiment with alternative interaction semantics. Our approach is also integrated with design-time verification to provide feedback about the correctness of the overall system design. In this approach, connectors representing specific interaction semantics are composed from reusable building blocks. Standard communication interfaces for components are defined to reduce the impact of changing interactions on components' computations. The increased reusability of both components and connectors also allows savings at model-construction time for finite-state verification.

Jianbin Tan, George S. Avrunin, and Lori A. Clarke. Managing space for finite-state verification. In Proceedings of the 28th International Conference on Software Engineering, pages 152–161, Shanghai, May 2006. [ DOI | .pdf ]

Finite-state verification (FSV) techniques attempt to prove properties about a model of a system by examining all possible behaviors of that model. This approach suffers from the so-called state-explosion problem, where the size of the model or the analysis costs may be exponentially large with respect to the size of the system. Approaches that use symbolic data structures to represent the examined state space appear to provide an important optimization. In this paper, we investigate applying two symbolic data structures, Binary Decision Diagrams (BDDs) and Zero-suppressed Binary Decision Diagrams (ZDDs), in two FSV tools, LTSA and FLAVERS. We describe an experiment showing that these two symbolic approaches can improve the performance of both FSV tools and are more efficient than two other algorithms that store the state space explicitly. Moreover, the ZDD-based approach often runs faster and can handle larger systems than the BDD-based approach.
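
As a toy version of the first of the two symbolic structures, the sketch below builds a reduced ordered BDD with hash-consed nodes and the standard memoized Shannon-expansion apply; the tools' implementations, and the ZDD variant, are of course far more engineered.

from math import inf

table, memo = {}, {}

def mk(var, lo, hi):
    """Hash-consed BDD node; eliminate redundant tests (lo == hi)."""
    if lo == hi:
        return lo
    return table.setdefault((var, lo, hi), (var, lo, hi))

def var_of(u):
    return u[0] if isinstance(u, tuple) else inf   # terminals sort last

def apply_op(op, u, v):
    """Combine two BDDs under a boolean operator by Shannon expansion."""
    if isinstance(u, bool) and isinstance(v, bool):
        return op(u, v)
    key = (op, u, v)
    if key not in memo:
        m = min(var_of(u), var_of(v))
        u0, u1 = (u[1], u[2]) if var_of(u) == m else (u, u)
        v0, v1 = (v[1], v[2]) if var_of(v) == m else (v, v)
        memo[key] = mk(m, apply_op(op, u0, v0), apply_op(op, u1, v1))
    return memo[key]

AND = lambda a, b: a and b
OR = lambda a, b: a or b

# Characteristic function of the state set {00, 11} over variables 0 and 1.
x0, x1 = mk(0, False, True), mk(1, False, True)
both = apply_op(AND, x0, x1)
neither = apply_op(AND, mk(0, True, False), mk(1, True, False))
print(apply_op(OR, both, neither))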

Bin Chen, George S. Avrunin, Lori A. Clarke, and Leon J. Osterweil. Automatic fault-tree derivation from Little-JIL process definitions. In Qing Wang, Dietmar Pfahl, David M. Raffo, and Paul Wernick, editors, Proceedings of SPW/ProSim 2006, number 3966 in LNCS, pages 150–158, Shanghai, May 2006. [ DOI | .pdf ]

Defects in safety critical processes can lead to accidents that result in harm to people or damage to property. Therefore, it is important to find ways to detect and remove defects from such processes. Earlier work has shown that Fault Tree Analysis (FTA) can be effective in detecting safety critical process defects. Unfortunately, it is difficult to build a comprehensive set of Fault Trees for a complex process, especially if this process is not completely well-defined. The Little-JIL process definition language has been shown to be effective for defining complex processes clearly and precisely at whatever level of granularity is desired. In this work, we present an algorithm for generating Fault Trees from Little-JIL process definitions. We demonstrate the value of this work by showing how FTA can identify safety defects in the process from which the Fault Trees were automatically derived.

Shangzhu Wang, George S. Avrunin, and Lori A. Clarke. Verification support for plug-and-play architectural design. Extended abstract in Proceedings of the Workshop on the Role of Software Architecture in Testing and Analysis, Portland, ME, July 2006. [ DOI | .pdf ]

In software architecture, components are intended to represent the computational units of the system and connectors are intended to represent the interactions between those units. Choosing the semantics of these interactions is a key part of the design process, but the wide range of alternatives from which to choose and the complexity of the behavior affected by the choices makes it difficult to get them right. We propose an approach in which connectors with particular semantics are constructed from a library of pre-defined building blocks. Changes in the semantics of a connector can be accomplished by adding new building blocks to the connector, or by removing or replacing some of its existing building blocks. In our approach, a small set of standard interfaces allows components to communicate with each other through a wide variety of connectors, so the impact on components for even substantial changes in the semantics of the connectors is minimized.

In this paper, we focus on the way this approach supports design-time verification to provide feedback about the correctness of the design. By enhancing the re-use of models of both components and connectors, this approach has the potential to significantly reduce the cost of verification as a design evolves.

Richard M. Chang, George S. Avrunin, and Lori A. Clarke. Property inference from program executions. Technical report, Department of Computer Science, University of Massachusetts, 2006. [ .pdf ]

Software verification techniques require that properties defining the intended behavior of a system be specified. Generating such properties is often very difficult and serves as an impediment to the adoption of verification techniques. Techniques that leverage program executions to infer these properties are a promising avenue for automatically generating them. In this paper, we propose a property inference approach that leverages event traces derived from program executions to efficiently infer properties that are subtle variations of commonly occurring properties. We define inference templates that represent sets of these properties and describe our inference algorithm that refines these templates based on event traces.
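
A minimal sketch of template-based inference over event traces, using one common template (the response pattern, "every p is eventually followed by an s") with a single refinable option; the paper's templates and refinement algorithm are considerably richer.

def check_response(trace, p, s, allow_repeated_p):
    """Does the trace satisfy the response template under the given option?"""
    pending = 0
    for e in trace:
        if e == p:
            if pending and not allow_repeated_p:
                return False
            pending += 1
        elif e == s:
            pending = 0
    return pending == 0

def infer(traces, p, s):
    """Return the strictest variant of the template that all traces satisfy."""
    for allow in (False, True):    # try the stricter variant first
        if all(check_response(t, p, s, allow) for t in traces):
            detail = "allowed" if allow else "forbidden"
            return f"every {p} is followed by {s} (repeated {p} {detail})"
    return "no variant of the response template holds"

print(infer([["req", "ack"], ["req", "req", "ack"]], "req", "ack"))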

George S. Avrunin, Lori A. Clarke, Elizabeth A. Henneman, and Leon J. Osterweil. Complex medical processes as context for embedded systems. ACM SIGBED Review, 3(4):9–14, 2006. [ DOI | .pdf ]

Many embedded systems are intended for use in complex and highly concurrent processes with multiple human agents. In these cases, the requirements for the system depend critically on the details of the process. If certification is to be useful for such systems, it must take the details of the process into account. In this paper, we describe some current research involving the formal definition and analysis of complex medical processes. We discuss the ways in which this work may provide a basis for a more complete understanding of the behavior of medical devices in the context of the processes in which they are used, and thus for certification methods for sophisticated embedded systems.

Stephen F. Siegel and George S. Avrunin. Modeling wildcard-free MPI programs for verification. In Symposium on Principles and Practice of Parallel Programming (PPoPP '05), pages 95–106, Chicago, IL, June 2005. [ DOI | .pdf ]

We give several theorems that can be used to substantially reduce the state space that must be considered in applying finite-state verification techniques, such as model checking, to parallel programs written using a subset of MPI. We illustrate the utility of these theorems by applying them to a small but realistic example.

Lori A. Clarke, Yao Chen, George S. Avrunin, Bin Chen, Rachel Cobleigh, Kim Frederick, Elizabeth A. Henneman, and Leon J. Osterweil. Process programming to support medical safety. In Mingshu Li, Barry Boehm, and Leon J. Osterweil, editors, Unifying the Software Process Spectrum: International Software Process Workshop, SPW 2005, number 3840 in LNCS, pages 347–359, Beijing, May 2005. [ DOI | .pdf ]

Medical errors are now recognized as a major cause of untimely deaths or other adverse medical outcomes. To reduce the number of medical errors, the Medical Safety Project at the University of Massachusetts is exploring using a process programming language to define medical processes, a requirements elicitation framework for specifying important medical properties, and finite-state verification tools to evaluate whether the process definitions adhere to these properties. In this paper, we describe our experiences to date. Although our findings are preliminary, we have found that defining and evaluating processes helps to detect weaknesses in these processes and leads to improved medical process definitions.

George S. Avrunin, Stephen F. Siegel, and Andrew R. Siegel. Finite-state verification for high-performance computing. In Phillip Johnson, editor, Proceedings of the Second International Workshop on Software Engineering for High Performance Computing System Applications, pages 68–73, St. Louis, MO, May 2005. [ DOI | .pdf ]

In this paper, we discuss some preliminary results in applying finite-state verification techniques to high performance parallel codes, with a particular emphasis on scientific programs that employ the widely-used Message Passing Interface (MPI). These results suggest that such techniques may have significant potential for improving both the productivity of developers of parallel scientific programs and the quality of those programs. We also briefly sketch some of the research issues that must be addressed to achieve that potential.

Shangzhu Wang, George S. Avrunin, and Lori A. Clarke. Architectural building blocks for plug-and-play system design. Technical report, Department of Computer Science, University of Massachusetts, Amherst, 2005.

One of the distinguishing features of distributed systems is the importance of the interaction mechanisms that are used to define how the sequential components interact with each other. Given the complexity of the behavior that is being described and the large design space of various alternatives from which to choose, choosing appropriate interaction mechanisms is usually difficult. In this paper, we propose a compositional specification approach that facilitates experimentation with alternative interaction semantics. In this approach, connectors representing specific interaction semantics are composed from predefined building blocks and a standard component interface and protocol are defined. Such an approach reduces the impact of change, since components often do not have to change when the interaction semantics are changed. This approach also facilitates design-time verification, since abstract models of the unchanged components and of the building blocks can often be reused. We evaluate our approach by defining building blocks for a family of message passing mechanisms and present several versions of an example that use different message passing semantics from this family. The example illustrates how this approach supports "plug-and-play" system design and verification.

Jianbin Tan, George S. Avrunin, Lori A. Clarke, Shlomo Zilberstein, and Stefan Leue. Heuristic-guided counterexample search in FLAVERS. In Matthew Dwyer, editor, Proceedings of the 12th ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 201–210, Newport Beach, CA, November 2004. [ DOI | .pdf ]

One of the benefits of finite-state verification (FSV) tools, such as model checkers, is that a counterexample is provided when the property cannot be verified. Not all counterexamples, however, are equally useful to the analysts trying to understand and localize the fault. Often counterexamples are so long that they are hard to understand. Thus, it is important for FSV tools to find short counterexamples and to do so quickly. Commonly used search strategies, such as breadth-first and depth-first search, do not usually perform well in both of these dimensions. In this paper, we investigate heuristic-guided search strategies for the FSV tool FLAVERS and propose a novel two-stage counterexample search strategy. We describe an experiment showing that this two-stage strategy, when combined with appropriate heuristics, is extremely effective at quickly finding short counterexamples for a large set of verification problems.

Jianbin Tan, George S. Avrunin, and Lori A. Clarke. Heuristic-based model refinement for FLAVERS. In Proceedings of the Twenty-Sixth International Conference on Software Engineering, pages 635–644, May 2004. [ .pdf ]

FLAVERS is a finite-state verification approach that allows an analyst to incrementally add constraints to improve the precision of the model of the system being analyzed. Except for trivial systems, however, it is impractical to compute which constraints should be selected to produce precise results for the least cost. Thus, constraint selection has been a manual task, guided by the intuition of the analyst. In this paper, we investigate several heuristics for selecting task automaton constraints, a kind of constraint that tends to reduce infeasible task interactions. We describe an experiment showing that one of these heuristics is extremely effective at improving the precision of the analysis results without significantly degrading performance.

Stephen F. Siegel and George S. Avrunin. Verification of MPI-based software for scientific computation. In Susanne Graf and Laurent Mounier, editors, Model Checking Software: 11th International SPIN Workshop, number 2989 in LNCS, pages 286–303, Barcelona, April 2004. Springer-Verlag. [ DOI | .pdf ]

We explore issues related to the application of finite-state verification techniques to scientific computation software employing the widely-used Message-Passing Interface (MPI). Many of the features of MPI that are important for programmers present significant difficulties for model checking. In this paper, we examine a small parallel program that computes the evolution in time of a discretized function u defined on a 2-dimensional domain and governed by the diffusion equation. Although this example is simple, it makes use of many of the problematic features of MPI. We discuss the modeling of these features and use Spin and INCA to verify several correctness properties for various configurations of this program. Finally, we describe some general theorems that can be used to justify simplifications in finite-state models of MPI programs and that guarantee certain properties must hold for any program using only a particular subset of MPI.

Rachel L. Smith, George S. Avrunin, and Lori A. Clarke. From natural language requirements to rigorous property specifications. In Workshop on Software Engineering for Embedded Systems (SEES 2003): From Requirements to Implementation, pages 40–46, Chicago, IL, September 2003. [ .pdf ]

Property specifications concisely describe selected aspects of what a software system is supposed to do. It is surprisingly difficult to write these properties correctly. Although there are rigorous mathematical formalisms for representing properties, these are often difficult to use. No matter what notation is used, however, there are often subtle, but important, details that need to be considered. The Propel tool aims to make the job of writing and understanding properties easier by providing templates that explicitly capture these details as options for commonly-occurring property patterns. These templates are represented using “disciplined” natural language, decision trees, and finite-state automata, allowing the developer to easily move between these representations.

Stephen F. Siegel and George S. Avrunin. Analysis of MPI programs. Technical Report UM-CS-2003-036, Department of Computer Science, University of Massachusetts, 2003. [ .pdf ]

We investigate the application of formal verification techniques to parallel programs that employ the Message Passing Interface (MPI). We develop a formal model of a subset of MPI, and then prove a number of theorems about that model that ameliorate or eliminate altogether the state explosion problem. As an example, we show that if one wishes to verify freedom from deadlock, it suffices to consider only synchronous executions.
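
A concrete (and invented) illustration of the deadlock result, written against the mpi4py bindings: in the head-to-head exchange below, standard sends may be buffered and let both processes reach their receives, but under synchronous semantics both sends block forever. A search restricted to synchronous executions therefore already exposes the defect.

    # Run with: mpirun -n 2 python head_to_head.py  (assumes exactly 2 ranks)
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    peer = 1 - rank

    # comm.send may buffer the message and return; the synchronous variant
    # below blocks until a matching receive is posted, so the two processes
    # wait on each other and the program deadlocks.
    comm.ssend(rank, dest=peer)
    value = comm.recv(source=peer)   # never reached under synchronous sends
    print("process", rank, "received", value)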

Rachel L. Smith, George S. Avrunin, Lori A. Clarke, and Leon J. Osterweil. Propel: An approach supporting property elucidation. In Proceedings of the Twenty-Fourth International Conference on Software Engineering, pages 11–21, Orlando, FL, May 2002. [ DOI | .pdf ]

Property specifications concisely describe what a software system is supposed to do. It is surprisingly difficult to write these properties correctly. There are rigorous mathematical formalisms for representing properties, but these are often difficult to use. No matter what notation is used, however, there are often subtle, but important, details that need to be considered. Propel aims to make the job of writing and understanding properties easier by providing templates that explicitly capture these details as options for commonly-occurring property patterns. These templates are represented using both "disciplined" natural language and finite-state automata, allowing the specifier to easily move between these two representations.

Stephen F. Siegel and George S. Avrunin. Improving the precision of INCA by eliminating solutions with spurious cycles. IEEE Trans. Softw. Eng., 28(2):115–128, 2002. (This is a revised and expanded version of a paper that originally appeared in ISSTA 2000.). [ DOI | .pdf ]

The Inequality Necessary Condition Analyzer (INCA) is a finite-state verification tool that has been able to check properties of some very large concurrent systems. INCA checks a property of a concurrent system by generating a system of inequalities that must have integer solutions if the property can be violated. There may, however, be integer solutions to the inequalities that do not correspond to an execution violating the property. INCA thus accepts the possibility of an inconclusive result in exchange for greater tractability. We describe here a method for eliminating one of the two main sources of these inconclusive results.
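
To convey the flavor of these necessary conditions (the events and the property below are invented, not taken from the paper): counts of events over any finite trace must satisfy simple linear relations, and a violation of the property adds further constraints.

    % A rendezvous at entry E cannot be accepted more often than it is called:
    \[ n_{\mathrm{accept}(E)} \;\le\; n_{\mathrm{call}(E)} \]
    % A violation of mutual exclusion requires both critical sections to be
    % occupied at the end of some prefix:
    \[ n_{\mathrm{enter}_1} - n_{\mathrm{exit}_1} \;\ge\; 1,
       \qquad
       n_{\mathrm{enter}_2} - n_{\mathrm{exit}_2} \;\ge\; 1 \]
    % If the combined integer system is infeasible, no execution can violate
    % the property; a spurious integer solution yields an inconclusive result.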

Stephen F. Siegel and George S. Avrunin. Improving the precision of INCA by preventing spurious cycles. In Mary Jean Harrold, editor, Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 191–200, Portland, OR, August 2000. ACM Press. [ .pdf ]

The Inequality Necessary Condition Analyzer (INCA) is a finite-state verification tool that has been able to check properties of some very large concurrent systems. INCA checks a property of a concurrent system by generating a system of inequalities that must have integer solutions if the property can be violated. There may, however, be integer solutions to the inequalities that do not correspond to an execution violating the property. INCA thus accepts the possibility of an inconclusive result in exchange for greater tractability. We describe here a method for eliminating one of the two main sources of these inconclusive results.

George S. Avrunin, James C. Corbett, and Matthew B. Dwyer. Benchmarking finite-state verifiers. Software Tools for Technology Transfer, 2(4):317–320, 2000. [ DOI | .pdf ]

A variety of largely automated methods have been proposed for finite-state verification of software systems. Although anecdotal accounts of success are widely reported, there is very little empirical data on the relative strengths and weaknesses of those methods across a broad range of analysis questions and systems. But this information is critical for the transfer of the technology from research to practice. We review some of the problems involved in obtaining this information and suggest several ways in which the community can facilitate empirical evaluation of finite-state verification tools.

George S. Avrunin, James C. Corbett, Matthew B. Dwyer, Corina S. Păsăreanu, and Stephen F. Siegel. Comparing finite-state verification techniques for concurrent software. Technical Report UM-CS-1999-069 (revised February 2000), Department of Computer Science, University of Massachusetts, November 1999. [ .ps ]

Finite-state verification provides software developers with a powerful tool to detect errors. Many different analysis techniques have been proposed and implemented, and the limited amount of empirical data available shows that the performance of these techniques varies enormously from system to system. Before this technology can be transferred from research to practice, the community must provide guidance to developers on which methods are best for different kinds of systems. We describe a substantial case study in which several finite-state verification tools were applied to verify properties of the Chiron user interface system, a real Ada program of substantial size. Our study provides important data comparing these different analysis methods, and points out a number of difficulties in conducting fair comparisons of finite-state verification tools.

Gleb Naumovich, George S. Avrunin, and Lori A. Clarke. An efficient algorithm for computing MHP information for concurrent Java programs. In O. Nierstrasz and M. Lemoine, editors, Software Engineering—ESEC/FSE '99. 7th European Software Engineering Conference held jointly with the 7th ACM SIGSOFT Symposium on the Foundations of Software Engineering, number 1687 in LNCS, pages 338–354, Toulouse, September 1999. [ DOI | .pdf ]

Information about which statements in a concurrent program may happen in parallel (MHP) has a number of important applications. It can be used in program optimization, debugging, program understanding tools, improving the accuracy of data flow approaches, and detecting synchronization anomalies, such as data races. In this paper we propose a data flow algorithm for computing a conservative estimate of the MHP information for Java programs that has a worst-case time bound that is cubic in the size of the program. We present a preliminary experimental comparison between our algorithm and a reachability analysis algorithm that determines the “ideal” static MHP information for concurrent Java programs. This initial experiment indicates that our data flow algorithm precisely computed the ideal MHP information in the vast majority of cases we examined. In the two out of 29 cases where the MHP algorithm turned out to be less than ideally precise, the number of spurious pairs was small compared to the total number of ideal MHP pairs.
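
A drastically simplified stand-in for the analysis (not the paper's algorithm; the model below is an assumption made for exposition): if threads are linear blocks of statements that a main routine starts and joins at known points, two statements may happen in parallel exactly when their threads' live intervals overlap. The paper's data flow algorithm handles real Java control flow and synchronization; this sketch only conveys the shape of the question the analysis answers.

    def mhp_pairs(threads, spans):
        """threads: {name: [stmt, ...]}; spans: {name: (start, join)} giving
        each thread's live interval in the main routine's step numbering."""
        pairs = set()
        names = sorted(threads)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                (s1, j1), (s2, j2) = spans[a], spans[b]
                if s1 < j2 and s2 < j1:          # live intervals overlap
                    pairs.update((x, y) for x in threads[a] for y in threads[b])
        return pairs

    if __name__ == "__main__":
        threads = {"A": ["a1", "a2"], "B": ["b1"]}
        spans = {"A": (0, 3), "B": (1, 2)}       # B started while A is live
        print(sorted(mhp_pairs(threads, spans)))  # a1 and a2 may run with b1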

Gleb Naumovich, George S. Avrunin, and Lori A. Clarke. Data flow analysis for checking properties of concurrent Java programs. In Proceedings of the Twenty-First International Conference on Software Engineering, pages 399–410, Los Angeles, May 1999. [ DOI | .pdf ]

In this paper we show how the FLAVERS data flow analysis technique, originally formulated for programs with the rendezvous model of concurrency, can be applied to concurrent Java programs. The general approach of FLAVERS is based on modeling a concurrent program as a flow graph and using a data flow analysis algorithm over this graph to check statically if a property holds on all executions of the program. The accuracy of this analysis can be improved by supplying additional information, represented as finite state automata, to the data flow analysis algorithm.

In this paper we present a straightforward approach for modeling Java programs that uses the accuracy improving mechanism to represent the possible communications among threads in Java programs, instead of representing them directly in the flow graph model. We also discuss a number of error-prone thread communication patterns that can arise in Java and describe how FLAVERS can be used to check for the presence of these.

Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. Patterns in property specifications for finite-state verification. In Proceedings of the Twenty-First International Conference on Software Engineering, pages 411–420, Los Angeles, May 1999. [ DOI | .pdf ]

Model checkers and other finite-state verification tools allow developers to detect certain kinds of errors automatically. Nevertheless, the transition of this technology from research to practice has been slow. While there are a number of potential causes for reluctance to adopt such formal methods, we believe that a primary cause is that practitioners are unfamiliar with specification processes, notations, and strategies. In a recent paper, we proposed a pattern-based approach to the presentation, codification and reuse of property specifications for finite-state verification. Since then, we have carried out a survey of available specifications, collecting over 500 examples of property specifications. We found that most are instances of our proposed patterns. Furthermore, we have updated our pattern system to accommodate new patterns and variations of existing patterns encountered in this survey. This paper reports the results of the survey and the current status of our pattern system.
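
For concreteness, two of the most frequently matched patterns, stated for the "Globally" scope, carry the following standard LTL mappings in the pattern catalogue:

    % Absence ("P never holds") and Response ("S eventually follows P"):
    \[ \textrm{Absence:}\ \Box\,\neg P
       \qquad
       \textrm{Response:}\ \Box\,(P \rightarrow \Diamond S) \]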

Gleb Naumovich and George S. Avrunin. A conservative data flow algorithm for detecting all pairs of statements that may happen in parallel. In Proceedings of 6th International Symposium on the Foundations of Software Engineering, pages 24–34, November 1998. [ DOI | .pdf ]

Information about which pairs of statements in a concurrent program can execute in parallel is important for optimizing and debugging programs, for detecting anomalies, and for improving the accuracy of data flow analysis. In this paper, we describe a new data flow algorithm that finds a conservative approximation of the set of all such pairs. We have carried out an initial comparison of the precision of our algorithm and that of the most precise of the earlier approaches, Masticola and Ryder's non-concurrency analysis, using a sample of 159 concurrent Ada programs that includes the collection assembled by Masticola and Ryder. For these examples, our algorithm was almost always more precise than non-concurrency analysis, in the sense that the set of pairs identified by our algorithm as possibly happening in parallel is a proper subset of the set identified by non-concurrency analysis. In 132 cases, we were able to use reachability analysis to determine exactly the set of pairs of statements that may happen in parallel. For these cases, there were a total of only 10 pairs identified by our algorithm that cannot actually happen in parallel.

George S. Avrunin, James C. Corbett, and Laura K. Dillon. Analyzing partially-implemented real-time systems. IEEE Trans. Softw. Eng., 24(8):602–614, August 1998. (This is a revised and expanded version of a paper of the same title appearing in the Proceedings of the International Conference on Software Engineering, 1997.). [ DOI | .pdf ]

Most analysis methods for real-time systems assume that all the components of the system are at roughly the same stage of development and can be expressed in a single notation, such as a specification or programming language. There are, however, many situations in which developers would benefit from tools that could analyze partially-implemented systems, those for which some components are given only as high-level specifications while others are fully implemented in a programming language. In this paper, we propose a method for analyzing such partially-implemented real-time systems. Here we consider real-time concurrent systems for which some components are implemented in Ada and some are partially specified using regular expressions and Graphical Interval Logic (GIL), a real-time temporal logic. We show how to construct models of the partially-implemented systems that account for such properties as run-time overhead and scheduling of processes, yet support tractable analysis of nontrivial programs. The approach can be fully automated, and we illustrate it by analyzing a small example.

Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. Property specification patterns for finite-state verification. In Mark Ardis, editor, Proceedings of FMSP '98, the Second Workshop on Formal Methods in Software Practice, pages 7–15, Clearwater Beach, FL, March 1998. [ DOI | .pdf ]

Finite-state verification (e.g., model checking) provides a powerful means to detect errors that are often subtle and difficult to reproduce. Nevertheless, the transition of this technology from research to practice has been slow. While there are a number of potential causes for reluctance in adopting such formal methods in practice, we believe that a primary cause rests with the fact that practitioners are unfamiliar with specification processes, notations, and strategies. Recent years have seen growing success in leveraging experience with design and coding patterns. We propose a pattern-based approach to the presentation, codification and reuse of property specifications for finite-state verification.

Gleb Naumovich, George S. Avrunin, Lori A. Clarke, and Leon J. Osterweil. Applying static analysis to software architectures. In Mehdi Jazayeri and Helmut Schauer, editors, Software Engineering—ESEC/FSE '97, volume 1301 of Lecture Notes in Computer Science, pages 77–93, Zurich, September 1997. Springer Verlag. [ DOI | .pdf ]

In this paper we demonstrate how static concurrency analysis techniques can be used to verify application-specific properties of an architectural description. Specifically, we use two concurrency analysis tools, INCA, a flow equation based tool, and FLAVERS, a data flow analysis based tool, to detect errors or prove properties of a Wright architectural description of the gas station problem. Although both these tools are research prototypes, they illustrate the potential of static analysis for verifying that architectural descriptions adhere to important properties, for detecting problems early in the lifecycle, and for helping developers understand the changes that need to be made to satisfy the properties being analyzed.

George S. Avrunin, James C. Corbett, and Laura K. Dillon. Analyzing partially-implemented real-time systems. In ICSE '97: Proceedings of the 19th International Conference on Software Engineering, pages 228–238, New York, NY, USA, 1997. ACM. [ DOI | .pdf ]

George Avrunin and Jan Demers. Exploring symmetry with first graders. http://mathforum.org/mathed/mime/avrunin.html, 1997. (Invited report for the Math Forum's Mathematicians in Mathematics Education page).

George S. Avrunin. Symbolic model checking using algebraic geometry. In Rajeev Alur and Thomas A. Henzinger, editors, Computer Aided Verification, 8th International Conference, volume 1102 of Lecture Notes in Computer Science, pages 26–37, New Brunswick, NJ, July 1996. Springer-Verlag. [ DOI | .ps ]

In this paper, I show that methods from computational algebraic geometry can be used to carry out symbolic model checking using an encoding of Boolean sets as the common zeros of sets of polynomials. This approach could serve as a useful supplement to symbolic model checking methods based on Ordered Binary Decision Diagrams and may provide important theoretical insights by bringing the powerful mathematical machinery of algebraic geometry to bear on the model checking problem.
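
A toy instance of the encoding (the example is mine, not one from the paper): over the two-element field, a Boolean set of states becomes the common zero set of a family of polynomials, and set operations become ideal operations that Gröbner-basis methods can compute with.

    % The states satisfying x AND (NOT y), namely {(1,0)}, are exactly the
    % common zeros of
    \[ x + 1, \qquad y, \]
    % since x + 1 = 0 iff x = 1 over \mathbb{F}_2. Unions of zero sets
    % correspond to products of ideals, intersections to sums of ideals.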

A. T. Chamillard, Lori A. Clarke, and George S. Avrunin. An empirical comparison of static concurrency analysis techniques. Technical Report 96-84, Department of Computer Science, University of Massachusetts, 1996. Revised May 1997. [ .ps ]

This paper reports the results of an empirical comparison of several static analysis tools for evaluating properties of concurrent systems and also reports the results of our attempts to build predictive models for each of the tools based on program and property characteristics. Although this area seems well-suited to empirical investigation, we encountered a number of significant issues that make designing a sound and unbiased study surprisingly difficult. These experimental design issues are also discussed in this paper.

George S. Avrunin. Concurrent and real-time systems. In Quality-Enhancing Software Technologies. Microelectronics and Computer Technology Corporation (MCC), 1996.

James C. Corbett and George S. Avrunin. Using integer programming to verify general safety and liveness properties. Formal Methods in System Design, 6:97–123, January 1995. [ DOI | .ps ]

Analysis of concurrent systems is plagued by the state explosion problem. The constrained expression analysis technique uses necessary conditions, in the form of linear inequalities, to verify certain properties of concurrent systems, thus avoiding the enumeration of the potentially explosive number of reachable states of the system. This technique has been shown to be capable of verifying simple safety properties, like freedom from deadlock, that can be expressed in terms of the number of certain events occurring in a finite execution, and has been successfully used to analyze a variety of concurrent software systems. We extend this technique to the verification of more complex safety properties that involve the order of events and to the verification of liveness properties, which involve infinite executions.

James C. Corbett and George S. Avrunin. Towards scalable compositional analysis. In David Wile, editor, Proceedings of the Second ACM SIGSOFT Symposium on Foundations of Software Engineering, pages 53–61, New Orleans, December 1994. ACM Press. [ DOI | .pdf ]

Due to the state explosion problem, analysis of large concurrent programs will undoubtedly require compositional techniques. Existing compositional techniques are based on the idea of replacing complex subsystems with simpler processes with the same interfaces to their environments, and using the simpler processes to analyze the full system. Most algorithms for proving equivalence between two processes, however, require enumerating the states of both processes. When part of a concurrent system consists of many highly coupled processes, it may not be possible to decompose the system into components that are both small enough to enumerate and have simple interfaces with their environments. In such cases, analysis of the systems by standard methods will be infeasible.

In this paper, we describe a technique for proving trace equivalence of deterministic and divergence-free systems without enumerating their states. (For deterministic systems, essentially all the standard notions of process equivalence collapse to trace equivalence, so this technique also establishes failures equivalence, observational equivalence, etc.) Our approach is to generate necessary conditions for the existence of a trace of one system that is not a trace of the other; if the necessary conditions cannot be satisfied the systems are equivalent. We have implemented the technique and used it to establish the equivalence of some systems with state spaces too large for enumeration to be practical.

George S. Avrunin, James C. Corbett, Laura K. Dillon, and Jack C. Wileden. Automated derivation of time bounds in uniprocessor concurrent systems. IEEE Trans. Softw. Eng., 20(9):708–719, September 1994. [ DOI | .pdf ]

The successful development of complex real-time systems depends on analysis techniques that can accurately assess the timing properties of those systems. This paper describes a technique for deriving upper and lower bounds on the time that can elapse between two given events in an execution of a concurrent software system running on a single processor under arbitrary scheduling. The technique involves generating linear inequalities expressing conditions that must be satisfied by all executions of such a system and using integer programming methods to find appropriate solutions to the inequalities. The technique does not require construction of the state space of the system and its feasibility has been demonstrated by using an extended version of the constrained expression toolset to analyze the timing properties of some concurrent systems with very large state spaces.
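
A schematic of the technique (the symbols and the objective are mine, for illustration): let n_a count the occurrences of each action a between the two events of interest, with each action's duration known to lie between fixed bounds. The behavioral inequalities constrain the counts, and the elapsed time t is bracketed by

    \[ \sum_a d_a^{\min}\, n_a \;\le\; t \;\le\; \sum_a d_a^{\max}\, n_a , \]

so maximizing (or minimizing) t over integer solutions of the combined system yields an upper (or lower) bound without constructing the state space.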

James C. Corbett and George S. Avrunin. A practical technique for bounding the time between events in concurrent real-time systems. In Thomas Ostrand and Elaine Weyuker, editors, Proceedings of the 1993 International Symposium on Software Testing and Analysis (ISSTA), pages 110–116, Cambridge, MA, June 1993. ACM Press. [ DOI | .pdf ]

Showing that concurrent systems satisfy timing constraints on their behavior is difficult, but may be essential for critical applications. Most methods are based on some form of reachability analysis and require construction of a state space of size that is, in general, exponential in the number of components in the concurrent system. In an earlier paper with L. K. Dillon and J. C. Wileden, we described a technique for finding bounds on the time between events without enumerating the state space, but the technique applies chiefly to the case of logically concurrent systems executing on a uniprocessor, in which events do not overlap in time. In this paper, we extend that technique to obtain upper bounds on the time between events in maximally parallel concurrent systems. Our method does not require construction of the state space and the results of preliminary experiments show that, for at least some systems with large state spaces, it is quite tractable. We also briefly describe the application of our method to the case in which there are multiple processors, but several processes run on each processor.

Victor Yodaiken and George S. Avrunin. Real-time state machines and circuit verification with modal functions. Technical Report 93-04, Department of Computer Science, University of Massachusetts at Amherst, 1993.

Recent work on automated verification of circuits has highlighted the problems of concise representation, composition, and manipulation of state machines with large state sets. In this paper, we show how state machines that capture both the real-time behavior and connective structure of circuits can be defined, parameterized, composed, and refined, using modal functions. Our work is currently focused on "by hand" methods of verification as a step towards automated methods.

George S. Avrunin and Jon F. Carlson. Nilpotency degree of cohomology rings in characteristic two. Proc. Amer. Math. Soc., 118(2):239–243, 1993. [ http | .pdf ]

In this paper, we consider the cohomology ring of a finite 2-group with coefficients in a field of characteristic two. We show that, for any positive integer n, there exists a 2-group whose cohomology ring has elements of nilpotency degree n + 1 and all smaller degrees.

George S. Avrunin. Sharpening bounds on the time between events in maximally parallel systems. Technical Report 92-69, Department of Computer Science, University of Massachusetts at Amherst, 1992.

A recent paper describes a method for obtaining bounds on the time that can elapse between two given events in an execution of a concurrent software system running on a single processor under arbitrary scheduling. The technique involves generating linear inequalities expressing conditions that must be satisfied by all executions of such a system and using integer programming methods to find appropriate solutions to the inequalities. Corbett has extended this approach to obtain upper bounds on the time between events in executions of multi-processor concurrent systems in which each process proceeds unless forced to wait to communicate with another, the case of maximal parallelism, and processes communicate by synchronous message passing. Corbett's method does not strictly enforce the maximal parallelism assumption, however, and may thus give poor (though valid) bounds in some cases. In this paper, I show how to modify Corbett's method to obtain sharper bounds.

George S. Avrunin, Ugo A. Buy, James C. Corbett, Laura K. Dillon, and Jack C. Wileden. Automated analysis of concurrent systems with the constrained expression toolset. IEEE Trans. Softw. Eng., 17(11):1204–1222, November 1991. [ DOI | .pdf ]

The constrained expression approach to analysis of concurrent software systems has several attractive features, including the facts that it can be used with a variety of design and programming languages and that it does not require a complete enumeration of the set of reachable states of the concurrent system. This paper reports on the construction of a toolset automating the main constrained expression analysis techniques and the results of experiments with that toolset. The toolset is capable of carrying out completely automated analyses of a variety of concurrent systems, starting from source code in an Ada-like design language and producing system traces displaying the properties represented by the analyst's queries. It has been successfully used with designs that involve hundreds of concurrent processes.

George S. Avrunin, Ugo A. Buy, James C. Corbett, Laura K. Dillon, and Jack C. Wileden. Experiments with an improved constrained expression toolset. In Proceedings of the Symposium on Testing, Analysis, and Verification (TAV4), pages 178–187. ACM SIGSOFT, October 1991. [ DOI | .pdf ]

At TAV3, we described a preliminary version of the constrained expression toolset, and reported on the results of our initial experiments with it. Through those experiments we discovered shortcomings in some of the tools that limited the size of the examples that we could analyze. We have since redesigned and reimplemented several components of the toolset, with performance improvements of more than two orders of magnitude in some cases. The improved toolset has been successfully used with designs that involve hundreds of concurrent processes. In this paper, we describe several experiments with the new version of the toolset, including preliminary experiments with a technique for analyzing systems that include an essentially arbitrary number of identical components.

George S. Avrunin, Ugo A. Buy, and James C. Corbett. Integer programming in the analysis of concurrent systems. In Kim Guldstand Larsen and Arne Skou, editors, Computer Aided Verification, 3rd International Workshop Proceedings, volume 575 of Lecture Notes in Computer Science, pages 92–102, Aalborg, Denmark, July 1991. Springer-Verlag. [ DOI ]

George S. Avrunin and Jack C. Wileden. Improvements in automated analysis of concurrent and real-time software. In André M. van Tilborg and Gary M. Koob, editors, Foundations of Real-Time Computing: Formal Specifications and Methods, chapter 8, pages 195–215. Kluwer Academic Publishers, 1991.

George S. Avrunin, Ugo Buy, and James Corbett. Automatic generation of inequality systems for constrained expression analysis. Technical Report 90-32, Department of Computer and Information Science, University of Massachusetts, Amherst, 1990.

George S. Avrunin, Laura K. Dillon, and Jack C. Wileden. Experiments with automated constrained expression analysis of concurrent software systems. In Richard A. Kemmerer, editor, Proceedings of the ACM SIGSOFT '89 Third Symposium on Software Testing, Analysis and Verification, pages 124–130, December 1989. [ DOI | .pdf ]

It is unlikely that any single approach to analysis of concurrent software systems will meet all the needs of software developers throughout the development process. Thus, experimental evaluation of different analysis techniques is needed to determine their relative strengths and practical limitations. Such evaluation requires automated tools implementing the analysis techniques. This paper describes a prototype toolset automating the constrained expression approach to the analysis of concurrent software systems. The results of preliminary experiments with the toolset are reported and the implications of these experiments are discussed.

George S. Avrunin, Laura K. Dillon, and Jack C. Wileden. Constrained expression analysis of real-time systems. Technical Report 89-50, Department of Computer and Information Science, University of Massachusetts, 1989.

Laura K. Dillon, George S. Avrunin, and Jack C. Wileden. Constrained expressions: Toward broad applicability of analysis methods for distributed software systems. ACM Trans. Prog. Lang. Syst., 10(3):374–402, July 1988. [ DOI | .pdf ]

It is extremely difficult to characterize the possible behaviors of a distributed software system through informal reasoning. Developers of distributed systems require tools that support formal reasoning about properties of the behaviors of their systems. These tools should be applicable to designs and other preimplementation descriptions of a system, as well as to completed programs. Furthermore, they should not limit a developer's choice of development languages. In this paper we present a basis for broadly applicable analysis methods for distributed software systems. The constrained expression formalism can be used with a wide variety of distributed system development notations to give a uniform closed-form representation of a system's behavior. A collection of formal analysis techniques can then be applied with this representation to establish properties of the system. Examples of these formal analysis techniques appear elsewhere. Here we illustrate the broad applicability of the constrained expression formalism by showing how constrained expression representations are obtained from descriptions of systems in three different notations: SDYMOL, CSP, and Petri nets. Features of these three notations span most of the significant alternatives for describing distributed software systems. Our examples thus offer persuasive evidence for the broad applicability of the constrained expression approach.

Jack C. Wileden and George S. Avrunin. Toward automating analysis support for developers of distributed software. In Proceedings of the Eighth International Conference on Distributed Computing Systems, pages 350–357. IEEE Computer Society Press, June 1988. [ DOI | .pdf ]

George S. Avrunin. A prototype inequality generator. Software Development Laboratory Memo 88-1, Department of Computer and Information Science, University of Massachusetts, Amherst, June 1988.

Clyde H. Coombs and George S. Avrunin. The Structure of Conflict. Lawrence Erlbaum Associates, Hillsdale, NJ, 1988.

George S. Avrunin. Experiments in constrained expression analysis. Technical Report 87-125, Department of Computer and Information Science, University of Massachusetts, Amherst, 1987.

George S. Avrunin, Laura K. Dillon, Jack C. Wileden, and William E. Riddle. Constrained expressions: Adding analysis capabilities to design methods for concurrent software systems. IEEE Trans. Softw. Eng., SE-12(2):278–292, 1986. Reprinted in: S. M. Shatz and J.-P. Wang, eds., Tutorial: Distributed-Software Engineering, IEEE Computer Society Press, Washington, DC (1989), pp. 258–271.

George S. Avrunin and Jack C. Wileden. Describing and analyzing distributed software system designs. ACM Trans. Prog. Lang. Syst., 7(3):380–403, July 1985. [ DOI | .pdf ]

In this paper we outline an approach to describing and analyzing designs for distributed software systems. A descriptive notation is introduced, and analysis techniques applicable to designs expressed in that notation are presented. The usefulness of the approach is illustrated by applying it to a realistic distributed software system design problem involving mutual exclusion in a computer network.

George S. Avrunin and Jack C. Wileden. Algebraic techniques for the analysis of concurrent systems. In Proceedings of the Sixteenth Hawaii International Conference on Systems Sciences, pages 51–57, 1983.

George S. Avrunin and Leonard L. Scott. A Quillen stratification theorem for modules. Bull. Amer. Math. Soc. (N.S.), 6:75–78, 1982. [ DOI | .pdf ]

George S. Avrunin and Leonard L. Scott. Quillen stratification for modules. Invent. Math., 66:277–286, 1982. [ DOI | .pdf ]

George S. Avrunin. Generic cohomology for twisted groups. Trans. Amer. Math. Soc., 268:247–253, 1981. [ http | .pdf ]

Let G be a simple algebraic group defined and split over k_0 = F_p, and let σ be a surjective endomorphism of G with finite fixed-point set G_σ. We give conditions under which cohomology groups of G are isomorphic to cohomology groups of G_σ.

George S. Avrunin. Annihilators of cohomology modules. J. Algebra, 69:150–154, 1981. [ DOI ]

George S. Avrunin. 2-cohomology of some unitary groups. Ill. J. Math., 24:317–332, 1980. [ .pdf ]

George S. Avrunin. The image of the restriction map on 2-cohomology. Arch. Math. (Basel), 34:502–508, 1980. [ DOI ]

George S. Avrunin. A vanishing theorem for second degree cohomology. J. Algebra, 53:382–388, 1978. [ DOI ]

Clyde H. Coombs and George S. Avrunin. A theorem on single-peaked preference functions in one dimension. J. Math. Psych., 16:261–266, 1977. [ DOI ]

Clyde H. Coombs and George S. Avrunin. Single-peaked functions and the theory of preference. Psych. Rev., 84:216–230, 1977. Reprinted in: E. D. Lantermann and H. Feger, eds., Similarity and Choice, Hans Huber Publishers, Bern (1980), pp. 182–207. [ DOI ]

Dependent variables such as preference, hedonic tone, aesthetic appreciation, stimulus generalization, degree of interest or attention, exploratory behavior, developmental stages, and intensity of attitudes are frequently observed to be single-peaked functions of the independent variables. The problem of deriving, from more elementary underlying processes, a preference function that rises monotonically to a peak and then falls monotonically is discussed. Psychological principles are proposed for the perception and processing of good and bad attributes such as pleasure and pain and an elimination principle that affects the options that are available. It is shown that single-peakedness is inevitable if there is only 1 component, is quite likely if there are 2, and must be contrived if there are 3 or more components. Results are generalized for approach-avoidance, approach-approach, and avoidance-avoidance conflict in individuals and bear upon the resolution of social choice problems of a particular class.

