While in former decades design for dependability was confined to hard, mission-critical systems, the widespread use of online services raises the same question in an increasingly wide spectrum of applications. There are, however, basic differences between the design of mission-critical applications and that of online services. Mission-critical applications can tolerate a high degree of redundancy and the related cost overhead compared to a purely functional solution. Everyday applications, by contrast, require high availability as an add-on quality of the system, but do not allow for a significant overhead.
This paper presents a new methodology for designing the control of industrial processes. The methodology has been developed jointly by the Department of Measurement and Information Systems at the Technical University of Budapest, the Department of Information Technology at the Széchenyi College, and the Department of Information Systems at the Budapest University of Economics.
The objective of the system is the automated synthesis of mathematical models of production optimization problems, where the requirements specification is described by means of state-of-the-art object-oriented CASE tools.
The paper introduces a method which allows the quantitative performance and dependability analysis of systems modeled using UML statechart diagrams. The analysis is performed by transforming the UML model to Stochastic Reward Nets (SRN). A large subset of statechart model elements is supported, including event processing, state hierarchy and transition priorities. The transformation is presented as a set of SRN design patterns. Performance measures can be derived directly using SRN tools, while dependability analysis requires explicit modeling of erroneous states and faulty behavior.
Keywords: UML statecharts, dependability analysis, performance analysis.
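The core of the transformation above is a mapping from statechart elements to net elements. The following sketch illustrates the idea only; the data layout, names, and the simple flat-state handling are assumptions for illustration and are not the paper's actual SRN patterns (which also cover hierarchy and priorities).

```python
# Illustrative sketch: each statechart state becomes a place, each
# event-triggered statechart transition becomes a net transition that
# consumes the token from its source place. A single token marks the
# currently active state.

def statechart_to_net(states, transitions):
    """Map states to places and statechart transitions to net transitions."""
    places = {s: (1 if s == states[0] else 0) for s in states}  # initial state marked
    net = [{"in": src, "event": event, "out": dst} for src, event, dst in transitions]
    return places, net

def fire(places, net, event):
    """Fire the enabled net transition matching `event`, if any."""
    for t in net:
        if t["event"] == event and places[t["in"]] > 0:
            places[t["in"]] -= 1
            places[t["out"]] += 1
            return True
    return False  # the event is discarded if no transition is enabled

places, net = statechart_to_net(
    ["Idle", "Busy", "Failed"],
    [("Idle", "job", "Busy"), ("Busy", "done", "Idle"), ("Busy", "fault", "Failed")],
)
fire(places, net, "job")
fire(places, net, "fault")
# the token now marks the erroneous state, which is where explicit
# modeling of faulty behavior enters the dependability analysis
```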
The Unified Modeling Language (UML) finds more and more applications. It is used not only for software development but also for modelling systems with dynamic behavior, e.g. Flexible Manufacturing Systems (FMS) or business process modeling. While the static diagrams of UML changed only marginally in its latest versions, the dynamic diagrams still need improvement. To model dynamic behavior faithfully with these diagrams, a concept of time is needed. Therefore, in this paper we introduce an approach to extend UML with time as a stochastic variable. To evaluate the enhanced models and to derive a numerical analysis we use the Petri net analysis tool PANDA. This tool also handles the stochastic extensions of Petri nets (GSPNs) widely used for performance evaluation and numerical analysis. The evaluation of these models is based on exploring and solving the underlying Markov chains.
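The final step mentioned above, solving the underlying Markov chain, amounts to finding the steady-state distribution of a continuous-time Markov chain. A minimal sketch, assuming a 2-state generator matrix with invented failure and repair rates (0.1 and 0.9); this is the generic mathematics, not PANDA's actual implementation:

```python
import numpy as np

# The tangible markings of a GSPN induce a CTMC with generator matrix Q.
# The steady-state vector pi solves pi @ Q = 0 subject to sum(pi) = 1.
Q = np.array([[-0.1, 0.1],    # state 0: leaves at rate 0.1
              [ 0.9, -0.9]])  # state 1: leaves at rate 0.9

n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])   # append the normalization constraint
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # steady-state probabilities, here [0.9, 0.1]
```

From `pi`, performance measures such as throughput or availability are obtained as reward-weighted sums over the states.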
Test generation for today's complex digital circuits is an extremely computation-intensive task. The search space of ATPG can be reduced by starting from higher-level circuit descriptions. The integration of an alternative testing methodology, IDDQ testing, is suggested for increasing the efficiency of a high-level, VHDL-based test generator.
Abstract: The process of developing dependable, safety-critical systems controlled by computers requires formal verification of conceptual and architectural choices using different mathematical tools. According to a novel approach to IT system design, these input models for formal mathematical analysis are transformed automatically from the system model. Up to now, the design and implementation of such transformations has been rather ad hoc, lacking any formal descriptions and methods. In this paper we present our efforts towards a model transformation system based on a powerful integration of graph transformation, planner algorithms and deductive databases, in order to obtain automatically generated, provably correct and complete transformation code.
Abstract: The use of formal verification methods is essential in the design process of dependable computer-controlled systems. A complex environment should support the semi-formal specification as well as the formal verification of the desired system. The efficiency of applying these formal methods is greatly increased if the underlying mathematical background is hidden from the designer. In such an integrated system, effective techniques are needed to transform the system model into different sorts of mathematical models supporting the assessment of system characteristics. The current paper introduces our research results towards a general-purpose model transformation engine. This approach yields provably correct and complete transformation code by combining the powerful techniques of graph transformation, planner algorithms and deductive databases. Keywords: formal verification, graph transformation, visual languages, planner algorithms, deductive databases.
Abstract: The design process of complex, dependable systems requires a precise verification of design decisions during the system modelling phase using formal methods. For that reason, the mathematical models of various formal verification tools are planned to be derived automatically from the system model, usually constructed from UML diagrams. In the paper, a general framework for an automated model transformation system is introduced, providing a uniform formal description method of such transformations by applying the powerful computational paradigm of graph transformation.
Keywords: system verification, model transformation, graph transformation, UML.
Abstract: Tools developed to demonstrate the practical applicability of the mainly theoretical foundations of graph transformation systems are of immense importance. Therefore, the interaction (moreover, integration) of these tools is a major challenge for the graph transformation (GraTra) community in order to increase the efficiency of international (academic or industrial) research. A first step towards integration is a standardized, common model interchange format providing a uniform graph description and rule representation that is capable of handling the most fundamental concepts of graph transformation. A potential candidate is the novel standard of the web, the Extensible Markup Language (XML), which allows the interchange of models in a distributed environment (i.e. the Internet). In this report, we demonstrate on several examples that a common, XML-based GraTra model interchange format should be constructed from a metamodel following the concepts of the Meta Object Facility standard; that XMI (XML Metadata Interchange) is the most suitable XML-based language for such a format; and that the proposed XMI format can be derived automatically from the corresponding GraTra metamodel.
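To make the interchange idea concrete, here is a minimal sketch of serializing a graph to XML: nodes and edges become elements with `id`, `source` and `target` attributes. The element names here are invented for illustration; the report's point is precisely that the real format (an XMI document) should be derived from a GraTra metamodel rather than from ad-hoc tags like these.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a hand-rolled XML graph encoding, the kind of
# ad-hoc format the report argues should be replaced by metamodel-derived XMI.
def graph_to_xml(nodes, edges):
    root = ET.Element("graph")
    for n in nodes:
        ET.SubElement(root, "node", id=n)
    for i, (src, dst) in enumerate(edges):
        ET.SubElement(root, "edge", id=f"e{i}", source=src, target=dst)
    return ET.tostring(root, encoding="unicode")

doc = graph_to_xml(["a", "b"], [("a", "b")])
# doc is a <graph> element containing two <node>s and one <edge>
```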
Abstract: The design of complex, dependable systems requires a precise formal verification of design decisions during the system modelling phase. For that reason, the mathematical models of various formal verification tools are planned to be derived automatically from the system model, usually described by UML diagrams. In the current paper, a general framework for an automated model transformation system is introduced, providing a uniform formal description method of such transformations by applying the powerful computational paradigm of graph transformation. Model transformation rules are constructed in a modular way using a visual UML notation as representation, in order to provide a closer correspondence with industrial techniques.
Keywords: system verification, model transformation, graph transformation, transformation unit, UML.
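A graph-transformation step, as used in the abstracts above, consists of matching a rule's left-hand side in the host graph and rewriting the matched part according to the right-hand side. The toy sketch below shows only the match-and-rewrite idea on a labeled graph; the node labels (a UML-to-Petri-net flavor) and the one-node pattern are assumptions for illustration, far simpler than real transformation rules.

```python
# Minimal match-and-rewrite sketch: the "rule" here relabels one matched
# node (LHS pattern: a node labeled lhs_label; RHS: the same node
# relabeled rhs_label). Real rules match and rewrite whole subgraphs.

def apply_rule(graph, lhs_label, rhs_label):
    """Rewrite the first node matching lhs_label; return it, or None."""
    for node, label in graph.items():
        if label == lhs_label:
            graph[node] = rhs_label   # rewrite the matched element in place
            return node
    return None

host = {"n1": "State", "n2": "Transition"}
matched = apply_rule(host, "State", "Place")
# host is now {"n1": "Place", "n2": "Transition"}
```

Applying such rules exhaustively, under a control strategy, is what turns a UML model into the mathematical model of a verification tool.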
Abstract: System-level fault diagnosis of massively parallel computers requires efficient algorithms capable of handling many processing elements in a heterogeneous environment. Probabilistic fault diagnosis is an approach that makes the diagnostic problem both easier to solve and more generally applicable. The price to pay for these advantages is that the diagnostic result is no longer guaranteed to be correct and complete in every fault situation. In an earlier paper the authors presented a novel methodology, called local information diagnosis, and applied it to create a family of probabilistic diagnostic algorithms. This paper examines the identification of fault-free and faulty units in detail by defining three heuristic methods of fault classification and comparing the diagnostic accuracy provided by these heuristics using measurement results.
Keywords: multiprocessor systems, system-level fault diagnosis, probabilistic algorithms
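One plausible shape for a fault-classification heuristic of the kind compared above: each unit is accused or cleared by the neighbours that tested it, and a unit is classified faulty when the fraction of accusations exceeds a threshold. The sketch below is a generic illustration under that assumption; the threshold, the syndrome values, and the names are invented and are not the paper's three heuristics.

```python
# Illustrative threshold heuristic on a test syndrome. A syndrome maps
# each unit to the 0/1 outcomes of the tests applied to it by its
# neighbours (1 = the tester accused the unit of being faulty).

def classify(syndrome, threshold=0.5):
    """Classify each unit from its accusation ratio."""
    verdict = {}
    for unit, outcomes in syndrome.items():
        ratio = sum(outcomes) / len(outcomes)
        verdict[unit] = "faulty" if ratio > threshold else "fault-free"
    return verdict

verdict = classify({"u1": [0, 0, 1], "u2": [1, 1, 1], "u3": [0, 1, 1]})
# u2 (3 of 3 accusations) and u3 (2 of 3) are classified faulty,
# u1 (1 of 3) fault-free
```

The probabilistic trade-off is visible here: a faulty tester can accuse a good unit, so misclassification is possible, which is exactly why accuracy must be evaluated by measurement.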
Abstract: The APE supercomputers are number-crunching engines aimed at solving complex scientific problems. The typical duration of a computation ranges from several days to several weeks. Due to the large number of components built into an APEmille machine, the expected MTTF without fault tolerance may fall into the same range. Thus, a long-running computation could be invalidated by a system failure with high probability, and there would be no guarantee that the results of a successfully terminated program are correct. To improve on the situation, the designers of APEmille decided to incorporate fault tolerance into the system. They chose the error-removal-based approach, composed of error detection, system-level fault diagnosis, system repair, and backward error recovery. This paper presents a failure model of the APEmille components and develops a comprehensive recovery scheme for the whole computer, including even the reliable storage that is used to record the recovery information.
Keywords: fault-tolerant systems, computer architectures, parallel processors, error recovery, checkpointing, distributed databases
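The backward-error-recovery step named above can be sketched in a few lines: state is periodically copied to reliable storage, and on a detected error the computation rolls back to the last checkpoint instead of restarting. The class and state layout below are invented for illustration and have nothing to do with APEmille's actual recovery mechanism.

```python
import copy

# Generic checkpoint/rollback sketch. `checkpoint_store` stands in for
# the reliable storage that records the recovery information.
class Recoverable:
    def __init__(self, state):
        self.state = state
        self.checkpoint_store = None

    def checkpoint(self):
        """Copy the current state to reliable storage."""
        self.checkpoint_store = copy.deepcopy(self.state)

    def rollback(self):
        """Backward error recovery: restore the last checkpoint."""
        assert self.checkpoint_store is not None, "no checkpoint taken"
        self.state = copy.deepcopy(self.checkpoint_store)

r = Recoverable({"iteration": 0})
r.state["iteration"] = 1000
r.checkpoint()                 # save progress to reliable storage
r.state["iteration"] = 1500    # further work ...
r.rollback()                   # ... lost to a detected error: roll back
# r.state["iteration"] is back to 1000
```

The hard part, which the paper addresses, is making this work consistently across a whole parallel machine, including failures of the reliable storage itself.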