Scheduling Mixed-Criticality Workloads Upon Unreliable Processors


In mixed-criticality systems, functionalities of different degrees of importance (or criticalities) are implemented upon a common platform. Such mixed-criticality implementations are becoming increasingly common in embedded systems -- consider, for example, the Integrated Modular Avionics (IMA) software architecture used in aviation [5] and the AUTOSAR initiative (AUTomotive Open System ARchitecture) for automotive systems. As a consequence, the real-time systems research community has recently been devoting much attention to better understanding the challenges that arise in implementing such mixed-criticality systems; this includes research on various aspects of mixed-criticality scheduling. Most of this prior work draws inspiration from the seminal work of Vestal [6], and has taken the approach of validating the correctness of highly critical functionalities under more pessimistic assumptions than the assumptions used in validating the correctness of less critical functionalities. (For example, a piece of code may be characterized by a larger worst-case execution time (WCET) [6] in the more pessimistic analysis, or recurrent code that is triggered by some external recurrent event may be characterized by a higher frequency [1].) All functionalities are expected to be demonstrated correct under the less pessimistic analysis, whereas the analysis under the more pessimistic assumptions need only demonstrate the correctness of the more critical functionalities.

In this paper we take a somewhat different perspective on mixed-criticality scheduling: the system is analyzed only once, under a single set of assumptions. The mixed-criticality nature of the system is expressed in the requirement that while we would like all functionalities to execute correctly under normal circumstances, it is essential that the more critical functionalities execute correctly even when circumstances are not normal. To express this formally, we model the workload of a mixed-criticality (MC) system as comprising a collection of real-time jobs; these jobs may be independent, or they may be generated by recurrent tasks. Each job is characterized by a release date, a worst-case execution time (WCET), and a deadline; each job is further designated as being hi-criticality (more important) or lo-criticality (less important). We desire to schedule the system upon a single processor. This processor is unreliable in the following sense: while under normal circumstances it completes one unit of execution during each time unit (equivalently, it executes as a speed-1 processor), it may at any instant lapse into a degraded mode during which it completes as few as s units of execution during each time unit, for some known constant s < 1. It is not known a priori when, or whether, such degradation will occur. We seek efficient scheduling strategies that guarantee to complete all jobs by their deadlines if the performance of the processor does not degrade during run-time, while simultaneously guaranteeing to complete all hi-criticality jobs if the processor does suffer a degradation in performance.
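To make the platform model concrete, the following is a minimal simulation sketch. It is not the scheduling strategy proposed in the paper: it runs plain preemptive EDF as a placeholder scheduler on a processor that drops from speed 1 to speed s at an arbitrary instant. The job set, job names, and all function names are illustrative assumptions. On this example, EDF meets every deadline when the processor stays at speed 1, while under degradation the hi-criticality job still completes in time but the lo-criticality job does not, which is exactly the correctness criterion described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    """One real-time job in the MC model (all fields in speed-1 time units)."""
    name: str
    release: float
    wcet: float             # worst-case execution time at speed 1
    deadline: float
    hi: bool                # True = hi-criticality, False = lo-criticality
    done: float = 0.0       # execution completed so far
    finish: Optional[float] = None

def simulate(jobs, degrade_at=None, s=1.0, dt=0.001, horizon=10.0):
    """Fixed-step simulation of preemptive EDF (a placeholder policy, NOT
    the paper's strategy) on a processor that runs at speed 1 until
    `degrade_at`, then at speed s < 1 forever after.
    Returns {job name: True iff the job completed by its deadline}."""
    t = 0.0
    while t < horizon:
        speed = 1.0 if (degrade_at is None or t < degrade_at) else s
        ready = [j for j in jobs if j.release <= t and j.finish is None]
        if ready:
            cur = min(ready, key=lambda j: j.deadline)  # EDF: earliest deadline
            cur.done += speed * dt                      # progress at current speed
            if cur.done >= cur.wcet - 1e-9:
                cur.finish = t + dt
        t += dt
    return {j.name: (j.finish is not None and j.finish <= j.deadline + 1e-6)
            for j in jobs}

def example_jobs():
    # Illustrative job set: one hi- and one lo-criticality job, both released at 0.
    return [Job("J_hi", 0.0, 1.0, 2.0, hi=True),
            Job("J_lo", 0.0, 1.0, 3.0, hi=False)]

# Reliable run: speed stays at 1, so all jobs must meet their deadlines.
normal = simulate(example_jobs())
# Degraded run: speed drops to s = 0.6 at time 0; only J_hi must still make it.
degraded = simulate(example_jobs(), degrade_at=0.0, s=0.6)
```

Here EDF happens to protect the hi-criticality job because its deadline is earlier; in general no single fixed-priority or EDF ordering satisfies both guarantees simultaneously, which is what motivates dedicated MC scheduling strategies.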

Meeting Name

11th Workshop on Models and Algorithms for Planning and Scheduling Problems, MAPSP (2012: Nov., Pont-à-Mousson, France)


Department

Computer Science


This research is supported in part by NSF grants CNS 0834270, CNS 0834132, and CNS 1016954; ARO grant W911NF-09-1-0535; AFOSR grant FA9550-09-1-0549; and AFRL grant FA8750-11-1-0033.

Document Type

Article - Conference proceedings


Publication Date

01 Nov 2012
