What is a reactive system?
A reactive system is one that is in continuous interaction with its environment and executes at a pace determined by that environment.
What is a schedule? What does appropriate scheduling allow?
A schedule is an assignment of tasks “J_i” to the processor, such that each task is executed until completion.
Appropriate scheduling allows:
-> Based on the list of tasks to be run and their expected completion time, we can define a schedule ensuring their timely execution
-> ideally, the system can be guaranteed to meet all constraints (predictability and determinism)
What do preemptive and no-preemptive mean?
preemptive -> Tasks can be interrupted by other tasks
non-preemptive -> once a task has started, other tasks can only start after it has finished
What is the meaning of static and dynamic algorithms?
static: scheduling decisions are based on fixed parameters assigned to tasks before execution
dynamic: scheduling decisions are based on parameters that may change during the execution
What does the term correctness mean in the context of scheduling?
Ensuring timing constraints.
-> Late answers are considered errors
-> ideally all timings are met -> not easy because of external events, workload changes etc. -> therefore it is not always possible to meet all constraints
What do the terms feasible and schedulable mean?
feasible: All tasks can be completed according to a set of specified constraints
schedulable: there exists at least one algorithm to produce a feasible schedule
What does each letter mean?
-> a_i: arrival time, time at which a task becomes ready for execution
-> s_i: start time
-> C_i: Computation time, time necessary to execute the task
-> f_i: finish time
-> d_i: deadline, time at which the task should be completed
-> X_i = d_i - a_i - C_i: Laxity, Maximum time a task can be delayed after its activation to still meet its deadline
-> L_i: Lateness, delay of a task completion with respect to its deadline (negative, if task completes before)
-> E_i: Tardiness or exceeding time, time a task stays active after its deadline
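These quantities can be computed mechanically; a minimal Python sketch with made-up task values:

```python
from dataclasses import dataclass

@dataclass
class Task:
    a: float  # arrival time a_i
    C: float  # computation time C_i
    d: float  # absolute deadline d_i
    f: float  # finish time f_i (known once the task has run)

    @property
    def laxity(self):     # X_i = d_i - a_i - C_i
        return self.d - self.a - self.C

    @property
    def lateness(self):   # L_i = f_i - d_i (negative if the task finishes early)
        return self.f - self.d

    @property
    def tardiness(self):  # E_i = max(0, L_i)
        return max(0.0, self.lateness)

# made-up example: arrives at 0, needs 3 time units, deadline 10, finishes at 8
J = Task(a=0, C=3, d=10, f=8)
print(J.laxity, J.lateness, J.tardiness)  # 7 -2 0.0
```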
What does EDD stand for and what are its properties? What tasks can it schedule?
EDD: Earliest deadline due
It minimizes the maximum lateness
non-preemptive
used for independent aperiodic tasks with concurrent arrival times
Concept:
-> Order the tasks in the order of their deadlines, smallest deadline first
-> if deadlines are equal, any ordering is fine
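The ordering rule above fits in a few lines (Python; the task set is invented for illustration):

```python
def edd(tasks):
    """EDD: execute tasks in order of increasing deadline.
    tasks: list of (name, C_i, d_i); all tasks arrive at t = 0.
    Returns the execution order and the maximum lateness."""
    order = sorted(tasks, key=lambda t: t[2])  # smallest deadline first
    t, L_max = 0, float("-inf")
    for _, C, d in order:
        t += C                       # non-preemptive: run to completion
        L_max = max(L_max, t - d)    # lateness L_i = f_i - d_i
    return [name for name, _, _ in order], L_max

# made-up task set; the schedule is feasible iff L_max <= 0
order, L_max = edd([("J1", 1, 3), ("J2", 2, 10), ("J3", 1, 2)])
print(order, L_max)  # ['J3', 'J1', 'J2'] -1
```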
What does EDF stand for and what are its properties? What tasks can it schedule?
EDF: Earliest deadline first
minimizes the maximum lateness
preemptive
used for independent aperiodic tasks with arbitrary arrival times
Execute the task that has the earliest absolute deadline among the ready tasks: at time t, among the tasks with a_i ≤ t and remaining work, the one with the smallest d_i is executed.
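A minimal slot-by-slot simulation of this rule (Python; the job parameters are made up):

```python
def edf(tasks, horizon):
    """Preemptive EDF simulated in unit time slots.
    tasks: name -> (a_i, C_i, d_i).
    Returns which task runs in each slot (None = idle)."""
    remaining = {n: C for n, (_, C, _) in tasks.items()}
    trace = []
    for t in range(horizon):
        ready = [n for n, (a, _, _) in tasks.items()
                 if a <= t and remaining[n] > 0]
        if not ready:
            trace.append(None)
            continue
        n = min(ready, key=lambda n: tasks[n][2])  # earliest absolute deadline
        remaining[n] -= 1
        trace.append(n)
    return trace

# made-up jobs: J2 arrives at t=1 with a tighter deadline and preempts J1
print(edf({"J1": (0, 2, 8), "J2": (1, 1, 3)}, 4))  # ['J1', 'J2', 'J1', None]
```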
Can EDF be used for dependent tasks? If yes, how does it work?
Yes, but it has to be adapted with precedence constraints (EDF*).
-> Modify release times and deadlines of all tasks, then use EDF on the modified set: release times are propagated forward (r_i* = max(r_i, r_j* + C_j) over all predecessors J_j), deadlines backward (d_i* = min(d_i, d_j* - C_j) over all successors J_j)
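A sketch of the transformation, assuming the usual EDF* rules (release times propagated forward over predecessors, deadlines backward over successors); task values and names are invented:

```python
def edf_star(tasks, edges):
    """EDF* transformation sketch:
    r_i* = max(r_i, r_j* + C_j) over all predecessors J_j,
    d_i* = min(d_i, d_j* - C_j) over all successors J_j.
    tasks: name -> (r_i, C_i, d_i); edges: (pred, succ) pairs (acyclic)."""
    preds = {n: [] for n in tasks}
    succs = {n: [] for n in tasks}
    for u, v in edges:
        preds[v].append(u)
        succs[u].append(v)
    # topological order via Kahn's algorithm
    order, indeg = [], {n: len(preds[n]) for n in tasks}
    queue = [n for n in tasks if indeg[n] == 0]
    while queue:
        n = queue.pop()
        order.append(n)
        for s in succs[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    r = {n: tasks[n][0] for n in tasks}
    d = {n: tasks[n][2] for n in tasks}
    for n in order:                  # releases: predecessors first
        for p in preds[n]:
            r[n] = max(r[n], r[p] + tasks[p][1])
    for n in reversed(order):        # deadlines: successors first
        for s in succs[n]:
            d[n] = min(d[n], d[s] - tasks[s][1])
    return r, d

# made-up example: A must precede B
r, d = edf_star({"A": (0, 2, 5), "B": (0, 1, 6)}, [("A", "B")])
print(r, d)  # {'A': 0, 'B': 2} {'A': 5, 'B': 6}
```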
Use EDF* to solve the following problem
solve for r* and d*
Solve the following Problem with EDF*
What does LDF stand for and what are its properties? What tasks can it schedule?
LDF: Latest deadline first
-> minimizes the maximum lateness
-> non preemptive
-> for dependent aperiodic tasks with concurrent arrival times
-> J_f first
-> J_e second
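The LDF selection rule can be sketched as follows (Python; the deadlines and precedence edges are invented). The schedule is built from the back: the task placed first in the tail executes last.

```python
def ldf(deadlines, edges):
    """LDF sketch: repeatedly pick, among the tasks whose successors have
    all been placed, the one with the LATEST deadline; it executes last.
    deadlines: name -> d_i; edges: (pred, succ) pairs."""
    succs = {n: set() for n in deadlines}
    for u, v in edges:
        succs[u].add(v)
    remaining, tail = set(deadlines), []
    while remaining:
        eligible = [n for n in remaining if succs[n] <= set(tail)]
        n = max(eligible, key=lambda n: deadlines[n])  # latest deadline last
        tail.append(n)
        remaining.remove(n)
    return tail[::-1]  # reverse to obtain the execution order

# made-up example: A precedes both B and C
print(ldf({"A": 2, "B": 5, "C": 4}, [("A", "B"), ("A", "C")]))  # ['A', 'C', 'B']
```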
Solve the Problem with LDF
What does RM stand for and what are its properties? What tasks can it schedule?
RMS: Rate monotonic
general assumptions:
-> tasks are released as soon as they arrive
-> no task can suspend itself
-> all tasks are independent
-> preemptive
-> static priority assignment: tasks with shorter periods (higher request rates) have higher priority
-> deadlines equal the period
RM is optimal among all fixed-priority assignments, meaning that no other fixed-priority algorithm can schedule a task set that RM cannot.
When is a set of periodic tasks schedulable with RM?
A set of n periodic tasks is schedulable with RM if the following holds:
U = Σ C_i/T_i ≤ n * (2^(1/n) - 1), summing over i = 1 … n
n … number of tasks
The term on the left is called U and denotes the processor utilization factor. The condition is not necessary, but sufficient.
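The test is easy to evaluate; a small sketch (Python, made-up task set):

```python
def rm_test(tasks):
    """Liu & Layland sufficient (not necessary) test for RM.
    tasks: list of (C_i, T_i) with D_i = T_i."""
    n = len(tasks)
    U = sum(C / T for C, T in tasks)       # processor utilization factor
    bound = n * (2 ** (1 / n) - 1)         # n(2^(1/n) - 1)
    return U, bound, U <= bound

# made-up task set: U = 0.5, bound for n = 2 is ~0.828
U, bound, ok = rm_test([(1, 4), (2, 8)])
print(round(U, 3), round(bound, 3), ok)  # 0.5 0.828 True
```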
Use RM on the following example
Which of these algorithms is/are optimal for scheduling aperiodic tasks with precedence relations?
-> Earliest Deadline Due (EDD)
-> Latest Deadline Due (LDD)
-> Rate Monotonic (RM)
-> Modified Earliest Deadline First (EDF*)
-> Deadline monotonic (DM)
-> Latest Deadline First (LDF)
What does DMS stand for and what are its properties? What tasks can it schedule?
DMS: Deadline monotonic schedule
-> deadlines may differ from period
-> tasks with smaller deadlines have higher priority
What is a sufficient condition for a set of periodic tasks to be schedulable with DM?
A sufficient (but not necessary) condition is obtained by replacing the period with the deadline in the utilization bound:
Σ C_i/D_i ≤ n * (2^(1/n) - 1)
Can EDF be used for periodic tasks?
Yes, EDF can schedule periodic tasks, if the deadline D_i and the period T_i are equal and the tasks are independent.
Is the EDF algorithm still optimal if T_i ≠ D_i? What has to be done to ensure schedulability?
No, to ensure schedulability we need to analyse the processor demand.
What does LSTS stand for and what are its properties? What tasks can it schedule?
LSTS: Least slack time schedule
-> “least laxity first”
-> dynamic
-> for periodic and aperiodic tasks
-> schedules any feasible task set on a single processor, if U <= 1
-> can perform better than EDF on multicore processors; on a single core, EDF outperforms LST
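The selection rule can be sketched as follows (Python; the snapshot values are invented):

```python
def lst_pick(ready, t):
    """Least-slack-time selection at time t.
    ready: name -> (remaining C_i, absolute deadline d_i).
    Slack (laxity) = d_i - t - remaining C_i; least slack runs next."""
    return min(ready, key=lambda n: ready[n][1] - t - ready[n][0])

# made-up snapshot at t = 2:
# J1: 3 units left, deadline 10 -> slack 5; J2: 1 unit left, deadline 4 -> slack 1
print(lst_pick({"J1": (3, 10), "J2": (1, 4)}, t=2))  # J2
```

Because the slack changes as time passes, the priorities are dynamic: the selection has to be re-evaluated at every scheduling decision.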
What does immediate service in scheduling mixed task sets mean? What are the consequences of this routine?
Aperiodic requests are served as soon as they arrive in the system.
Consequences:
-> minimum response time for aperiodic tasks
-> weak guarantees for periodic tasks
What does background scheduling in scheduling mixed task sets mean? What are the consequences of this routine?
Aperiodic tasks are only served if there are no periodic tasks waiting for execution.
Simple realization, however longer response times for aperiodic tasks. Periodic tasks are not affected.
-> only possible if aperiodic tasks don't have tight timing requirements
What is a priority server? How does it work?
A specific periodic task (the server) services aperiodic requests. The server has an assigned period and computation time. Like any other task, the server is then scheduled (not necessarily at lowest priority).
How can priority servers be classified?
-> static priority servers: polling server, deferrable server,
priority exchange, sporadic server, slack stealing, …
-> dynamic priority server: dynamic polling server, dynamic deferrable server, dynamic sporadic server, total bandwidth server, …
How does a polling server work? What are advantages and drawbacks?
It is a static priority server. It becomes active at regular intervals T_i and serves any number of pending aperiodic requests within its capacity. If there are no pending requests at the beginning of its period, the server suspends itself until the beginning of its next period. It does not preserve its capacity until needed.
advantage:
-> simple
-> low overhead
drawbacks:
-> if an aperiodic task arrives shortly after the server has suspended itself, it has to wait until the beginning of the next server period
How does a deferrable server work? What are advantages and drawbacks?
A deferrable server is a static priority server that preserves its capacity if no requests are pending. The capacity is maintained until the end of the period and can be used to service aperiodic tasks at any time, as long as it is not exhausted. At the beginning of each server period, the capacity is replenished.
Advantage
-> shorter response time for aperiodic tasks
Disadvantage
-> may have a negative impact on the schedulability of periodic tasks, since it may preempt them even when there are no pending requests
How does a total bandwidth server work? What are advantages and drawbacks?
This server type has dynamic priority and assigns a deadline to each aperiodic request, such that the overall utilization of the aperiodic load never exceeds a maximum value. Once the deadline is assigned, the request is inserted into the ready queue of the system like any other periodic task.
Advantage:
-> better performance and flexibility compared to static approaches
Disadvantages:
-> may have larger overhead and complexity
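The deadline assignment rule commonly given for the total bandwidth server is d_k = max(r_k, d_{k-1}) + C_k/U_s; a small sketch under that assumption, with invented request values:

```python
def tbs_deadlines(requests, U_s):
    """Total Bandwidth Server deadline assignment:
    d_k = max(r_k, d_{k-1}) + C_k / U_s, with d_0 = 0.
    requests: (r_k, C_k) pairs in arrival order; U_s: server bandwidth."""
    d_prev, deadlines = 0.0, []
    for r, C in requests:
        d_prev = max(r, d_prev) + C / U_s   # charge C_k against bandwidth U_s
        deadlines.append(d_prev)
    return deadlines

# made-up requests, server bandwidth U_s = 0.25
print(tbs_deadlines([(0, 1), (2, 2)], 0.25))  # [4.0, 12.0]
```

Chaining each deadline off max(r_k, d_{k-1}) is what keeps the aperiodic load from ever demanding more than the fraction U_s of the processor.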
Which of these statements is/are incorrect?
-> Both RMS and DMS are preemptive scheduling algorithms using static priorities
-> With RMS, tasks with higher request rates have higher priority
-> A task set can be scheduled with RMS even if the sufficient schedulability test by Liu & Layland fails
-> RMS always fails when DMS fails
-> With DMS, tasks with higher request rates have higher priority
-> The EDF algorithm can also be used in conjunction with static priority assignments
-> The EDF algorithm can also be used with tasks running in a non-preemptive fashion
-> Any task set can be scheduled on a single processor with EDF if the processor utilization is ≤ 1
Which approach works best when having mixed task sets with non-critical periodic tasks & critical aperiodic tasks?
-> Background scheduling
-> Immediate service
-> None of the two
Which approach makes use of a periodic task to serve aperiodic requests?
-> None of the two
Which kind of server is this?
-> EDF deferrable server
What are semaphores?
Semaphores are integer variables with atomic increment, decrement and compare operations (wait, signal). They are used to protect a resource during a critical code section.
What is priority inversion?
A task with lower priority blocks a resource that a task with higher priority needs, for an unbounded amount of time.
Which rules are used in resource access protocols?
-> Access rule: whether to block and when
-> progress rule: how to execute inside a critical section
-> release rule: how to order the pending requests of blocked tasks
What is NPP? What are advantages and drawbacks?
NPP: Non-preemptive protocol
NPP is a resource access protocol that disallows preemption while executing a critical section -> the task finishes its critical section before it can be preempted.
advantage:
-> can prevent deadlocks
drawback:
-> can cause unnecessary blocking and bad processor utilization
What is IPC? What are advantages and drawbacks?
IPC: immediate priority ceiling
IPC is a resource access protocol that assigns a ceiling priority to each resource, equal to the highest priority among the tasks that use it. A task entering a critical section immediately raises its priority to the resource's ceiling. The idea is to block tasks that might need the resource in the future.
What is PIP? What are advantages and drawbacks?
PIP: priority inheritance protocol
It is a resource access protocol that allows a task to inherit the priority of another task that it blocks. The idea is to increase the priority of a task that holds a resource requested by a higher-priority task, so that it can finish its critical section faster and release the resource sooner.
advantages:
-> can bound priority inversion
-> not pessimistic
drawbacks:
-> complex to implement
-> does not prevent deadlocks
What is direct blocking and what is push-through blocking?
-> Direct blocking: a higher priority job tries to acquire a resource held by a lower priority one
-> push-through blocking: medium priority job is blocked by a lower priority job that inherited a higher priority from a job it directly blocks
What could cause a timing anomaly? Does upgrading the hardware and software always improve the performance?
-> small changes can already have big impacts and even make timing worse!
One cannot:
-> simply upgrade the hardware to a faster one
-> simply add processors to get more computing power
-> simply optimize the code of tasks (even if it results in faster computing times)
-> simply remove precedence constraints
What is Dhall’s Effect?
EDF is no longer optimal for multicore processor scheduling
-> Optimality for single-core processors does not mean optimality for multicore!
Which of these statements about the Priority Inheritance Protocol is/are incorrect?
-> It bounds priority inversion.
-> It does not avoid deadlocks.
-> It is prone to chained blocking.
-> It disables preemption when executing inside a critical section.
-> When using a resource R, a task executes with the highest priority of the tasks blocked on R.
-> It is more complex to implement than the non-preemptive protocol.
Which of the following strategies may be employed to effectively overcome priority inversion?
-> Abandoning the notion of priorities altogether.
-> Having only two priority levels.
-> Allowing for temporarily raising the priority of low-priority tasks.
-> Using preemptive algorithms based on strict priorities only.
-> Disabling preemption when a task enters its critical section.
-> Making use of Earliest Deadline First.
-> Allowing for temporarily decreasing the priority of high-priority tasks
What is the concept of migration in multicore scheduling?
Migration means that a task preempted on one core may resume execution on a different core.
What are partitioned, partial and global scheduling? Which of these allow migration?
Partitioned scheduling:
-> No migration. Tasks are statically allocated to processors and never migrate
-> At runtime, a task can execute only on the assigned processor
Partial scheduling:
-> Tasks can only perform a limited number of migrations or migrate only on a subset of processors.
-> Most tasks are statically allocated to fixed processors
-> Remaining tasks are split into chunks which are allocated to different processors
Global scheduling:
-> Tasks are dynamically allocated to processors and can migrate at any time on any processor
-> each task can execute on any available processor at runtime
What is the bin-packing algorithm? Where is it used?
Bin packing is an optimization problem in which objects of different sizes are put into a finite number of bins in such a way that the minimal number of bins is used.
It can be used to assign tasks to processors, such that the minimal number of cores is used (in other words, maximize the utilization of the processors)
Explain Next fit (NF), First fit (FF), Best fit (BF) and Worst fit (WF) bin-packing heuristics.
NF: Place each item in the same bin as the last item, if it does not fit, start a new bin
FF: Place each item in the first bin that can contain it
BF: Place each item in the used bin with the smallest empty space, otherwise start a new bin
WF: Place each item in the used bin with the largest empty space, otherwise start a new bin
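All four heuristics fit in one small sketch (Python; item sizes and bin capacity are made up):

```python
def pack(items, capacity, strategy):
    """Online bin-packing heuristics NF/FF/BF/WF.
    items: sizes in arrival order; returns the bins as lists of sizes."""
    bins = []
    free = lambda b: capacity - sum(b)
    for x in items:
        if strategy == "NF":                  # only look at the last bin
            if bins and free(bins[-1]) >= x:
                bins[-1].append(x)
            else:
                bins.append([x])
            continue
        fitting = [b for b in bins if free(b) >= x]
        if not fitting:
            bins.append([x])                  # no bin fits: open a new one
        elif strategy == "FF":
            fitting[0].append(x)              # first bin that fits
        elif strategy == "BF":
            min(fitting, key=free).append(x)  # smallest remaining space
        elif strategy == "WF":
            max(fitting, key=free).append(x)  # largest remaining space
    return bins

items = [5, 3, 6, 2]                          # made-up sizes, capacity 10
print(pack(items, 10, "FF"))  # [[5, 3, 2], [6]]
print(pack(items, 10, "WF"))  # [[5, 3], [6, 2]]
```

For task allocation, the "sizes" are task utilizations and the capacity is 1 per processor.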
How do NF, FF, BF and WF generally perform?
-> NF: poor performance (does not utilize empty space in previous bins)
-> FF: better performance (exploits the empty space available in all used bins)
-> BF: tends to fill the used bins as much as possible
-> WF: tends to balance the load among the used bins
What is FFD? When can it be used?
FFD: First fit decreasing
-> all item sizes must be known offline
-> if they are all known offline, sort them in decreasing order and use FF
-> not necessarily optimal, but rather good balance with respect to efficiency
Is a set φ of tasks schedulable by partitioned EDF on m preemptive processors iff U(φ) ≤ m?
No. U(φ) ≤ m is necessary, but not sufficient: there are task sets with U(φ) ≤ m that cannot be partitioned such that every processor has a utilization ≤ 1.
Example partitioned EDF:
What are advantages and disadvantage of partitioned scheduling?
Advantages:
-> simplicity of the implementation
-> no cost related to migration
-> direct reuse of uniprocessor schedulability tests
-> Isolation between processors in case of overload
Disadvantages:
-> Utilization bounds such as U <= (m+1)/2 are rather low
-> Task allocation is NP-hard
-> Rigidity: suited to static configurations only
What problem occurs with semi-partitioned (partial) scheduling?
Split-up tasks produce sub-tasks, which are not independent! -> they cannot be executed in parallel!
-> Each task may execute on at most one processor at a given time
-> Tasks have a precedence constraint
How do global strategies of RM/EDF/DM work? Are they optimal?
-> Priorities are assigned like in the single-core processor case.
-> the m higher priority tasks are executed on the m processors
-> no processor is ever idle when a task is ready to execute
-> Neither of the two strategies is optimal -> they also suffer from Dhall’s effect -> some tasks may not be schedulable, even though most of the tasks cause a very low processor utilization
-> Any global schedule using task-/job-static priorities is not optimal!
Is the global-LSTS schedule optimal?
No, it uses job-level dynamic priorities and is still not optimal.
Is it at all possible to achieve an optimal scheduling, such that a set of tasks φ is schedulable on m processors with U(φ)≤m?
Yes it is! It is called the p-fair algorithm.
How and when does the p-fair algorithm work?
When:
-> The p-fair algorithm only works for implicit deadlines! -> no optimal algorithm is known for constrained or arbitrary deadline systems.
How:
-> It divides the execution of jobs into small slots of equal length and assigns them to processors proportionally to the tasks' utilizations ("proportional fairness")
-> although it is optimal, it is utterly impractical as it causes many preemptions and migrations
What can be done to cope with “Dhall’s effect” with global EDF/RM?
Introduce utilization thresholds ε!
-> priority assignment depends on a utilization threshold ε
-> If U_i > ε, then t_i is assigned maximum priority (∞)
What are advantages and drawbacks of global scheduling?
Advantages:
-> suited for dynamic configurations
-> optimal schedulers exist, i.e., it dominates other scheduling policies (if unconstrained migration + dynamic priorities)
-> Automatic load balancing over all processors (i.e., it balances the overloads/underloads spread over all processors)
Drawbacks:
-> Large overhead due to migrations and context switches
-> simple algorithms work poorly for meeting hard deadlines (e.g global RM or global EDF)
-> It is hard to identify the worst-case scenario (critical instant)
Which of these is an easy fix to Dhall’s effect?
-> The use of Global-RM instead of Global-EDF scheduling.
-> Assigning a lower priority to tasks with a zero laxity.
-> Computing the schedule using t=0 as critical instant.
-> Assigning a higher priority to tasks with high utilization.
-> There is no solution to Dhall’s effect.
Which of these statements about global multiprocessor scheduling are NOT correct?
-> Any global scheduler using task- or job-static priorities is not optimal
-> Optimal algorithms allowing to achieve an utilization U≤m exist
-> It suffers high migration costs
-> It incurs little overhead
-> It automatically balances the load among processors
Which of these heuristics solutions for bin packing tends to balance the load among the used bins?
-> Next Fit (NF)
-> First Fit (FF)
-> Best Fit (BF)
-> Worst Fit (WF)
-> None of the choices are correct
Which of these is NOT a key advantage of partitioned scheduling?
-> The ability to keep local schedulers independent
-> The absence of costs related to migration
-> Reuse of schedulability tests from uniprocessor systems
-> The simplicity of allocating tasks to the various processors
-> The isolation between processors in case of overload
What is a key challenge specific to semi-partitioned scheduling?
-> Dhall’s effect
-> The need for clairvoyance to predict the next task to be executed
-> The need for several task queues on every processor
-> Keeping track of precedence constraints between sub-tasks allocated to different processors
Which multiprocessor scheduling technique does not allow any task migration?
-> Rate Monotonic scheduling
-> Partitioned scheduling
-> Deadline monotonic scheduling
-> Global scheduling
-> Least slack time scheduling
-> Semi-partitioned scheduling
What metrics are defined to estimate the execution time of a task? What is included in the execution time?
BCET: Best case execution time
ACET: Average-case execution time
WCET: Worst case execution time
The execution time includes processing time, time to communicate data with e.g. actuators and sensors or any operation on shared resources.
Is it sufficient to know the ACET for applications with hard real time constraints?
No, for hard real time constraints the WCET has to be known.
For firm or non-real time constraints the ACET would be sufficient.
Why is it important to know the WCET?
The WCET is important in the analysis and design of real-time embedded systems.
-> Needed for correctness -> e.g. ensuring deadlines, performing schedulability analyses or acceptance tests, assessing resource needs when planning the system's design
Estimating timing bounds is also crucial to ensure timing predictability! -> do not underestimate the WCET or overestimate the BCET!
Is it possible to measure the WCET with external measurement tools?
-> Extensive testing/measurement is a common practice
BUT
-> Execution time often depends on program inputs -> measuring all different inputs may be intractable
-> Are really all possible execution scenarios covered? (exception handling, invalid inputs, …)
-> Execution time may depend on the context (internal processor, memory, operating system state)
-> multicores/hyperthreading
Therefore, the disadvantages are as follows:
-> Does not guarantee an upper bound to all executions, unless all initial system states / possible input states are measured
-> exhaustive execution is typically not possible
What about modern hardware and its role in computing the WCET?
It makes it harder to compute the WCET due to
-> The use of caches, pipelines, branch prediction and speculation techniques -> large time-varying internal state! -> the span between best and worst case may be several hundred cycles!
-> power management / environmental effects also affect the processor's speed
What is the goal of control flow analysis?
This step aims to derive a control flow graph (CFG) of a program, which represents all possible paths that can be taken during execution. The goal is to find the longest path through the CFG and to determine loop bounds. Moreover, this step tries to find feasible and infeasible paths.
What is low level analysis?
This step aims to determine bounds on the execution times for each basic block and each edge in the CFG. It accounts for hardware effects such as caches, pipelines, branch predictions, etc.
This step requires a detailed model of the target hardware and its microarchitecture, as well as techniques to analyze data, resource and control hazards etc.
How is the WCET calculated? Explain path-based, constraint-based and structure-based approaches.
Path-based: These methods search for the overall path with the longest execution time in the CFG, but they may suffer from exponential complexity due to the number of branch pairs.
Constraint-based: These methods transform the CFG structure into a set of integer linear equations which can be solved by optimization techniques (integer linear programming).
Structure-based: These methods traverse the syntax tree of the program and apply rules to compute the timing of each syntactic unit, but they may not be able to express every control flow.
What is IPET?
It is a technique to calculate the WCET.
1.) determine the basic blocks of a program
2.) determine the WCET C_i for each block
3.) calculate the WCET by maximizing
WCET = max Σ x_i · C_i, summing over i = 1 … N
N … number of basic blocks, x_i … number of times block i is executed
-> x_i is determined by structural constraints given by the program structure or additional constraints provided by the programmer based on knowledge of the program context
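A toy IPET instance, assuming a CFG consisting of a bounded loop around an if/else (all block costs are invented; a real analysis would hand the constraints to an ILP solver, but here the feasible flows are few enough to enumerate):

```python
def ipet_wcet(loop_bound, C_header, C_then, C_else):
    """Toy IPET: maximize C_h*x_h + C_t*x_t + C_e*x_e subject to the
    structural constraints x_h = loop_bound and x_t + x_e = x_h."""
    x_h = loop_bound
    best = 0
    for x_t in range(x_h + 1):        # enumerate feasible branch counts
        x_e = x_h - x_t
        best = max(best, C_header * x_h + C_then * x_t + C_else * x_e)
    return best

# made-up block costs: header 2 cycles, then-branch 5, else-branch 3, 10 iterations
print(ipet_wcet(10, 2, 5, 3))  # 70 (worst case takes the then-branch every time)
```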
Which of these keywords does NOT link to the Implicit Path Enumeration Technique (IPET)?
-> Integer linear programming
-> Structural constraints
-> Rule out infeasible paths
-> NP-hard
-> Bottom-up traversal of syntax tree
-> Program value analysis
Which of these techniques is/are NOT used when carrying out a low-level analysis?
-> Dependence analysis of data/control hazards
-> Control flow graph analysis
-> Resource reservation tables
-> Must/May cache analysis
-> Implicit path enumeration
What are typical goals of program path analysis?
-> Generate bounds on the execution time on a given platform
-> Account for hardware effects based on details of architecture
-> Ensure that loops can be taken an arbitrary number of times
-> Derive the control flow graph of a program
-> Studying by how much the use of a pipeline reduces the computation time
-> Identifying (in)feasible paths
Which of these are NOT drawbacks of WCET estimation obtained through measurements with external hardware?
-> They introduce additional costs
-> The execution time depends on program inputs and context
-> They are intrusive w.r.t. the software task to be tested
-> They do not guarantee an upper bound to all execution times
-> It is typically not possible to carry out an exhaustive testing
Mark the wrong answer(s): "A timing bound for a task's worst-case execution time should be …"
-> exact
-> loose
-> derivable in a cost-effective way
-> safe
-> optimistic
-> tight