Fixed Priority Vs. Dynamic Priority Scheduling Algorithms in a Hard Real-Time environment

By John DeHart

      In my final year at West Virginia State University, my senior seminar teacher instructed us to create a program that demonstrated the benefits of using a dynamic scheduling algorithm over a fixed priority algorithm. He gave us a paper to read (written back in 1973), then sent us out to write the program like a mother bird pushing her babies out of a tree. I had a hard time understanding the paper at first, mostly because I had very little experience with real-time systems and the paper is incredibly dry. It was also pretty difficult to find resources on the paper (despite it being a defining paper in the field), so I decided to make the program I wrote public, as well as to write this article to help anyone in a similar situation. This post will (hopefully) explain the algorithms as described in Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment by Dr. C. L. Liu and James W. Layland.

What is a Real-Time System?

      First, I’ll go ahead and give a very brief overview of what a real-time system is, as well as go over some important terms used in the paper. A real-time system is a system that works in real-time. Sounds like I’m being sarcastic, but that’s literally it: the system has to respond to events as they happen, within strict time limits. A good example of a real-time system is the fly-by-wire system in an airplane. The pilot doesn’t have any direct control over the mechanical parts of the plane. Instead, he moves the yoke, which tells a computer what parts to move in real-time.

      Fly-by-wire is also a good example of a hard real-time system. In a hard real-time system, a task missing its deadline has catastrophic effects (the tasks, in this case, being processing the pilot’s input and controlling the mechanical parts of the plane).

      In contrast, a soft real-time environment is one where a deadline or two can be missed without something bad happening to the system. Or, as the paper puts it: “where a statistical distribution of response times is acceptable.” The algorithms we’re discussing exist in a hard real-time environment. This is an important distinction to make, because it means no deadline can be missed when considering these algorithms.

Terms to Understand:

      Preemptive means that one task can interrupt another if it has a higher priority. All three algorithms in the paper are preemptive. If they were not preemptive, the first task would have to be completed before the algorithm could start on the second. A Gantt diagram just shows how the algorithm schedules the tasks. Time is shown in units at the bottom of the graph, and the green sections represent a task being worked on during that time unit. Purple lines represent a task arriving again, and red lines represent that a task has arrived again without its previous request being completed.

      A critical instant for a task occurs when it arrives at the same time as all of the higher priority tasks. A critical time zone starts at a critical instant: it is the time between when the task arrives and when it is completed. Request periods are how often the task must be completed. For example, if a task has a request period of 30, the task would need to be completed by time unit 30, 60, 90, etc. C represents task length (how long it takes to complete that task) and T represents period length.
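      To make the notation a bit more concrete, here’s a minimal sketch (in JavaScript, the same language as my demo program) of how a task set could be represented. The property names and the values are just my own choices for illustration:

// A task is just a computation time C and a request period T.
// Hypothetical example set: two tasks with periods 41 and 62.
const tasks = [
  { name: "task1", C: 20, T: 41 }, // takes 20 units of work, must finish every 41 units
  { name: "task2", C: 10, T: 62 }, // takes 10 units of work, must finish every 62 units
];
// With m tasks in the set, the subscript i in the paper refers to tasks[i - 1] here.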

      If you didn’t understand something in that section, don’t worry about it. I’ll go over it with context here in a second. If you still don’t understand it after you’ve finished reading, you can ask down in the comments.

The Fixed Priority Algorithm:

      Now that the background is out of the way, we can start talking about how the algorithms schedule tasks. The fixed priority rate monotonic algorithm schedules tasks based on their period length. If a task needs to be completed every 41 time units, it has a higher priority than a task that has to be completed every 62. Of course, the algorithm is also preemptive and can interrupt other tasks. If the 41-unit task arrives again before the 62-unit task is finished, the algorithm pauses the 62 until the 41 is finished. Like so:

Gantt diagram of the fixed priority rate monotonic algorithm.

The first task arrives a second time at the 41st time unit. However, the third task is not yet completed. Because the first task has a shorter period, it has a higher priority and must go first under this algorithm. The third task is then allowed to continue after the first is finished. Don’t worry about the bound and the utilization yet. I’ll go over those next. Just make sure you understand this first.
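If you’d like to see how the priorities fall out of the periods, here’s a small sketch (my own illustration, not code from the paper): sort the task set by T and you have the rate monotonic priority order, shortest period first.

// Rate monotonic priority assignment: shorter request period = higher priority.
// Returns the tasks ordered from highest priority to lowest.
function rateMonotonicOrder(tasks) {
  return [...tasks].sort((a, b) => a.T - b.T);
}

// The 41-unit task outranks the 62-unit task, so it can preempt it.
const order = rateMonotonicOrder([
  { name: "A", C: 10, T: 62 },
  { name: "B", C: 20, T: 41 },
]);
console.log(order.map(t => t.name)); // ["B", "A"]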

How do you determine whether the fixed priority algorithm is feasible?

      So, you might be wondering, “Is there a way to tell if this algorithm will work for a task set?” Yes, there is. First, let’s define some variables. T represents a task’s request period. If a task must be completed every 20 time units, T = 20. Multiple tasks are differentiated with a subscript. For example, T1 may be equal to 20, T2 = 10, and so on, up to Tm (the subscript i refers to the i-th task in the set).

      The other variable you need to understand to determine feasibility is C. C is the task’s length: how long the task takes to run. For example, if C = 30, the task needs 30 time units of processor time each period. If a task cannot be completed by the end of its period because of conflicts with other tasks, the algorithm is not feasible for that set. C is also differentiated using subscripts. I’ll also take the opportunity here to mention that, while i is the index of a single task in the subscripts, m represents the total number of tasks. So if there are 3 tasks, m = 3 for that set.

Calculating the Utilization Bound:

      Now we can finally get to the math. To calculate whether a task set is guaranteed to be feasible with this algorithm, we must first calculate the utilization bound (represented with U). The utilization bound is the highest percentage of processor time the task set can use and still be guaranteed feasible. I’ll explain a little more after we get the math out of the way.

      The utilization bound is calculated with the formula:

U = m(2^(1/m) - 1)

Not that tough. As an example, we’ll say that there are 4 tasks in a set. That would be U = 4(2^(1/4) - 1). The utilization bound would then be 75.7%. If there were 3 tasks in a set, the bound would be about 78%. Note that the fewer tasks there are, the higher the bound. This is because fewer tasks have a better chance of all completing on time. The utilization bound only changes based on how many tasks there are. Task length and period length will not affect U.
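If you’d rather let the computer do the arithmetic, here’s the bound formula as a tiny sketch (the function name is my own):

// Utilization bound for m tasks: U = m * (2^(1/m) - 1)
function utilizationBound(m) {
  return m * (Math.pow(2, 1 / m) - 1);
}

console.log(utilizationBound(4)); // ~0.757, i.e. 75.7%
console.log(utilizationBound(3)); // ~0.780, i.e. about 78%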

Calculating Processor Utilization:

      Calculating processor utilization is slightly more difficult and involves summations. The formula is as follows:

U = ∑ (i = 1 to m) (Ci / Ti)

Let’s say we have a task set: C1=6, C2=1, and C3=26. T1=24, T2=50, and T3=60. To complete the formula, we’d say:

(C1 / T1) + (C2 / T2) + (C3 / T3)

Which then goes to:

(6 / 24) + (1 / 50) + (26 / 60) = 70.3%

Because the bound for a set of m = 3 tasks is about 78% and 70.3% is below it, the task set is feasible under this algorithm.
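Here’s the same calculation as a sketch, using the example task set above (the function names are mine, and the bound formula is repeated from the earlier snippet):

// Processor utilization: the sum of Ci / Ti over all m tasks.
function processorUtilization(tasks) {
  return tasks.reduce((sum, t) => sum + t.C / t.T, 0);
}

// Same bound formula as before.
const utilizationBound = (m) => m * (Math.pow(2, 1 / m) - 1);

const set = [
  { C: 6,  T: 24 },
  { C: 1,  T: 50 },
  { C: 26, T: 60 },
];

const u = processorUtilization(set);            // ~0.703, i.e. 70.3%
console.log(u <= utilizationBound(set.length)); // true: under the ~78% bound, so feasible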

What if U is Larger Than the Bound?

      That’s a good question. If U is larger than the bound, the task set may still be feasible. The only way to find out at that point is to create a Gantt diagram. There are plenty of other resources on how to do that. This program I wrote to demonstrate these algorithms will randomly generate task sets and draw the diagrams, in case you want to see some examples. Just know that you can’t call the task set feasible until the Gantt diagram has been drawn from a critical instant (all the tasks arriving together) out to the least common multiple of the request periods (T); past that point the schedule simply repeats. And of course, if the utilization is above 100%, the set can never be feasible.
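      If you’d rather not draw the diagram by hand, here’s a rough sketch of the idea behind that check. It’s a simplified version of what my program does, not the program itself: simulate the fixed priority algorithm one time unit at a time, starting from a critical instant (everything arriving at time 0) and running out to the least common multiple of the periods, and report whether every deadline is met.

// Least common multiple of the request periods: the point where the schedule repeats.
function lcmAll(nums) {
  const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
  return nums.reduce((a, b) => (a * b) / gcd(a, b));
}

// Simulate rate monotonic scheduling one time unit at a time.
// Returns true if every request meets its deadline up to the LCM of the periods.
function rateMonotonicFeasible(tasks) {
  const horizon = lcmAll(tasks.map(t => t.T));
  const remaining = tasks.map(t => t.C); // work left on each task's current request

  for (let time = 0; time < horizon; time++) {
    // A new request for each task arrives at every multiple of its period T.
    for (let i = 0; i < tasks.length; i++) {
      if (time > 0 && time % tasks[i].T === 0) {
        if (remaining[i] > 0) return false; // the previous request missed its deadline
        remaining[i] = tasks[i].C;
      }
    }
    // Spend this time unit on the highest priority (shortest period) task with work left.
    let run = -1;
    for (let i = 0; i < tasks.length; i++) {
      if (remaining[i] > 0 && (run === -1 || tasks[i].T < tasks[run].T)) run = i;
    }
    if (run !== -1) remaining[run] -= 1;
  }
  // Anything still unfinished misses its deadline right at the horizon.
  return remaining.every(r => r === 0);
}

// Example: utilization is 100%, well over the ~78% bound for m = 3,
// but the periods divide each other evenly so every deadline is still met.
console.log(rateMonotonicFeasible([
  { C: 1, T: 2 },
  { C: 1, T: 4 },
  { C: 1, T: 4 },
])); // true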

The Dynamic, Deadline Driven Algorithm:

      Now we’ve finally made it to the star of the show: the Dynamic Algorithm. This algorithm is the reason the paper was written. It works by deciding, at every time unit, which task to run: it looks at the unfinished tasks and picks the one with the closest deadline. Of course, this algorithm is also preemptive. Here’s an example of a task set being completed with the dynamic algorithm:

The fixed priority algorithm compared to the dynamic algorithm. This task set is particularly interesting because it demonstrates the difference in efficiency between the two algorithms.

Note that the fixed priority algorithm preempts the last task in favor of the first at time unit 31. This is because the first task has a shorter period and therefore a higher priority under that algorithm. Because of this, though, the fixed priority algorithm is not feasible for this task set. The deadline driven algorithm, however, completes the final task before moving on, because that task has the earliest uncompleted deadline at that point.
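To make “pick the closest deadline” concrete, here’s a sketch of the selection step the dynamic algorithm performs at every time unit (my own illustration, with made-up numbers, not code from the paper):

// Deadline driven selection: among the tasks with work left, run the one
// whose current deadline is closest. Each entry carries the work remaining
// on its current request and the absolute time of that request's deadline.
function pickEarliestDeadline(ready) {
  const withWork = ready.filter(t => t.remaining > 0);
  if (withWork.length === 0) return null; // the processor idles this time unit
  return withWork.reduce((a, b) => (a.deadline <= b.deadline ? a : b));
}

// Hypothetical moment: the second task's deadline (60) is closer than the
// first task's (82), so it keeps the processor even though both have work left.
console.log(pickEarliestDeadline([
  { name: "first",  remaining: 10, deadline: 82 },
  { name: "second", remaining: 5,  deadline: 60 },
]).name); // "second"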

When is the Deadline Driven Algorithm Feasible?

      The deadline driven algorithm is feasible as long as the processor utilization (the same ∑ Ci / Ti from before) is no greater than 100%. That’s it. No math for this one, thank God. Why not, though? Well, it’s because this algorithm can use every single time unit of the processor, whereas many time units were wasted under the fixed priority algorithm.

The Mixed Algorithm:

      Liu and Layland’s paper also outlined a “mixed” scheduling algorithm. This algorithm is simply a mixture of the two previous algorithms. Computer Scientists are creative with names, huh? I’ll give a very basic run-down on how this algorithm works, since it isn’t particularly relevant today. They split the task set into two groups. The first group contains the tasks with the shortest periods; the second group contains the tasks with the longest periods. The first group is scheduled with the fixed priority algorithm, while the second group is scheduled with the deadline driven algorithm, and the second group can only run when the first group is not completing a task. This is very important: the processor cannot complete two tasks at the same time.
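      As a rough sketch of how the split might look (my own illustration of the idea, not the paper’s exact construction): sort the tasks by period, give the k shortest-period tasks to the fixed priority scheduler, and let the deadline driven scheduler use whatever time units the first group leaves idle.

// Split a task set for the mixed algorithm: the k tasks with the shortest
// request periods are scheduled with the fixed priority algorithm, and the
// rest are scheduled with the deadline driven algorithm, which only runs
// when the fixed priority group has nothing to do.
function splitForMixed(tasks, k) {
  const byPeriod = [...tasks].sort((a, b) => a.T - b.T);
  return {
    fixedPriorityGroup: byPeriod.slice(0, k),
    deadlineDrivenGroup: byPeriod.slice(k),
  };
}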

Why Talk About This Algorithm?

      The hardware back in the ’70s was much different from the hardware today. Liu and Layland realized that the dynamic algorithm they created could not be run on the computers of the time. They also realized that it would be incredibly expensive to replace the hardware just to implement the dynamic algorithm, so they came up with the mixed scheduling algorithm. It could be implemented on the hardware of the time with only a minor software update, rather than by replacing entire machines. Of course, the algorithm was a product of its time, meaning that it’s not really relevant today. Although it is still an interesting part of history, in my opinion.

Conclusion

      In this article, we outlined how the fixed priority rate monotonic, dynamic deadline driven, and mixed algorithms work, as described in Dr. Liu and Layland’s groundbreaking paper. We showed that the dynamic algorithm is more efficient than the fixed priority algorithm. I also showed how to calculate the utilization bound and the processor utilization, and how to tell when each algorithm is feasible. Hopefully this made sense to you; feel free to ask questions in the comments.

Note:

      The paper is much more extensive than I was. I know that this article was long, but the paper goes much more in-depth. The authors provide mathematical proofs on why these things are true, whereas I just presented you with the basic facts from the paper.

I highly suggest you read the paper HERE.

You can also see randomly generated Gantt Diagrams HERE. The program is run in the browser and hosted on this site, so no need to download anything. Seeing some examples on how the algorithms are scheduled might help you understand how they work.

If you’re interested in seeing the code for the program, it can be found HERE. The program is written in JavaScript and is run in your browser. No additional software is needed.

Thanks for reading,
-John DeHart
