Implement the project as a user space application using the C language

The objective of this project is to extend project #2, building on your experience developing (and debugging) multi-threaded software and gaining hands-on experience with simple pre-emptive schedulers for multi-processors. As in project #2, you will implement this project as a user space application using the C language.


For this project, you are to extend the simple centralized scheduler into a multi-processor scheduler for an 8-core SMP machine. Recall that MP schedulers use multiple instances of a given scheduler, one per CPU, each of which operates relatively autonomously. In this case, you'll need 8 instances of the scheduler, one per core, each with its own instances of the data structures. Each scheduler instance should run in a separate, distinguishable thread.
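As one way to structure this, the sketch below (assuming POSIX threads; all names such as `scheduler_t` and `start_schedulers` are hypothetical, not required) launches one scheduler thread per core, each carrying its own per-instance state:

```c
#include <pthread.h>
#include <assert.h>

#define NUM_CPUS 8

/* Hypothetical per-instance state: each scheduler thread gets its own ID
 * and, in the full project, its own ready queue. */
typedef struct {
    int cpu_id;
    int started;   /* set by the thread so we can verify it ran */
} scheduler_t;

static scheduler_t schedulers[NUM_CPUS];

/* Entry point for one scheduler instance; the real version would loop,
 * pulling jobs from this CPU's ready queue. */
static void *scheduler_main(void *arg)
{
    scheduler_t *self = arg;
    self->started = 1;   /* placeholder for the scheduling loop */
    return NULL;
}

/* Launch one scheduler thread per core and wait for all of them. */
int start_schedulers(void)
{
    pthread_t tids[NUM_CPUS];
    for (int i = 0; i < NUM_CPUS; i++) {
        schedulers[i].cpu_id = i;
        schedulers[i].started = 0;
        if (pthread_create(&tids[i], NULL, scheduler_main, &schedulers[i]) != 0)
            return -1;
    }
    for (int i = 0; i < NUM_CPUS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```

Passing a distinct `scheduler_t *` to each thread is what makes the instances distinguishable and keeps their data separate.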

Now, unlike Project #2, you will not need 8 threads all accessing a common Ready queue, as you will now have 8 queues, one per scheduler instance on each CPU. As in Project #2, you should continue to use a single I/O queue, to which each CPU scheduler sends jobs when they complete a CPU phase and encounter an I/O phase. Consequently, you'll have:

1) 8 Ready/Run queues – one per CPU.

2) A single I/O/Wait queue for jobs waiting to start an I/O phase.

3) A single queue for jobs that have completed all job phases.
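The queue layout above can be sketched directly in C. This is one possible arrangement, not a required one; the `queue_t` fields and names are placeholders:

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_CPUS 8

/* Hypothetical job node; real fields come from your job structure. */
typedef struct job {
    int id;
    struct job *next;
} job_t;

/* A lockable FIFO queue; the mutex matters once threads share it. */
typedef struct {
    job_t *head, *tail;
    size_t length;
    pthread_mutex_t lock;
} queue_t;

/* The layout described above: one ready queue per CPU,
 * plus one shared I/O queue and one shared finished queue. */
static queue_t ready_queues[NUM_CPUS];
static queue_t io_queue;
static queue_t finished_queue;

void queue_init(queue_t *q)
{
    q->head = q->tail = NULL;
    q->length = 0;
    pthread_mutex_init(&q->lock, NULL);
}

void queues_init_all(void)
{
    for (int i = 0; i < NUM_CPUS; i++)
        queue_init(&ready_queues[i]);
    queue_init(&io_queue);
    queue_init(&finished_queue);
}
```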

To ensure jobs completing an I/O phase are returned to the CPU on which they originally ran, you will need to affine each job, when it is created, to a specific CPU scheduler. One simple way to do so would be to add an identifier field to your job structure that corresponds to a given CPU/scheduler ID. Other mechanisms are possible, and you're free to choose whichever method works best for your project. The idea is that if a job running on a CPU has to do I/O, we want to ensure that job is returned to the same CPU scheduler on which it started.
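The identifier-field approach might look like the sketch below. The struct layout and the `home_cpu` helper are illustrative assumptions, not a prescribed design:

```c
#include <assert.h>

#define NUM_CPUS   8
#define MAX_PHASES 16

typedef enum { PHASE_CPU, PHASE_IO } phase_type_t;

/* One phase of a job: its type and how long it "runs". */
typedef struct {
    phase_type_t type;
    int duration_ms;
} phase_t;

/* Hypothetical job structure: the cpu_id field is the affinity tag
 * that the I/O side uses to route the job back to its home scheduler. */
typedef struct {
    int job_id;
    int cpu_id;          /* which scheduler instance owns this job */
    int current_phase;
    int num_phases;
    phase_t phases[MAX_PHASES];
} job_t;

/* After an I/O phase completes, this selects the ready queue to return to. */
int home_cpu(const job_t *j)
{
    return j->cpu_id;
}
```

When an I/O thread finishes a job's I/O phase, it would enqueue the job on `ready_queues[home_cpu(job)]` rather than picking a queue arbitrarily.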

In terms of the job creation and completion threads – those threads that create new jobs and submit them to the appropriate queues, as well as those that take completed jobs off the finished queue – you have several options. You may either:

1) Extend your existing 4 creation/completion threads to submit jobs across all 8 CPUs. So, any creation thread could submit jobs to any of the CPU scheduler instances.

2) Create 8 creation/completion threads, one affined to each CPU. In other words, start 8 creation/completion threads, each of which submits jobs to a specific CPU's ready queue.
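Under option 1, any creation thread may target any CPU, so you need a policy for choosing the target queue. One simple hypothetical policy is round-robin assignment by job ID:

```c
#include <assert.h>

#define NUM_CPUS 8

/* Spread newly created jobs evenly across the 8 scheduler instances.
 * Job 0 -> CPU 0, job 1 -> CPU 1, ..., job 8 -> CPU 0 again. */
int pick_cpu_round_robin(int job_id)
{
    return job_id % NUM_CPUS;
}
```

Under option 2, no such policy is needed: each creation thread always submits to its own CPU's queue.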

A new requirement for this project is that you must implement a pre-emptible scheduler. You will leverage your existing phase-based job data structures; however, unlike project #2, which used a simplistic FIFO/FCFS scheduler, in this project you must implement a CPU scheduler that supports preemption. Accordingly, you will need to add a timer mechanism, similar to the timer interrupt used in OS schedulers, with a configurable quantum to enable fairness through preemption of jobs during each phase. Because we've abstracted jobs to simple phases with durations, context switching between jobs is relatively straightforward. For example, if a given job is executing and gets preempted (by your timer) but still has remaining CPU time within the phase, that job should be "descheduled" by being sent back to the ready queue. Similarly, if a given job is scheduled on the CPU but completes its CPU phase before the quantum ends, the job should be moved to the appropriate next state (either queued up for I/O or sent to the "finished" queue) and the timer should be reset.
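The preemption decision itself reduces to simple bookkeeping over the phase's remaining time. The sketch below (with a hypothetical `QUANTUM_MS` and a stand-in for the actual timer/sleep) captures the two outcomes described above:

```c
#include <assert.h>

#define QUANTUM_MS 100   /* configurable quantum; value is illustrative */

typedef enum { RESULT_PREEMPTED, RESULT_PHASE_DONE } run_result_t;

typedef struct {
    int remaining_ms;    /* time left in the current CPU phase */
} cpu_phase_t;

/* "Run" the phase for at most one quantum. If time remains afterwards,
 * the job is preempted and should be descheduled back to the ready queue;
 * otherwise the phase is done and the job moves to I/O or the finished
 * queue (and the timer is reset for the next job). */
run_result_t run_one_quantum(cpu_phase_t *p)
{
    int slice = p->remaining_ms < QUANTUM_MS ? p->remaining_ms : QUANTUM_MS;
    p->remaining_ms -= slice;   /* stand-in for actually sleeping/working */
    return p->remaining_ms > 0 ? RESULT_PREEMPTED : RESULT_PHASE_DONE;
}
```

For example, a 250 ms CPU phase under a 100 ms quantum is preempted twice (150 ms, then 50 ms remaining) before its final, shorter slice completes the phase.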

For your pre-emptible CPU scheduler, you may choose to implement either Round-Robin or MFQ. Please note that you have a choice for the I/O scheduler: you may either implement a pre-emptible scheduler (again, RR or MFQ) or schedule I/Os using FIFO/FCFS. There is no further requirement beyond project #2 for the I/O phases.
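If you choose MFQ, one common rule (shown here as a hypothetical sketch, not the required policy) is to demote a job one priority level whenever it uses its full quantum, while jobs that finish a phase early keep their level:

```c
#include <assert.h>

#define NUM_LEVELS 3   /* level 0 is highest priority; count is illustrative */

/* Decide which level a job runs at next, given whether it burned
 * its entire quantum this time around. */
int next_level(int level, int used_full_quantum)
{
    if (used_full_quantum && level < NUM_LEVELS - 1)
        return level + 1;   /* demote CPU-hungry jobs */
    return level;           /* interactive-looking jobs keep their level */
}
```

With Round-Robin, by contrast, a single per-CPU queue suffices and no level bookkeeping is needed.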

You will continue to need to instantiate multiple instances of jobs with different phase types and durations, including different combinations of CPU and I/O. Each phase must have a duration associated with it, so that the local CPU and I/O schedulers know how long each phase runs.
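Constructing such jobs can be as simple as appending typed, timed phases. The helper below is a self-contained illustration; the struct and function names are assumptions:

```c
#include <assert.h>

#define MAX_PHASES 16

typedef enum { PHASE_CPU, PHASE_IO } phase_type_t;

typedef struct {
    phase_type_t type;
    int duration_ms;
} phase_t;

typedef struct {
    int num_phases;
    phase_t phases[MAX_PHASES];
} job_t;

/* Append one phase with its type and duration, so both the CPU and I/O
 * schedulers know exactly how long the phase should run. */
void add_phase(job_t *j, phase_type_t type, int duration_ms)
{
    j->phases[j->num_phases].type = type;
    j->phases[j->num_phases].duration_ms = duration_ms;
    j->num_phases++;
}
```

A CPU-then-I/O-then-CPU job, for instance, is just three `add_phase` calls with alternating types.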

Atomicity requirement – Each phase of each job should be executed atomically and only once. In other words, because there are eight CPU threads and four I/O handling threads, you could have multiple threads attempting to enqueue or dequeue jobs on the queues concurrently. Consequently, you must ensure consistency and atomicity of shared data structures. For example, adding or removing elements from each queue may require locks, semaphores, or other constructs we've discussed for mutual exclusion.
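One standard way to meet this requirement is to guard every queue operation with a mutex, as in this minimal sketch (assuming POSIX threads; names are illustrative):

```c
#include <pthread.h>
#include <stdlib.h>
#include <assert.h>

typedef struct node {
    int job_id;
    struct node *next;
} node_t;

/* Thread-safe FIFO: every enqueue/dequeue holds the queue's mutex, so
 * concurrent CPU and I/O threads cannot corrupt the linked list. */
typedef struct {
    node_t *head, *tail;
    pthread_mutex_t lock;
} queue_t;

void queue_init(queue_t *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
}

void enqueue(queue_t *q, int job_id)
{
    node_t *n = malloc(sizeof *n);
    n->job_id = job_id;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_mutex_unlock(&q->lock);
}

/* Returns the oldest job ID, or -1 if the queue is empty. */
int dequeue(queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    node_t *n = q->head;
    int id = -1;
    if (n) {
        id = n->job_id;
        q->head = n->next;
        if (!q->head) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    free(n);   /* free(NULL) is a no-op */
    return id;
}
```

Holding the lock across the entire head/tail update is what makes each operation atomic; a semaphore-based design would work equally well.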

Output requirements – The threads in your application should print a message when: 1) jobs are started, 2) jobs transition between queues, 3) jobs are executing on CPU threads (including which CPU is operating on the job) and I/O threads, 4) jobs are completed (transitioned to the Finished queue), and 5) each job officially completes (is taken off the Finished queue and freed by the creating thread). Please either redirect the program's output to a file (if you're just using stdout) or write to a file directly.
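Funneling all five event types through one formatting helper keeps the log consistent and makes switching between stdout and a file trivial. The format string below is only a suggested shape:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Build one log line, e.g. "[START] job 5 on CPU 2". The event names
 * ("START", "RUN", "IO", "FINISHED", "FREED") are hypothetical labels
 * for the five required message types. */
int format_event(char *buf, size_t n, const char *event,
                 int job_id, int cpu_id)
{
    return snprintf(buf, n, "[%s] job %d on CPU %d", event, job_id, cpu_id);
}
```

At runtime you would `fprintf` the formatted line to `stdout` (and redirect with `./sched > log.txt`) or to a `FILE *` opened with `fopen`.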