@node Task 1--Threads
@chapter Task 1: Scheduling

In this assignment, we give you a minimally functional thread system.
Your job is to extend the functionality of this system to gain a
better understanding of synchronization problems.

You will be working primarily in the @file{threads} directory for
this assignment. Compilation should be done in the @file{threads} directory.

Before you read the description of this task, you should read all of
the following sections: @ref{Introduction}, @ref{Task 0--Codebase}, @ref{Coding Standards},
@ref{Debugging Tools}, and @ref{Development Tools}. You should at least
skim the material from @ref{PintOS Loading} through @ref{Memory
Allocation}, especially @ref{Synchronization}. To complete this task
you will also need to read @ref{4.4BSD Scheduler}.

You must build task 1 on top of a working task 0 submission
as some of the task 1 tests rely on a non-busy-waiting implementation of @code{timer_sleep()}.

@menu
* Background::
* Development Suggestions::
* Task 1 Requirements::
* Task 1 FAQ::
@end menu

@node Background
@section Background

Now that you've become familiar with PintOS and its thread package, it's time to work on one of the most critical components of an operating system: the scheduler.

Working on the scheduler requires you to have grasped the main concepts of both the threading system and synchronization primitives. If you still feel uncertain about these topics, you are warmly invited to refer back to @ref{Understanding Threads} and @ref{Synchronization} and to carefully read the code in the corresponding source files.

@node Development Suggestions
@section Development Suggestions

In the past, many groups divided the assignment into pieces, then each
group member worked on his or her piece until just before the
deadline, at which time the group reconvened to combine their code and
submit. @strong{This is a bad idea. We do not recommend this
approach.} Groups that do this often find that two changes conflict
with each other, requiring lots of last-minute debugging. Some groups
who have done this have turned in code that did not even compile or
boot, much less pass any tests.

@localgitpolicy{}

You should expect to run into bugs that you simply don't understand
while working on this and subsequent tasks. When you do,
reread the appendix on debugging tools, which is filled with
useful debugging tips that should help you to get back up to speed
(@pxref{Debugging Tools}). Be sure to read the section on backtraces
(@pxref{Backtraces}), which will help you to get the most out of every
kernel panic or assertion failure.

@node Task 1 Requirements
@section Requirements

@menu
* Task 1 Design Document::
* Setting and Inspecting Priorities::
* Priority Scheduling::
* Priority Donation::
* Advanced Scheduler::
@end menu

@node Task 1 Design Document
@subsection Design Document

When you submit your work for task 1, you must also submit a completed copy of
@uref{threads.tmpl, , the task 1 design document}.
You can find a template design document for this task in @file{pintos/doc/threads.tmpl} and also on CATe.
You are free to submit your design document as either a @file{.txt} or @file{.pdf} file.
We recommend that you read the design document template before you start working on the task.
@xref{Task Documentation}, for a sample design document that goes along with a fictitious task.

@node Setting and Inspecting Priorities
@subsection Setting and Inspecting Priorities

Implement the following functions that allow a thread to examine and modify its own priority.
Skeletons for these functions are provided in @file{threads/thread.c}.

@deftypefun int thread_get_priority (void)
Returns the current thread's effective priority.
@end deftypefun

@deftypefun void thread_set_priority (int @var{new_priority})
Sets the current thread's priority to @var{new_priority}.

If the current thread no longer has the highest priority, yields.
@end deftypefun
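
As an illustration only, here is one possible shape for these two functions once priority donation (described below) is in place. It assumes a separate @code{base_priority} field in @code{struct thread} and two helper routines, @code{thread_effective_priority()} and @code{ready_thread_outranks_current()}; none of these exist in the code we provide, and your design may differ.

@example
int
thread_get_priority (void)
@{
  /* Effective priority: the base priority or the highest
     donation, whichever is greater. */
  return thread_effective_priority (thread_current ());
@}

void
thread_set_priority (int new_priority)
@{
  struct thread *cur = thread_current ();
  enum intr_level old_level = intr_disable ();

  cur->base_priority = new_priority;
  cur->priority = thread_effective_priority (cur);

  intr_set_level (old_level);

  /* If some ready thread now outranks us, give up the CPU. */
  if (ready_thread_outranks_current ())
    thread_yield ();
@}
@end example
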
@node Priority Scheduling
@subsection Priority Scheduling

Implement priority scheduling in PintOS.
When a thread with a higher priority than the currently running thread
is added to the ready list, the current thread should immediately
yield the processor to the new thread. Similarly, when threads are
waiting for a lock, semaphore, or condition variable, the highest
priority waiting thread should be awakened first. A thread may raise
or lower its own priority at any time, but lowering its priority such
that it no longer has the highest priority must cause it to
immediately yield the CPU. In both the priority scheduler and the
advanced scheduler you will write later, the running thread should
always be the one with the highest priority.

Thread priorities range from @code{PRI_MIN} (0) to @code{PRI_MAX} (63).
Lower numbers correspond to lower priorities, so that priority 0
is the lowest priority and priority 63 is the highest.
The initial thread priority is passed as an argument to
@func{thread_create}. If there's no reason to choose another
priority, new threads should use @code{PRI_DEFAULT} (31). The @code{PRI_} macros are
defined in @file{threads/thread.h}, and you should not change their
values.
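
One convenient way (though not the only way) to keep the ready list in priority order is to insert threads with an ordering function from the kernel list package (@file{lib/kernel/list.h}). The sketch below is purely illustrative: the name @code{priority_greater} is ours, and if you rely on a sorted list you must also re-sort it (or pick the maximum at pop time) whenever a donation changes a ready thread's priority.

@example
/* Orders threads by effective priority, highest first. */
static bool
priority_greater (const struct list_elem *a,
                  const struct list_elem *b, void *aux UNUSED)
@{
  const struct thread *x = list_entry (a, struct thread, elem);
  const struct thread *y = list_entry (b, struct thread, elem);
  return x->priority > y->priority;
@}

/* For example, when making a thread ready:
     list_insert_ordered (&ready_list, &t->elem,
                          priority_greater, NULL);  */
@end example
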
@node Priority Donation
@subsection Priority Donation

One issue with priority scheduling is ``priority inversion''. Consider
high, medium, and low priority threads @var{H}, @var{M}, and @var{L},
respectively. If @var{H} needs to wait for @var{L} (for instance, for a
lock held by @var{L}), and @var{M} is on the ready list, then @var{H}
will never get the CPU because the low priority thread will not get any
CPU time. A partial fix for this problem is for @var{H} to ``donate''
its priority to @var{L} while @var{L} is holding the lock, then recall
the donation once @var{L} releases (and thus @var{H} acquires) the lock.

Implement priority donation. You will need to account for all the different
situations in which priority donation is required. In particular, be sure to handle:

@itemize

@item @strong{multiple donations}: multiple priorities can be donated to a single thread.
@item @strong{nested donations}: if @var{H} is waiting on
a lock that @var{M} holds and @var{M} is waiting on a lock that @var{L}
holds, then both @var{M} and @var{L} should be boosted to @var{H}'s
priority. If necessary, you may impose a reasonable limit on the depth of
nested priority donation, such as 8 levels.

@end itemize

You must implement priority donation for locks.
You do not need to implement priority donation for the other PintOS synchronization constructs.
However, you do need to implement priority scheduling in all cases.
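
To make the nested case concrete, here is one possible shape for the donation step performed in @func{lock_acquire} before the thread blocks on the lock's semaphore. It is a sketch only: the @code{waiting_on_lock} field is a hypothetical addition to @code{struct thread}, the code assumes it runs with interrupts disabled, and undoing donations in @func{lock_release} (recomputing the holder's priority from any remaining donations) is not shown.

@example
/* Walk the chain of lock holders, boosting each one whose
   priority is below ours, up to 8 levels deep. */
static void
donate_priority (void)
@{
  int priority = thread_current ()->priority;
  struct lock *lock = thread_current ()->waiting_on_lock;
  int depth;

  for (depth = 0; lock != NULL && lock->holder != NULL && depth < 8;
       depth++)
    @{
      if (lock->holder->priority >= priority)
        break;                              /* Chain is high enough. */
      lock->holder->priority = priority;    /* Donate. */
      lock = lock->holder->waiting_on_lock; /* Follow the chain. */
    @}
@}
@end example
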
Finally, you should review your implementations of @code{thread_get_priority} and @code{thread_set_priority}
to make sure that they exhibit the correct behaviour in the presence of donations.
In particular, in the presence of priority donations, @code{thread_get_priority} must return the highest donated priority.

You do not need to provide any interface to allow a thread to directly modify other threads' priorities.
The priority scheduler is also not used in any later task.

@node Advanced Scheduler
@subsection Advanced Scheduler

Implement a multilevel feedback queue scheduler similar to the
4.4@acronym{BSD} scheduler to
reduce the average response time for running jobs on your system.
@xref{4.4BSD Scheduler}, for detailed requirements.

Like the priority scheduler, the advanced scheduler chooses the thread
to run based on priorities. However, the advanced scheduler does not do
priority donation. Thus, we recommend that you have the priority
scheduler working, except possibly for priority donation, before you
start work on the advanced scheduler.

You must write your code to allow us to choose a scheduling algorithm
policy at PintOS startup time. By default, the priority scheduler
must be active, but we must be able to choose the 4.4@acronym{BSD}
scheduler with the @option{-mlfqs} kernel option. Passing this
option sets @code{thread_mlfqs}, declared in @file{threads/thread.h}, to
true when the options are parsed by @func{parse_options}, which happens
early in @func{main}.

When the 4.4@acronym{BSD} scheduler is enabled, threads no longer
directly control their own priorities. The @var{priority} argument to
@func{thread_create} should be ignored, as well as any calls to
@func{thread_set_priority}, and @func{thread_get_priority} should return
the thread's current priority as set by the scheduler.
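
For illustration, the simplest way to honour this is an early-out guard on @code{thread_mlfqs} in the functions from @ref{Setting and Inspecting Priorities}, along these lines:

@example
void
thread_set_priority (int new_priority)
@{
  if (thread_mlfqs)
    return;            /* Priority is managed by the scheduler. */
  /* ... behaviour from the priority scheduler as before ... */
@}
@end example

@func{thread_create} can similarly disregard its @var{priority} argument when @code{thread_mlfqs} is set and let the scheduler compute the initial priority instead.
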
The advanced scheduler is not used in any later task.

@node Task 1 FAQ
@section FAQ

@table @b
@item How much code will I need to write?

Here's a summary of our reference solution, produced by the
@command{diffstat} program. The final row gives total lines inserted
and deleted; a changed line counts as both an insertion and a deletion.

@verbatim
 threads/fixed-point.h |  120 ++++++++++++++++++
 threads/synch.c       |   88 ++++++++++++-
 threads/thread.c      |  196 ++++++++++++++++++++++++++----
 threads/thread.h      |   19 ++
 4 files changed, 397 insertions(+), 26 deletions(-)
@end verbatim

The reference solution represents just one possible solution. Many
other solutions are also possible and many of those differ greatly from
the reference solution. Some excellent solutions may not modify all the
files modified by the reference solution, and some may modify files not
modified by the reference solution.

@file{fixed-point.h} is a new file added by the reference solution.
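
The contents of the reference solution's @file{fixed-point.h} are not reproduced here, but a minimal fixed-point layer might look like the sketch below, assuming the 17.14 format described in @ref{4.4BSD Scheduler} (scaling factor @code{f = 2**14}). The type and macro names are ours; any equivalent abstraction is fine.

@example
#include <stdint.h>

typedef int32_t fixed_t;     /* A 17.14 fixed-point number. */
#define F (1 << 14)          /* Fractional scaling factor.  */

#define INT_TO_FP(n)      ((fixed_t) ((n) * F))
#define FP_TO_INT_ZERO(x) ((x) / F)          /* Round toward zero. */
#define FP_TO_INT_NEAR(x) \
        ((x) >= 0 ? ((x) + F / 2) / F : ((x) - F / 2) / F)
#define FP_ADD(x, y)      ((x) + (y))
#define FP_ADD_INT(x, n)  ((x) + (n) * F)
#define FP_MUL(x, y)      ((fixed_t) (((int64_t) (x)) * (y) / F))
#define FP_DIV(x, y)      ((fixed_t) (((int64_t) (x)) * F / (y)))
@end example
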
@item Do we need a working Task 0 to implement Task 1?

Yes.

@item How do I update the @file{Makefile}s when I add a new source file?

@anchor{Adding Source Files}
To add a @file{.c} file, edit the top-level @file{Makefile.build}.
Add the new file to variable @samp{@var{dir}_SRC}, where
@var{dir} is the directory where you added the file. For this
task, that means you should add it to @code{threads_SRC} or
@code{devices_SRC}. Then run @code{make}. If your new file
doesn't get compiled, run @code{make clean} and then try again.
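
For instance, if you decided to put your fixed-point routines in a (hypothetical) @file{threads/fixed-point.c}, the change to @file{Makefile.build} would be a single line along these lines:

@example
threads_SRC += threads/fixed-point.c    # New file (illustrative only).
@end example
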
When you modify the top-level @file{Makefile.build} and re-run
@command{make}, the modified version should be automatically copied to
@file{threads/build/Makefile}. The converse is not true, so any changes
made directly to @file{threads/build/Makefile} will be lost the next
time you run @code{make clean} from the @file{threads} directory.
Unless your changes are truly temporary, you should prefer to edit
@file{Makefile.build}.

A new @file{.h} file does not require editing the @file{Makefile}s.

@item What does @code{warning: no previous prototype for `@var{func}'} mean?

It means that you defined a non-@code{static} function without
preceding it by a prototype. Because non-@code{static} functions are
intended for use by other @file{.c} files, for safety they should be
prototyped in a header file included before their definition. To fix
the problem, add a prototype in a header file that you include, or, if
the function isn't actually used by other @file{.c} files, make it
@code{static}.

@item What is the interval between timer interrupts?

Timer interrupts occur @code{TIMER_FREQ} times per second. You can
adjust this value by editing @file{devices/timer.h}. The default is
100 Hz.

We don't recommend changing this value, because any changes are likely
to cause many of the tests to fail.

@item How long is a time slice?

There are @code{TIME_SLICE} ticks per time slice. This macro is
declared in @file{threads/thread.c}. The default is 4 ticks.

We don't recommend changing this value, because any changes are likely
to cause many of the tests to fail.

@item How do I run the tests?

@xref{Testing}.

@item Why do I get a test failure in @func{pass}?

@anchor{The pass function fails}
You are probably looking at a backtrace that looks something like this:

@example
0xc0108810: debug_panic (lib/kernel/debug.c:32)
0xc010a99f: pass (tests/threads/tests.c:93)
0xc010bdd3: test_mlfqs_load_1 (...threads/mlfqs-load-1.c:33)
0xc010a8cf: run_test (tests/threads/tests.c:51)
0xc0100452: run_task (threads/init.c:283)
0xc0100536: run_actions (threads/init.c:333)
0xc01000bb: main (threads/init.c:137)
@end example

This is just confusing output from the @command{backtrace} program. It
does not actually mean that @func{pass} called @func{debug_panic}. In
fact, @func{fail} called @func{debug_panic} (via the @func{PANIC}
macro). GCC knows that @func{debug_panic} does not return, because it
is declared @code{NO_RETURN} (@pxref{Function and Parameter
Attributes}), so it doesn't include any code in @func{fail} to take
control when @func{debug_panic} returns. This means that the return
address on the stack looks like it is at the beginning of the function
that happens to follow @func{fail} in memory, which in this case happens
to be @func{pass}.

@xref{Backtraces}, for more information.

@item How do interrupts get re-enabled in the new thread following @func{schedule}?

Every path into @func{schedule} disables interrupts. They eventually
get re-enabled by the next thread to be scheduled. Consider the
possibilities: the new thread is running in @func{switch_thread} (but
see below), which is called by @func{schedule}, which is called by one
of a few possible functions:

@itemize @bullet
@item
@func{thread_exit}, but we'll never switch back into such a thread, so
it's uninteresting.

@item
@func{thread_yield}, which immediately restores the interrupt level upon
return from @func{schedule}.

@item
@func{thread_block}, which is called from multiple places:

@itemize @minus
@item
@func{sema_down}, which restores the interrupt level before returning.

@item
@func{idle}, which enables interrupts with an explicit assembly STI
instruction.

@item
@func{wait} in @file{devices/intq.c}, whose callers are responsible for
re-enabling interrupts.
@end itemize
@end itemize

There is a special case when a newly created thread runs for the first
time. Such a thread calls @func{intr_enable} as the first action in
@func{kernel_thread}, which is at the bottom of the call stack for every
kernel thread but the first.

@item What should I expect from the Task 1 code-review?

The code-review for this task will be conducted with each group in-person.
Our Task 1 code-review will cover @strong{four} main areas:
functional correctness, efficiency, design quality and general coding style.

@itemize @bullet
@item For @strong{functional correctness}, we will be looking to see if your implementation of priority scheduling strictly obeys the rule that ``the highest priority ready thread will always be running'' and that all cases of priority inversion are correctly handled by your priority donation system.
We will also be checking if your updated code for locks is free of any race conditions, paying specific attention to the @func{lock_acquire} and @func{lock_release} functions, as well as the interplay between them.

@item For @strong{efficiency}, we will be looking at the complexity characteristics of your modified code for semaphores, as well as
the steps you have taken to minimise the time spent inside your timer interrupt handler.

@item For @strong{design quality}, we will be looking at the stability and robustness of any changes you have made to the core PintOS kernel (e.g. @func{thread_block}, @func{thread_unblock} and @func{thread_yield}) and the accuracy of the priority updates in your BSD scheduler.

@item For @strong{general coding style}, we will be paying attention to all of the usual elements of good style
that you should be used to from last year (e.g. consistent code layout, appropriate use of comments, avoiding magic numbers, etc.)
as well as your use of git (e.g. commit frequency and commit message quality).
In this task, we will be paying particular attention to the readability of your fixed-point mathematics abstraction within your BSD scheduler.
@end itemize
@end table

@menu
* Priority Scheduling FAQ::
* Advanced Scheduler FAQ::
@end menu

@node Priority Scheduling FAQ
@subsection Priority Scheduling FAQ

@table @b
@item Doesn't priority scheduling lead to starvation?

Yes, strict priority scheduling can lead to starvation
because a thread will not run if any higher-priority thread is runnable.
The advanced scheduler introduces a mechanism for dynamically
changing thread priorities.

Strict priority scheduling is valuable in real-time systems because it
offers the programmer more control over which jobs get processing
time. High priorities are generally reserved for time-critical
tasks. It's not ``fair,'' but it addresses other concerns not
applicable to a general-purpose operating system.

@item What thread should run after a lock has been released?

When a lock is released, the highest priority thread waiting for that
lock should be unblocked and put on the list of ready threads.
The scheduler should then run the highest priority thread on the ready list.
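
For illustration, one possible shape for the modified @func{sema_up} is sketched below. Here @code{priority_less} is assumed to order threads by ascending priority (the mirror image of the comparator sketched in @ref{Priority Scheduling}), and the final yield check is left as a comment.

@example
void
sema_up (struct semaphore *sema)
@{
  enum intr_level old_level = intr_disable ();

  if (!list_empty (&sema->waiters))
    @{
      /* Wake the highest-priority waiter.  Using list_max() means
         correctness does not depend on the waiters list staying
         sorted while donations change priorities. */
      struct list_elem *e = list_max (&sema->waiters,
                                      priority_less, NULL);
      list_remove (e);
      thread_unblock (list_entry (e, struct thread, elem));
    @}
  sema->value++;
  intr_set_level (old_level);

  /* If the thread just woken has a higher priority than the
     running thread, the running thread should now yield. */
@}
@end example
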
@item If the highest-priority thread yields, does it continue running?

Yes. If there is a single highest-priority thread, it continues
running until it blocks or finishes, even if it calls
@func{thread_yield}.
If multiple threads have the same highest priority,
@func{thread_yield} should switch among them in ``round robin'' order.

@item What happens to the priority of a donating thread?

Priority donation only changes the priority of the donee
thread. The donor thread's priority is unchanged.
Priority donation is not additive: if thread @var{A} (with priority 5) donates
to thread @var{B} (with priority 3), then @var{B}'s new priority is 5, not 8.

@item Can a thread's priority change while it is on the ready queue?

Yes. Consider a ready, low-priority thread @var{L} that holds a lock.
High-priority thread @var{H} attempts to acquire the lock and blocks,
thereby donating its priority to ready thread @var{L}.

@item Can a thread's priority change while it is blocked?

Yes. While a thread that has acquired lock @var{L} is blocked for any
reason, its priority can increase by priority donation if a
higher-priority thread attempts to acquire @var{L}. This case is
checked by the @code{priority-donate-sema} test.

@item Can a thread added to the ready list preempt the processor?

Yes. If a thread added to the ready list has higher priority than the
running thread, the correct behaviour is to immediately yield the
processor. It is not acceptable to wait for the next timer interrupt.
The highest priority thread should run as soon as it is runnable,
preempting whatever thread is currently running.
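
One subtlety here is that a thread can become ready from inside an interrupt handler (for example, when the timer interrupt wakes a sleeping thread), and @func{thread_yield} must not be called from interrupt context. A small helper along the following lines handles both cases; the name @code{preempt_if_needed} is ours, while @func{intr_context} and @func{intr_yield_on_return} are declared in @file{threads/interrupt.h}.

@example
/* Call after adding a thread to the ready list. */
static void
preempt_if_needed (const struct thread *newly_ready)
@{
  if (newly_ready->priority > thread_current ()->priority)
    @{
      if (intr_context ())
        intr_yield_on_return ();  /* Yield when the handler returns. */
      else
        thread_yield ();          /* Yield immediately. */
    @}
@}
@end example
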
@item How does @func{thread_set_priority} affect a thread receiving donations?

It sets the thread's base priority. The thread's effective priority
becomes the higher of the newly set priority or the highest donated
priority. When the donations are released, the thread's priority
becomes the one set through the function call. This behaviour is checked
by the @code{priority-donate-lower} test.

@item Doubled test names in output make them fail.

Suppose you are seeing output in which some test names are doubled,
like this:

@example
(alarm-priority) begin
(alarm-priority) (alarm-priority) Thread priority 30 woke up.
Thread priority 29 woke up.
(alarm-priority) Thread priority 28 woke up.
@end example

What is happening is that output from two threads is being
interleaved. That is, one thread is printing @code{"(alarm-priority)
Thread priority 29 woke up.\n"} and another thread is printing
@code{"(alarm-priority) Thread priority 30 woke up.\n"}, but the first
thread is being preempted by the second in the middle of its output.

This problem indicates a bug in your priority scheduler. After all, a
thread with priority 29 should not be able to run while a thread with
priority 30 has work to do.

Normally, the implementation of the @code{printf()} function in the
PintOS kernel attempts to prevent such interleaved output by acquiring
a console lock for the duration of the @code{printf} call and
releasing it afterwards. However, the output of the test name,
e.g., @code{(alarm-priority)}, and the message following it is output
using two calls to @code{printf}, resulting in the console lock being
acquired and released twice.
@end table

@node Advanced Scheduler FAQ
@subsection Advanced Scheduler FAQ

@table @b
@item How does priority donation interact with the advanced scheduler?

It doesn't have to. We won't test priority donation and the advanced
scheduler at the same time.

@item Can I use one queue instead of 64 queues?

Yes. In general, your implementation may differ from the description,
as long as its behaviour is the same.

@item Some scheduler tests fail and I don't understand why. Help!

If your implementation mysteriously fails some of the advanced
scheduler tests, try the following:

@itemize
@item
Read the source files for the tests that you're failing, to make sure
that you understand what's going on. Each one has a comment at the
top that explains its purpose and expected results.

@item
Double-check your fixed-point arithmetic routines and your use of them
in the scheduler routines.

@item
Consider how much work your implementation does in the timer
interrupt. If the timer interrupt handler takes too long, then it
will take away most of a timer tick from the thread that the timer
interrupt preempted. When it returns control to that thread, it
therefore won't get to do much work before the next timer interrupt
arrives. That thread will therefore get blamed for a lot more CPU
time than it actually got a chance to use. This raises the
interrupted thread's recent CPU count, thereby lowering its priority.
It can cause scheduling decisions to change. It also raises the load
average. One way to keep the per-tick work small is sketched after
this list.
@end itemize
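
As a sketch of that last point, the per-tick work can be kept small by doing only a cheap @code{recent_cpu} increment on every tick and deferring the heavier recalculations to the moments prescribed in @ref{4.4BSD Scheduler}. The @code{mlfqs_*} helpers, the @code{recent_cpu} field and the @code{FP_ADD_INT} macro are illustrative names only, not part of the code we provide.

@example
/* Per-tick bookkeeping for the 4.4BSD scheduler; called on every
   timer interrupt when thread_mlfqs is enabled. */
static void
mlfqs_on_tick (void)
@{
  struct thread *cur = thread_current ();
  int64_t ticks = timer_ticks ();

  /* Every tick: only the cheap recent_cpu increment. */
  if (cur != idle_thread)
    cur->recent_cpu = FP_ADD_INT (cur->recent_cpu, 1);

  /* Once per second: recompute load_avg and every thread's
     recent_cpu. */
  if (ticks % TIMER_FREQ == 0)
    @{
      mlfqs_update_load_avg ();
      thread_foreach (mlfqs_update_recent_cpu, NULL);
    @}

  /* Every fourth tick: recompute every thread's priority. */
  if (ticks % 4 == 0)
    thread_foreach (mlfqs_update_priority, NULL);
@}
@end example
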
@end table