[WEEK 08] PintOS - Project 1: Threads (Advanced Scheduler)

Shin Hojeong's velog · October 3, 2021

Today I Learned


Instructions

Implement a multi-level feedback queue scheduler similar to the 4.4BSD scheduler to reduce the average response time for running jobs on your system.

Like the priority scheduler, the advanced scheduler chooses the thread to run based on priorities. However, the advanced scheduler does not do priority donation. Thus, we recommend that you have the priority scheduler working, except possibly for priority donation, before you start work on the advanced scheduler.

You must write your code to allow us to choose a scheduling algorithm policy at Pintos startup time. By default, the priority scheduler must be active, but we must be able to choose the 4.4BSD scheduler with the -mlfqs kernel option. Passing this option sets thread_mlfqs, declared in threads/thread.h, to true when the options are parsed by parse_options(), which happens early in main().

When the 4.4BSD scheduler is enabled, threads no longer directly control their own priorities. The priority argument to thread_create() should be ignored, as well as any calls to thread_set_priority(), and thread_get_priority() should return the thread's current priority as set by the scheduler. The advanced scheduler is not used in any later project.

Backgrounds

4.4BSD Scheduler

The goal of a general-purpose scheduler is to balance threads' different scheduling needs. Threads that perform a lot of I/O require a fast response time to keep input and output devices busy, but need little CPU time. On the other hand, compute-bound threads need to receive a lot of CPU time to finish their work, but have no requirement for fast response time. Other threads lie somewhere in between, with periods of I/O punctuated by periods of computation, and thus have requirements that vary over time. A well-designed scheduler can often accommodate threads with all these requirements simultaneously.

For project 1, you must implement the scheduler described in this appendix. Our scheduler resembles the one described in [McKusick], which is one example of a multilevel feedback queue scheduler. This type of scheduler maintains several queues of ready-to-run threads, where each queue holds threads with a different priority. At any given time, the scheduler chooses a thread from the highest-priority non-empty queue. If the highest-priority queue contains multiple threads, then they run in "round robin" order.

Multiple facets of the scheduler require data to be updated after a certain number of timer ticks. In every case, these updates should occur before any ordinary kernel thread has a chance to run, so that there is no chance that a kernel thread could see a newly increased timer_ticks() value but old scheduler data values.

The 4.4BSD scheduler does not include priority donation.

Criteria

1. Niceness

Thread priority is dynamically determined by the scheduler using a formula given below. However, each thread also has an integer nice value that determines how "nice" the thread should be to other threads. A nice of zero does not affect thread priority. A positive nice, to the maximum of 20, decreases the priority of a thread and causes it to give up some CPU time it would otherwise receive. On the other hand, a negative nice, to the minimum of -20, tends to take away CPU time from other threads.

The initial thread starts with a nice value of zero. Other threads start with a nice value inherited from their parent thread. You must implement the functions described below, which are for use by test programs. We have provided skeleton definitions for them in threads/thread.c.

/* Returns the current thread's nice value. */
int thread_get_nice(void) {
	/* TODO: Your implementation goes here */
	return 0;
}

thread_get_nice(void) returns the current thread's nice value.

/* Sets the current thread's nice value to NICE. */
void thread_set_nice(int nice UNUSED) {
	/* TODO: Your implementation goes here */
}

thread_set_nice(int nice) sets the current thread's nice value to the new value and recalculates the thread's priority based on it. If the running thread no longer has the highest priority, it yields.

2. Calculating Priority

Our scheduler has 64 priorities and thus 64 ready queues, numbered 0 (PRI_MIN) through 63 (PRI_MAX). Lower numbers correspond to lower priorities, so that priority 0 is the lowest priority and priority 63 is the highest. Thread priority is calculated initially at thread initialization. It is also recalculated once every fourth clock tick, for every thread. In either case, it is determined by the formula:

priority = PRI_MAX - (recent_cpu / 4) - (nice * 2),

where recent_cpu is an estimate of the CPU time the thread has used recently (see below) and nice is the thread's nice value. The result should be rounded down to the nearest integer (truncated). The coefficients 1/4 and 2 on recent_cpu and nice, respectively, have been found to work well in practice but lack deeper meaning. The calculated priority is always adjusted to lie in the valid range PRI_MIN to PRI_MAX.

This formula gives a thread that has received CPU time recently lower priority for being reassigned the CPU the next time the scheduler runs. This is key to preventing starvation: a thread that has not received any CPU time recently will have a recent_cpu of 0, which, barring a high nice value, should ensure that it receives CPU time soon.

3. Calculating recent_cpu

We wish recent cpu to measure how much CPU time each process has received "recently." Furthermore, as a refinement, more recent CPU time should be weighted more heavily than less recent CPU time. One approach would use an array of n elements to track the CPU time received in each of the last n seconds. However, this approach requires O(n) space per thread and O(n) time per calculation of a new weighted average.

Instead, we use an exponentially weighted moving average, which takes this general form:

x(0) = f(0),

x(t) = ax(t − 1) + (1 − a)f(t),

a = k/(k + 1),

where x(t) is the moving average at integer time t ≥ 0, f(t) is the function being averaged, and k > 0 controls the rate of decay. We can iterate the formula over a few steps as follows:

x(1) = f(1),

x(2) = af(1) + f(2),

...

x(5) = a^4 f(1) + a^3 f(2) + a^2 f(3) + a f(4) + f(5).

The value of f(t) has a weight of 1 at time t, a weight of a at time t + 1, a^2 at time t + 2, and so on. We can also relate x(t) to k: f(t) has a weight of approximately 1/e at time t + k, approximately 1/(e^2) at time t + 2k, and so on. From the opposite direction, f(t) decays to weight w at time t + log_a(w).

The initial value of recent_cpu is 0 in the first thread created, or the parent's value in other new threads. Each time a timer interrupt occurs, recent_cpu is incremented by 1 for the running thread only, unless the idle thread is running. In addition, once per second the value of recent_cpu is recalculated for every thread (whether running, ready, or blocked), using this formula:

recent_cpu = (2 load_avg) / (2 load_avg + 1) * recent_cpu + nice

where load_avg is a moving average of the number of threads ready to run (see below). If load_avg is 1, indicating that a single thread, on average, is competing for the CPU, then the current value of recent_cpu decays to a weight of .1 in log_{2/3} .1 ≈ 6 seconds; if load_avg is 2, then decay to a weight of .1 takes log_{3/4} .1 ≈ 8 seconds. The effect is that recent_cpu estimates the amount of CPU time the thread has received "recently," with the rate of decay inversely proportional to the number of threads competing for the CPU.

/* Returns 100 times the current thread's recent_cpu value. */
int thread_get_recent_cpu(void) {
	/* TODO: Your implementation goes here */
	return 0;
}

thread_get_recent_cpu(void) returns 100 times the current thread's recent_cpu value, rounded to the nearest integer.

4. Calculating load_avg

Finally, load_avg, often known as the system load average, estimates the average number of threads ready to run over the past minute. Like recent_cpu, it is an exponentially weighted moving average. Unlike priority and recent_cpu, load_avg is system-wide, not thread-specific. At system boot, it is initialized to 0. Once per second thereafter, it is updated according to the following formula:

load_avg = (59/60) load_avg + (1/60) ready_threads,

where ready_threads is the number of threads that are either running or ready to run at the time of update (not including the idle thread).

Because of assumptions made by some of the tests, load_avg must be updated exactly when the system tick counter reaches a multiple of a second, that is, when timer_ticks() % TIMER_FREQ == 0, and not at any other time. You must implement thread_get_load_avg(), for which there is a skeleton in threads/thread.c.

/* Returns 100 times the system load average. */
int thread_get_load_avg(void) {
	/* TODO: Your implementation goes here */
	return 0;
}

thread_get_load_avg(void) returns 100 times the current system load average, rounded to the nearest integer.

Summary

The following formulas summarize the calculations required to implement the scheduler. They are not a complete description of scheduler requirements.

Every thread has a nice value between -20 and 20 directly under its control. Each thread also has a priority, from 0 (PRI_MIN) through 63 (PRI_MAX), which is recalculated using the following formula every fourth tick:

priority = PRI_MAX - (recent_cpu / 4) - (nice * 2)

recent_cpu measures the amount of CPU time a thread has received "recently." On each timer tick, the running thread's recent_cpu is incremented by 1. Once per second, every thread's recent_cpu is updated this way:

recent_cpu = (2 load_avg) / (2 load_avg + 1) * recent_cpu + nice

load_avg estimates the average number of threads ready to run over the past minute. It is initialized to 0 at boot and recalculated once per second as follows:

load_avg = (59/60) load_avg + (1/60) ready_threads

where ready_threads is the number of threads that are either running or ready to run at time of update (not including the idle thread).

MLFQS (Multi-level Feedback Queue Scheduling)

Because priority scheduling decides which thread runs strictly by priority, low-priority threads find it very hard to get the CPU, which can sharply increase the average response time. Priority donation partly compensates: threads that a high-priority thread depends on inherit its priority and get to run, but threads outside any such dependency chain can still be left behind. To solve this problem and reduce average response time, we introduce an advanced scheduler based on multi-level feedback queue scheduling.

  1. There are multiple ready queues, one per priority level. (Multi-level)
  2. Priorities are adjusted continuously at run time. (Feedback)

Following property 1, since Pintos has 64 priorities from 0 to 63, we create 64 ready queues and schedule threads round-robin within the highest-priority non-empty queue.

Following property 2, as each thread's priority is continually adjusted, the thread with the highest priority at any moment is the one that runs.

Implementations

1. Niceness

  • NICE_MIN: -20, NICE_MAX: 20
  • NICE_DEFAULT: 0
  • The higher a thread's nice value, the more CPU time it yields to other threads (its priority drops).

2. Priority

  • PRI_MIN: 0, PRI_MAX: 63
  • PRI_DEFAULT: 31
  • priority = PRI_MAX - (recent_cpu / 4) - (nice * 2)
  • Recalculate every thread's priority every 4 ticks.

3. recent_cpu

  • Measures how much CPU time the thread has used recently.
  • A higher recent_cpu means the thread ran more recently, so its priority drops.
  • RECENT_CPU_DEFAULT: 0
  • recent_cpu = (2 load_avg) / (2 load_avg + 1) * recent_cpu + nice
  • On every tick, the running thread's recent_cpu is incremented by 1 (unless it is the idle thread).
  • Once per second, recalculate every thread's recent_cpu.

4. load_avg

  • load_avg estimates the average number of runnable threads over the last minute.
  • When load_avg is small, recent_cpu decays quickly (so priority recovers quickly); when load_avg is large, recent_cpu decays slowly (priority recovers slowly).
  • The reason: when many threads are runnable, the priority of a thread that has already used the CPU must rise slowly so that every thread gets an even share of CPU time; when few threads are runnable, priorities can rise faster and each thread still gets its share.
  • LOAD_AVG_DEFAULT: 0
  • load_avg = (59 / 60) load_avg + (1 / 60) ready_threads (ready_threads is the number of threads in the ready or running state, excluding the idle thread)
  • Recalculate load_avg once per second.

Fixed-point Arithmetic

Results

pass tests/threads/alarm-single
pass tests/threads/alarm-multiple
pass tests/threads/alarm-simultaneous
pass tests/threads/alarm-priority
pass tests/threads/alarm-zero
pass tests/threads/alarm-negative
pass tests/threads/priority-change
pass tests/threads/priority-donate-one
pass tests/threads/priority-donate-multiple
pass tests/threads/priority-donate-multiple2
pass tests/threads/priority-donate-nest
pass tests/threads/priority-donate-sema
pass tests/threads/priority-donate-lower
pass tests/threads/priority-fifo
pass tests/threads/priority-preempt
pass tests/threads/priority-sema
pass tests/threads/priority-condvar
pass tests/threads/priority-donate-chain
pass tests/threads/mlfqs/mlfqs-load-1
pass tests/threads/mlfqs/mlfqs-load-60
pass tests/threads/mlfqs/mlfqs-load-avg
FAIL tests/threads/mlfqs/mlfqs-recent-1
pass tests/threads/mlfqs/mlfqs-fair-2
pass tests/threads/mlfqs/mlfqs-fair-20
pass tests/threads/mlfqs/mlfqs-nice-2
pass tests/threads/mlfqs/mlfqs-nice-10
pass tests/threads/mlfqs/mlfqs-block
1 of 27 tests failed.
