Concurrency (并发)


As Rob Pike puts it: concurrency is about dealing with lots of things at once, while parallelism is about doing lots of things at once.

Maximise CPU utilisation + better user experience

The CPU is idle when a process or thread is performing non-CPU-bound work, like reading from or writing to an IO Device, or waiting for a result from a remote Server. By performing a context switch, the kernel can let another process or thread use the CPU to complete its computation. Parallelism allows us to run multiple processes or threads at the same time: with 4 CPU cores, we can have 4 processes/threads consuming the CPU simultaneously.

The above describes how concurrency helps with CPU utilisation. Concurrency also makes users feel everything is running at the same time, like browsing the web and playing music simultaneously.
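The overlap described above can be demonstrated with Python's threading module; `time.sleep` is a stand-in for any blocking IO call, and the task names and durations are made up for illustration:

```python
import threading
import time

def io_task(name, seconds):
    # time.sleep stands in for blocking IO (a disk read, a network request);
    # while this thread waits, the OS schedules another thread onto the CPU.
    time.sleep(seconds)

start = time.monotonic()
threads = [threading.Thread(target=io_task, args=(f"task-{i}", 0.5))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# The four 0.5s "IO waits" overlap, so total wall time is about 0.5s, not 2s.
print(f"elapsed: {elapsed:.2f}s")
```

Note this speedup comes purely from overlapping IO waits; CPU-bound work would need true parallelism (multiple cores) to finish faster.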

Parallelism (并行)

Cooperative Scheduling

CPU Hogging

A process can hog the CPU forever under cooperative scheduling, so modern OSes adopt Preemptive Scheduling instead.

Preemptive Scheduling

  1. Before the kernel sets the Program Counter to an instruction of a selected Process (进程), it programs the Timer Chip to trigger a Hardware Interrupt (外中断) after some period of time (the Time Slice)
  2. The kernel switches the Privilege Level to User Mode and sets the program counter to the instruction of the selected process, so the process can start executing
  3. When the time slice elapses, the timer chip triggers a Hardware Interrupt
  4. The hardware interrupt traps into the kernel (陷入), which triggers the corresponding Interrupt Handler
  5. The interrupt handler passes control to the Process Scheduler when it completes
  6. The Process Scheduler selects a process to run by restoring the CPU state for that process from the process's Process Control Block (PCB)
  7. Repeat steps 1 to 6
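The preempt-and-reschedule loop above can be sketched as a toy simulation. The process names and remaining-work values are hypothetical, and the timer interrupt is modeled simply as a fixed time-slice deduction:

```python
from collections import deque

TIME_SLICE = 10  # ms, the interval programmed into the timer chip (step 1)

def round_robin(processes):
    """processes: list of (name, remaining_ms) pairs.
    Returns the order in which processes finish."""
    ready = deque(processes)
    finished = []
    while ready:
        name, remaining = ready.popleft()    # scheduler picks a process (step 6)
        ran = min(TIME_SLICE, remaining)     # process runs until the timer fires (steps 2-3)
        remaining -= ran                     # interrupt handler returns control (steps 4-5)
        if remaining > 0:
            ready.append((name, remaining))  # save state and requeue (back to step 1)
        else:
            finished.append(name)
    return finished

order = round_robin([("editor", 25), ("music", 5), ("compiler", 40)])
print(order)  # ['music', 'editor', 'compiler']
```

The shortest job finishes first here only by coincidence of the inputs; plain round-robin is fair in turns, not in completion time.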

No CPU Hogging

The hardware interrupt generated by the timer chip ensures the kernel obtains control to perform Process Management at a configured interval. This prevents any process from hogging the CPU forever, which can happen under Cooperative Scheduling.

Fixed Timeslice Round-Robin Preemptive Scheduling

Laggy Situation

When there are many Processes (进程), e.g. 100, and the time slice is a fixed 10ms, a process may wait (100 − 1) × 10ms ≈ 1000ms before it gets to run again.
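The worst-case wait is simple arithmetic; a process that just used up its slice must wait for every other process to run once (the counts here are the example's, not any real workload):

```python
num_processes = 100
time_slice_ms = 10

# After finishing its slice, a process waits behind all the other processes.
worst_case_wait_ms = (num_processes - 1) * time_slice_ms
print(worst_case_wait_ms)  # 990, i.e. roughly a full second of lag
```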

Dynamic Timeslice Round-Robin Preemptive Scheduling


Modern process schedulers also take Process Priority into account to ensure critical processes get more CPU time and run more often.

Helps make each process more responsive

Ensures each Process gets to run again before it seems laggy to the user, as long as the Minimum Granularity is respected and the Target Latency is not exceeded.

Each process gets to run sooner when there are fewer processes

The Time Slice is the Target Latency divided by the number of runnable processes, so fewer processes means a longer slice for each.
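A sketch of that ratio, loosely modeled on how Linux CFS combines a target latency with a minimum granularity; the constants below are illustrative, not the kernel's actual defaults:

```python
TARGET_LATENCY_MS = 20   # every runnable process should get a turn within this window
MIN_GRANULARITY_MS = 4   # floor, so slices never shrink to uselessly small values

def time_slice(num_runnable):
    """Dynamic time slice: split the latency window among runnable
    processes, but never go below the minimum granularity."""
    return max(TARGET_LATENCY_MS / num_runnable, MIN_GRANULARITY_MS)

print(time_slice(2))    # 10.0, few processes, generous slices
print(time_slice(100))  # 4, many processes, clamped at the floor
```

Note that once the floor kicks in (100 × 4ms = 400ms > 20ms), the target latency is necessarily exceeded, which is exactly the laggy situation described earlier.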


VS Time-sharing?

In Time-Sharing, we have multiple users instead of multiple tasks. Multi-tasking focuses on the tasks, and the tasks can come from different users, so in that sense multi-tasking is a superset of time-sharing.



The first time-sharing system, CTSS, was built at MIT in the early 1960s; machines before it were all Batch Systems.

Multics was one of the first time-sharing operating systems and inspired the creation of Unix.