
Scheduling Algorithms


CPU scheduling algorithms deal with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Six commonly used scheduling algorithms are:

1. First-Come First-Served (FCFS)

2. Shortest Job First (SJF)

3. Priority Scheduling

4. Round-Robin Scheduling (RR)

5. Multi-Level Queue Scheduling (MLQ)

6. Multi-Level Feedback Queue Scheduling (MFQ)

     

First-Come First-Served Scheduling (FCFS)

·      It is the simplest and most straightforward of all scheduling algorithms.

·      In this scheduling, the process that requests the CPU first is allocated the CPU first, hence the name first-come first-served.

·      In other words, in FCFS scheduling a process is allocated the CPU according to its arrival time.

·      The FCFS policy is easily implemented with a FIFO queue (a minimal sketch is given after this list).

·      When a process enters the ready queue, its PCB is linked onto the tail of the queue.

·      When the CPU is free, it is allocated to the process at the head of the ready queue; that process is then removed from the queue.

·      The FCFS scheduling algorithm is non-preemptive. Once the CPU is allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
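
The bullets above describe FCFS dispatching in terms of a FIFO ready queue. The following is a minimal sketch of that idea in Python; the Process class and the admit/dispatch names are illustrative only, not part of any real operating-system interface.

    # Minimal sketch of FCFS dispatching with a FIFO ready queue (illustrative names).
    from collections import deque

    class Process:
        def __init__(self, pid, burst):
            self.pid = pid          # process identifier
            self.burst = burst      # CPU burst time

    ready_queue = deque()           # FIFO queue of PCBs

    def admit(process):
        # When a process enters the ready queue, it is linked onto the tail of the queue.
        ready_queue.append(process)

    def dispatch():
        # When the CPU is free, the process at the head of the queue is removed and dispatched.
        # Under FCFS (non-preemptive) it then runs until it terminates or requests I/O.
        return ready_queue.popleft() if ready_queue else None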

 

1.     FCFS tends to favour CPU-bound processes. Consider a system with one CPU-bound process and a number of I/O-bound processes. In such a system, the following scenario may arise:

·      The CPU-bound process gets the CPU and holds it.

·      Meanwhile, all the other processes finish their I/O and move into the ready queue, waiting for the CPU. While they wait in the ready queue, the I/O devices are idle.

·      After some time, the CPU-bound process finishes its CPU burst (the CPU burst time indicates how long the process needs the CPU) and moves to an I/O device. At this time, all the I/O-bound processes, which have very short CPU bursts, execute quickly and move back to the I/O queues, leaving the CPU idle.

·      The CPU-bound process then moves back to the ready queue and is allocated the CPU. Again, all the I/O-bound processes end up waiting in the ready queue until the CPU-bound process is done.


  • Advantages –
    1. It is simple and easy to understand.
  • Disadvantages –
    1. Processes with short execution times suffer, i.e., their waiting time is often quite long.
    2. It favors CPU-bound processes over I/O-bound processes.
    3. The first process gets the CPU first, and the other processes can get the CPU only after the current process has finished its execution. If the first process has a large burst time while the others have small burst times, the shorter processes have to wait unnecessarily long, which increases the average waiting time; this is the convoy effect (a small numeric illustration follows this list).
    4. This effect results in lower CPU and device utilization.
    5. The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user gets a share of the CPU at regular intervals.
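
    To make the convoy effect concrete, here is a small, hypothetical illustration in Python: three processes, all assumed to arrive at time 0, are run under FCFS first with the long CPU burst at the front of the queue and then with it at the back. The burst times (24, 3, 3) are made-up values chosen only for this sketch.

        # Hypothetical illustration of the convoy effect under FCFS.
        def average_waiting_time(bursts):
            time = 0
            total_wait = 0
            for burst in bursts:        # FCFS: run in the given order, non-preemptively
                total_wait += time      # this process waited until all earlier ones finished
                time += burst
            return total_wait / len(bursts)

        print(average_waiting_time([24, 3, 3]))   # long burst first: (0 + 24 + 27) / 3 = 17.0
        print(average_waiting_time([3, 3, 24]))   # long burst last:  (0 + 3 + 6)  / 3 = 3.0
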
  • Example of FCFS scheduling

    A real-life example of the FCFS method is buying a movie ticket at a ticket counter. People are served in queue order: the person who arrives first buys a ticket first, then the next person, and so on until the last person in the queue has purchased a ticket. The CPU is allocated to processes in the same manner.

  • How FCFS Works: Calculating Average Waiting Time

    Here is an example of five processes arriving at different times. Each process has a different burst time.

    Process     Burst time     Arrival time
    P1          6              2
    P2          2              5
    P3          8              1
    P4          3              0
    P5          4              4

    Using the FCFS scheduling algorithm, these processes are handled as follows.

    Step 0) Execution begins with P4, which has arrival time 0.

    Step 1) At time=1, P3 arrives. P4 is still executing. Hence, P3 is kept in a queue.


    Step 2) At time= 2, P1 arrives which is kept in the queue.


    Step 3) At time=3, P4 completes its execution.

    Step 4) At time=3, P3, which is first in the queue, starts execution.


    Step 5) At time=5, P2 arrives and is kept in the queue.


    Step 6) At time=11, P3 completes its execution.

    Step 7) At time=11, P1 starts execution. It has a burst time of 6 and completes execution at time=17.

    Step 8) At time=17, P5 starts execution. It has a burst time of 4 and completes execution at time=21.

    Step 9) At time=21, P2 starts execution. It has a burst time of 2 and completes execution at time=23.

    Step 10) Let's calculate the average waiting time for the above example.

    Waiting time = Start time - Arrival time
    

    P4 = 0 - 0 = 0

    P3 = 3 - 1 = 2

    P1 = 11 - 2 = 9

    P5 = 17 - 4 = 13

    P2 = 21 - 5 = 16

    Average Waiting Time = (0 + 2 + 9 + 13 + 16) / 5 = 40/5 = 8
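
    As a cross-check of the figures above, here is a short FCFS waiting-time computation in Python, using the burst and arrival times from the table. The function and variable names are illustrative; it simply serves processes in arrival order, non-preemptively, and reproduces the waiting times and the average of 8.

        # FCFS waiting-time calculation for the example above.
        # (pid, burst time, arrival time) taken from the process table.
        processes = [("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1), ("P4", 3, 0), ("P5", 4, 4)]

        def fcfs_waiting_times(procs):
            waits = {}
            time = 0
            for pid, burst, arrival in sorted(procs, key=lambda p: p[2]):  # serve in arrival order
                start = max(time, arrival)       # CPU may sit idle until the process arrives
                waits[pid] = start - arrival     # Waiting time = Start time - Arrival time
                time = start + burst             # non-preemptive: the burst runs to completion
            return waits

        waits = fcfs_waiting_times(processes)
        print(waits)                               # {'P4': 0, 'P3': 2, 'P1': 9, 'P5': 13, 'P2': 16}
        print(sum(waits.values()) / len(waits))    # 40 / 5 = 8.0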

 

 
