
PROCESS STATE

  • The state of a process describes the current activity of that process.
  • As a process executes, it changes state.
  • A process may be in one of the following states.
    • New : Process is being created.
    • Running : Instructions are being executed.
    • Waiting : Process is waiting for some event (e.g. an I/O completion, reception of a signal) to occur.
    • Ready : The process is waiting to be assigned to a processor.
    • Terminated : The process has finished execution.
  • Only one process can be running on a processor at any instant of time, whereas many processes may be in the ready or waiting state. When there are multiple ready processes, the rest must wait until the CPU is free and they can be rescheduled.
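The five states and a typical path through them can be captured in a few lines of C. This is only an illustrative sketch (the names proc_state and state_name are assumptions, not taken from any real OS):

```c
#include <stdio.h>

/* Illustrative sketch of the five process states; not real kernel code. */
enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

static const char *state_name(enum proc_state s)
{
    switch (s) {
    case P_NEW:        return "new";
    case P_READY:      return "ready";
    case P_RUNNING:    return "running";
    case P_WAITING:    return "waiting";
    case P_TERMINATED: return "terminated";
    }
    return "unknown";
}

int main(void)
{
    /* One typical life cycle:
       new -> ready -> running -> waiting -> ready -> running -> terminated */
    enum proc_state path[] = { P_NEW, P_READY, P_RUNNING, P_WAITING,
                               P_READY, P_RUNNING, P_TERMINATED };
    size_t n = sizeof(path) / sizeof(path[0]);
    for (size_t i = 0; i < n; i++)
        printf("%s%s", state_name(path[i]), i + 1 < n ? " -> " : "\n");
    return 0;
}
```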
Process States

PROCESS CONTROL BLOCK

Process Control Block (PCB) or Task Control Block contains many pieces of information associated with a specific process, including :

  • Process State : can be new/ready/waiting/…
  • Program Counter : Contains address of next instruction to be executed.
  • CPU Registers : include accumulators, index registers, stack pointers, general-purpose registers, and condition-code information.
  • CPU Scheduling information : includes process priority, pointers to scheduling queues, and other scheduling parameters.
  • Memory-management information : includes values of base and limit registers, page tables, and/or segment tables
  • Accounting information : includes process numbers, amount of CPU time and real time used, time limits, account numbers, etc.
  • I/O status information : includes list of I/O devices allocated to the process, list of open files, etc.
Process Control Block
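As a rough illustration, the fields listed above could be grouped into a single C structure. The sketch below is an assumption made for this post (the names pcb and cpu_context are not an actual OS definition); it merely shows how the pieces of information fit together:

```c
#include <stdint.h>
#include <sys/types.h>   /* pid_t */

enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

/* Saved CPU context: program counter and register contents. */
struct cpu_context {
    uint64_t program_counter;    /* address of the next instruction */
    uint64_t stack_pointer;
    uint64_t general_regs[16];   /* general-purpose registers */
    uint64_t condition_codes;
};

/* Illustrative Process Control Block; field names are assumptions. */
struct pcb {
    pid_t              pid;            /* process number */
    enum proc_state    state;          /* new / ready / running / ... */
    struct cpu_context context;        /* CPU registers saved on a switch */
    int                priority;       /* CPU-scheduling information */
    struct pcb        *next_in_queue;  /* link used by scheduling queues */
    void              *page_table;     /* memory-management information */
    uint64_t           base, limit;    /* base and limit registers */
    uint64_t           cpu_time_used;  /* accounting information */
    int                open_files[16]; /* I/O status information */
};
```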
Linux Representation of PCB
  • In Linux, the PCB is represented by the C structure task_struct, found in the <linux/sched.h> include file in the kernel source code directory.
  • This structure stores all the necessary information representing a process, including
    • pointer to process’s parent
    • list of children and siblings
    • state of process
    • scheduling info
    • memory management info
  • Within the Linux kernel, all active processes are represented using a doubly linked list of task_struct structures.
  • The kernel maintains a pointer called current to point to the process currently executing in the system.
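A minimal user-space sketch of that idea might look like the following. Here struct task and the current_task pointer are simplified stand-ins for illustration only; the real task_struct in <linux/sched.h> contains far more fields, and the kernel uses its own list primitives:

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for task_struct; the real structure has many more fields. */
struct task {
    int pid;
    struct task *parent;          /* pointer to the process's parent */
    struct task *prev, *next;     /* links in the doubly linked task list */
};

static struct task *task_list = NULL;     /* head of the list of all tasks */
static struct task *current_task = NULL;  /* task currently executing (sketch) */

static struct task *add_task(int pid, struct task *parent)
{
    struct task *t = calloc(1, sizeof(*t));
    t->pid = pid;
    t->parent = parent;
    t->next = task_list;          /* insert at the head of the list */
    if (task_list)
        task_list->prev = t;
    task_list = t;
    return t;
}

int main(void)
{
    struct task *init  = add_task(1, NULL);
    struct task *child = add_task(42, init);
    current_task = child;         /* pretend pid 42 is running */

    for (struct task *t = task_list; t != NULL; t = t->next)
        printf("pid %d%s\n", t->pid, t == current_task ? " (current)" : "");
    return 0;
}
```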

PROCESS SCHEDULING

Why do we need Process Scheduling?
  • To meet the objective of Multi-programming (maximize CPU utilization by having some process running at all times)
  • To meet the objective of Time Sharing (switch CPU among processes so frequently that users can interact with each program while it is running)
  • Both objectives are met via the Process Scheduler:
    • Process Scheduler selects an available process for program execution (possibly from a list of several processes)

I. PROCESS SCHEDULING USING SCHEDULING QUEUES

The system maintains several scheduling queues; depending on its state, each process is placed into the appropriate queue.
  • Job Queue : As processes enter the system, they are put into the job queue, which consists of all the processes in the system.
  • Ready Queue : Processes residing in main memory that are ready and waiting to execute are kept on this list (a minimal sketch of such a queue appears after this list).
    • Implemented as a linked list.
    • The header contains pointers to the first and last PCBs in the list.
    • Each PCB includes a pointer to the next PCB in the ready queue.
  • Device Queue : Processes waiting for a particular I/O device are put here. Each device has its own device queue.
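Below is a minimal sketch of a ready queue implemented this way. The names pcb, rq_enqueue, and rq_dequeue are assumptions made for this illustration, not an actual OS interface:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative PCB; each PCB points to the next PCB in the ready queue. */
struct pcb {
    int pid;
    struct pcb *next;
};

/* Queue header holding pointers to the first and last PCBs in the list. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

static void rq_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;    /* link after the current last PCB */
    else
        q->head = p;          /* queue was empty */
    q->tail = p;
}

static struct pcb *rq_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;                 /* NULL if the queue was empty */
}

int main(void)
{
    struct ready_queue rq = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = calloc(1, sizeof(*p));
        p->pid = pid;
        rq_enqueue(&rq, p);
    }

    struct pcb *p;
    while ((p = rq_dequeue(&rq)) != NULL) {   /* dispatch in FIFO order */
        printf("dispatching pid %d\n", p->pid);
        free(p);
    }
    return 0;
}
```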
Process scheduling is commonly represented using queueing diagrams.
  • A queueing diagram contains two types of queues : the ready queue and a set of device queues.
  • Rectangles : Each rectangular box represents a queue.
  • Circles : Circles represent resources that serve the queues.
  • Arrows : Arrows indicate the flow of processes in the system.
Scheduling Process
  1. New process → Put in Ready queue
  2. The process waits in the ready queue until it is selected for execution (dispatched).
  3. If selected for execution, CPU is allocated to the process.
    1. Process may issue I/O request → Put in I/O queue.
    2. Process could create new child → Wait for child to finish execution
    3. Process could be interrupted → Put back in ready queue.
  4. This cycle continues until the process terminates, at which point it is removed from all queues and has its PCB and resources deallocated.
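The cycle in steps 1-4 can be summarized as a small event-driven sketch. The event names and the next_queue mapping below are assumptions for illustration, not an OS implementation:

```c
#include <stdio.h>

/* Illustrative events that can occur while a process holds the CPU. */
enum event { EV_IO_REQUEST, EV_WAIT_FOR_CHILD, EV_INTERRUPT, EV_EXIT };

/* Where the process goes next after each event (sketch of steps 3a-3c and 4). */
static const char *next_queue(enum event e)
{
    switch (e) {
    case EV_IO_REQUEST:     return "I/O queue";
    case EV_WAIT_FOR_CHILD: return "wait for child to terminate";
    case EV_INTERRUPT:      return "ready queue";
    case EV_EXIT:           return "terminated: PCB and resources deallocated";
    }
    return "?";
}

int main(void)
{
    /* One possible sequence of events for a single process. */
    enum event history[] = { EV_IO_REQUEST, EV_INTERRUPT, EV_EXIT };
    for (int i = 0; i < 3; i++)
        printf("event %d -> %s\n", i, next_queue(history[i]));
    return 0;
}
```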
 
Queueing-diagram representation of process scheduling

II. SCHEDULERS

A process waits in a scheduling queue until it is selected by the OS in some fashion. This task of selecting processes is carried out by the appropriate scheduler. Schedulers are of three types:
  • Long term Scheduler (or job scheduler) [LTS]
    • Used typically in batch systems, where more jobs are submitted than can be executed immediately.
    • Processes are spooled to a mass storage device, typically a disk, where they are kept for later execution.
    • LTS selects processes from this pool → Loads them into memory for execution.
  • Short term Scheduler (or CPU Scheduler)
    • Used very frequently
    • selects one process from the processes that are ready to execute, and allocates the CPU to that process.
  • Medium Term Scheduler
    • Introduced in some OSs as an intermediate level of Scheduling.
    • Reason : to reduce the degree of multiprogramming
      • MTS carries out Swapping of processes.
    • Swapping :
      • First, a process is removed from memory (and from active contention for the CPU) to reduce the degree of multiprogramming.
      • Later, the process can be re-introduced into the memory, and its execution can be continued where it left-off.
      • Reasons for swapping :
        • To improve the process mix
        • To relieve memory pressure (a change in memory requirements has over-committed available memory, requiring memory to be freed up)
 
Addition of medium-term scheduling to the queueing diagram
Attribute | Short-term Scheduler | Long-term Scheduler
Frequency of execution | Selects a new process for the CPU frequently. | Executes much less frequently.
Time between executions | Often executes at least once every 100 ms. | Minutes may separate the creation of one new process and the next.
Speed of execution | Because of the short time between executions, the STS must be fast. | Because of the longer interval between executions, the LTS can afford to take more time to decide which process should be selected for execution.
  Note :
  • Long-term Scheduler controls degree of multi-programming (number of processes in memory). 
  • If the degree of multiprogramming is stable → the average rate of process creation equals the average departure rate of processes leaving the system.
  • I/O bound process : One that spends more of its time doing I/O than spending time on computations
  • CPU bound Process : Process that generates I/O requests infrequently, using more of its time doing computations.
  • It is important that the LTS selects a good process mix of I/O-bound processes and CPU-bound processes.
    • If all processes I/O bound → ready queue empty → short-term scheduler sits idle
    • If all processes CPU bound → I/O waiting queue empty → Devices go unused
    • Good combination of CPU-bound and I/O bound processes → Best Performance of system

CONTEXT SWITCH

  • Context switch : The task of switching the CPU to another process, which requires performing a state save of the current process and a state restore of a different process.
  • Why Context Switch?
    • When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that the CPU can be switched to another process or to a kernel routine.
    • The saved context is restored when that processing is done, essentially suspending the process and later resuming it.
    • Context :
      • saved in PCB of a process
      • includes the values of the CPU registers, the process state, and memory-management information
  • Context switch done by the Kernel
  • Context switch time is pure overhead → no useful work done during this time
    • time highly dependent on hardware support
    • e.g., in processors that provide multiple sets of registers, a context switch simply means changing the pointer to the current register set.
    • If more complexity involved → more work needs to be done
  • Switching speed : depends upon
    • memory speed
    • number of registers to be copied
    • typically takes a few milliseconds
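As a very rough illustration of the state-save / state-restore idea, a context switch can be sketched as copying the CPU context into the old process's PCB and loading the new process's context from its PCB. Real kernels do this in architecture-specific assembly; the structure and function names below are assumptions for this sketch:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Illustrative saved CPU context; real context switching is done in assembly. */
struct cpu_context {
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t regs[16];
};

struct pcb {
    int pid;
    struct cpu_context context;   /* the context is saved in the PCB */
};

/* Sketch of a context switch: state save of the previous process,
 * then state restore of the next process. */
static void context_switch(struct cpu_context *cpu,
                           struct pcb *prev, struct pcb *next)
{
    memcpy(&prev->context, cpu, sizeof(*cpu));   /* state save    */
    memcpy(cpu, &next->context, sizeof(*cpu));   /* state restore */
}

int main(void)
{
    struct cpu_context cpu = { .program_counter = 0x1000 };
    struct pcb p0 = { .pid = 7 };
    struct pcb p1 = { .pid = 9, .context = { .program_counter = 0x2000 } };

    context_switch(&cpu, &p0, &p1);   /* switch from pid 7 to pid 9 */
    printf("now executing pid %d at PC 0x%llx\n",
           p1.pid, (unsigned long long)cpu.program_counter);
    return 0;
}
```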
CPU switch from process to process
In the next post, we will continue our discussions on processes further with discussion on operations on processes.

