Thread

What is a Thread?

A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.

A thread shares information such as the code segment, data segment and open files with its peer threads. When one thread alters a shared memory item, all other threads see the change.

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. They represent a software approach to improving operating system performance: a thread is equivalent to a classical process but carries much less overhead.

Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.

Single vs Multithreaded Process
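To make the sharing concrete, here is a minimal sketch using POSIX threads (a POSIX system with pthreads is assumed; the file name, function names and iteration count are illustrative). Two threads update the same global counter, so each immediately sees the other's writes, while each keeps its own stack and program counter.

/* Minimal sketch, assuming a POSIX system with pthreads.
 * Two threads share the process's data segment (the global
 * counter below), so updates made by one are visible to the
 * other; each thread still has its own stack and registers.
 * Compile with: gcc shared_counter.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* lives in the shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* serialize access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final counter = %ld\n", counter); /* both threads' updates are visible: 200000 */
    return 0;
}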

Difference between Process and Thread

S.N. | Process | Thread
1 | A process is heavyweight and resource intensive. | A thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads of a process can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
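The contrast in the table can be seen in a short sketch (again assuming a POSIX system; the variable and file names are illustrative). A child created with fork() receives its own copy of the process's memory, so its write is invisible to the parent, whereas a peer thread created with pthread_create() writes directly into the shared address space.

/* Sketch contrasting a process with a thread, assuming POSIX.
 * fork() gives the child its own copy of 'value', so the parent's
 * copy stays unchanged; a thread shares the parent's memory, so
 * its write is visible. Compile with: gcc process_vs_thread.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

static void *thread_body(void *arg) {
    (void)arg;
    value = 2;                 /* same address space: visible to main */
    return NULL;
}

int main(void) {
    pid_t pid = fork();        /* heavyweight: new address space, private copy of 'value' */
    if (pid == 0) {
        value = 1;             /* changes only the child's private copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   value = %d\n", value);   /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);   /* lightweight: shares memory */
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* now 2 */
    return 0;
}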

Advantages of Thread

  • Threads minimise the context switching time.
  • Use of threads provides concurrency within a process.
  • Threads provide efficient communication.
  • It is more economical to create and context-switch threads than processes.
  • Threads allow utilisation of multiprocessor architectures to a greater scale and efficiency.

Types of Thread

Threads are implemented in the following two ways −

  • User Level Threads − User managed threads.

  • Kernel Level Threads − threads managed by the operating system, acting on the kernel, the core of the operating system.

User Level Threads

In this case, the kernel is not aware of the existence of threads; thread management is done entirely by the application through a thread library. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.

User level thread
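As a concrete sketch, the program below builds two cooperative user-level threads on top of the <ucontext.h> API (deprecated in recent POSIX revisions but still widely available; the task names and stack size are illustrative). The kernel sees only a single process: creating the threads and switching between them happens entirely in user space, with no kernel mode privileges required.

/* A minimal sketch of cooperative user-level threads, assuming a
 * system that still provides <ucontext.h>. The "thread library"
 * here is just getcontext/makecontext/swapcontext; the kernel
 * never sees the two user threads, only one process. */
#include <stdio.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, t1_ctx, t2_ctx;
static char stack1[STACK_SIZE], stack2[STACK_SIZE];

static void task1(void) {
    for (int i = 0; i < 3; i++) {
        printf("user thread 1, step %d\n", i);
        swapcontext(&t1_ctx, &t2_ctx);   /* yield to thread 2, saving our context */
    }
}

static void task2(void) {
    for (int i = 0; i < 3; i++) {
        printf("user thread 2, step %d\n", i);
        swapcontext(&t2_ctx, &t1_ctx);   /* yield back to thread 1 */
    }
}

int main(void) {
    /* Give each user thread its own stack; when a task returns,
     * control goes back to main via uc_link. */
    getcontext(&t1_ctx);
    t1_ctx.uc_stack.ss_sp = stack1;
    t1_ctx.uc_stack.ss_size = sizeof stack1;
    t1_ctx.uc_link = &main_ctx;
    makecontext(&t1_ctx, task1, 0);

    getcontext(&t2_ctx);
    t2_ctx.uc_stack.ss_sp = stack2;
    t2_ctx.uc_stack.ss_size = sizeof stack2;
    t2_ctx.uc_link = &main_ctx;
    makecontext(&t2_ctx, task2, 0);

    swapcontext(&main_ctx, &t1_ctx);     /* start thread 1; the two tasks interleave */
    return 0;
}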

Advantages

  • Thread switching does not require Kernel mode privileges.
  • User level threads can run on any operating system.
  • Scheduling can be application specific with user level threads.
  • User level threads are fast to create and manage.

Disadvantages

  • In a typical operating system, most system calls are blocking, so when one user level thread makes a blocking call the entire process is blocked, because the kernel sees only a single thread.
  • A multithreaded application cannot take advantage of multiprocessing, since the kernel schedules the process on only one processor at a time.

Kernel Level Threads

In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.

The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.
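The sketch below illustrates this, assuming a Linux-style system where each POSIX thread is backed by a kernel-level thread (a one-to-one mapping); the thread names are illustrative. While one thread blocks in sleep(), the kernel keeps scheduling the other.

/* A minimal sketch using POSIX threads, assuming a Linux-style 1:1
 * model where each pthread is backed by a kernel-level thread.
 * While the "blocker" thread sleeps in a blocking call, the kernel
 * can still schedule the "worker" thread.
 * Compile with: gcc kernel_threads.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg) {
    (void)arg;
    printf("blocker: entering a blocking call\n");
    sleep(2);                      /* blocks only this kernel thread */
    printf("blocker: woke up\n");
    return NULL;
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        printf("worker: still running, step %d\n", i);
        usleep(200000);            /* 0.2 s between steps */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, blocker, NULL);   /* the Kernel schedules each */
    pthread_create(&b, NULL, worker, NULL);    /* thread independently      */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}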

Advantages

  • The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
  • If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
  • Kernel routines themselves can be multithreaded.

Disadvantages

  • Kernel threads are generally slower to create and manage than the user threads.
  • Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.

Multithreading Models

Some operating systems provide a combined user level thread and Kernel level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three types of multithreading models −

  • Many to many relationship.
  • Many to one relationship.
  • One to one relationship.

Many to Many Model

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

The following diagram shows the many-to-many threading model, where 6 user level threads are multiplexed onto 6 kernel level threads. In this model, developers can create as many user threads as necessary, and the corresponding Kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Many to many thread model

Many to One Model

The many-to-one model maps many user level threads to one Kernel level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

User-level thread libraries implemented on operating systems that do not support Kernel threads use the many-to-one relationship model.

Many to one thread model

One to One Model

In this model there is a one-to-one relationship between a user-level thread and a Kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it allows multiple threads to execute in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding Kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.

One to one thread model

Difference between User-Level & Kernel-Level Thread

S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of Kernel threads.
3 | User-level threads are generic and can run on any operating system. | Kernel-level threads are specific to the operating system.
4 | Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
