Thread Synchronisation Issues

Concurrency and Synchronisation
When multiple threads access a shared resource and at least one of them modifies it while others are reading it, we run into all sorts of data consistency problems caused by a lack of synchronisation between the threads. For example, thread A may read shared data that is later changed by thread B, while thread A remains unaware of the change. Let’s first define some terms before jumping into the details (a short Java sketch of a critical section follows the list):

  • Synchronisation : using atomic operations to ensure the correct operation of cooperating threads.
  • Critical section : a section of code, or collection of operations, in which only one thread may be executing at a given time (e.g. the shopping example discussed below).
  • Mutual exclusion : mechanisms used to create critical sections, i.e. to ensure that only one thread is doing certain things at a given time.
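
As a minimal sketch, the Java code below uses the synchronized keyword to make increment() a critical section, so the read-modify-write on the shared counter is mutually exclusive. The Counter class and its names are illustrative, not from any library.

// A minimal sketch of a critical section in Java. The synchronized keyword
// provides mutual exclusion: only one thread at a time may execute
// increment() on the same Counter instance.
public class Counter {
    private int count = 0;

    // Critical section: a read-modify-write on shared state.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        // Always prints 20000; without synchronisation the total could be lower.
        System.out.println(counter.get());
    }
}

Without synchronized, the two threads could interleave their read-modify-write steps and some increments would be lost.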

However, synchronisation can introduce thread contention, which occurs when two or more threads try to access the same resource simultaneously and cause the Java runtime to execute one or more threads more slowly, or even suspend their execution. Starvation and livelock are forms of thread contention.

  • Starvation : Starvation describes a situation where a thread is unable to gain regular access to shared resources and so cannot make progress. This happens when shared resources are made unavailable for long periods by “greedy” threads. For example, suppose an object provides a synchronised method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronised access to the same object will often be blocked.
  • Livelock : A thread often acts in response to the action of another thread. If the other thread’s action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked; they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They are still blocking each other (a small Java sketch of this situation follows the list).
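
To make the corridor picture concrete, here is a hedged sketch in Java; the Walker class and its fields are hypothetical, not from any library. Both threads stay runnable the whole time, yet as long as they keep reacting to each other in lockstep neither makes progress. In practice, scheduling jitter may eventually break the symmetry, which is exactly why livelocks are often intermittent and hard to reproduce.

// A hedged sketch of the corridor livelock. Neither thread is ever blocked,
// but while they mirror each other neither gets past the other.
public class LivelockDemo {

    static class Walker {
        private final String name;
        private volatile String side; // which side of the corridor the walker is on

        Walker(String name, String side) {
            this.name = name;
            this.side = side;
        }

        void tryToPass(Walker other) {
            // Keep reacting to the other walker until the two are on different sides.
            while (side.equals(other.side)) {
                // Politely step to the other side... exactly as the other walker does.
                side = side.equals("left") ? "right" : "left";
                System.out.println(name + " moves to the " + side);
                try {
                    Thread.sleep(100); // keeps the two threads roughly in lockstep
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            System.out.println(name + " finally passes");
        }
    }

    public static void main(String[] args) {
        Walker alphonse = new Walker("Alphonse", "left");
        Walker gaston = new Walker("Gaston", "left");
        new Thread(() -> alphonse.tryToPass(gaston)).start();
        new Thread(() -> gaston.tryToPass(alphonse)).start();
    }
}

A common mitigation is to add a randomised back-off before retrying, so the two threads stop mirroring each other.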

Typically, mutual exclusion is achieved with a locking mechanism that prevents other threads from doing something while one thread holds the lock. For example, before going shopping, leave a note on the refrigerator: nobody shops if there is a note. A lock is an object that can be owned by only a single thread at any given time. The basic operations on a lock are (see the sketch after the list):

  • Acquire: mark the lock as owned by the current thread; if some other thread already owns the lock, first wait until the lock is free. A lock typically includes a queue to keep track of multiple waiting threads.
  • Release: mark the lock as free (it must currently be owned by the calling thread).
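
In Java, this acquire/release pattern maps onto java.util.concurrent.locks.ReentrantLock, whose lock() and unlock() methods correspond to acquire and release, and which queues waiting threads internally. The ShoppingList class below is only an illustration of the refrigerator-note idea.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A sketch of the acquire/release pattern using ReentrantLock.
public class ShoppingList {
    private final Lock lock = new ReentrantLock();
    private int itemsToBuy = 0;

    public void addItem() {
        lock.lock();       // acquire: block here if another thread owns the lock
        try {
            itemsToBuy++;  // critical section: only one thread at a time gets here
        } finally {
            lock.unlock(); // release: always done in a finally block
        }
    }
}

Releasing the lock in a finally block guarantees that it is freed even if the critical section throws an exception.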

Synchronisation mechanisms need more than just mutual exclusion; we also need a way to wait for another thread to do something (e.g. wait for a character to be added to a buffer). We can achieve this by using condition variables.

Condition variables are used to wait for a particular condition to become true (e.g. there are characters in the buffer). They support three operations, illustrated by the buffer sketch after this list:

  • wait(condition, lock): release the lock and put the thread to sleep until the condition is signalled; when the thread wakes up again, re-acquire the lock before returning.
  • signal(condition, lock): if any threads are waiting on the condition, wake up one of them. The caller must hold the lock, which must be the same lock used in the wait call.
  • broadcast(condition, lock): same as signal, except that all waiting threads are woken up.
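
In Java these operations correspond to Condition.await(), signal() and signalAll() from java.util.concurrent.locks. The bounded character buffer below is a sketch of the idea (the CharBuffer class itself is illustrative): a consumer waits until a character has been added, and a producer waits until there is space.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A sketch of a bounded character buffer built from one lock and two
// condition variables.
public class CharBuffer {
    private final Deque<Character> buffer = new ArrayDeque<>();
    private final int capacity = 16;

    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition(); // "characters in buffer"
    private final Condition notFull = lock.newCondition();  // "space in buffer"

    public void put(char c) throws InterruptedException {
        lock.lock();
        try {
            while (buffer.size() == capacity) {
                notFull.await();      // release lock, sleep, re-acquire on wake-up
            }
            buffer.addLast(c);
            notEmpty.signal();        // wake one thread waiting for a character
        } finally {
            lock.unlock();
        }
    }

    public char take() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty()) {
                notEmpty.await();     // wait for a character to be added
            }
            char c = buffer.removeFirst();
            notFull.signal();         // wake one thread waiting for space
            return c;
        } finally {
            lock.unlock();
        }
    }
}

Note that each await() sits inside a while loop: a woken thread must re-check the condition after it re-acquires the lock, because another thread may have consumed the character in the meantime.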

A final note on implementation: with user-level threads, thread management is done entirely in user space by the thread library. When such a thread makes a blocking system call, the entire process is blocked, and because only one thread can access the kernel at a time, multiple threads cannot run in parallel on a multiprocessor.

