Thread Synchronisation Issues

Concurrency and Synchronisation
When multiple threads access a shared resource and some of them modify it while others read it, we run into all sorts of data-consistency problems caused by poor synchronisation among the threads. For example, thread A may read shared data that is later changed by thread B, with thread A unaware of the change (a sketch of such a lost update appears after the definitions below). Let's first define some terms before jumping into the details:

  • Synchronisation: using atomic operations to ensure the correct operation of cooperating threads.
  • Critical section: a section of code, or collection of operations, in which only one thread may be executing at a given time (e.g. the shopping trip in the refrigerator-note example later in this post).
  • Mutual exclusion: mechanisms used to create critical sections, i.e. to ensure that only one thread is doing certain things at a given time.
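To make the race described above and the critical-section fix concrete, here is a minimal Java sketch (the class and field names are illustrative, not from this post): two threads increment one counter without any synchronisation and another counter inside a synchronised block that acts as a critical section.

```java
public class CounterDemo {
    private static int unsafeCounter = 0;            // shared data, no synchronisation
    private static int safeCounter = 0;              // shared data guarded by a lock
    private static final Object lock = new Object(); // lock object for the critical section

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;          // read-modify-write is NOT atomic: updates can be lost
                synchronized (lock) {     // critical section: only one thread at a time in here
                    safeCounter++;
                }
            }
        };
        Thread a = new Thread(work, "A");
        Thread b = new Thread(work, "B");
        a.start(); b.start();
        a.join();  b.join();

        // unsafeCounter is usually below 200000; safeCounter is always exactly 200000.
        System.out.println("unsafe = " + unsafeCounter + ", safe = " + safeCounter);
    }
}
```

Because both threads synchronise on the same lock object, their increments of safeCounter cannot interleave; that is mutual exclusion in action.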

However, synchronisation can introduce thread contention, which occurs when two or more threads try to access the same resource simultaneously and cause the Java runtime to execute one or more threads more slowly, or even suspend their execution. Starvation and livelock are forms of thread contention.

  • Starvation: starvation describes a situation where a thread is unable to gain regular access to shared resources and so cannot make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads. For example, suppose an object provides a synchronised method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronised access to the same object will often be blocked.
  • Livelock: a thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress; however, the threads are not blocked, they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They are still blocking each other. (A sketch of a livelock built from lock back-off and retry appears after this list.)
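As a hedged illustration of livelock (the lock names and transfer scenario are assumptions made for this sketch, not something from the post), here is one classic way it arises: each thread holds one lock, tries to take the other with tryLock(), and backs off and retries when it fails. With unlucky timing both threads back off in lock-step, staying busy without ever making progress.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LivelockDemo {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    static void transfer(String name, ReentrantLock first, ReentrantLock second) {
        for (int attempt = 0; attempt < 5; attempt++) {   // bounded retries so the demo terminates
            first.lock();                                 // hold one lock
            try {
                if (second.tryLock()) {                   // try the other lock without blocking
                    try {
                        System.out.println(name + ": acquired both locks, done");
                        return;
                    } finally {
                        second.unlock();
                    }
                }
                System.out.println(name + ": could not get second lock, backing off");
            } finally {
                first.unlock();                           // back off and retry
            }
            // With unlucky timing both threads keep backing off and retrying together:
            // neither is blocked, yet neither makes progress. That is livelock, not deadlock.
        }
    }

    public static void main(String[] args) {
        new Thread(() -> transfer("T1", lockA, lockB)).start();
        new Thread(() -> transfer("T2", lockB, lockA)).start();
    }
}
```

In practice this demo usually succeeds after a retry or two; the point is that the threads stay runnable the whole time, which is what distinguishes livelock from deadlock.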

Typically, mutual exclusion is achieved with a locking mechanism: prevent others from doing something while you do it. For example, before going shopping, leave a note on the refrigerator: don't shop if there is a note. We can model this as a lock object that can be owned by only a single thread at any given time. The basic operations on a lock are:

  • Acquire: mark the lock as owned by the current thread; if some other thread already owns the lock, first wait until the lock is free. The lock typically includes a queue to keep track of multiple waiting threads.
  • Release: mark the lock as free (it must currently be owned by the calling thread). A sketch using java.util.concurrent.locks.ReentrantLock follows this list.
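Here is a minimal sketch of acquire and release using java.util.concurrent.locks.ReentrantLock (the SharedCounter class is illustrative). lock() acquires the lock, blocking and queueing the caller if another thread owns it; unlock() releases it and must be called by the owning thread, which is why it sits in a finally block.

```java
import java.util.concurrent.locks.ReentrantLock;

public class SharedCounter {
    private final ReentrantLock lock = new ReentrantLock(); // keeps a queue of waiting threads
    private int value = 0;

    public void increment() {
        lock.lock();            // acquire: blocks until the lock is free
        try {
            value++;            // critical section: at most one thread executes this at a time
        } finally {
            lock.unlock();      // release: must be performed by the owning thread
        }
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```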

Synchronisation mechanisms need more than just mutual exclusion; we also need a way to wait for another thread to do something (e.g. wait for a character to be added to a buffer). We can achieve this by using condition variables.

Condition variables are used to wait for a particular condition to become true (e.g. "there are characters in the buffer"). A sketch of a bounded character buffer built on condition variables follows the list below.

  • wait(condition, lock): release the lock and put the thread to sleep until the condition is signalled; when the thread wakes up again, re-acquire the lock before returning.
  • signal(condition, lock): if any threads are waiting on the condition, wake up one of them. The caller must hold the lock, which must be the same lock used in the wait call.
  • broadcast(condition, lock): the same as signal, except that all waiting threads are woken up.
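Java's java.util.concurrent.locks.Condition maps directly onto these operations: await() corresponds to wait (it releases the lock, sleeps, and re-acquires the lock before returning), signal() wakes one waiter, and signalAll() corresponds to broadcast. The bounded buffer below is a hedged sketch of the "characters in buffer" example; the CharBuffer class and the notEmpty/notFull names are illustrative, not from the post.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class CharBuffer {
    private final Deque<Character> buffer = new ArrayDeque<>();
    private final int capacity = 16;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition(); // "there are characters in the buffer"
    private final Condition notFull  = lock.newCondition(); // "there is space in the buffer"

    public void put(char c) throws InterruptedException {
        lock.lock();
        try {
            while (buffer.size() == capacity) {
                notFull.await();        // wait(notFull, lock): release lock and sleep
            }
            buffer.addLast(c);
            notEmpty.signal();          // signal(notEmpty, lock): wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public char take() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty()) {
                notEmpty.await();       // sleep until a character is added
            }
            char c = buffer.removeFirst();
            notFull.signal();           // wake one producer waiting for space
            return c;
        } finally {
            lock.unlock();
        }
    }
}
```

The condition is re-checked in a while loop after every await() because a woken thread must confirm the condition still holds once it re-acquires the lock.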

In the user-level (many-to-one) threading model, thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

