The synchronized keyword in Java is used to implement thread synchronization, ensuring that at most one thread executes a synchronized method or block at the same time. It works by entering and exiting a monitor object.
In Java, synchronized can modify a method or a code block:
When it modifies a method, the entire method body is treated as a critical section, and only one thread at a time can execute the method.
When it modifies a block, only the thread that holds the lock on the specified object can execute the block.
The synchronized keyword solves thread-safety problems when multiple threads access shared resources concurrently. When a thread enters a synchronized method or block, it automatically acquires the lock on the object, and other threads must wait for that thread to release the lock before they can proceed. This guarantees that only one thread accesses the shared resource at a time, avoiding the data inconsistency that concurrent access would otherwise cause.
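For illustration, here is a minimal, hypothetical counter (the class and field names are ours, not from the original article) showing both the method form and the block form guarded by the same object's monitor:

```java
public class Counter {
    private int count = 0;

    // synchronized instance method: the lock is the current instance (this)
    public synchronized void increment() {
        count++;
    }

    // synchronized block: the lock is also this, so it cooperates with increment()
    public void decrement() {
        synchronized (this) {
            count--;
        }
    }

    public synchronized int get() {
        return count;
    }
}
```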
Characteristics of the lock provided by the synchronized keyword:
Pessimistic lock: the lock is acquired every time the shared resource is accessed.
Non-fair lock: threads do not necessarily acquire the lock in the order in which they blocked.
Reentrant lock: a thread that already holds the lock can acquire it again.
Exclusive (mutex) lock: only one thread can hold the lock at a time; threads that fail to acquire it block.
1) synchronized modifying a method, code example:
```java
// synchronized modifying an instance method
public synchronized void method() {
    // method body
}

// synchronized modifying a static method
public static synchronized void method() {
    // method body
}
```

Taking the synchronized instance method as an example, decompiling it shows the following:
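A rough sketch of what `javap -v` typically prints for such a method (illustrative output, not from the original article; exact output varies by JDK version):

```
public synchronized void method();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=0, locals=1, args_size=1
         0: return
```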
For a synchronized method, the JVM locks and unlocks implicitly through the ACC_SYNCHRONIZED flag in the method's access flags.
2) synchronized modifying a code block, example:
```java
// synchronized block: the lock object is an instance of the class
public void method() {
    synchronized (this) {
        // block body
    }
}

// synchronized block: the lock object is the Class object of the class
public void method() {
    synchronized (this.getClass()) {
        // block body
    }
}
```

Taking the synchronized block whose lock object is an instance object as an example, decompiling it shows the following:
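Again as an illustrative, annotated sketch (the exact offsets depend on the block's contents), the `javap -c` output for a method containing a `synchronized (this) { ... }` block typically includes:

```
 0: aload_0        // push the lock object (this)
 1: dup
 2: astore_1       // keep a copy for the matching monitorexit
 3: monitorenter   // acquire the monitor
 4: aload_1
 5: monitorexit    // release the monitor on the normal path
 6: goto          14
 9: astore_2
10: aload_1
11: monitorexit    // release the monitor if an exception was thrown
12: aload_2
13: athrow
14: return
```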
When synchronized modifies a code block, the underlying implementation locks and unlocks through the two bytecode instructions monitorenter and monitorexit:
monitorenter acquires the lock before the synchronized code executes.
monitorexit releases the lock after the synchronized code completes.
If an exception is thrown, the lock is also released through monitorexit.
Whether through the ACC_SYNCHRONIZED flag or the monitorenter/monitorexit instructions, locking and unlocking ultimately rely on acquiring a monitor lock. The monitor lock is implemented by ObjectMonitor, whose data structure in the virtual machine (implemented in C++) is as follows:
```cpp
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;
    _waiters      = 0;
    _recursions   = 0;     // re-entry count of the lock
    _object       = NULL;
    _owner        = NULL;  // the thread that currently holds the ObjectMonitor
    _WaitSet      = NULL;  // threads that called wait(), waiting to be notified
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;  // threads blocked waiting to acquire the lock
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
}
```

The synchronized locking process is shown in the figure below:
Processing flow: 1) When multiple threads access the synchronized code at the same time, they first enter the EntryList queue (the blocking queue) and block.
2) When a thread acquires the object's monitor, it enters the critical section and the owner field of the monitor is set to the current thread (i.e., the thread now holds the object lock).
3) If the thread holding the object lock calls the wait() method, it releases the lock it currently holds, the owner field is reset to null, and the thread enters the WaitSet, waiting to be woken up.
4) Threads in the WaitSet that are woken up are put back into the EntryList queue to compete for the lock again.
5) When the current thread finishes executing, it releases the object lock and resets the owner field so that other threads can acquire the lock.
A Java object consists of three regions in JVM memory:
Object header: divided into the Mark Word (mark field), the class pointer (type pointer), and the array length (if the Java object is an array).
Class pointer: a pointer to the object's class metadata, which the virtual machine uses to determine which class the object is an instance of.
Array Length: Stores the length of the array object.
Instance data: stores the object's real, useful information (such as field values), including fields declared in this class and in its parent classes.
Alignment padding: has no special meaning; the virtual machine requires an object's starting address to be a multiple of 8 bytes, so this region exists purely for byte alignment.
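If you want to inspect this layout directly, the OpenJDK JOL tool can print it. The sketch below is ours, not from the original article, and assumes the org.openjdk.jol:jol-core dependency is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // Prints the mark word, class pointer, and padding of a plain object
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        synchronized (obj) {
            // Inside the synchronized block, the mark word reflects the lock state
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}
```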
In a 32-bit virtual machine, Mark Word is composed as follows:
The Mark Word contains the object's hashCode, the thread ID of the biased owner, the biased-lock flag, the lock flag bits, and the biased-lock epoch (2 bits).
No-lock state: the lock flag bits are 01 and the biased-lock bit is 0; the object's hashCode is computed and written into the object header. When the object is later locked, the hashCode is moved into the monitor.
Biased lock: the lock flag bits are 01, the biased-lock bit is 1, and the biased thread ID is the thread ID of the thread holding the biased lock.
Lightweight lock: the lock flag bits are 00, and the pointer in the Mark Word is pointed to the lock record in the thread's stack via a CAS spin.
Heavyweight lock: the lock flag bits are 10, and the pointer in the Mark Word points to the monitor.
GC mark: the lock flag bits are 11. The object's generational age records how many GCs the object has survived; each time the object is copied within the survivor space during a GC, its age increases by 1. When the object reaches the configured age threshold, it is promoted from the young generation to the old generation. The age field occupies 4 bits (maximum 2^4 - 1 = 15); by default the promotion threshold is 15 for the parallel collector and 6 for the concurrent (CMS) collector.
Since JDK 1.6, the implementation of synchronized has been substantially reworked. In addition to the CAS spin already used in JDK 1.5, optimizations such as adaptive spin locks, lock elimination, lock coarsening, biased locks, and lightweight locks were added, greatly improving the performance of synchronized. At the same time its semantics are clear, it is simple to use, and there is no need to release it manually, so it is recommended to use the synchronized keyword whenever possible.
Synchronized locks have four main states; from highest to lowest performance they are: no-lock, biased lock, lightweight lock, and heavyweight lock. A lock can be upgraded from biased to lightweight and then to heavyweight, but the upgrade is one-way: a lock can only go from a lower state to a higher one and is never downgraded.
In JDK 1.6, biased locks and lightweight locks are enabled by default; biased locking can be disabled with -XX:-UseBiasedLocking.
Example:
```java
Object object = new Object();
synchronized (object) {
    // critical section
}
```

Before any thread enters the critical section, the lock flag bits are 01, the biased-lock bit is 0, and the object's hashCode is computed and written into the object header.
Biased lock upgrade process
When thread A enters the critical section and finds that the lock flag bits are 01 and the biased-lock bit is 0, it immediately records thread A's thread ID into the object's Mark Word and sets the biased-lock bit to 1.
If thread A enters the critical section again and finds that the thread ID recorded in the Mark Word is its own, it executes directly. This is why biased locks are extremely efficient.
Biased lock revocation process
A biased lock is not released when the synchronized code finishes (i.e., the thread does not actively release it). It is only revoked when another thread competes for it: the thread holding the biased lock has its bias revoked and the lock is released. Revocation must also wait for a global safepoint (a point in time at which no bytecode is being executed).
When the global safepoint is reached, there are two revocation scenarios, depending on whether the thread currently holding the biased lock has finished executing the synchronized code:
1) Thread A is still executing the synchronized code (i.e., it has not yet exited the synchronized block). If thread B contends for the lock at this point, the biased lock is revoked and upgraded to a lightweight lock (the lightweight lock is held by thread A, which originally held the biased lock and continues executing its synchronized code), while thread B spins, waiting to acquire the lightweight lock.
2) Thread A has finished executing the synchronized code (i.e., it has exited the synchronized block). If thread B contends for the lock at this point, the biased lock is revoked, the thread ID field is cleared, and the biased-lock bit is set to 0. Depending on whether thread A competes for the lock again, there are two cases:
If thread A no longer competes, the lock is re-biased to thread B (i.e., thread B holds the biased lock).
If thread A continues to compete, the lock is upgraded to a lightweight lock and the threads contend for it via CAS spinning.
Lightweight locks are designed to avoid the performance cost of traditional heavyweight locks, which rely on the operating system's mutexes, in scenarios without real multi-thread contention. They therefore improve performance when threads execute the synchronized code almost alternately rather than at the same time.
Lightweight lock upgrade process
When thread A and thread B contend for the lock object at the same time, the biased lock is revoked and the lock is upgraded to a lightweight lock, as follows:
Before thread A executes the synchronized code, the JVM creates a space in thread A's stack frame to store the lock record. When thread A contends for the lock object, it copies the Mark Word from the lock object's header into its lock record and, via a CAS spin, attempts to point the Mark Word's lock-record pointer at thread A's lock record space.
If the CAS update succeeds, thread A holds the object lock and the lock flag bits in the object's Mark Word are updated to 00 (i.e., thread A holds the lightweight lock and executes the synchronized code), while thread B tries to obtain the lightweight lock by CAS spinning.
If the update fails, the lock is preempted by thread B.
The lock record is a thread-private data structure: each thread has a list of free lock records, and there is also a global list of free lock records. The Mark Word of each locked object is associated with a lock record (i.e., the lock-record pointer in the object header's Mark Word points to the start address of the lock record), and the lock record contains an owner field that stores the unique identifier of the thread holding the lock, indicating which thread occupies it.
The internal structure of the lock record is shown in the figure below.
If a thread fails to acquire the lock through CAS spinning, there is lock contention, and the lightweight lock inflates into a heavyweight lock.
Lightweight lock revocation process
There are two scenarios for lightweight lock revocation:
When two or more threads compete for the lock at the same time, the lightweight lock is revoked and upgraded to a heavyweight lock; instead of acquiring the lock by CAS spinning, the waiting threads are blocked directly.
When the thread holding the lightweight lock finishes executing the synchronized code, it releases the lightweight lock by using a CAS to replace the lock-record pointer in the lock object's Mark Word with the original Mark Word saved in its lock record.
When multiple threads compete for the same lock at the same time, the JVM escalates a lightweight lock to a heavyweight lock. Heavyweight locks are implemented internally through monitors, which in turn are implemented through mutex locks of the underlying operating system.
When the lock is upgraded to a heavyweight lock, the lock flag bits in the Mark Word are updated to 10 and the pointer in the Mark Word points to the monitor.
Heavyweight lock workflow: when the JVM sees that the lock is a heavyweight lock, it blocks the threads waiting to acquire the lock, and a blocked thread does not consume CPU. However, blocking or waking a thread requires the CPU to switch from user mode to kernel mode, and this transition is costly, so the overhead of heavyweight locks is relatively large.
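A small, hypothetical demo of this blocking behavior (thread names and timings below are ours): while one thread holds the monitor, a second contending thread is reported as BLOCKED.

```java
public class BlockedDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();

        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try {
                    Thread.sleep(2000);   // hold the lock for a while
                } catch (InterruptedException ignored) { }
            }
        });

        Thread contender = new Thread(() -> {
            synchronized (lock) {
                // does nothing; just contends for the lock
            }
        });

        holder.start();
        Thread.sleep(100);    // let holder acquire the lock first
        contender.start();
        Thread.sleep(100);    // give contender time to block on the monitor

        // Expected to print BLOCKED while holder still owns the monitor
        System.out.println(contender.getState());

        holder.join();
        contender.join();
    }
}
```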
Advantages, disadvantages, and applicable scenarios of biased locks, lightweight locks, and heavyweight locks:
Biased locks: suitable for the single-threaded case; when there is no lock contention, a biased lock is used on entering the synchronized code.
Lightweight locks: suitable when contention is mild and the synchronized code executes quickly; when contention appears, the lock is upgraded to a lightweight lock. Lightweight locks use spinning, which occupies CPU, but they are still more efficient than heavyweight locks in this situation.
Heavyweight locks: suitable when contention is fierce and the synchronized code takes a long time to execute; at that point the cost of lightweight-lock spinning exceeds that of a heavyweight lock, and the lightweight lock must be upgraded to a heavyweight lock.
Besides biased locks and lightweight locks, JDK 1.6 also introduced lock optimization strategies such as adaptive spin locks, lock elimination, and lock coarsening.
A spin lock is a busy-waiting lock: when a thread tries to acquire a lock that is already held by another thread, it waits in a loop (spins) until the lock becomes available. Spin locks are suitable when locks are held for only short periods.
Spin locks were introduced in JDK 1.4.2 and are disabled by default there (they can be turned on with -XX:+UseSpinning); since JDK 1.6 they are enabled by default. The default number of spins is 10, adjustable via -XX:PreBlockSpin.
An adaptive spin lock is an improved spin lock that dynamically adjusts the number of spins based on the usage of the lock:
If spinning on this lock recently succeeded in acquiring it, the JVM assumes it is likely to succeed again and allows the current thread to spin more times.
If spinning on this lock has rarely succeeded in the past, the JVM reduces the number of spins or skips spinning altogether, so as not to waste CPU resources.
Lock elimination: during JIT compilation the JVM scans the running context and, through escape analysis, determines that a certain piece of code cannot possibly be contended or have its lock object shared; the lock on that code is then eliminated to improve performance.
Example:
```java
public void method() {
    // the lock object is local to the method and never escapes it
    Object lock = new Object();
    synchronized (lock) {
        // ... work on data that is not shared ...
    }
}
```

In the code above, the lock object is private to the method and can never be accessed by another thread, so no locking is needed at all and the JVM performs lock elimination.
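Another commonly cited case (our example, not from the original article) is a StringBuffer that never escapes the method: its append() calls are synchronized, but after escape analysis the JIT can elide those locks.

```java
public class ConcatDemo {
    public String concat(String s1, String s2) {
        // sb is local and never escapes this method, so the JIT can elide
        // the locking inside StringBuffer's synchronized append() calls
        StringBuffer sb = new StringBuffer();
        sb.append(s1);
        sb.append(s2);
        return sb.toString();
    }
}
```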
Lock coarsening: normally the scope of a synchronized block should be as small as possible, synchronizing only over the shared data, so as to minimize the number of synchronized operations and shorten blocking time. However, acquiring and releasing a lock also consumes resources, so if the code contains a series of consecutive lock and unlock operations on the same object, the repeated locking and unlocking may cause unnecessary performance loss.
Example:
```java
public void method(Object lock) {
    synchronized (lock) {
        // first operation
    }
    synchronized (lock) {
        // second operation
    }
}
```

In the code above, the two synchronized blocks lock the same object and can be merged into a single block, which reduces the overhead of repeatedly acquiring and releasing the lock and improves performance.
The wait-notify mechanism is a way for threads to cooperate. It is implemented through the wait(), notify(), and notifyAll() methods, which must be called inside a synchronized block or synchronized method, otherwise an IllegalMonitorStateException is thrown. Specifically:
wait() method: blocks the current thread and releases the lock; the current thread enters the waiting state.
notify() method: after the synchronized block finishes, the lock is released and one thread in the waiting queue is woken up (chosen arbitrarily).
notifyAll() method: after the synchronized block finishes, the lock is released and all threads in the waiting queue are woken up.
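A related idiom worth showing (our sketch, not from the original article): because a woken thread must re-check its condition after reacquiring the lock, wait() is conventionally called inside a loop rather than an if.

```java
public class Buffer {
    private final Object lock = new Object();
    private boolean ready = false;

    public void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {      // re-check the condition after every wake-up
                lock.wait();
            }
            // ready is true here; proceed with the protected work
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();     // wake all waiters; each re-checks the condition
        }
    }
}
```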
The execution flow of the wait-notify mechanism is shown in the figure below.
Processing flow: 1) A thread executes the synchronized code block and, when its condition is not met, calls the wait() method, releasing the lock it holds; the thread is added to the waiting queue and suspended.
2) After another thread finishes executing the synchronized block and the condition is met, it calls notifyAll() or notify() to wake up the blocked threads in the waiting queue, which are moved into the EntryList (blocking queue) to compete for the lock again.
Example:
```java
import lombok.extern.slf4j.Slf4j;

import java.util.ArrayList;
import java.util.List;

/**
 * @author Nanqiu
 * synchronized wait-notify mechanism example
 */
@Slf4j
public class Allocator {

    private final List<Object> als = new ArrayList<>();

    /**
     * Acquire both resources; wait while either one is already held.
     *
     * @param from source resource
     * @param to   target resource
     */
    public synchronized void apply(Object from, Object to) {
        // re-check the condition every time the thread is woken up
        while (als.contains(from) || als.contains(to)) {
            try {
                wait();
            } catch (Exception e) {
                // ignore and re-check the condition
            }
        }
        als.add(from);
        als.add(to);
    }

    /**
     * Return the resources.
     *
     * @param from source resource
     * @param to   target resource
     */
    public synchronized void free(Object from, Object to) {
        als.remove(from);
        als.remove(to);
        notifyAll();
    }
}
```

Note:
notify() randomly notifies one of the threads in the waiting queue, which can cause some threads to never be notified.
notifyall() notifies all threads in the waiting queue.
So, unless it's deliberate, try to use notifyAll().
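A hypothetical usage sketch (the Account class and the transfer logic are ours, not part of the original example): a transfer acquires both accounts through the allocator before touching the balances and always frees them afterwards.

```java
public class Account {
    // a single shared allocator guarding pairs of accounts (hypothetical usage)
    private static final Allocator ALLOCATOR = new Allocator();

    private int balance;

    public void transfer(Account target, int amount) {
        ALLOCATOR.apply(this, target);      // wait until both accounts are free
        try {
            if (this.balance >= amount) {
                this.balance -= amount;
                target.balance += amount;
            }
        } finally {
            ALLOCATOR.free(this, target);   // return both accounts and notify waiters
        }
    }
}
```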