1. DistributedCache

    Intention: To detect potential deadlocks in cache invalidation operations.

    Scenario: We initialize two ReentrantReadWriteLock instances (lock1 and lock2) guarding the shared resource cache. Two threads concurrently access this shared resource for different operations: invalidation and reading. This common pattern in multithreaded applications can lead to deadlock.

    The critical issue arises from lock ordering:

    1. invalidate() operation:
      • First acquires lock1
      • Then attempts to acquire lock2
      • Must wait if lock2 is held by another thread
      • Holds lock1 while waiting
    2. read() operation:
      • First acquires lock2
      • Then attempts to acquire lock1
      • Must wait if lock1 is held by another thread
      • Holds lock2 while waiting

    Potential Deadlock Scenario:

    1. Thread#1 (invalidate):
      • Acquires lock1
      • Waits for lock2
    2. Thread#2 (read):
      • Acquires lock2
      • Waits for lock1
    3. Result: Both threads are stuck waiting indefinitely, as neither can proceed without the lock held by the other thread.
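    The opposite acquisition orders above can be sketched in Java as follows. The class name, lock fields, and method bodies are illustrative assumptions based on the description, not the original source:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the DistributedCache scenario: two locks taken in
// opposite orders by invalidate() and read().
class DistributedCache {
    private final Map<String, String> cache = new HashMap<>();
    private final ReentrantReadWriteLock lock1 = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock lock2 = new ReentrantReadWriteLock();

    // Acquires lock1 first, then lock2 (holds lock1 while waiting for lock2).
    public void invalidate(String key) {
        lock1.writeLock().lock();
        try {
            lock2.writeLock().lock();
            try {
                cache.remove(key);
            } finally {
                lock2.writeLock().unlock();
            }
        } finally {
            lock1.writeLock().unlock();
        }
    }

    // Acquires the locks in the OPPOSITE order: lock2 first, then lock1.
    public String read(String key) {
        lock2.readLock().lock();
        try {
            lock1.readLock().lock();
            try {
                return cache.get(key);
            } finally {
                lock1.readLock().unlock();
            }
        } finally {
            lock2.readLock().unlock();
        }
    }
}
```

    Running invalidate() and read() concurrently from two threads can reproduce the hang: Thread#1 blocks on lock2 while holding lock1, and Thread#2 blocks on lock1 while holding lock2. Acquiring both locks in the same order in every method removes the cycle.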

    This is how it looks when both threads are blocked, each waiting to acquire the lock held by the other.
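    The waiting threads can also be observed programmatically. Below is a minimal sketch using the JDK's standard ThreadMXBean, whose findDeadlockedThreads() reports threads deadlocked on monitors or ownable synchronizers such as ReentrantReadWriteLock; the wrapper class here is illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch: runtime deadlock detection via the JDK's ThreadMXBean.
class DeadlockDetector {
    // Returns the IDs of deadlocked threads, or null if none are deadlocked.
    static long[] findDeadlockedThreadIds() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.findDeadlockedThreads();
    }

    // Prints which thread is waiting on which lock, and who holds it.
    static void report() {
        long[] ids = findDeadlockedThreadIds();
        if (ids == null) {
            System.out.println("No deadlock detected");
            return;
        }
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : bean.getThreadInfo(ids)) {
            System.out.println(info.getThreadName()
                + " is waiting on " + info.getLockName()
                + " held by " + info.getLockOwnerName());
        }
    }
}
```

    This is the same information a jstack thread dump shows; calling report() periodically from a monitoring thread is one way to surface the condition in a running application.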


  2. Message Processing PubSub

    Intention: To detect race conditions in a pub-sub message processing model.

    Scenario: The code simulates the classic Producer-Consumer problem using two threads.

    These operations are performed on the shared resource messageQueue. The shared variable isProcessing is used to control the flow between the producer and the consumer.

    The Producer and Consumer classes both extend Thread, running concurrently to manage a shared messageQueue. The Producer adds messages to the queue inside a synchronized block, ensuring thread safety, and checks a shared isProcessing flag to avoid conflicts. Similarly, the Consumer retrieves and processes messages from the queue within a synchronized block, updating the isProcessing flag to allow the producer to add new messages after processing is complete. In the main() method, both threads are started to simulate continuous production and consumption of messages, though the current design risks race conditions due to the non-atomic updates to the shared flag.

    The isProcessing flag is supposed to prevent the producer from adding new messages while the consumer is still processing one. However, there is a race condition between the producer and the consumer related to this flag:

    1. Producer:
      • The producer checks whether isProcessing is false before adding a message to the queue; if so, it adds the message and sets isProcessing to true.
    2. Consumer:
      • The consumer checks if there are messages in the queue and processes one if available. After processing, it sets isProcessing back to false to allow the producer to add a new message.

    The race condition occurs because there is no strict synchronization between the producer and the consumer regarding the isProcessing flag.

    For instance: the consumer may set isProcessing back to false before it has actually finished processing the current message, because resetting the flag and doing the processing are not a single atomic step. The producer then observes isProcessing == false and adds a new message even though the previous one is still being processed.

    This can cause incorrect behavior where the producer might add multiple messages before the consumer has finished processing them, leading to an overflow of messages in the queue or improper synchronization between production and consumption.
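    The racy flag handling described above can be sketched as follows. The class and field names (messageQueue, isProcessing, Producer, Consumer) follow the text; the loop bounds and message contents are illustrative assumptions:

```java
import java.util.LinkedList;
import java.util.Queue;

// Sketch of the Producer-Consumer race: the queue itself is guarded
// by synchronized blocks, but the isProcessing flag is checked and
// updated outside them, so check-then-act on the flag is not atomic.
class MessageProcessing {
    static final Queue<String> messageQueue = new LinkedList<>();
    // volatile gives visibility across threads, but NOT atomicity.
    static volatile boolean isProcessing = false;

    static class Producer extends Thread {
        @Override
        public void run() {
            for (int i = 0; i < 100; i++) {
                // Race window: the flag can change between this check
                // and the update below.
                if (!isProcessing) {
                    synchronized (messageQueue) {
                        messageQueue.add("msg-" + i);
                    }
                    isProcessing = true; // not atomic with the check above
                }
            }
        }
    }

    static class Consumer extends Thread {
        @Override
        public void run() {
            for (int i = 0; i < 100; i++) {
                String msg;
                synchronized (messageQueue) {
                    msg = messageQueue.poll();
                }
                if (msg != null) {
                    // "Processing" and the flag reset are not atomic:
                    // the flag may read false while work is still underway.
                    isProcessing = false;
                }
            }
        }
    }
}
```

    One way to close the window is to make the check and update of isProcessing happen inside the same synchronized (messageQueue) block as the queue operation, or to coordinate the two threads with wait()/notify() on the queue instead of polling a flag.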