Follow these steps to get the Locking Service application running on your machine.
Clone the repository from GitHub, then fetch dependencies and start the application:

```shell
git clone https://github.com/FitrahHaque/Distributed-Lock
cd Raft-Consensus
go mod tidy
go run main.go
```

You will see a menu to choose actions to perform, including Data Store, Server, and Client operations.
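Each run of `main.go` asks which role the process should play. Conceptually, the menu boils down to a small dispatch on the chosen letter; the function and role names below are an illustrative sketch, not the project's actual code:

```go
package main

import "fmt"

// roleFor maps a menu choice to the role the process assumes.
// The letters mirror the menu; the function is hypothetical.
func roleFor(choice string) string {
	switch choice {
	case "D":
		return "data store"
	case "C":
		return "client"
	case "S":
		return "server"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(roleFor("D")) // data store
	fmt.Println(roleFor("S")) // server
}
```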
Follow the steps below to run a demonstration of the distributed locking mechanism.
Open a new terminal window within the Raft-Consensus directory and execute:

```shell
go run main.go
# Select Data Store (option D)
D
```

Open 3 separate terminals and in each run:

```shell
go run main.go
# Select Client (option C)
C
# Assign each client a unique name:
1 client1
```

Repeat this for client2 and client3.
Open 5 terminals. In each, run:

```shell
go run main.go
# Select Server (option S)
S
# Provide a unique Server ID (0 to 4):
1 0
```

Then, on the terminals for servers 1-4, connect to server 0 to form the cluster:

```shell
# Connect to serverID 0
13 0 [::]:8080
```

Check the current leader status:
```shell
9
```

Run the following on each client terminal to connect to the cluster:

```shell
2 10
```

On client1, acquire lock l1 for 60 seconds:

```shell
3 l1 60
```

After the lock is acquired, write data to the store:

```shell
5 hi l1
```

Verify the write operation by checking the file client/data.txt.

Finally, release the lock:

```shell
4 l1
```

Check the server terminals to verify the lock release status.
On client2, request lock l1:

```shell
3 l1 30
```

On client3, simultaneously request lock l1:

```shell
3 l1 30
```

client2 will obtain the lock first. After 30 seconds, it will automatically pass to client3. Verify this by observing the client and server logs.
Request lock l2 simultaneously from all three clients:

```shell
3 l2 40
```

Initially, client1 acquires the lock. Now simulate a leader crash (assuming the leader is server 0):

```shell
# Press Control-C, then Enter, to crash the leader.
```

Clients will automatically reconnect to the newly elected leader. Verify that pending lock requests are processed correctly by checking the logs across terminals.
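Client failover after a crash can be pictured as walking the list of known server addresses until a live one answers, then treating that server as the new contact point. A hypothetical sketch (the addresses, `server` type, and `reconnect` function are assumptions, not the project's API):

```go
package main

import "fmt"

// server is a hypothetical view of a cluster member's reachability.
type server struct {
	addr string
	up   bool
}

// reconnect returns the first reachable server's address,
// mimicking how a client re-discovers the cluster after a crash.
func reconnect(servers []server) (string, bool) {
	for _, s := range servers {
		if s.up {
			return s.addr, true
		}
	}
	return "", false
}

func main() {
	cluster := []server{
		{"[::]:8080", false}, // old leader, crashed
		{"[::]:8081", true},  // next server answers
	}
	addr, ok := reconnect(cluster)
	fmt.Println(addr, ok) // [::]:8081 true
}
```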