This guide shows how to run our actively replicated DSMS (with Front End, Sequencer, Replicas, and Replica Managers) using only three physical machines on a local network. We will also demonstrate how to test crash failures.
We need three physical machines on the same local network, each reachable from the others by IP address.
Because we have only 3 physical machines, one of them must host the FE + Sequencer alongside one Replica (and its RM). The other two machines each host a Replica and its RM. That gives us 3 replicas total and a single FE/Sequencer instance. Below is how you can distribute them:
Each block (Replica + RM) can be run as separate processes or combined logic, but they should communicate on the same machine (e.g., via localhost ports).
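Because a Replica and its RM run on the same machine, the RM can probe its Replica's liveness over a localhost port. Below is a minimal sketch of such a probe using plain TCP; the stub server, function names, and ports are illustrative assumptions, not the system's actual API:

```python
import socket
import threading

def start_replica_stub(host: str = "127.0.0.1"):
    """Stand-in for a Replica's listener on an OS-chosen localhost port.
    Returns (port, stop_event); setting the event simulates a crash."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))                  # port 0 => OS picks a free port
    srv.listen(1)
    srv.settimeout(0.2)
    stop = threading.Event()

    def serve() -> None:
        while not stop.is_set():
            try:
                conn, _ = srv.accept()
                conn.close()             # accepting the connection is the "alive" signal
            except socket.timeout:
                continue
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1], stop

def rm_is_replica_alive(host: str, port: int, timeout: float = 0.5) -> bool:
    """RM-side liveness probe: try to open a TCP connection to the replica."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A real RM would run this probe periodically and escalate to a restart (shown later) after repeated failures rather than after a single missed probe.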
Machine 1 hosts:

- The Front End (FE)
- The Sequencer
- Replica A
- Replica Manager RM(A)
Machines 2 and 3 do not host the FE or Sequencer. They each run only a Replica and its Replica Manager.
That way, each city code has a replica on Machine 1, 2, and 3. The Sequencer on Machine 1 sends requests to all nine processes (3 per machine).
In this final deployment, there are three Replicas (A, B, C), each supervised by its own RM, plus a single FE and Sequencer on Machine 1.
Below is a typical startup order for the system:

1. Start each Replica and its RM on Machines 1, 2, and 3.
2. Start the Sequencer on Machine 1.
3. Start the Front End on Machine 1, pointing it at the Sequencer.
4. Start the Admin/Buyer clients, pointing them at the Front End.
Once the system is up, you can point any Admin or Buyer Client to connect to the Front End (on Machine 1). The FE will pass requests to the Sequencer, which multicasts them to all three Replicas in total order.
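The FE-to-Sequencer-to-Replica path can be sketched in a few lines. In the sketch below, delivery is a plain function call rather than a real network multicast, and the request shape is invented for illustration; the point is that a single counter at the Sequencer gives every replica the same total order:

```python
import itertools
from typing import Any, Callable, Dict, List

class Sequencer:
    """Toy Sequencer: stamp each request with the next sequence number and
    deliver it to every replica in that order. A real deployment would send
    these over the network (e.g., UDP multicast or per-replica TCP)."""

    def __init__(self, replicas: List[Callable[[Dict[str, Any]], None]]):
        self._replicas = replicas
        self._next_seq = itertools.count(1)

    def multicast(self, request: Dict[str, Any]) -> int:
        seq = next(self._next_seq)
        stamped = {"seq": seq, **request}
        for deliver in self._replicas:   # same messages, same order, everywhere
            deliver(stamped)
        return seq

# Three "replicas" that simply record what they receive, in arrival order.
logs: List[List[Dict[str, Any]]] = [[], [], []]
sequencer = Sequencer([log.append for log in logs])
sequencer.multicast({"op": "addShare", "shareID": "LONA9999", "capacity": 10})
sequencer.multicast({"op": "purchaseShare", "shareID": "LONA9999", "count": 2})
```

Because every replica processes an identical, identically ordered stream, deterministic replicas end in identical states, which is the property the crash tests below rely on.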
Your DSMS includes both Admin operations and Buyer operations. Below are the main ones you’ll want to test:
- addShare(adminID, shareID, shareType, capacity)
- removeShare(adminID, shareID, shareType)
- listShareAvailability(adminID, shareType)
- purchaseShare(buyerID, shareID, shareType, count)
- getShares(buyerID)
- sellShare(buyerID, shareID, shareCount)
- swapShares(buyerID, oldShareID, oldType, newShareID, newType)

When an admin or buyer client runs these operations, it connects to the FE on Machine 1. The FE forwards each request to the Sequencer, which distributes it to all replicas. Each replica returns its local "agreed-upon" result. The FE compares the results, detects mismatches or timeouts, and responds to the client.
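The FE's compare-and-respond step can be sketched as a small decision function. This is an illustrative helper, not the system's actual API: `None` stands for a replica the FE timed out on, and two matching answers are treated as the quorum:

```python
from collections import Counter
from typing import Dict, List, Optional, Tuple

def fe_decide(responses: Dict[str, Optional[str]],
              quorum: int = 2) -> Tuple[Optional[str], List[str]]:
    """Decide the client-facing answer from per-replica responses.

    `responses` maps replica name -> result string, or None if the FE timed
    out waiting for that replica. Returns (agreed_result, suspects):
    agreed_result is None when fewer than `quorum` replicas agree, and
    suspects lists replicas that never answered (crash suspicion to
    report to the RMs)."""
    suspects = [name for name, result in responses.items() if result is None]
    tally = Counter(r for r in responses.values() if r is not None)
    if tally:
        result, votes = tally.most_common(1)[0]
        if votes >= quorum:
            return result, suspects
    return None, suspects
```

For example, if Replica B is killed during an addShare, the FE can still answer the client from A and C while reporting B as a suspect.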
This document lists comprehensive test cases to verify your Distributed Share Market System (DSMS) functionality, covering both **normal operations** and **crash scenarios** (process crash). The DSMS includes:

- A Front End (FE)
- A Sequencer
- Three Replicas
- A Replica Manager (RM) for each Replica
We also show **visual diagrams** depicting where a crash might occur and how the system recovers.
Below is a listing of core admin and buyer operations, covering typical edge cases. These assume **all replicas are running** and no process crashes occur.
| Test Case ID | Operation | Scenario | Expected Result |
|---|---|---|---|
| T-Admin-01 | addShare | Admin adds a new share with valid inputs | Share is successfully created with correct capacity; success message returned. |
| T-Admin-02 | addShare | Admin attempts to add an existing shareID | Operation fails with a message “Share already exists.” No new share is created. |
| T-Admin-03 | removeShare | Admin removes an existing share | Share is removed from the server. Future list or purchase references should fail for that share. |
| T-Admin-04 | removeShare | Admin attempts to remove a non-existent shareID | Operation returns “Share not found” or similar message. |
| T-Admin-05 | listShareAvailability | Admin requests availability for a valid shareType (EQUITY, BONUS, or DIVIDEND) | All known shares of that type are returned with capacities from each server city. No error. |
| T-Buyer-01 | purchaseShare | Buyer purchases a share with sufficient capacity available | Buyer's purchase is recorded, capacity is reduced accordingly. Successful response. |
| T-Buyer-02 | purchaseShare | Buyer tries to purchase more than capacity, or shareID doesn’t exist | Purchase fails or partial is allocated (depending on logic); an error or partial success message is returned. |
| T-Buyer-03 | getShares | Buyer requests their existing holdings | All owned shares and quantities from all servers are returned. |
| T-Buyer-04 | sellShare | Buyer tries to sell shares they own, less than or equal to the purchased quantity | Operation succeeds, capacity goes up on that share. Buyer’s holdings are decreased accordingly. |
| T-Buyer-05 | sellShare | Buyer tries to sell a share they do not own | Operation fails with error message (e.g., “You do not own this share.”). |
| T-Buyer-06 | swapShares | Buyer attempts to swap oldShare for newShare with enough capacity in newShare | Old share is removed from the buyer's holdings, new share is allocated. Successful swap message returned. |
| T-Buyer-07 | swapShares | Buyer attempts to swap but newShare lacks capacity | Swap fails; old share is returned to the buyer’s holdings. Clear error message. |
**Note**: For cross-city purchases (if you have that limit of 3 shares across different cities), add an additional test verifying the limit is enforced.
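If your design does include that limit, the check is easy to isolate and unit-test. The conventions below (a 3-letter city prefix on both buyer and share IDs, and a cap of 3 foreign shares) are assumptions for illustration, matching IDs of the form LONA9999:

```python
from typing import Dict, List

def violates_cross_city_limit(holdings: Dict[str, List[str]],
                              buyer_id: str,
                              new_share_id: str,
                              limit: int = 3) -> bool:
    """Return True if buying `new_share_id` would exceed the cap on shares
    from cities other than the buyer's home city. Hypothetically assumes
    both buyer IDs and share IDs begin with a 3-letter city code."""
    home = buyer_id[:3]
    if new_share_id.startswith(home):
        return False                     # home-city purchases are never capped
    foreign = [s for s in holdings.get(buyer_id, []) if not s.startswith(home)]
    return len(foreign) >= limit
```

Because this check is deterministic, all three replicas reach the same accept/reject decision for the same totally ordered request, so it needs no extra coordination.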
In the following scenarios, at least one **replica** is forcibly terminated during or right after a request is broadcast by the Sequencer. We assume your DSMS is in crash-failure tolerance mode (i.e., not the software-fault mode). The system should continue to operate with the remaining replicas.
| Test Case ID | Crash Point | Operation | Expected System Behavior |
|---|---|---|---|
| Crash-01 | After the Sequencer sends an "addShare" request to all replicas, Replica B is killed before responding | addShare | FE receives 2 responses (Replica A and Replica C). FE times out waiting for B and suspects a crash. FE returns success if A and C results match. RM(B) restarts or replaces B. System remains consistent with the new share added. |
| Crash-02 | During the "removeShare" broadcast, Replica A's process is terminated mid-execution | removeShare | FE gets responses from B and C. A never responds => crash suspicion => RMs confirm. The share is removed on B and C. Replica A is later restarted or replaced. Once A is back, it can catch up on missed requests. |
| Crash-03 | Replica C is killed after sending an incorrect partial result for "purchaseShare" (in crash mode a partial result may be irrelevant, but we never trust incomplete data) | purchaseShare | FE compares responses from A, B, C. If C is killed mid-flight, FE might get no final ack from C. A and B match => FE proceeds with that result. FE flags C as crashed. RM(C) restarts C. The final system state is consistent per A and B. |
| Crash-04 | Replica B is killed before sending any response for "swapShares" | swapShares | FE collects responses from A and C. If they match, the swap is successful. B is restarted later by RM(B). The buyer's holdings reflect the swapped share on A and C. B syncs after restart. |
| Crash-05 | Replica A is killed mid "sellShare" operation; FE times out on A's response | sellShare | FE obtains matching results from B and C and concludes the sale. A is flagged as crashed; RM(A) restarts it. Overall capacity is incremented, as B and C performed the sale. |
In each crash case, once the Replica Manager detects or is notified of the crash, it spins up a new instance (or restarts) so the system returns to having 3 replicas. The FE can respond to requests as soon as it has at least 2 matching responses from the healthy replicas.
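That detect-then-replace loop can be modelled without real processes. `FakeReplicaProcess` below stands in for what would be a `subprocess.Popen` handle in a real RM; the class and function names are illustrative:

```python
class FakeReplicaProcess:
    """Stand-in for a replica OS process. A real RM would hold a
    subprocess.Popen handle and use proc.poll() to detect exit."""

    def __init__(self) -> None:
        self.alive = True
        self.restarts = 0

    def kill(self) -> None:
        self.alive = False

    def restart(self) -> None:
        self.alive = True
        self.restarts += 1

def rm_supervise_once(proc: FakeReplicaProcess) -> bool:
    """One supervision pass: if the replica is down, bring up a replacement
    so the group returns to three replicas. Returns True if a restart
    happened. A real RM would also replay the requests the replica missed."""
    if not proc.alive:
        proc.restart()
        return True
    return False
```

A real RM would run this pass on a timer (or on notification from the FE's crash suspicion) and then feed the restarted replica the backlog of sequenced requests so it converges with A and C.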
Below are two sample sequence diagrams illustrating crash scenarios and the subsequent recovery. The first focuses on a crash during an admin operation (e.g., addShare), the second focuses on a buyer operation (e.g., purchaseShare).
The test cases above (normal operations + crash scenarios) will help you verify correctness, robustness, and recovery in your actively replicated DSMS:
Together, these tests ensure the DSMS achieves the **high availability** goal under a single crash failure.
You can keep the same pseudo-code from the earlier design. The only difference is how you assign host IP addresses and ports to each component. For example:
```
Machine1_IP = "192.168.0.10"
Machine2_IP = "192.168.0.11"
Machine3_IP = "192.168.0.12"

// On Machine 1
startReplica("A", "192.168.0.10", port=9000)
startRM("A", "192.168.0.10", port=9100)
startSequencer("192.168.0.10", port=7000)
startFrontEnd("192.168.0.10", port=7100, sequencerIP="192.168.0.10", seqPort=7000)

// On Machine 2
startReplica("B", "192.168.0.11", port=9000)
startRM("B", "192.168.0.11", port=9100)

// On Machine 3
startReplica("C", "192.168.0.12", port=9000)
startRM("C", "192.168.0.12", port=9100)
```
Each replica knows the IP addresses and ports of the other replicas and the Sequencer. The Front End is the only component the client ever contacts (at 192.168.0.10:7100 in this example).
After deployment, you can verify:
Create a share (e.g., LONA9999) via the FE. Perform:

- addShare
- removeShare
- listShareAvailability
- purchaseShare
- getShares
- sellShare
- swapShares

Confirm each replica's data (e.g., capacities and buyer records) remains consistent. If you forcibly stop any single replica process (on Machine 2 or Machine 3), the FE should continue returning correct results (via the other two healthy replicas).
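One way to script that consistency check is to dump each replica's share table and compare the snapshots. The dictionary shape below ({shareID: remaining capacity}) is an assumed, simplified view of a replica's state:

```python
from typing import Dict, List

def replicas_consistent(states: List[Dict[str, int]]) -> bool:
    """Return True when every replica reports an identical share table.
    Each state is an assumed {shareID: remaining_capacity} snapshot."""
    return all(state == states[0] for state in states[1:])

# Example: after the operations above, all three snapshots should agree.
snapshot = {"LONA9999": 8}
all_agree = replicas_consistent([snapshot, dict(snapshot), dict(snapshot)])
```

Run the comparison after every test case, not just at the end, so a divergence can be traced back to the specific operation (or crash) that introduced it.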
By following this setup, you can: