In this section, we break down the required modifications to integrate software-fault tolerance or high availability into the Distributed Share Market System (DSMS). We provide recommended pseudo-code and diagrams for each subcomponent.
The team must decide at system initialization whether the DSMS is running in:
- software-failure (Byzantine) mode, where a replica may return incorrect results, or
- crash-failure mode, where a replica may stop responding entirely.
All other modules (Front End, Sequencer, Replicas, Replica Managers) use the same DSMS code but branch on this selected mode to decide:
- how faults are detected (incorrect results versus response timeouts), and
- how faulty replicas are reported to the Replica Managers and replaced.
Here is some pseudo-code that all processes can share to initialize the system’s mode:
// Pseudo-code for configuring "failure mode" at startup
// A global or shared configuration object
GlobalConfig config = new GlobalConfig();
// At some main method or config loader:
String mode = getLaunchParameter("failureMode"); // e.g., "byzantine" or "crash"
if (mode.equalsIgnoreCase("byzantine")) {
    config.failureMode = "BYZANTINE";
} else {
    config.failureMode = "CRASH";
}
// Then pass "config" to the front end, sequencer, RMs, and replicas.
// All components read config.failureMode to know how to behave.
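The pseudo-code above can be made more robust by replacing the string field with an enum, which prevents typos like "BIZANTINE" from silently falling through. The sketch below is illustrative: `GlobalConfig` and the `FailureMode` enum are assumed names for this document, not an existing API, and a constructor argument stands in for `getLaunchParameter`.

```java
// Hypothetical sketch: an enum-based failure mode instead of a raw string.
// GlobalConfig and FailureMode are illustrative names, not shared DSMS code.
public class GlobalConfig {
    public enum FailureMode { BYZANTINE, CRASH }

    public final FailureMode failureMode;

    public GlobalConfig(String modeParam) {
        // Default to CRASH unless "byzantine" is requested explicitly;
        // equalsIgnoreCase on a literal also tolerates a null parameter.
        if ("byzantine".equalsIgnoreCase(modeParam)) {
            this.failureMode = FailureMode.BYZANTINE;
        } else {
            this.failureMode = FailureMode.CRASH;
        }
    }

    public static void main(String[] args) {
        GlobalConfig config = new GlobalConfig(args.length > 0 ? args[0] : "crash");
        System.out.println("Running in " + config.failureMode + " mode");
    }
}
```

With an enum, every `if (config.failureMode == FailureMode.BYZANTINE)` branch is checked by the compiler rather than by string comparison at runtime.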
Each student modifies one copy of the DSMSServer (or CORBA equivalent) to become a “replica.” The replica now:
- receives each request as a pair (sequenceNumber, clientRequest) from the Sequencer rather than directly from clients, and
- executes requests strictly in order of sequenceNumber.
The DSMS logic (addShare, removeShare, purchaseShare, etc.) remains mostly the same, but you must:
- buffer requests that arrive out of order until all earlier sequence numbers have been processed, and
- send each result back to the Front End rather than to the client.
Here is pseudo-code for how a replica might handle ordered requests:
ReplicaServer {
    // Maintains a queue of (sequenceNumber, request), ordered by seqNo
    sortedRequests = new PriorityQueue(... compare by seqNo ...)
    currentSeqExpected = 1

    // On receiving a new request (seqNo, clientRequest)
    onReceiveRequest(seqNo, clientRequest):
        insert (seqNo, clientRequest) into sortedRequests
        // Try to process in order
        while sortedRequests is not empty
              and sortedRequests.peek().seqNo == currentSeqExpected:
            nextReq = sortedRequests.poll()
            // process the DSMS operation
            result = DSMS_logic(nextReq.clientRequest)
            // send 'result' back to the FE
            sendResponseToFE(nextReq.seqNo, result)
            currentSeqExpected++
}

// DSMS_logic(request):
//   parse the operation (addShare, removeShare, etc.)
//   run the existing DSMS server code
//   return a string result
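This hold-back-queue pattern can be sketched as a small runnable Java class. This is only a sketch of the ordering logic: `SequencedRequest` and `onReceive` are illustrative names, and actual DSMS execution is left to the caller, which receives the requests that have become deliverable.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch of the replica's hold-back queue: requests may arrive
// out of order, but become deliverable only when every earlier sequence
// number has been seen. Names here are illustrative, not existing DSMS code.
public class HoldBackQueue {
    public record SequencedRequest(long seqNo, String clientRequest) {}

    private final PriorityQueue<SequencedRequest> pending =
            new PriorityQueue<>(Comparator.comparingLong(SequencedRequest::seqNo));
    private long nextExpected = 1;

    /** Buffer the request; return all requests that are now deliverable, in order. */
    public List<SequencedRequest> onReceive(SequencedRequest req) {
        pending.add(req);
        List<SequencedRequest> deliverable = new ArrayList<>();
        while (!pending.isEmpty() && pending.peek().seqNo() == nextExpected) {
            deliverable.add(pending.poll());
            nextExpected++;
        }
        return deliverable;
    }
}
```

For example, if request 2 arrives first, `onReceive` returns an empty list; when request 1 then arrives, both 1 and 2 are returned in order for execution.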
The Front End (FE) is the sole entry point for all clients (admins or buyers). Its responsibilities:
- forward each client request to the Sequencer,
- collect responses from all three replicas,
- return a result to the client once a majority (two matching answers) is found,
- in Byzantine mode, report a replica whose answer disagrees with the majority, and
- in crash mode, suspect a replica that misses the response timeout and notify the Replica Managers.
Here is pseudo-code for the FE logic:
FrontEnd {
    handleClientRequest(clientRequest):
        // 1) Send the request to the sequencer
        seqNum = sendToSequencer(clientRequest)

        // 2) Collect replica responses until a majority is found or we time out
        responses = []
        startTime = now()
        while not enoughResponses(responses):
            if responseArrivesFromReplica(rID, result):
                responses.add( (rID, result) )
                if checkMajorityOrTwoMatches(responses):
                    finalRes = getMajorityOrMatch(responses)
                    // send the final result to the client
                    return finalRes
            if now() - startTime > TIMEOUT:
                // suspect a crash
                crashedReplicaID = identifyWhichReplicaDidNotRespond(responses)
                notifyAllRMs(crashedReplicaID)
                // keep waiting for the remaining two, which can still form a majority

        // fallback: agree on whatever matching responses we have
        finalRes = getMajorityOrMatch(responses)
        return finalRes
}
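The core of `getMajorityOrMatch` is a vote count over the result strings. A minimal sketch of that counting step is shown below; `ResultVoter`, `majority`, and the `List<String>` representation of replica answers are assumptions for illustration, not names from the project code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the FE's majority check: with three replicas,
// any result reported by at least two of them (the quorum) is accepted.
public class ResultVoter {
    /** Returns the first result that at least `quorum` replicas agree on, if any. */
    public static Optional<String> majority(List<String> results, int quorum) {
        Map<String, Integer> counts = new HashMap<>();
        for (String r : results) {
            int c = counts.merge(r, 1, Integer::sum);  // increment this result's tally
            if (c >= quorum) {
                return Optional.of(r);
            }
        }
        return Optional.empty();
    }
}
```

Returning `Optional.empty()` when no quorum exists maps onto the FE's fallback branch: the FE keeps waiting (or eventually gives up) rather than forwarding an unconfirmed answer to the client.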
Each Replica Manager is bound to exactly one replica. The RM’s duties:
- act on fault notifications from the FE about its replica,
- in Byzantine mode, count consecutive incorrect results and restart the replica after three in a row,
- in crash mode, confirm a suspected crash with the other RMs before acting, and
- stop the failed replica and start a fresh one that rejoins the group.
Below is pseudo-code for the RM:
ReplicaManager(replicaID) {
    consecutiveFaults = 0

    onFaultSuspected(faultType):
        if faultType == "IncorrectResult":
            consecutiveFaults++
            if consecutiveFaults >= 3:
                stopReplica(replicaID)
                startNewReplica(replicaID)
                consecutiveFaults = 0
        else if faultType == "CrashSuspected":
            // coordinate with the other RMs
            if confirmCrashWithPeers(replicaID):
                stopReplica(replicaID)
                startNewReplica(replicaID)
                consecutiveFaults = 0

    stopReplica(replicaID):
        // forcibly kill the process or call a cleanup method

    startNewReplica(replicaID):
        // spawn a fresh DSMS replica process
        // rejoin the group for receiving requests
}
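The consecutive-fault rule can be isolated into a tiny, testable class. One point worth noting: the pseudo-code above only resets the counter on a restart, so a correct result between two faults would still count toward the threshold; the sketch below also resets on a correct result, which is one way to make “consecutive” literal. `FaultTracker` and its method names are illustrative assumptions.

```java
// Hypothetical sketch of the RM's consecutive-fault rule: three incorrect
// results in a row trigger a restart; a correct result resets the streak.
public class FaultTracker {
    public static final int FAULT_THRESHOLD = 3;
    private int consecutiveFaults = 0;

    /** Record an incorrect result; returns true when the replica must be restarted. */
    public boolean onIncorrectResult() {
        consecutiveFaults++;
        if (consecutiveFaults >= FAULT_THRESHOLD) {
            consecutiveFaults = 0;  // the freshly started replica begins with a clean slate
            return true;
        }
        return false;
    }

    /** A correct result interrupts the "consecutive" streak. */
    public void onCorrectResult() {
        consecutiveFaults = 0;
    }
}
```

The RM would call `onIncorrectResult()` for each "IncorrectResult" notification from the FE and invoke `stopReplica`/`startNewReplica` whenever it returns true.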
The Sequencer enforces total order on all client requests. It:
- receives each client request from the FE,
- assigns it the current value of nextSeqNo and then increments nextSeqNo, and
- reliably multicasts (seqNo, request) to all replicas.
Here is some pseudo-code for the sequencer:
Sequencer {
    nextSeqNo = 1

    onReceiveRequestFromFE(clientRequest):
        seqNo = nextSeqNo
        nextSeqNo++
        // reliably multicast (seqNo, clientRequest) to the replicas
        for each replica in replicaList:
            sendUDPWithAck(replica.address, (seqNo, clientRequest))

    // sendUDPWithAck would be something like:
    sendUDPWithAck(address, message):
        do {
            sendUDP(address, message)
            wait for ack or timeout
        } while (no ack received && retryCount < MAX_RETRIES)
}
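The ordering step and the retry loop can be sketched together in Java. Real UDP sockets are abstracted behind a small `ReplicaLink` interface so the logic stays testable; `SequencerCore`, `ReplicaLink`, and `MAX_RETRIES = 5` are all assumptions for this sketch, not values fixed by the project.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the sequencer's core: assign a unique, gap-free
// sequence number, then multicast with per-replica retries until acked.
public class SequencerCore {
    /** Abstraction over one UDP send-and-wait-for-ack attempt. */
    public interface ReplicaLink {
        boolean send(long seqNo, String request);  // true if the replica acked
    }

    public static final int MAX_RETRIES = 5;
    private final AtomicLong nextSeqNo = new AtomicLong(1);  // thread-safe counter
    private final List<ReplicaLink> replicas;

    public SequencerCore(List<ReplicaLink> replicas) {
        this.replicas = replicas;
    }

    /** Assigns the next sequence number and multicasts with retries; returns the number. */
    public long onReceiveRequestFromFE(String request) {
        long seqNo = nextSeqNo.getAndIncrement();
        for (ReplicaLink replica : replicas) {
            int attempts = 0;
            while (!replica.send(seqNo, request) && ++attempts < MAX_RETRIES) {
                // resend until acked or retries are exhausted (the
                // pseudo-code's sendUDPWithAck loop)
            }
        }
        return seqNo;
    }
}
```

Using `AtomicLong` guards the counter if the sequencer ever handles FE requests on multiple threads; a single-threaded sequencer could use a plain `long`.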
By following these implementation details for each subcomponent, you ensure that:
- every replica processes client requests in the same total order,
- incorrect results and crashes are detected at the Front End, and
- faulty replicas are replaced by their Replica Managers without interrupting service to clients.
This completes the more detailed outline for Part 5 of the DSMS project, addressing each specific role and showing how pseudo-code and diagrams can guide your actual implementation.