Distributed coordination function (DCF) is the fundamental MAC technique of the IEEE 802.11-based WLAN standard. DCF employs CSMA/CA with a binary exponential backoff algorithm.
DCF requires a station wishing to transmit to listen for the channel status for a DIFS interval. If the channel is found busy during the DIFS interval, the station defers its transmission. In a network where a number of stations contend for the wireless medium, stations that sense the channel busy and defer their access will also find, virtually simultaneously, that the channel has been released, and will then try to seize it. As a result, collisions may occur. To avoid such collisions, DCF also specifies a random backoff, which forces a station to defer its access to the channel for an extra period. The length of the backoff period is determined by the following equation:

Backoff Time = Random() x aSlotTime

where Random() is an integer drawn uniformly from [0, CW], and the contention window CW doubles after each unsuccessful transmission attempt, starting at CWmin and capped at CWmax.
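The backoff rule above can be sketched in a few lines. This is a minimal illustration, not an implementation of the standard; the CWmin/CWmax/slot-time constants are the values used by 802.11a/g and are assumptions here.

```python
import random

# Illustrative 802.11a/g parameters (assumed values, not read from hardware).
CW_MIN = 15          # minimum contention window
CW_MAX = 1023        # maximum contention window
SLOT_TIME_US = 9     # slot duration in microseconds

def backoff_time_us(retry_count: int) -> int:
    """Random backoff period after `retry_count` failed attempts.

    The contention window doubles after each failed attempt
    until it reaches CW_MAX (binary exponential backoff).
    """
    cw = min((CW_MIN + 1) * (2 ** retry_count) - 1, CW_MAX)
    slots = random.randint(0, cw)        # uniform draw from [0, CW]
    return slots * SLOT_TIME_US

# The expected backoff grows with the number of retries:
for retry in range(4):
    print(retry, backoff_time_us(retry))
```

Note how collisions become less likely after each retry because the window from which the random slot count is drawn doubles.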
DCF also has an optional virtual carrier sense mechanism that exchanges short Request-to-Send (RTS) and Clear-to-Send (CTS) frames between source and destination stations in the intervals between data frame transmissions.
DCF includes a positive acknowledgement scheme: if a frame is successfully received by the destination it is addressed to, the destination sends an ACK frame to notify the source of the successful reception.
DCF is defined in subclause 9.2 of the IEEE 802.11 standard and is the de-facto default setting for Wi-Fi hardware.
The IEEE 802.11 standard also defines an optional access method using a Point Coordination Function (PCF). PCF allows the Access Point, acting as the network coordinator, to manage channel access. The IEEE 802.11e amendment enhances both the DCF and the PCF through a new coordination function called the Hybrid Coordination Function (HCF).
Network coordination has been proposed as a means to provide spectrally efficient communications in cellular downlink systems. When network coordination is employed, all base antennas act together as a single network antenna array, and each mobile may receive useful signals from nearby base stations. Furthermore, the antenna outputs are chosen in ways that minimize the out-of-cell interference, and hence increase the downlink system capacity. When the out-of-cell interference is mitigated, the links can operate in the high signal-to-noise ratio regime. This enables the cellular network to enjoy the great spectral efficiency improvement associated with using multiple antennas.
A coordinator need only ensure that if one of the nested transactions aborts, all other subtransactions abort as well. Likewise, it should ensure that all of them commit when each of them is able to. To this end, a nested transaction should wait to commit until it is told to do so by the coordinator.
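The rule above can be sketched as a toy two-phase decision. The class and method names below are purely illustrative, not from any real transaction library: the coordinator first asks every subtransaction whether it can commit, then applies one global decision to all of them.

```python
# Toy sketch (hypothetical API) of the coordinator rule: all nested
# subtransactions commit only if every one of them can; if any cannot,
# the coordinator makes all of them abort.

class Subtransaction:
    def __init__(self, can_commit: bool):
        self.can_commit = can_commit
        self.state = "pending"

class Coordinator:
    def __init__(self, subtransactions):
        self.subs = subtransactions

    def run(self) -> str:
        # Phase 1: ask every subtransaction whether it is able to commit.
        decision = "commit" if all(s.can_commit for s in self.subs) else "abort"
        # Phase 2: apply the single global decision to all of them.
        for s in self.subs:
            s.state = decision
        return decision

print(Coordinator([Subtransaction(True), Subtransaction(True)]).run())   # commit
print(Coordinator([Subtransaction(True), Subtransaction(False)]).run())  # abort
```

The key point is that no subtransaction decides its own fate: each one waits in the "pending" state until the coordinator announces the common outcome.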
b) We argued that distribution transparency may not be in place for pervasive systems. This statement is not true for all types of transparency. Explain what you understand by pervasive systems, and give an example.
Think of migration transparency. In many pervasive systems, components
are mobile and will need to re-establish connections when moving from one
access point to another. Preferably, such handovers should be completely
transparent to the user. Likewise, it can be argued that many other types of
transparency should be supported as well. However, what should not be hidden
is that a user may be accessing resources that are directly coupled to the user's current environment.
Q2. Consider a chain of processes P1, P2, ..., Pn implementing a multitiered client-server architecture. Process Pi is a client of process Pi+1, and Pi will return a reply to Pi-1 only after receiving a reply from Pi+1. What are the main problems with this organization when looking at the request-reply performance at process P1?
Performance can be expected to be bad for large n. The problem is that
each communication between two successive layers is, in principle, between
two different machines. Consequently, the performance between P1 and P2
may also be determined by n - 2 request-reply interactions between the other
layers. Another problem is that if one machine in the chain performs badly or
is even temporarily unreachable, then this will immediately degrade the performance at the highest level.
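The cost growth can be made concrete with a small back-of-the-envelope model. The per-hop latency below is an assumed, illustrative number: each pair of successive machines adds one network hop for the request and one for the reply, so the round-trip time at P1 grows linearly with n.

```python
# Illustrative model of request-reply latency at P1 in a chain
# P1 -> P2 -> ... -> Pn. The hop latency is an assumed value.

HOP_LATENCY_MS = 5   # assumed one-way latency between two successive machines

def round_trip_ms(n: int) -> int:
    """Time until P1 receives its reply, ignoring processing time."""
    hops = n - 1                        # request travels down n-1 links...
    return 2 * hops * HOP_LATENCY_MS    # ...and the reply travels back up

for n in (2, 5, 10):
    print(n, round_trip_ms(n))
```

With these numbers a 2-process chain costs 10 ms while a 10-process chain costs 90 ms, and a single slow or unreachable machine anywhere in the chain stalls the entire round trip.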
Q3. Strong mobility in UNIX systems could be supported by allowing a process to fork a child on a remote machine. Explain how this would work.
Forking is a mechanism in a multitasking or multithreading operating system.
In UNIX, it means that a complete image of the parent is copied to the
child, meaning that the child continues just after the call to fork. A similar
approach could be used for remote cloning, provided the target platform is
exactly the same as the one where the parent is executing. The first step is to have the
target operating system reserve resources and create the appropriate process
and memory map for the new child process. After this is done, the parent's
image (in memory) can be copied, and the child can be activated. (It should be clear that we are ignoring several important details here.)
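The local fork semantics being extended here can be shown with a minimal, local-only sketch: the child resumes just after the fork call with a copy of the parent's image. Remote cloning would additionally require shipping that image to an identical target platform, which `os.fork()` itself of course does not do; this example only illustrates the "child continues after the call" behavior (it requires a Unix-like system).

```python
import os

# Local fork: the child continues just after fork() with a copy of the
# parent's memory image. A pipe lets the parent observe the child ran.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child process: execution resumes here with a copy of the parent's state.
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:
    # Parent process: wait for the child and read what it wrote.
    os.close(w)
    msg = os.read(r, 64)
    os.waitpid(pid, 0)
    print(msg.decode())
```

For remote cloning, the "copy the image" step would be replaced by transferring the parent's memory map over the network to a process and address space pre-created by the target operating system, as described above.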
Q4. Describe how connectionless communication between a client and a server proceeds when using sockets.
Both the client and the server create a socket, but only the server binds the
socket to a local endpoint. The server can then subsequently do a blocking
read call in which it waits for incoming data from any client. Likewise, after
creating the socket, the client simply does a blocking call to write data to the server. There is no need to close a connection.
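The exchange just described maps directly onto UDP sockets. The sketch below runs both roles in one process on localhost for illustration; port 0 asks the OS for a free port, and only the server performs a bind.

```python
import socket

# Server side: create a socket and bind it to a local endpoint.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = server.getsockname()

# Client side: create a socket; no bind and no connection setup needed.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

# The server blocks until a datagram arrives from any client.
data, client_addr = server.recvfrom(1024)
server.sendto(b"pong", client_addr)

reply, _ = client.recvfrom(1024)
print(reply.decode())

# There is no connection to close, only the sockets themselves.
client.close()
server.close()
```

Note that the client never calls bind or connect: the OS assigns it an ephemeral port on the first sendto, which is what lets the server's reply find its way back.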
Q5. The Request-Reply Protocol is underlying most implementations of remote procedure calls and remote method invocations. In the Request-Reply Protocol, the request messages carry a request ID so that the sender can match answer messages to the requests it sent out.
a) Task: Describe a scenario in which a client could receive a reply from an earlier request.
The client sends a request message, times out, and then retransmits the request message, expecting only one reply. The server, which is operating under a heavy load, eventually receives both request messages and sends two replies. When the client sends a subsequent request, it will receive the reply from the earlier call as a result. If message identifiers are copied from request to reply messages, the client can reject the reply to the earlier message.
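The client-side bookkeeping implied by this scenario can be sketched as follows. The class and method names are illustrative, not from any RPC library: each new request gets a fresh ID, and a reply whose ID does not match the currently pending request is rejected as stale.

```python
import itertools

# Hypothetical sketch: request IDs are copied into replies, so a late
# reply to an *earlier* request can be detected and rejected.

_next_id = itertools.count(1)

class Client:
    def __init__(self):
        self.pending_id = None

    def send_request(self) -> int:
        # Assign a fresh ID to each new (non-retransmitted) request.
        self.pending_id = next(_next_id)
        return self.pending_id

    def handle_reply(self, reply_id: int) -> str:
        if reply_id != self.pending_id:
            return "rejected stale reply"
        return "accepted"

c = Client()
first = c.send_request()        # request 1: reply delayed, client times out
second = c.send_request()       # client moves on to request 2
print(c.handle_reply(first))    # late reply to request 1 arrives
print(c.handle_reply(second))   # reply to the current request
```

A retransmission of the same request would reuse the same ID; only a genuinely new request advances the counter, which is exactly what lets the duplicate reply from the earlier call be filtered out.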
The Request-Reply-Acknowledge (RRA) protocol is a variant of the Request-Reply (RR) protocol, where the client has to acknowledge the server's reply. Assume that the operations requested by the client are not idempotent, that is, their outcome is different if they are executed a second time.
b) Task: For each of the two protocols, RR and RRA, describe which information the server has to store in order to reliably execute the requests of the client and return information about the outcome. Discuss as well when the server can delete which piece of information under the two protocols.
This protocol is also known as the RR (request-reply) protocol. It is based on the idea of using implicit acknowledgements to eliminate explicit acknowledgement messages: (i) a server's reply message is regarded as an acknowledgement of the client's request message; (ii) a subsequent call packet from a client is regarded as an acknowledgement of the server's reply to the previous call made by that client. To take care of lost messages, a timeout-based retransmission technique is normally used along with the RR protocol: a client retransmits its request message if it does not receive the reply within a predetermined timeout period. Servers can support exactly-once call semantics by keeping records of replies in a reply cache, which enables them to filter out duplicate request messages and to retransmit reply messages without reprocessing a request. Fig. 3 shows the message communication of the RR protocol: the client sends a request message to the server and waits for a reply; after receiving the request, the server executes the procedure and sends a reply message that also serves as an acknowledgement of the request; when the client receives the reply, its next request message serves as an acknowledgement of the previous RPC.
The RRA protocol requires clients to acknowledge the receipt of reply messages. The server deletes information from its reply cache only after receiving an acknowledgement for it from the client. The RRA protocol provides exactly-once call semantics. In this protocol, the acknowledgement message itself may get lost. Therefore, a unique message identifier is associated with each request message; this identifier is also carried by the corresponding reply message, and each acknowledgement message contains the same identifier.
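The server-side storage difference between the two protocols can be sketched as a reply cache. This is an illustrative toy, not a protocol implementation: under RR the server must keep the cached reply until an implicit acknowledgement (the client's next request) arrives; under RRA the explicit ACK lets it delete the entry immediately.

```python
# Toy sketch of a server reply cache for non-idempotent operations.
# Under RR, entries are kept until implicitly acknowledged; under RRA,
# handle_ack() models the explicit acknowledgement that permits deletion.

class Server:
    def __init__(self):
        self.reply_cache = {}   # request_id -> cached reply

    def handle_request(self, request_id: int, payload: str) -> str:
        # Duplicate (retransmitted) request: resend the cached reply
        # without re-executing the non-idempotent operation.
        if request_id in self.reply_cache:
            return self.reply_cache[request_id]
        reply = f"result-of-{payload}"       # execute exactly once
        self.reply_cache[request_id] = reply
        return reply

    def handle_ack(self, request_id: int) -> None:
        # RRA only: the client acknowledged the reply, so the cached
        # entry can be discarded safely.
        self.reply_cache.pop(request_id, None)

s = Server()
r1 = s.handle_request(1, "debit")
r2 = s.handle_request(1, "debit")   # retransmission: same reply, no re-execution
print(r1 == r2)                     # True
s.handle_ack(1)                     # RRA: entry may now be deleted
print(1 in s.reply_cache)           # False
```

Without the ACK, an RR server can only evict an entry once a later request from the same client implicitly confirms receipt, so its cache generally holds the last reply per client for longer.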
*Ref: "Modeling and Formal Verification of Communication Protocols for Remote Procedure Call", IJCSNS International Journal of Computer Science and Network Security, Vol. 7, No. 7, July 2007.