
4. Medium Access Sub-Layer

  • Ques: What is static and dynamic address allocation?
  • Ans:

    The Dynamic Host Configuration Protocol (DHCP) has been devised to provide static and dynamic address allocation that can be manual or automatic.
    Static Address Allocation: In this capacity, DHCP acts as BOOTP does. It is backward compatible with BOOTP, which means a host running the BOOTP client can request a static address from a DHCP server. A DHCP server has a database that statically binds physical addresses to IP addresses.
    Dynamic Address Allocation: DHCP has a second database with a pool of available IP addresses. This second database makes DHCP dynamic. When a DHCP client requests a temporary IP address, the DHCP server goes to the pool of available (unused) IP addresses and assigns an IP address for a negotiable period of time. When a DHCP client sends a request to a DHCP server, the server first checks its static database. If an entry with the requested physical address exists in the static database, the permanent IP address of the client is returned. On the other hand, if the entry does not exist in the static database, the server selects an IP address from the available pool, assigns the address to the client, and adds the entry to the dynamic database.
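
    The lookup order described above (static database first, then the dynamic pool) can be sketched as follows. This is a minimal illustration rather than real DHCP server code; the class and method names are invented for the example, and lease handling is reduced to a single expiry timestamp.

        import time

        class DhcpServerSketch:
            """Toy model of the two DHCP databases described above (illustrative only)."""

            def __init__(self, static_bindings, address_pool, lease_seconds=3600):
                self.static_db = dict(static_bindings)   # physical (MAC) address -> permanent IP
                self.pool = list(address_pool)           # unused IP addresses
                self.dynamic_db = {}                     # MAC -> (leased IP, lease expiry time)
                self.lease_seconds = lease_seconds

            def request_address(self, mac):
                # 1. The static database is checked first: a match returns the permanent IP.
                if mac in self.static_db:
                    return self.static_db[mac]
                # 2. Otherwise an address is taken from the pool for a limited lease
                #    and the binding is recorded in the dynamic database.
                if not self.pool:
                    raise RuntimeError("address pool exhausted")
                ip = self.pool.pop(0)
                self.dynamic_db[mac] = (ip, time.time() + self.lease_seconds)
                return ip

        server = DhcpServerSketch(
            static_bindings={"aa:bb:cc:00:00:01": "192.168.1.10"},
            address_pool=["192.168.1.100", "192.168.1.101"],
        )
        print(server.request_address("aa:bb:cc:00:00:01"))  # permanent address from static DB
        print(server.request_address("aa:bb:cc:00:00:02"))  # leased address from the pool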

  • Ques: What is random access or contention?
  • Ans:

    In random access or contention methods, no station is superior to another station and none is assigned the control over another. No station permits, or does not permit, another station to send. At each instance, a station that has data to send uses a procedure defined by the protocol to make a decision on whether or not to send. This decision depends on the state of the medium (idle or busy). In other words, each station can transmit when it desires on the condition that it follows the predefined procedure, including the testing of the state of the medium. Two features give this method its name. First, there is no scheduled time for a station to transmit. Transmission is random among the stations. That is why these methods are called random access. Second, no rules specify which station should send next. Stations compete with one another to access the medium. That is why these methods are also called contention methods.

  • Ques: What is static and dynamic channel allocation?
  • Ans:

    Static Channel Allocation:
    The most typical way of allocating a single channel among multiple competing users is Frequency Division Multiplexing (FDM). If there are N users, the bandwidth is divided into N equal-sized portions and each user is assigned one portion. If the number of users is small and constant, FDM is a simple and efficient allocation mechanism; a telephone trunk is a simple example. However, when the number of senders is large or varying, or the traffic is bursty, FDM presents some problems. If the spectrum is divided into N regions and fewer than N users are currently interested in communicating, a large piece of valuable spectrum is wasted; if more than N users want to communicate, some of them will be denied permission for lack of bandwidth, even though some of the users who have been assigned a frequency band hardly ever transmit or receive anything. Dividing a single channel into static subchannels is therefore quite inefficient. The poor performance of static FDM can easily be seen from a simple queueing-theory calculation. Consider a channel of capacity C bps with an arrival rate of λ frames/sec, each frame having a length drawn from an exponential probability density function with mean 1/µ bits/frame. With these parameters the arrival rate is λ frames/sec and the service rate is µC frames/sec, so the mean time delay is T = 1/(µC − λ). If the channel is instead divided into N static subchannels of capacity C/N bps, each carrying λ/N frames/sec, the mean delay becomes T_FDM = 1/(µ(C/N) − λ/N) = N/(µC − λ) = N·T, i.e. N times worse than sharing the single channel.
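
    As a quick numerical check of the delay formula above, the short sketch below compares the single-channel delay T with the static-FDM delay N·T; the parameter values are arbitrary and purely illustrative.

        def mean_delay(capacity_bps, arrival_rate, mean_frame_bits):
            """M/M/1 mean delay T = 1 / (µC − λ), with service rate µC frames/sec."""
            mu = 1.0 / mean_frame_bits           # µ in frames per bit
            return 1.0 / (mu * capacity_bps - arrival_rate)

        C = 100_000          # channel capacity in bps (example value)
        lam = 5.0            # arrival rate λ in frames/sec (example value)
        frame_bits = 10_000  # mean frame length 1/µ in bits (example value)
        N = 10               # number of static FDM subchannels

        T_single = mean_delay(C, lam, frame_bits)
        # Each FDM subchannel has capacity C/N and carries λ/N frames/sec.
        T_fdm = mean_delay(C / N, lam / N, frame_bits)

        print(f"single channel: T = {T_single:.3f} s")
        print(f"static FDM    : T = {T_fdm:.3f} s  (= {T_fdm / T_single:.0f} x T)")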

    Dynamic Channel Allocation:
    Unlike static allocation, dynamic channel allocation is efficient and is used where the traffic is nonuniform and heavy. Five main assumptions must be considered in dynamic channel allocation.
    1. Station Model: Stations are also called terminals. There are N independent stations, each with a constant arrival rate λ; the probability of a frame being generated in a time interval of length Δt is λ·Δt. Once a frame has been generated, the station does nothing until the frame has been successfully transmitted. (A small simulation sketch of this arrival model follows the list.)
    2. Single Channel Assumption: From the hardware point of view, all stations are equal. A single channel is available for all communication; every station can transmit on it and every station can receive from it.
    3. Collision Assumption: If two frames are transmitted simultaneously, they overlap and the resulting signal is garbled; this event is a collision. Every station can detect a collision, and a collided frame must be retransmitted later.
    4. Time Management:
    Continuous Time: Frame transmission can begin at any instant; there is no master clock dividing time into discrete intervals.
    Slotted Time: Frame transmissions start only at the beginning of a time slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
    5. Sensing of the Channel:
    Carrier Sense: Stations can sense the channel before trying to use it. If a station senses the channel as busy, it will not attempt to use it until the channel goes idle.
    No Carrier Sense: Stations cannot sense the channel before trying to use it. They simply transmit and only afterwards learn whether the transmission was successful.
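
    To make the station model in assumption 1 concrete, the sketch below simulates frame generation at N independent stations as Poisson processes with rate λ, so that the expected number of frames generated in an interval Δt is λ·Δt per station. The parameter values are arbitrary and purely illustrative.

        import random

        def generate_arrivals(n_stations, lam, duration):
            """Simulate Poisson frame arrivals (rate lam frames/sec) at each station."""
            arrivals = {s: [] for s in range(n_stations)}
            for s in range(n_stations):
                t = random.expovariate(lam)       # time of the first frame
                while t < duration:
                    arrivals[s].append(t)
                    t += random.expovariate(lam)  # independent exponential inter-arrival times
            return arrivals

        random.seed(1)
        for station, times in generate_arrivals(n_stations=3, lam=2.0, duration=10.0).items():
            # On average about lam * duration = 20 frames per station.
            print(f"station {station}: {len(times)} frames generated")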

  • Ques: What are various random access protocols?
  • Ans:

    ALOHA
    ALOHA, the earliest random access method, was developed at the University of Hawaii in the early 1970s. It was designed for a radio (wireless) LAN, but it can be used on any shared medium. It is obvious that there are potential collisions in this arrangement: the medium is shared between the stations, so when one station sends data, another station may attempt to do so at the same time, and the data from the two stations collide and become garbled.

    Carrier Sense Multiple Access
    To minimize the chance of collision and, therefore, increase the performance, the CSMA method was developed. The chance of collision can be reduced if a station senses the medium before trying to use it. Carrier sense multiple access (CSMA) requires that each station first listen to the medium (or check the state of the medium) before sending. In other words, CSMA is based on the principle "sense before transmit" or "listen before talk." CSMA can reduce the possibility of collision, but it cannot eliminate it. The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time (although very short) for the first bit to reach every station and for every station to sense it. In other words, a station may sense the medium and find it idle, only because the first bit sent by another station has not yet been received.
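
    The "sense before transmit" rule can be sketched as follows. This is a minimal 1-persistent variant (the text above does not commit to a particular persistence strategy), with the medium reduced to two stubbed-in callables.

        import time

        def csma_send(medium_is_busy, transmit, sensing_interval=1e-3):
            """'Listen before talk': transmit only after sensing the medium idle.

            A collision is still possible: because of propagation delay, the first
            bit of another station's frame may not have arrived yet, so the medium
            can look idle even though it is already in use.
            """
            while medium_is_busy():
                time.sleep(sensing_interval)   # keep sensing until the medium is idle
            transmit()

        # Example with trivially stubbed medium functions:
        csma_send(medium_is_busy=lambda: False,
                  transmit=lambda: print("medium idle -> frame sent"))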

    Carrier Sense Multiple Access with Collision Detection
    The CSMA method does not specify the procedure to follow after a collision. Carrier sense multiple access with collision detection (CSMA/CD) augments the algorithm to handle the collision. In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If so, the station is finished. If, however, there is a collision, the frame is sent again.
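
    A minimal sketch of this monitor-and-retransmit loop is shown below. The channel interface, the simulated collision probability, and the back-off policy (binary exponential, described in the last question of this section) are assumptions added for the example.

        import random
        import time

        class ToyChannel:
            """Stand-in for a shared medium; collisions are simulated randomly."""
            def busy(self):
                return False
            def transmit_and_monitor(self, frame):
                print(f"transmitting {frame!r}")
                return random.random() > 0.3   # pretend 30% of attempts collide

        def csma_cd_send(channel, frame, slot_time=51.2e-6, max_attempts=15):
            """CSMA/CD sketch: sense, transmit, detect collision, back off, retry."""
            for attempt in range(1, max_attempts + 1):
                while channel.busy():                      # carrier sense
                    time.sleep(slot_time)
                if channel.transmit_and_monitor(frame):    # monitor for a collision
                    return True                            # success: station is finished
                # Collision: wait a random number of slot times before resending.
                time.sleep(random.randrange(2 ** min(attempt, 10)) * slot_time)
            return False                                   # give up after max_attempts

        random.seed(2)
        print("delivered:", csma_cd_send(ToyChannel(), "frame-1"))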

    Carrier Sense Multiple Access with Collision Avoidance
    The basic idea behind CSMA/CD is that a station needs to be able to receive while transmitting in order to detect a collision. When there is no collision, the station receives only one signal: its own. When there is a collision, it receives two signals: its own and the one transmitted by a second station. To distinguish between these two cases, the received signals must be significantly different; in other words, the signal from the second station needs to add a significant amount of energy to the one created by the first station. In a wired network, the received signal has almost the same energy as the sent signal because either the cable is short or repeaters amplify the signal between sender and receiver, so in a collision the detected energy almost doubles. In a wireless network, however, much of the sent energy is lost in transmission and the received signal has very little energy, so a collision may add only 5 to 10 percent additional energy. This is not enough for effective collision detection. Carrier sense multiple access with collision avoidance (CSMA/CA) was therefore devised for wireless networks: instead of detecting collisions, it tries to avoid them, using the interframe space (IFS), the contention window, and acknowledgments.
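
    The avoidance procedure itself is not spelled out above, so the sketch below only outlines the usual CSMA/CA steps (wait the interframe space, wait a random number of contention-window slots, send, rely on an acknowledgment). The channel interface, timings, and loss probability are invented for the example.

        import random
        import time

        class ToyWirelessChannel:
            """Stand-in for a wireless medium (illustrative only)."""
            def busy(self):
                return False
            def transmit(self, frame):
                print(f"transmitting {frame!r}")
            def ack_received(self):
                return random.random() > 0.2   # pretend 20% of frames are lost

        def csma_ca_send(channel, frame, ifs=1e-4, slot_time=5e-5, max_attempts=7):
            """CSMA/CA sketch: avoid collisions with IFS + contention window + ACK."""
            for attempt in range(1, max_attempts + 1):
                while channel.busy():                   # carrier sense
                    time.sleep(slot_time)
                time.sleep(ifs)                         # wait the interframe space once idle
                slots = random.randrange(2 ** attempt)  # contention window grows per attempt
                time.sleep(slots * slot_time)
                channel.transmit(frame)
                if channel.ack_received():              # the ACK is the only proof of success
                    return True
            return False

        random.seed(3)
        print("delivered:", csma_ca_send(ToyWirelessChannel(), "frame-A"))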

  • Ques: What is Pure ALOHA and Slotted ALOHA?
  • Ans:

    Pure ALOHA: The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol: each station sends a frame whenever it has a frame to send. However, since there is only one channel to share, there is the possibility of collision between frames from different stations. The pure ALOHA protocol relies on acknowledgments from the receiver. When a station sends a frame, it expects the receiver to send an acknowledgment. If the acknowledgment does not arrive after a time-out period, the station assumes that the frame (or the acknowledgment) has been destroyed and resends the frame. A collision involves two or more stations; if all of them tried to resend their frames immediately after the time-out, the frames would collide again. Pure ALOHA therefore dictates that when the time-out period passes, each station waits a random amount of time before resending its frame. This randomness helps avoid further collisions; we call this time the back-off time TB. Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames: after a maximum number of retransmission attempts Kmax, a station must give up and try later.
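
    A sketch of the sender side described above is given below. The particular back-off formula (TB drawn as a random multiple of the frame time, with the range doubling on each attempt) and the stubbed send/ack functions are assumptions made for the example.

        import random
        import time

        def pure_aloha_send(send_frame, ack_arrived, frame_time, timeout, k_max=15):
            """Pure ALOHA sender sketch: send, wait for ACK, back off randomly, retry."""
            for k in range(1, k_max + 1):
                send_frame()                      # send whenever a frame is ready
                time.sleep(timeout)               # wait one time-out period for the ACK
                if ack_arrived():
                    return True                   # acknowledged: done
                # No ACK: wait a random back-off time TB before resending.
                # Here TB = R * frame_time with R drawn from 0 .. 2**k - 1 (an assumption).
                time.sleep(random.randrange(2 ** min(k, 10)) * frame_time)
            return False                          # give up after Kmax attempts and try later

        random.seed(4)
        ok = pure_aloha_send(send_frame=lambda: print("frame sent"),
                             ack_arrived=lambda: random.random() > 0.5,
                             frame_time=0.001, timeout=0.002)
        print("delivered:", ok)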

    [Figure: Pure ALOHA]

    Slotted ALOHA
    Pure ALOHA has a vulnerable time of 2 × Tfr. This is because there is no rule that defines when a station can send: a station may begin soon after another station has started, or just before another station has finished. Slotted ALOHA was invented to improve the efficiency of pure ALOHA. In slotted ALOHA we divide time into slots of Tfr seconds and force each station to send only at the beginning of a time slot. Because a station is allowed to send only at the beginning of a synchronized time slot, a station that misses this moment must wait until the beginning of the next slot; this means that the station which started at the beginning of the current slot has already finished sending its frame. There is still the possibility of collision if two stations try to send at the beginning of the same time slot, but the vulnerable time is now reduced to one-half, equal to Tfr.
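
    The "send only at a slot boundary" rule amounts to rounding a frame's ready time up to the next multiple of Tfr, as in this small sketch (parameter values are arbitrary).

        import math

        def next_slot_start(ready_time, t_fr):
            """Slotted ALOHA rule: a frame that becomes ready at ready_time may only
            be transmitted at the next slot boundary, i.e. the next multiple of Tfr."""
            return math.ceil(ready_time / t_fr) * t_fr

        t_fr = 0.2   # frame time Tfr in seconds (example value)
        for ready in (0.05, 0.19, 0.20, 0.73):
            print(f"frame ready at {ready:.2f}s -> transmit at {next_slot_start(ready, t_fr):.2f}s")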

    [Figure: Slotted ALOHA]

  • Ques: Explain CONTROLLED ACCESS:Polling,Token Passing.
  • Ans:

    CONTROLLED ACCESS: In controlled access, the stations consult one another to find which station has the right to send. A station cannot send unless it has been authorized by other stations.

    Polling: Polling works with topologies in which one device is designated as a primary station and the other devices are secondary stations. All data exchanges must be made through the primary device even when the ultimate destination is a secondary device. The primary device controls the link; the secondary devices follow its instructions. It is up to the primary device to determine which device is allowed to use the channel at a given time. The primary device, therefore, is always the initiator of a session.
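
    The poll/select exchange is not spelled out above, but the primary's control loop can be sketched roughly as below. The poll() interface and the use of None as a NAK are invented for illustration, and the select (primary-to-secondary) direction is omitted.

        def polling_round(secondaries, deliver):
            """Primary-side sketch: poll each secondary in turn and relay any data."""
            for station in secondaries:
                frame = station.poll()   # ask the secondary whether it has data
                if frame is not None:
                    deliver(frame)       # all traffic flows through the primary

        class Secondary:
            def __init__(self, name, queued=None):
                self.name = name
                self.queued = list(queued or [])
            def poll(self):
                # Respond with a frame if one is queued, otherwise a NAK (None here).
                return self.queued.pop(0) if self.queued else None

        polling_round([Secondary("B", ["data from B"]), Secondary("C")], deliver=print)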

    [Figure: Polling]

    Token Passing: In the token-passing method, the stations in a network are organized in a logical ring. In other words, for each station there is a predecessor and a successor. The predecessor is the station which is logically before the station in the ring; the successor is the station which is after the station in the ring. The current station is the one that is accessing the channel now. The right to this access has been passed from the predecessor to the current station, and it will be passed to the successor when the current station has no more data to send. In this method, a special packet called a token circulates through the ring. Possession of the token gives a station the right to access the channel and send its data. When a station has data to send, it waits until it receives the token from its predecessor; it then holds the token and sends its data. When the station has no more data to send, it releases the token, passing it to the next logical station in the ring. The station cannot send data until it receives the token again in the next round. If a station receives the token and has no data to send, it simply passes the token to the next station. Token management is needed for this access method: stations must be limited in the time they can hold the token, and the token must be monitored to ensure it has not been lost or destroyed.
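
    A rough sketch of one circulation of the token around the logical ring is given below; the station objects and the send-everything-then-release rule are simplifications introduced for the example (in practice token-holding time is limited, as noted above).

        class Station:
            def __init__(self, name, frames=None):
                self.name = name
                self.frames = list(frames or [])   # frames waiting to be sent

        def circulate_token(ring, rounds=1):
            """Token-passing sketch: the token visits stations in ring order; only the
            holder may send, and it releases the token when it has no more data."""
            for _ in range(rounds):
                for station in ring:               # token goes predecessor -> successor
                    while station.frames:          # the holder sends while it has data
                        print(f"{station.name} sends {station.frames.pop(0)!r}")
                    # No (more) data: pass the token on to the successor in the ring.

        circulate_token([Station("A", ["a1"]), Station("B"), Station("C", ["c1", "c2"])])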

  • Ques: What is Channelization?
  • Ans:

    CHANNELIZATION Channelization is a multiple-access method in which the available bandwidth of a link is shared in time, frequency, or through code, between different stations. In this section, we discuss three channelization protocols: FDMA, TDMA, and CDMA.
    Frequency-Division Multiple Access (FDMA): In frequency-division multiple access (FDMA), the available bandwidth is divided into frequency bands. Each station is allocated a band to send its data. In other words, each band is reserved for a specific station, and it belongs to the station all the time. Each station also uses a bandpass filter to confine the transmitter frequencies. To prevent station interferences, the allocated bands are separated from one another by small guard bands.
    Time-Division Multiple Access (TDMA): In time-division multiple access (TDMA), the stations share the bandwidth of the channel in time. Each station is allocated a time slot during which it can send data; each station transmits its data in its assigned time slot.
    Code-Division Multiple Access (CDMA): Code-division multiple access (CDMA) was conceived several decades ago. Recent advances in electronic technology have finally made its implementation possible. CDMA differs from FDMA because only one channel occupies the entire bandwidth of the link. It differs from TDMA because all stations can send data simultaneously; there is no timesharing.
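
    The text does not show how CDMA lets every station transmit over the whole bandwidth at the same time, so the sketch below uses the standard textbook illustration with orthogonal chip sequences (Walsh codes); the specific 4-chip codes and data bits are chosen only for the example.

        # CDMA sketch with orthogonal chip sequences (Walsh codes of length 4).
        CODES = {
            "A": [+1, +1, +1, +1],
            "B": [+1, -1, +1, -1],
            "C": [+1, +1, -1, -1],
            "D": [+1, -1, -1, +1],
        }

        def encode(bits_by_station):
            """Each station multiplies its bit (+1/-1, 0 = silent) by its code;
            the channel carries the element-wise sum of all encoded sequences."""
            channel = [0, 0, 0, 0]
            for station, bit in bits_by_station.items():
                for i, chip in enumerate(CODES[station]):
                    channel[i] += bit * chip
            return channel

        def decode(channel, station):
            """The inner product with a station's code, divided by the code length,
            recovers that station's bit despite the simultaneous transmissions."""
            code = CODES[station]
            return sum(c * s for c, s in zip(channel, code)) / len(code)

        # A sends 1 (+1), B sends 0 (-1), C stays silent (0), D sends 1 (+1).
        signal = encode({"A": +1, "B": -1, "C": 0, "D": +1})
        for st in "ABCD":
            print(st, decode(signal, st))   # prints +1.0, -1.0, 0.0, +1.0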

  • Ques: Explain Ethernet(802.3) frame format.
  • Ans:

    Frame Format: The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any mechanism for acknowledging received frames, making it what is known as an unreliable medium; acknowledgments must be implemented at the higher layers.
    Preamble: The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and 1s that alert the receiving system to the coming frame and enable it to synchronize its input timing. The pattern provides only an alert and a timing pulse; the 56-bit pattern allows the stations to miss some bits at the beginning of the frame. The preamble is actually added at the physical layer and is not (formally) part of the frame.
    Start frame delimiter (SFD): The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.
    Destination address (DA): The DA field is 6 bytes and contains the physical address of the destination station or stations to receive the packet.
    Source address (SA): The SA field is also 6 bytes and contains the physical address of the sender of the packet.
    Length or type: This field is defined as a type field or a length field. The original Ethernet used it as a type field to define the upper-layer protocol using the MAC frame; the IEEE standard uses it as a length field to define the number of bytes in the data field. Both uses are common today.
    Data: This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a maximum of 1500 bytes.
    CRC: The last field contains error detection information, in this case a CRC-32.
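
    For illustration, the sketch below assembles the MAC-level fields described above (DA, SA, type/length, data padded to the 46-byte minimum, CRC-32) using Python's standard library. Wire-level details such as the LSB-first bit ordering and the exact CRC byte order are glossed over, so treat it as a sketch of the layout rather than a wire-accurate encoder; the preamble and SFD are kept separate since they are added at the physical layer.

        import struct
        import zlib

        PREAMBLE = b"\xaa" * 7   # 7 bytes of alternating 1s and 0s (10101010)
        SFD = b"\xab"            # start frame delimiter: 10101011

        def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
            """Assemble DA + SA + type/length + data (padded to 46 bytes) + CRC-32."""
            if len(dst_mac) != 6 or len(src_mac) != 6:
                raise ValueError("MAC addresses must be 6 bytes")
            data = payload.ljust(46, b"\x00")   # pad short payloads to the 46-byte minimum
            if len(data) > 1500:
                raise ValueError("data field exceeds 1500 bytes")
            header = dst_mac + src_mac + struct.pack("!H", ethertype)
            fcs = struct.pack("<I", zlib.crc32(header + data))   # CRC-32 over header + data
            return header + data + fcs

        frame = build_frame(bytes.fromhex("ffffffffffff"),   # broadcast destination
                            bytes.fromhex("02004c4f4f50"),   # example (locally administered) source
                            0x0800,                          # type: IPv4
                            b"hello")
        print(len(frame), "bytes (64 is the minimum frame size, excluding preamble and SFD)")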

  • Ques: What is Ethernet?Explain Ethernet cabling.
  • Ans:

    In 1985, the Computer Society of the IEEE started a project, called Project 802, to set standards to enable intercommunication among equipment from a variety of manufacturers. Project 802 does not seek to replace any part of the OSI or the Internet model. Instead, it is a way of specifying functions of the physical layer and the data link layer of major LAN protocols. The standard was adopted by the American National Standards Institute (ANSI). In 1987, the International Organization for Standardization (ISO) also approved it as an international standard under the designation ISO 8802.
    The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC). Since then, it has gone through four generations: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps).

    Standard Ethernet

    10Base5: Thick Ethernet: The first implementation is called 10Base5, thick Ethernet, or Thicknet. The nickname derives from the size of the cable, which is roughly the size of a garden hose and too stiff to bend with your hands. 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.
    10Base2: Thin Ethernet: The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2 also uses a bus topology, but the cable is much thinner and more flexible; it can be bent to pass very close to the stations. In this case, the transceiver is normally part of the network interface card (NIC), which is installed inside the station.
    10Base-T: Twisted-Pair Ethernet: The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a physical star topology: the stations are connected to a hub via two pairs of twisted cable, which create two paths (one for sending and one for receiving) between the station and the hub. Any collision here happens in the hub. The maximum length of the twisted cable is defined as 100 m, to minimize the effect of attenuation in the cable.
    10Base-F: Fiber Ethernet: Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub; the stations are connected to the hub using two fiber-optic cables.

  • Ques: What is Binary exponential back-off?
  • Ans:

    A station that is ready to send chooses a random number of slots as its wait time. The number of slots in the window changes according to the binary exponential back-off strategy. This means that it is set to one slot the first time and then doubles each time the station cannot detect an idle channel after the IFS time. This is very similar to the p-persistent method except that a random outcome defines the number of slots taken by the waiting station. One interesting point about the contention window is that the station needs to sense the channel after each time slot. However, if the station finds the channel busy, it does not restart the process; it just stops the timer and restarts it when the channel is sensed as idle. This gives priority to the station with the longest waiting time.
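
    The doubling-and-pausing behaviour described above can be sketched as follows. The cap on the exponent and the stubbed channel_idle callable are assumptions added for the example; the points illustrated are that the window doubles on each failed attempt and that the countdown pauses (rather than restarts) while the channel is busy.

        import random

        def contention_wait(channel_idle, attempt, max_exponent=10):
            """Binary exponential back-off sketch for the contention window.

            attempt = 1, 2, 3, ...; the window is one slot the first time and doubles
            on each subsequent attempt (capped), and the station waits a random number
            of slots within that window."""
            window = 2 ** min(attempt - 1, max_exponent)
            remaining = random.randrange(1, window + 1)   # slots still to count down
            while remaining > 0:
                if channel_idle():     # sense the channel after each time slot
                    remaining -= 1     # idle slot: keep counting down
                # Busy slot: the timer is paused, not restarted, so a station that has
                # already waited longer keeps its priority when the channel goes idle.
            return True                # counted down to zero: the station may transmit

        random.seed(5)
        contention_wait(channel_idle=lambda: random.random() > 0.3, attempt=3)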