To properly learn something, we have to start at the beginning. We will learn one concept at a time, process it, and move on to the next.

The goal is consistent learning and absorbing information while feeling engaged and not overwhelmed.

I have divided the data link layer articles into five parts.

  1. Introduction to Data link layer
  2. Error Detection and Correction in Data link layer
  3. Flow and Error Control in Data link layer
  4. Protocols used in Data link layer
  5. Switches in Data link layer

Goal of this article

I will be explaining the flow control and error control in the data link layer of the TCP/IP Five-layer network model.

The data link layer is responsible for the implementation of a point-to-point flow and error control mechanism. Let’s begin with some important terminology.

  • We’ll find it convenient in this article to refer to any device that runs a data link layer protocol as a node. Nodes include hosts, routers, switches, and WiFi access points.

  • We will also refer to the communication channels that connect adjacent nodes along the communication path as links.

    • Point-to-Point link

      • It consists of a single sender at one end of the link and a single receiver at the other end of the link.
      • Example protocols: Point-to-Point Protocol (PPP) and High-Level Data Link Control (HDLC)
    • Broadcast link

      • It can have multiple sending and receiving nodes all connected to the same, single, shared broadcast channel.
      • The term broadcast is used here because when any one node transmits a frame, the channel broadcasts the frame and each of the other nodes receives a copy.
  • When a data frame is sent from one node to another over a single medium (link), the sender node and the receiver node are required to work at the same speed.

  • The sender node should send at a speed at which the receiver node can process and accept the data.

  • Two types of mechanisms can be deployed to control the flow:

    • Stop and Wait

      • This flow control mechanism forces the sender, after transmitting a data frame, to stop and wait until an acknowledgment for that frame is received.
    • Sliding Window

      • In this flow control mechanism, both sender and receiver agree on the number of data frames after which an acknowledgment should be sent. This protocol tries to make use of the underlying resources as much as possible.
  • There is always a probability that a data frame is lost in transit or arrives corrupted at the receiver.

  • In both cases, the receiver node does not receive the correct data frame and the sender node does not know anything about the loss.

  • In such cases, both the sender node and the receiver node are equipped with error control protocols that help them detect transit errors such as the loss of a data frame. Hence, either the sender retransmits the data frame or the receiver requests retransmission of the previous data frame.

  • Requirements for the error control mechanism

    • Error detection: The sender node, the receiver node, or both must be able to ascertain that an error occurred in transit.

    • Positive ACK: When the receiver node receives a correct frame, it should acknowledge the received frame.

    • Negative ACK: When the receiver node receives a damaged frame or a duplicate frame, it sends a NACK back to the sender node and the sender node must retransmit the correct frame.

    • Retransmission: The sender node maintains a clock and sets a timeout period. If the acknowledgment of a previously transmitted data frame does not arrive before the timeout, the sender node retransmits the frame, assuming that the frame or its acknowledgment was lost in transit.

Automatic Repeat reQuest (ARQ)

  • ARQ is an error-control strategy used in a two-way communication system.

  • It is a group of error-control protocols for achieving reliable data transmission over an unreliable channel or service.

  • These protocols are responsible for the automatic retransmission of packets that are found to be corrupted or lost during the transmission process.

  • There are three ARQ techniques that the data link layer may deploy to control errors:

Stop-and-wait ARQ

  • The sender maintains a timeout counter.
  • When a frame is sent, the sender starts the timeout counter.
  • If the acknowledgment of the frame arrives in time, the sender transmits the next frame in the queue.
  • If the acknowledgment does not come in time, the sender assumes that either the frame or its acknowledgment is lost in transit. The sender retransmits the frame and starts the timeout counter.
  • If a negative acknowledgment is received, the sender retransmits the frame.
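
The steps above can be captured as a small pseudo-simulation. The following Python sketch only illustrates the control flow, assuming a hypothetical lossy link (send_over_lossy_link) and an arbitrary retry cap; it is not a real protocol implementation.

    import random

    LOSS_PROBABILITY = 0.3   # chance that a frame or its ACK is lost in transit (illustrative)
    MAX_ATTEMPTS = 5         # arbitrary cap so the sketch always terminates

    def send_over_lossy_link(frame):
        """Stand-in for 'transmit the frame and wait for an ACK until the timeout'."""
        return random.random() > LOSS_PROBABILITY

    def stop_and_wait(frames):
        for seq, frame in enumerate(frames):
            for attempt in range(1, MAX_ATTEMPTS + 1):
                # send the frame and start the timeout counter
                if send_over_lossy_link(frame):
                    print(f"frame {seq} acknowledged after {attempt} attempt(s)")
                    break            # acknowledgment arrived: move to the next frame
                # timeout or negative acknowledgment: retransmit the same frame
                print(f"frame {seq} timed out, retransmitting")

    stop_and_wait(["frame-A", "frame-B", "frame-C"])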

Go-Back-N ARQ

  • Both sender and receiver maintain a window.

  • The sending-window size enables the sender to send multiple frames without receiving the acknowledgment of the previous ones.

  • The receiving-window enables the receiver to receive multiple frames and acknowledge them.

  • The receiver keeps track of the incoming frame’s sequence number.

  • When the sender has sent all the frames in its window, it checks up to which sequence number it has received a positive acknowledgment. If all frames are positively acknowledged, the sender sends the next set of frames. If it finds a NACK, or no ACK at all, for a particular frame, it retransmits that frame and all the frames sent after it, since none of them has been positively acknowledged.

  • It is assumed that the receiver has no buffer space for out-of-order frames and has to process each frame as it comes.
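
A sender-side sketch of this behaviour is shown below. It is a simplification, assuming an illustrative window size and a hypothetical per-frame ACK outcome; real implementations track sequence numbers modulo the window size and use timers.

    import random

    WINDOW_SIZE = 4             # agreed sending-window size (illustrative)
    ACK_LOSS_PROBABILITY = 0.2  # chance that a frame's positive ACK never arrives (illustrative)

    def go_back_n(frames):
        base = 0                                        # oldest unacknowledged frame
        while base < len(frames):
            window = frames[base:base + WINDOW_SIZE]
            print(f"sending window starting at frame {base}: {window}")
            for offset, _frame in enumerate(window):
                if random.random() < ACK_LOSS_PROBABILITY:
                    # NACK or missing ACK: go back and resend this frame and all frames after it
                    print(f"no positive ACK for frame {base + offset}, going back")
                    base += offset
                    break
            else:
                base += len(window)                     # whole window positively acknowledged

    go_back_n([f"frame-{i}" for i in range(8)])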

Selective Repeat ARQ

  • The receiver, while keeping track of sequence numbers, buffers the frames in memory and sends a NACK only for the frame that is missing or damaged.

  • The sender, in this case, retransmits only the frame for which the NACK was received.
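
A receiver-side sketch: out-of-order frames are buffered, a NACK is sent only for the missing sequence numbers, and in-order frames are delivered as soon as the gap is filled. The arrival order below is made up for illustration.

    def selective_repeat_receiver(arrivals):
        """arrivals: frame sequence numbers in the order they reach the receiver."""
        buffer = {}           # out-of-order frames kept in memory
        expected = 0          # next in-order sequence number to deliver
        for seq in arrivals:
            buffer[seq] = f"frame-{seq}"
            # NACK only the frames still missing below the highest sequence number seen
            missing = [s for s in range(expected, max(buffer)) if s not in buffer]
            if missing:
                print(f"received frame {seq}, sending NACK for {missing}")
            # deliver whatever in-order prefix is now complete
            while expected in buffer:
                print(f"delivering frame {expected}")
                del buffer[expected]
                expected += 1

    # frame 2 is lost at first and shows up last, after its retransmission
    selective_repeat_receiver([0, 1, 3, 4, 2])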

Multiple Access Protocols

  • For broadcast links, it is important to coordinate the access of multiple sending and receiving nodes to the shared channel; this is known as the multiple access problem.

  • Multiple access protocols are required to reduce collisions and avoid crosstalk when there is no dedicated link and multiple stations can access the channel simultaneously.

  • Multiple access protocols for a broadcast channel of rate R bits per second should have the following desirable characteristics:

    • When only one node has data to send, that node has a throughput of R bps.
    • When M nodes have data to send, each of these nodes has a throughput of R/M bps. This need not necessarily imply that each of the M nodes always has an instantaneous rate of R/M, but rather that each node should have an average transmission rate of R/M over some suitably defined interval of time.
    • The protocol is decentralized; that is, no master node represents a single point of failure for the network.
    • The protocol is simple so that it is inexpensive to implement.
  • Multiple access protocols can be subdivided further as

    • Random access protocols
    • Channel partitioning protocols
    • Taking-turns or Controlled protocols

Random access protocols

  • In a random access protocol, a transmitting node always transmits at the full rate of the channel, namely, R bps.

  • When there is a collision, each node involved in the collision repeatedly retransmits its frame (that is, packet) until its frame gets through without a collision. But when a node experiences a collision, it doesn’t necessarily retransmit the frame right away. Instead, it waits for a random delay before retransmitting the frame.

  • Each node involved in a collision chooses independent random delays. Because the random delays are independently chosen, one of the nodes may pick a delay that is sufficiently less than the delays of the other colliding nodes and will therefore be able to sneak its frame into the channel without a collision.

  • A few of the most commonly used random-access protocols:

    • ALOHA protocols
    • Carrier sense multiple access (CSMA) protocols

ALOHA

Here, we assume the following:

  • All frames consist of exactly L bits.

  • Time is divided into slots of size L/R seconds (that is, a slot equals the time to transmit one frame).

  • Nodes start to transmit frames only at the beginnings of slots.

  • The nodes are synchronized so that each node knows when the slots begin.

  • If two or more frames collide in a slot, then all the nodes detect the collision event before the slot ends.

Let p be a probability, that is, a number between 0 and 1. The operation of slotted ALOHA in each node is then simple.

  • When the node has a fresh frame to send, it waits until the beginning of the next slot and transmits the entire frame in the slot.

  • If there isn’t a collision, the node has successfully transmitted its frame and thus need not consider retransmitting the frame. (The node can prepare a new frame for transmission if it has one.)

  • If there is a collision, the node detects the collision before the end of the slot. The node retransmits its frame in each subsequent slot with probability p until the frame is transmitted without a collision.
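
The slot-by-slot behaviour just described can be simulated directly. The sketch below assumes a handful of already-backlogged nodes that each retransmit with probability p in every slot; the node names and probability values are illustrative.

    import random

    P_RETRANSMIT = 0.5   # the retransmission probability p (illustrative)

    def run_slotted_aloha(node_ids, max_slots=200):
        """Count how many slots it takes until every backlogged node gets a frame through."""
        pending = set(node_ids)
        for slot in range(1, max_slots + 1):
            # each backlogged node independently transmits in this slot with probability p
            transmitters = [n for n in pending if random.random() < P_RETRANSMIT]
            if len(transmitters) == 1:
                pending.discard(transmitters[0])   # exactly one frame in the slot: success
            # 0 transmitters: idle slot; 2 or more: collision, everyone tries again later
            if not pending:
                return slot
        return max_slots

    print(f"all frames delivered after {run_slotted_aloha(['A', 'B', 'C'])} slots")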

There are two flavors of ALOHA:

  • Pure Aloha:

    • When a station sends data, it waits for an acknowledgment. If the acknowledgment doesn’t arrive within the allotted time, the station waits for a random amount of time called back-off time (Tb) and re-sends the data.

    • Since different stations wait for a different amount of time, the probability of further collision decreases.

        Vulnerable Time = 2 * Frame transmission time
        Throughput =  G exp{-2 * G}
        Maximum throughput = 0.184 for G = 0.5
  • Slotted Aloha

    • It is similar to pure ALOHA, except that time is divided into slots and sending of data is allowed only at the beginning of these slots.

    • If a station misses out on the allowed time, it must wait for the next slot. This reduces the probability of collision.

        Vulnerable Time =  Frame transmission time
        Throughput =  G exp{-G}
        Maximum throughput = 0.368 for G = 1
    • Slotted ALOHA allows a node to transmit continuously at the full rate, R, when that node is the only active node. Slotted ALOHA is also highly decentralized because each node detects collisions and independently decides when to retransmit. (A quick numerical check of both throughput formulas above appears right after this list.)
  • In both slotted and pure ALOHA, a node’s decision to transmit is made independently of the activity of the other nodes attached to the broadcast channel.

  • In particular, a node neither pays attention to whether another node happens to be transmitting when it begins to transmit nor stops transmitting if another node begins to interfere with its transmission.
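
As a quick sanity check of the two throughput formulas quoted above (S = G · e^(-2G) for pure ALOHA and S = G · e^(-G) for slotted ALOHA), the snippet below evaluates them at the loads where they peak.

    import math

    def pure_aloha_throughput(G):
        return G * math.exp(-2 * G)      # S = G * e^(-2G)

    def slotted_aloha_throughput(G):
        return G * math.exp(-G)          # S = G * e^(-G)

    print(f"pure ALOHA    at G = 0.5: S = {pure_aloha_throughput(0.5):.3f}")    # ~0.184
    print(f"slotted ALOHA at G = 1.0: S = {slotted_aloha_throughput(1.0):.3f}")  # ~0.368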

CSMA

As humans, we have protocols that, unlike ALOHA, allow us not only to behave with more civility, but also to decrease the amount of time spent “colliding” with each other in conversation and, consequently, to increase the amount of data we exchange in our conversations.

Specifically, there are two important rules for polite human conversation. These two rules are embodied in the family of carrier sense multiple access (CSMA) protocols.

  • Listen before speaking

    • If someone else is speaking, wait until they are finished.
    • In the networking world, this is called carrier sensing where a node listens to the channel before transmitting.
    • If a frame from another node is currently being transmitted into the channel, a node then waits until it detects no transmissions for a short amount of time and then begins transmission.
  • Stop when two people start at the same time

    • If someone else begins talking at the same time, stop talking until they are finished.
    • In the networking world, this is called collision detection where a transmitting node listens to the channel while it is transmitting.
    • If it detects that another node is transmitting an interfering frame, it stops transmitting and waits a random amount of time before repeating the sense-and-transmit-when-idle cycle.
  • CSMA ensures fewer collisions as the station is required to first sense the medium (for idle or busy) before transmitting data.

  • If it is idle then it sends data, otherwise, it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to propagation delay.

  • CSMA access modes-

    • 1-persistent: The node senses the channel; if it is idle, the node sends the data, otherwise it keeps checking the medium continuously and transmits unconditionally (with probability 1) as soon as the channel becomes idle.

    • Non-persistent: The node senses the channel; if it is idle, the node sends the data, otherwise it checks the medium again after a random amount of time (not continuously) and transmits when the medium is found idle.

    • P-persistent: The node senses the medium; if it is idle, the node sends the data with probability p. If the data is not transmitted (probability 1 − p), the node waits for some time and checks the medium again; if it is then found idle, the node again sends with probability p. This repeats until the frame is sent. This mode is used in Wi-Fi and packet radio systems. (See the sketch after this list.)

    • O-persistent: A transmission order (priority) among the nodes is decided beforehand, and transmission occurs in that order. If the medium is idle, each node waits for its assigned time slot to send data.
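
To make the p-persistent mode concrete, here is a minimal sketch of the sense-and-transmit loop. channel_is_idle and the value of p are hypothetical stand-ins for real carrier sensing and a tuned parameter.

    import random

    P_TRANSMIT = 0.3   # transmission probability p (illustrative)

    def channel_is_idle():
        """Stand-in for carrier sensing on a real shared medium."""
        return random.random() < 0.6

    def p_persistent_send(frame, max_rounds=1000):
        for _ in range(max_rounds):
            if not channel_is_idle():
                continue                           # busy: keep sensing the medium
            if random.random() < P_TRANSMIT:
                print(f"channel idle, transmitting {frame}")   # send with probability p
                return True
            # with probability 1 - p: defer, wait a while, then sense the medium again
        return False

    p_persistent_send("frame-42")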

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
  • The need to wait a random (rather than fixed) amount of time is hopefully clear: if two nodes transmitted frames at the same time and then both waited the same fixed amount of time, they’d continue colliding forever.

  • If the interval is large and the number of colliding nodes is small, nodes are likely to wait a large amount of time (with the channel remaining idle) before repeating the sense-and-transmit-when-idle step.

  • On the other hand, if the interval is small and the number of colliding nodes is large, the chosen random values will likely be nearly the same, and transmitting nodes will again collide.

  • We want an interval that is short when the number of colliding nodes is small, and long when the number of colliding nodes is large.

  • The binary exponential backoff algorithm, used in Ethernet as well as in DOCSIS cable network multiple access protocols, elegantly solves this problem.

    • When transmitting a frame that has already experienced n collisions, a node chooses the value of K at random from {0, 1, 2, ..., 2^n − 1}. Thus, the more collisions experienced by a frame, the larger the interval from which K is chosen. For Ethernet, the actual amount of time a node waits is K ⋅ 512 bit times (i.e., K times the amount of time needed to send 512 bits into the Ethernet), and the maximum value that n can take is capped at 10. (A short sketch of this computation appears after this list.)
  • We also note here that each time a node prepares a new frame for transmission, it runs the CSMA/CD algorithm, not taking into account any collisions that may have occurred in the recent past. So a node with a new frame may immediately be able to sneak in a successful transmission while several other nodes are in the exponential backoff state.

  • Efficiency of CSMA/CD

    • When only one node has a frame to send, the node can transmit at the full channel rate (e.g., for Ethernet typical rates are 10 Mbps, 100 Mbps, or 1 Gbps).

    • If many nodes have frames to transmit, the effective transmission rate of the channel can be much less.

    • Efficiency of CSMA/CD is the long-run fraction of time during which frames are being transmitted on the channel without collisions when there is a large number of active nodes, with each node having a large number of frames to send.
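
Below is the sketch of the backoff computation referenced above: K is drawn from {0, 1, ..., 2^n − 1} with n capped at 10, and the wait is K · 512 bit times. The 10 Mbps bit time is an illustrative choice of link rate.

    import random

    BIT_TIME_NS = 100          # one bit time on 10 Mbps Ethernet (illustrative choice of rate)
    MAX_BACKOFF_EXPONENT = 10  # the exponent n is capped at 10

    def backoff_wait_ns(collisions):
        """Pick K at random from {0, 1, ..., 2^n - 1} and wait K * 512 bit times."""
        n = min(collisions, MAX_BACKOFF_EXPONENT)
        K = random.randint(0, 2 ** n - 1)
        return K * 512 * BIT_TIME_NS

    for n in (1, 3, 10):
        print(f"after {n} collision(s): wait {backoff_wait_ns(n)} ns")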

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
  • This protocol uses collision avoidance techniques.

  • This protocol uses a link-layer acknowledgment/retransmission (ARQ) scheme because of the relatively high bit error rates of wireless channels.

  • CSMA/CA runs by the following steps. Suppose that a station has a frame to transmit.

    • If the station initially senses the channel idle, it transmits its frame after a short period known as the Distributed Inter-frame Space (DIFS).

    • Otherwise, the station chooses a random backoff value using binary exponential backoff and counts down this value after DIFS when the channel is sensed idle. While the channel is sensed busy, the counter value remains frozen.

    • When the counter reaches zero (note that this can only occur while the channel is sensed idle), the station transmits the entire frame and then waits for an acknowledgment.

    • If an acknowledgment is received, the transmitting station knows that its frame has been correctly received at the destination station. If the station has another frame to send, it begins the CSMA/CA protocol again at the second step (choosing a random backoff value). If the acknowledgment isn’t received, the transmitting station re-enters the backoff phase of the second step, with the random value chosen from a larger interval.
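
The countdown behaviour of those steps can be sketched as follows. The sensing function, the ACK outcome, and the retry cap are all illustrative, and the initial transmit-after-DIFS fast path is folded into the first backoff iteration for brevity.

    import random

    MAX_ATTEMPTS = 5   # arbitrary cap so the sketch always terminates

    def channel_busy():
        """Stand-in for sensing the wireless medium."""
        return random.random() < 0.4

    def csma_ca_send(frame):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            # choose a random backoff; the interval doubles with each failed attempt
            counter = random.randint(0, 2 ** attempt - 1)
            while counter > 0:
                if channel_busy():
                    continue           # counter stays frozen while the channel is busy
                counter -= 1           # count down only while the channel is sensed idle
            print(f"counter reached zero, transmitting {frame} (attempt {attempt})")
            if random.random() > 0.3:  # illustrative chance that the ACK arrives
                return True
            # no acknowledgment: re-enter backoff with a value from a larger interval
        return False

    csma_ca_send("frame-7")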

Controlled Access Protocols

  • The two desirable properties of a multiple access protocol are

    • When only one node is active, the active node has a throughput of R bps.
    • When M nodes are active, then each active node has a throughput of nearly R/M bps.
    • The ALOHA and CSMA protocols have the first property but not the second.
  • In controlled access, the stations seek information from one another to find which station has the right to send. It allows only one node to send at a time, to avoid collision of messages on a shared medium.

  • A few of the most commonly used controlled access protocols:

    • Polling protocols
    • Token Passing protocol

Polling Protocol
  • The polling protocol requires one of the nodes to be designated as a master node.

  • The master node polls each of the nodes in a round-robin fashion.

  • In particular, the master node first sends a message to node 1, saying that it (node 1) can transmit up to some maximum number of frames. After node 1 transmits some frames, the master node tells node 2 it (node 2) can transmit up to the maximum number of frames.

  • The master node can determine when a node has finished sending its frames by observing the lack of a signal on the channel.

  • The procedure continues in this manner, with the master node polling each of the nodes in a cyclic manner.

  • The polling protocol eliminates the collisions and empty slots that plague random access protocols, which allows it to achieve a much higher efficiency.

  • Drawbacks

    • The protocol introduces a polling delay, the amount of time required to notify a node that it can transmit.

    • If the master node fails, the entire channel becomes inoperative.
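
A toy version of one polling cycle, with made-up node queues and an arbitrary per-poll frame limit, looks like this:

    from collections import deque

    MAX_FRAMES_PER_POLL = 3   # maximum number of frames a polled node may send (illustrative)

    # frames waiting at each node (illustrative data)
    nodes = {
        "node-1": deque(["a1", "a2", "a3", "a4"]),
        "node-2": deque([]),
        "node-3": deque(["c1"]),
    }

    def poll_cycle(nodes):
        """One round of the master node polling every node in round-robin order."""
        for name, queue in nodes.items():
            print(f"master polls {name}")
            for _ in range(MAX_FRAMES_PER_POLL):
                if not queue:
                    break                                  # silence on the channel: node is done
                print(f"  {name} transmits {queue.popleft()}")

    poll_cycle(nodes)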

Token Passing Protocol
  • The Token passing protocol does not require a master node.

  • A small, special-purpose frame known as a token is exchanged among the nodes in some fixed order.

  • When a node receives the token, it holds onto it only if it has some frames to transmit; otherwise, it immediately forwards the token to the next node.

  • If a node does have frames to transmit when it receives the token, it sends up to a maximum number of frames and then forwards the token to the next node.

  • Token passing is decentralized and highly efficient.

  • Drawbacks

    • The failure of one node can crash the entire channel.

    • If a node accidentally neglects to release the token, then some recovery procedure must be invoked to get the token back in circulation.
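
And here is a similarly simplified sketch of token passing: the token visits nodes in a fixed order, and each node sends at most a few frames before forwarding it. Node names, queues, and the per-token limit are illustrative.

    from collections import deque

    MAX_FRAMES_PER_TOKEN = 2   # frames a node may send before forwarding the token (illustrative)

    # nodes in fixed token-passing order, each with a queue of waiting frames (illustrative data)
    ring = [("node-A", deque(["a1", "a2", "a3"])),
            ("node-B", deque([])),
            ("node-C", deque(["c1"]))]

    def circulate_token(ring, rounds=2):
        for _ in range(rounds):
            for name, queue in ring:               # the token is forwarded in this fixed order
                if not queue:
                    continue                       # nothing to send: pass the token on immediately
                for _ in range(MAX_FRAMES_PER_TOKEN):
                    if not queue:
                        break
                    print(f"{name} holds the token and sends {queue.popleft()}")

    circulate_token(ring)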

Channelization Protocols

  • In channelization, the available bandwidth of the link is shared in time, frequency, or code among multiple stations so that they can access the channel simultaneously.

  • A few of the most commonly used channelization protocols:

    • Frequency Division Multiple Access (FDMA)
    • Time Division Multiple Access (TDMA)
    • Code Division Multiple Access (CDMA)

FDMA
  • The available bandwidth is divided into equal bands so that each station can be allocated its own band.

  • Guard bands are also added so that no two bands overlap, to avoid crosstalk and noise.

TDMA
  • In TDMA, the bandwidth is shared among multiple stations in time.

  • To avoid collisions, time is divided into slots and stations are allotted these slots in which to transmit data. However, there is a synchronization overhead, as each station needs to know where its time slot begins; this is resolved by adding synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by adding guard times between slots.

CDMA
  • One channel carries all transmissions simultaneously.

  • There is neither division of bandwidth nor division of time. For example, if there are many people in a room all speaking at the same time, perfect reception of data is still possible if only two people (the sender and the intended receiver) speak the same language. Similarly, data from different stations can be transmitted simultaneously in different code languages.
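
The “different code languages” idea can be made concrete with orthogonal chip sequences (Walsh-style codes). In the sketch below, every station multiplies its data bit by its own code, the shared channel carries the sum, and a receiver recovers a station’s bit by correlating the channel signal with that station’s code. The codes and data bits are made up for illustration.

    # one orthogonal chip sequence (Walsh-style code) per station (illustrative)
    codes = {
        "station-1": [+1, +1, +1, +1],
        "station-2": [+1, -1, +1, -1],
        "station-3": [+1, +1, -1, -1],
    }

    # the data bit (+1 or -1) each station wants to send in this interval (illustrative)
    data_bits = {"station-1": +1, "station-2": -1, "station-3": +1}

    # every station transmits bit * code at the same time; the channel carries the sum
    channel = [sum(data_bits[s] * codes[s][i] for s in codes) for i in range(4)]

    def decode(station):
        """Recover one station's bit by correlating the channel signal with its code."""
        correlation = sum(c * chip for c, chip in zip(channel, codes[station]))
        return correlation // len(codes[station])      # +1 or -1

    for station in codes:
        print(f"{station} sent {decode(station)}")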

Did you find this post useful?

I would be grateful if you let me know by sharing it on Twitter!

Follow me @ParthS0007 for more tech and blogging content :)