Network architecture – layers – Physical links – Channel access on links – Hybrid multiple access techniques – Issues in the data link layer – Framing – Error correction and detection – Link-level Flow Control.
A network architecture guides the design and implementation of networks. Two of the most widely referenced architectures are the OSI architecture and the Internet architecture.
The OSI architecture has 7 layers, which can be grouped into four basic layers as shown in the figure below.
Example of a layered network system
Each layer relies on services from layer below and exports services to layer above
Hides implementation – layers can change without disturbing other layers
Abstraction hides the complexity of the underlying system behind a simpler interface.
The idea of an abstraction is to define a unifying model that captures some important aspect of the system.
Abstractions naturally lead to layering, especially in network systems.
The following figure depicts a layered system with alternative abstractions available at a given layer.
Layered system with alternative abstractions available at a given layer
In the figure, two layers of abstraction are sandwiched between the application program and the underlying hardware. The services provided at the higher layers are implemented in terms of the services provided by the lower layers.
The layer immediately above the hardware in this case might provide host-to-host connectivity, and the next layer up builds on the available host-to-host communication to provide process-to-process channels that support application-to-application communication. As the figure shows, the process-to-process channel comes in two flavors: a request/reply channel, used for sending and receiving request or reply messages, and a message stream channel, used for sending and receiving a stream of messages.
Features of Layering
• First, it decomposes the problem of building a network into more manageable components, rather than implementing a monolithic piece of software
• Second, it provides a more modular design.
Building blocks of a network architecture
Each protocol object has two different interfaces
service interface: defines operations on this protocol
peer-to-peer interface: defines messages exchanged with peer
Term “protocol” is overloaded
specification of peer-to-peer interface
module that implements this interface
Each protocol defines two different interfaces as shown in below figure.
Service and peer interfaces
A suite of protocols that make up a network system is called a protocol graph. The following figure illustrates a protocol graph for the hypothetical layered system above.
Example of a protocol graph
In this example, suppose that the file access program on host 1 wants to send a message to its peer on host 2 using the communication service offered by protocol RRP. In this case, the file application asks RRP to send the message on its behalf. To communicate with its peer, RRP then invokes the services of HHP, which in turn transmits the message to its peer on the other machine. Once the message has arrived at protocol HHP on host 2, HHP passes the message up to RRP, which in turn delivers the message to the file application.
When one of the application programs sends a message to its peer by passing the message to protocol RRP, RRP adds a header to the message. The header is a data structure containing control information, usually attached to the front of the message. We say that the application's data is encapsulated in the new message created by protocol RRP. This process of encapsulation is then repeated at each level of the protocol graph; for example, HHP encapsulates RRP's message by attaching a header of its own.
Now assume that HHP sends the message to its peer over some network, then when the message arrives at the destination host, it is processed in the opposite order: HHP first strips its header off the front of the message, interprets it and passes the body of the message up to RRP, which removes its own header and passes the body of the message up to the application program. The message passed up from RRP to the application on host 2 is exactly the same message as the application passed down to RRP on host 1. This whole process is illustrated in the following figure.
Example of how high-level messages are encapsulated inside of low-level messages
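The encapsulation process described above can be sketched in Python. The RRP and HHP header contents below are invented placeholders for illustration; real protocols define precise binary header layouts.

```python
# Sketch of protocol encapsulation using the hypothetical RRP and HHP
# protocols from the figure. Header values are illustrative only.

def encapsulate(header: bytes, body: bytes) -> bytes:
    """Attach a header to the front of the message body."""
    return header + body

def decapsulate(message: bytes, header_len: int) -> tuple[bytes, bytes]:
    """Strip the header off the front; return (header, body)."""
    return message[:header_len], message[header_len:]

# Host 1: the application's data travels down the protocol graph.
app_data = b"file contents"
rrp_msg = encapsulate(b"RRP!", app_data)   # RRP adds its header
hhp_msg = encapsulate(b"HHP!", rrp_msg)    # HHP encapsulates RRP's message

# Host 2: each layer strips its own header on the way up.
_, body = decapsulate(hhp_msg, 4)          # HHP removes its header
_, delivered = decapsulate(body, 4)        # RRP removes its header

assert delivered == app_data               # same message the app handed down
```

Note how the message delivered on host 2 is byte-for-byte the message the application passed down on host 1, exactly as the figure shows.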
The ISO Open Systems Interconnection (OSI) architecture is illustrated in below figure which defines a partitioning of network functionality into seven layers, where one or more protocols implement the functionality assigned to a given layer.
• Starting at the bottom and working up, the physical layer handles the transmission of raw bits over a communications link.
• The data link layer then collects a stream of bits into a larger aggregate called a frame.
• The network layer handles routing among nodes within a packet-switched network. At this layer, the unit of data exchanged among nodes is typically called a packet rather than a frame.
• The lower three layers are implemented on all network nodes, including switches within the network and hosts connected along the exterior of the network.
• The transport layer then implements what we have up to this point been calling a process-to-process channel. Here, the unit of data exchanged is commonly called a message rather than a packet or a frame.
• The session layer manages the dialog between the two ends and performs synchronization.
• The presentation layer is concerned with the format of data exchanged between peers
• The application layer is where application programs run; it interacts with the user and accepts messages from the user.
Physical Layer Responsibilities
Responsible for transmitting individual bits from one node to the next
1. Defines the characteristics of interfaces and transmission media
2. Defines the type of transmission media
3. To transmit a stream of bits, the bits must be encoded into signals; the physical layer defines how bits are represented as signals.
4. Defines the transmission rate, i.e., the number of bits per second.
5. Synchronization of bits: the sender and receiver clocks must be adjusted so that both sides use the same bit rate.
Data Link Layer Responsibilities
Responsible for transmitting frames from one node to the next
1. Framing: Divides the stream of bits into frames.
2. Physical addressing: It adds header information to the frame containing the addresses of the sender and receiver.
3. Flow control: It provides a flow control mechanism so that the sender does not exceed the rate at which the receiver can absorb data.
4. Error control: It provides mechanisms to detect damaged, lost, or duplicated frames during transmission. The error control information is usually added to the trailer of the frame.
5. Access control: It determines which device can use the medium when multiple devices share the transmission link.
Network Layer Responsibilities
Source-to-destination delivery, possibly across multiple networks
1. Logical addressing: Adding the network address in the header.
2. Routing: Transmitting the packets to the correct destination in the network.
Transport Layer Responsibilities
Delivery of message from one process (running programs) to another
1. Process-to-process delivery of entire message: The message must be delivered not only to the correct destination host but also to the specific process on that host.
2. Port addressing: Inserting the port address in the header
3. Segmentation and reassembly: Dividing the message into segments and each segment will have sequence number which is used by the transport layer at the receiver machine for reassembling it.
4. Connection control: connectionless or connection-oriented
5. End-to-end flow control
6. End-to-end error control
Application Layer Responsibilities
Responsible for providing services to the user
1. Enables user access to the network
2. User interfaces and support for services such as
a. File transfer and access
b. Remote log-in
The Internet architecture, which is also sometimes called the TCP/IP architecture, is a four-layer model.
• At the lowest level are a wide variety of network protocols, denoted NET1, NET2, and so on. In practice, these protocols are implemented by a combination of hardware (e.g., a network adaptor) and software (e.g., a network device driver).
• The second layer consists of a single protocol—the Internet Protocol (IP).
This is the protocol that supports the interconnection of multiple networking technologies into a single, logical internetwork.
• The third layer contains two main protocols—the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).
TCP provides a reliable byte-stream channel, and UDP provides an unreliable datagram delivery channel
• Running above the transport layer are a range of application protocols, such as FTP, TFTP (Trivial File Transport Protocol), Telnet (remote login), and SMTP (Simple Mail Transfer Protocol, or electronic mail), that enable the interoperation of popular applications.
Advantages of using IP Arch. over OSI Arch.
• The Internet architecture does not imply strict layering. That is, the application is free to bypass the defined transport layers and to directly use IP or one of the underlying networks
• Looking closely at the internet protocol graph, it has an hourglass shape—wide at the top, narrow in the middle, and wide at the bottom. That is, IP serves as the focal point for the architecture—it defines a common method for exchanging packets among a wide collection of networks.
• It has the ability to adapt rapidly to new user demands and changing technologies.
ISO – OSI Model Vs TCP/IP Model
OSI MODEL:
- Seven layers: Physical, Data link, Network, Transport, Session, Presentation, Application
- Each layer defines a family of functions, and the layers are interdependent
- Widely used in local area networks

TCP/IP MODEL:
- Four layers: Network, Internet (IP), Transport, Application
- Each layer defines a number of protocols, and the protocols are largely independent
- Used in the Internet
Links are the media that connect nodes to form a computer network.
Types of Links
The communication between the nodes is either based on a point-to-point model or a Multicast model. In the point-to-point model, a message follows a specific route across the network in order to get from one node to another. In the multicast model, on the other hand, all nodes share the same communication medium and, as a result, a message transmitted by any node can be received by all other nodes. A part of the message (an address) indicates for which node the message is intended. All nodes look at this address and ignore the message if it does not match their own address.
Connections between devices may be classified into three categories:
1. Simplex. This is a unidirectional connection, i.e., data can only travel in one direction. Simplex connections are useful in situations where a device only receives or only sends data (e.g., a printer).
2. Half-duplex. This is a bidirectional connection, with the restriction that data can travel in one direction at a time.
3. Full-duplex. This is a bidirectional connection in which data can travel in both directions at once. A full-duplex connection is equivalent to two simplex connections in opposite directions.
Issues in the data link layer
There are five problems that must be addressed before the nodes can successfully exchange packets.
1. Encoding
2. Framing
3. Error detection
4. Reliable delivery
5. Media access control
Nonreturn to Zero (NRZ) maps the data value 1 onto the high signal and the data value 0 onto the low signal.
Problem: Consecutive 1s or 0s
Low signal (0) may be interpreted as no signal
High signal (1) leads to baseline wander
Unable to recover clock
NRZI (Nonreturn to Zero Inverted): make a transition from the current signal to encode a 1, and stay at the current signal to encode a 0.
Manchester encoding: a transition occurs in the middle of each bit; a 0 becomes a low-to-high transition and a 1 a high-to-low transition.
Differential Manchester: a transition in the beginning of the interval to transmit 0. No transition in the beginning of the interval to transmit 1. The transition in the middle is always present.
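The Manchester scheme above can be sketched as follows, modeling signal levels as the characters 0 (low) and 1 (high) for clarity:

```python
# Manchester encoding sketch: each data bit becomes two signal halves,
# with a transition in the middle of every bit.
#   0 -> low-to-high ("01")     1 -> high-to-low ("10")

def manchester_encode(bits: str) -> str:
    return "".join("01" if b == "0" else "10" for b in bits)

def manchester_decode(signal: str) -> str:
    return "".join("0" if signal[i:i + 2] == "01" else "1"
                   for i in range(0, len(signal), 2))

encoded = manchester_encode("0110")
assert encoded == "01101001"               # mid-bit transition in every bit
assert manchester_decode(encoded) == "0110"
```

The guaranteed mid-bit transition lets the receiver recover the clock, at the cost of doubling the signalling rate (50% efficiency).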
Problem: consecutive zeros
Idea: Every 4 bits of data is encoded in a 5-bit code, with the 5-bit codes selected to have no more than one leading 0 and no more than two trailing 0 (i.e., never get more than three consecutive 0s).
Resulting 5-bit codes are then transmitted using the NRZI encoding. Achieves 80% efficiency.
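A sketch of 4B/5B followed by NRZI transmission. Only a few entries of the 16-entry code table are shown here; treat the subset as illustrative of the commonly published FDDI table.

```python
# 4B/5B + NRZI sketch. The full table maps all 16 four-bit patterns to
# 5-bit codes with at most one leading 0 and at most two trailing 0s;
# only a subset is listed here for illustration.

FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001",
    "1010": "10110", "1111": "11101",
}

def nrzi_encode(bits: str, initial: str = "0") -> str:
    """NRZI: transition on a 1, stay at the current level on a 0."""
    level, out = initial, []
    for b in bits:
        if b == "1":
            level = "1" if level == "0" else "0"  # make a transition
        out.append(level)
    return "".join(out)

def encode_4b5b_nrzi(nibbles: str) -> str:
    code = "".join(FOUR_B_FIVE_B[nibbles[i:i + 4]]
                   for i in range(0, len(nibbles), 4))
    return nrzi_encode(code)

signal = encode_4b5b_nrzi("00001111")  # 8 data bits carried in 10 code bits
assert len(signal) == 10               # hence the 80% efficiency
```

Because the 5-bit codes never produce more than three consecutive 0s, the NRZI output always has transitions often enough for clock recovery.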
Breaking sequence of bits into a frame
Must determine first and last bit of the frame
Typically implemented by network adapter
Adapter fetches (deposits) frames out of (into) host memory
It is the network adaptor that enables the nodes to exchange frames.
When node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame from the node's memory. This results in a sequence of bits being sent over the link. The adaptor on node B then collects together the sequence of bits arriving on the link and deposits the corresponding frame in B's memory. Recognizing exactly what set of bits constitutes a frame, that is, determining where the frame begins and ends, is the central challenge faced by the adaptor.
There are several ways to address the framing problem. Some of them are:
1. Byte Oriented: Special character to delineate frames, replace character in data stream
a. Sentinel approach
b. Byte counting approach
2. Bit Oriented: use a technique known as bit stuffing
3. Clock Based: fixed length frames, high reliability required
1. Byte-Oriented Protocols
A byte-oriented approach is exemplified by the BISYNC (Binary Synchronous Communication) protocol developed by IBM. The BISYNC protocol illustrates the sentinel approach to framing; its frame format is depicted in the following figure.
– The PPP protocol uses 0x7E = 01111110 as the flag byte to delimit a frame
– When 0x7E appears in the payload, it must be escaped so that it is not mistaken for the end of the frame
The beginning of a frame is denoted by sending a special SYN (synchronization) character. The data portion of the frame is then contained between special sentinel characters: STX (start of text) and ETX (end of text).
The SOH (start of header) field serves much the same purpose as the STX field. The frame contains additional header fields that are used for, among other things, the link-level reliable delivery algorithm
The problem with the sentinel approach is that the ETX character might appear in the data portion of the frame. BISYNC overcomes this problem by "escaping" the ETX character, preceding it with a DLE (data-link-escape) character whenever it appears in the body of a frame.
The DLE character is also escaped (by preceding it with an extra DLE) in the frame body. This approach is often called character stuffing because extra characters are inserted in the data portion of the frame.
Point-to-Point Protocol (PPP) is similar to BISYNC in that it uses character stuffing.
The format for a PPP frame is given in Figure.
The special start-of-text character, denoted the Flag field, is 01111110. The Address and Control fields usually contain default values, and so are uninteresting. The Protocol field is used for demultiplexing. The frame payload size can be negotiated, but it is 1500 bytes by default.
The Checksum field is either 2 (by default) or 4 bytes long, and is used for error detection.
Byte counting approach
The number of bytes contained in a frame can be included as a field in the frame header.
DDCMP protocol uses this approach, as illustrated in the following figure
The COUNT field specifies how many bytes are contained in the frame’s body. One danger with this approach is that a transmission error could corrupt the COUNT field, in which case the end of the frame would not be correctly detected.
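The byte-counting idea can be sketched as follows. The 2-byte big-endian COUNT field here is an assumption for illustration; DDCMP's actual header layout differs.

```python
# Byte-counting framing sketch: each frame carries a COUNT field giving
# the body length, so the receiver knows where the frame ends.

import struct

def frame(body: bytes) -> bytes:
    """Prefix the body with a 2-byte big-endian COUNT field."""
    return struct.pack("!H", len(body)) + body

def deframe(stream: bytes) -> list[bytes]:
    frames, i = [], 0
    while i + 2 <= len(stream):
        (count,) = struct.unpack("!H", stream[i:i + 2])
        frames.append(stream[i + 2:i + 2 + count])
        i += 2 + count   # a corrupted COUNT would desynchronize us here
    return frames

stream = frame(b"hello") + frame(b"world!")
assert deframe(stream) == [b"hello", b"world!"]
```

The comment in `deframe` marks the danger noted above: if COUNT is corrupted, every subsequent frame boundary is misread.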
2. Bit Oriented approach
The High-Level Data Link Control (HDLC) protocol, the ISO standardization of IBM's earlier SDLC protocol, is an example of a bit-oriented protocol.
HDLC: High-Level Data Link Control
Delineate frame with a special bit-sequence: 01111110
Its frame format is given in below figure.
Bit-oriented protocols use a technique known as bit stuffing.
Bit Stuffing: The delimiting bit pattern used is 01111110 and is called a flag. To avoid this bit pattern occurring in user data, the transmitter inserts a 0 bit after every five consecutive 1 bits it finds. This is called bit stuffing.
Sender: any time five consecutive 1s have been transmitted from the body of the message, insert a 0.
Receiver: should five consecutive 1s arrive, look at next bit(s):
– if the next bit is a 0: remove it (it was stuffed)
– if the next bits are 10: end of frame
– if the next bits are 11: error
Bit stuffing Example
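The sender and receiver rules can be sketched as follows, with bits modeled as characters for clarity:

```python
# Bit stuffing sketch: the sender inserts a 0 after every five
# consecutive 1s; the receiver removes the 0 that follows five 1s.

def stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")      # stuffed bit
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1               # skip the stuffed 0 (a 1 here would
            run = 0              # signal end-of-frame or an error)
        i += 1
    return "".join(out)

data = "0111111111"              # nine consecutive 1s
assert stuff(data) == "01111101111"
assert unstuff(stuff(data)) == data
```

With stuffing in place, the flag pattern 01111110 can never appear inside the frame body, so it unambiguously marks frame boundaries.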
3. Clock-Based Framing
This approach to framing is used by the Synchronous Optical Network (SONET) standard.
Error Detection and Correction
Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected
Types of Error
In a single-bit error, only one bit is changed: a 0 is changed to a 1, or a 1 to a 0.
In a burst error, two or more bits in the data unit are in error; the corrupted bits are not necessarily consecutive.
The basic idea behind any error detection scheme is to add redundant information to a frame that can be used to determine if errors have been introduced. In other words, error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination as shown in below figure.
We say that the extra bits we send are redundant because they add no new information to the message. Instead, they are derived directly from the original message using some well-defined algorithm. Both the sender and the receiver know exactly what that algorithm is. The sender applies the algorithm to the message to generate the redundant bits. It then transmits both the message and those few extra bits. When the receiver applies the same algorithm to the received message, it should (in the absence of errors) come up with the same result as the sender. It compares the result with the one sent to it by the sender. If they match, it can conclude (with high likelihood) that no errors were introduced in the message during transmission. If they do not match, it can be sure that either the message or the redundant bits were corrupted, and it must take appropriate action, that is, discarding the message, or correcting it if that is possible.
Error Detection methods
1. Simple-parity check
2. Two dimensional parity check
In a simple parity check, a redundant bit, called a parity bit, is added to every data unit so that the total number of 1s becomes even (or odd, for odd parity).
The following figure illustrates this concept.
Suppose the sender wants to send the word world. In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent
11101110 11011110 11100100 11011000 11001001
Now suppose the word world is received by the receiver without being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4).
The data are accepted.
Now suppose the word world is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4).
The receiver knows that the data are corrupted, discards them, and asks for retransmission.
Simple parity check can detect all single-bit errors. It can detect burst errors only if the total number of errors in each data unit is odd.
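A minimal sketch of even parity using the "world" example above:

```python
# Even-parity sketch: append a parity bit so that each 8-bit unit
# carries an even number of 1s.

def add_parity(bits7: str) -> str:
    return bits7 + ("0" if bits7.count("1") % 2 == 0 else "1")

def check_parity(bits8: str) -> bool:
    return bits8.count("1") % 2 == 0

chars = ["1110111", "1101111", "1110010", "1101100", "1100100"]  # w o r l d
sent = [add_parity(c) for c in chars]
assert sent == ["11101110", "11011110", "11100100", "11011000", "11001001"]
assert all(check_parity(u) for u in sent)

# A single corrupted bit is detected...
assert not check_parity("11111110")
# ...but an even number of flipped bits in one unit goes unnoticed.
assert check_parity("11111100")
```

The last two assertions illustrate the limitation stated above: odd numbers of errors per unit are caught, even numbers are not.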
Two dimensional parity check
In two-dimensional parity check, a block of bits is divided into rows and a redundant row of bits is added to the whole block.
Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010
However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010
When the receiver checks the parity bits, some of the bits do not follow the even-parity rule and the whole block is discarded.
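The row-plus-column parity computation can be sketched as follows, reproducing the transmitted block from the example:

```python
# Two-dimensional even parity sketch: append a parity bit to each row,
# then append a parity row computed over the columns.

def two_d_parity(rows: list[str]) -> list[str]:
    block = [r + str(r.count("1") % 2) for r in rows]          # row parity
    parity_row = "".join(str(col.count("1") % 2)               # column parity
                         for col in zip(*block))
    return block + [parity_row]

block = two_d_parity(["1010100", "0011100", "1101110", "1110011"])
assert block == ["10101001", "00111001", "11011101", "11100111", "10101010"]

# Every row and every column of the result has an even number of 1s.
assert all(r.count("1") % 2 == 0 for r in block)
assert all(col.count("1") % 2 == 0 for col in zip(*block))
```

A burst error that disturbs several rows is likely to violate at least one column parity, which is why this scheme catches the burst in the example above.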
Cyclic Redundancy Check (CRC): parity checks are based on addition; CRC is based on binary division.
A sequence of redundant bits (a CRC or CRC remainder) is appended to the end of the data unit
These bits are later used in calculations to detect whether or not an error had occurred.
• On sender’s end, data unit is divided by a predetermined divisor; remainder is the CRC
• When appended to the data unit, it should be exactly divisible by a second predetermined binary number
• At receiver’s end, data stream is divided by same number
• If no remainder, data unit is assumed to be error-free
Deriving the CRC
• A string of n 0s is appended to the data unit, where n is one less than the number of bits in the predetermined divisor
• New data unit is divided by the divisor using binary division; remainder is CRC
• CRC of n bits replaces appended 0s at end of data unit
CRC Generator function
CRC Checker function
The divisor used in the CRC algorithm, which is (n+1) bits long, can also be viewed as the coefficients of a polynomial, called the generator polynomial, as shown below.
An (n+1)-bit divisor yields an n-bit CRC remainder. For example, the divisor 11001 corresponds to the polynomial x^4 + x^3 + 1.
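CRC generation and checking via binary (XOR) long division can be sketched with the divisor 11001 from the example. The 10-bit data word below is arbitrary.

```python
# CRC sketch: binary long division is just repeated XOR of the divisor
# against the leading 1s of the dividend.

def crc_remainder(data: str, divisor: str) -> str:
    n = len(divisor) - 1                 # number of CRC bits
    bits = [int(b) for b in data] + [0] * n   # append n zeros
    div = [int(b) for b in divisor]
    for i in range(len(data)):
        if bits[i]:
            for j in range(len(div)):
                bits[i + j] ^= div[j]    # one XOR step of the division
    return "".join(str(b) for b in bits[-n:])  # remainder = CRC

def crc_check(received: str, divisor: str) -> bool:
    n = len(divisor) - 1
    bits = [int(b) for b in received]
    div = [int(b) for b in divisor]
    for i in range(len(received) - n):
        if bits[i]:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return not any(bits[-n:])            # zero remainder: assume error-free

data = "1101011011"
crc = crc_remainder(data, "11001")
assert crc_check(data + crc, "11001")
# A single-bit error is always caught, since the divisor is not divisible by x.
corrupted = data[:3] + ("1" if data[3] == "0" else "0") + data[4:] + crc
assert not crc_check(corrupted, "11001")
```

In practice CRCs are computed with shift registers or table-driven code, but the XOR division above is the definition the hardware implements.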
A polynomial is
Used to represent CRC generator
Cost effective method for performing calculations quickly
Standard CRC polynomials
Name     Polynomial                                        Application
CRC-8    x^8 + x^2 + x + 1                                 ATM header
CRC-10   x^10 + x^9 + x^5 + x^4 + x^2 + 1                  ATM AAL
ITU-16   x^16 + x^12 + x^5 + 1                             HDLC
ITU-32   x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1   LANs
A polynomial is selected to have at least the following properties:
It should not be divisible by X.
It should not be divisible by (X+1).
The first condition guarantees that all burst errors of a length equal to the degree of polynomial are detected. The second condition guarantees that all burst errors affecting an odd number of bits are detected.
CRC can detect all single-bit errors.
CRC can detect all double-bit errors, provided the divisor has at least three 1s.
CRC can detect any odd number of errors, provided the divisor contains the factor (x + 1).
CRC can detect all burst errors of length less than or equal to the degree of the polynomial.
CRC detects larger burst errors with high probability. For example, CRC-12 detects 99.97% of burst errors of length 12 or more.
When the algorithm used to create the redundant bits is based on addition, the code is called a checksum.
The sender follows these steps:(Checksum generator)
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented and becomes the checksum.
4. The checksum is sent with the data.
The receiver follows these steps(Checksum checker)
1. The unit is divided into k sections, each of n bits.
2. All sections are added using one’s complement to get the sum.
3. The sum is complemented.
4. If the result is zero, the data are accepted: otherwise, rejected.
Checksum generator function
Suppose the following block of 16 bits is to be sent using a checksum of 8 bits:
10101001 00111001
The two sections are added using one's complement arithmetic:
10101001 + 00111001 = 11100010 (sum)
The sum is complemented to give the checksum: 00011101
Now the pattern sent is 10101001 00111001 00011101
Checksum checker function
Now suppose the receiver receives the pattern sent without any corruption, 10101001 00111001 00011101
the receiver performs the checksum checker function to ensure whether the received pattern is corrupted or not.
When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and shows that there is no error.
Complement 00000000 means that the pattern is OK.
Now suppose there is a burst error of length 5 introduced as
10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111 + 11111001 + 00011101 = 1 11000101 (partial sum with a carry)
Wrapping the carry around gives 11000110, whose complement is 00111001. Since the result is nonzero, the pattern is corrupted.
The checksum detects all errors involving an odd number of bits, and most errors involving an even number of bits.
If one or more bits of a segment are damaged and the corresponding bits of opposite value in a second segment are also damaged, the sums of these columns will not change and the receiver will not detect a problem.
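The generator and checker steps can be sketched using the 8-bit sections from the example above:

```python
# One's-complement checksum sketch over 8-bit sections, as in the
# worked example (the Internet checksum uses the same idea with
# 16-bit sections).

def ones_complement_sum(sections: list[int]) -> int:
    total = 0
    for s in sections:
        total += s
        total = (total & 0xFF) + (total >> 8)   # wrap the carry around
    return total

def checksum(sections: list[int]) -> int:
    return ~ones_complement_sum(sections) & 0xFF

data = [0b10101001, 0b00111001]
c = checksum(data)
assert c == 0b00011101                 # the checksum sent in the example

# Receiver: data plus checksum complements to zero when intact.
assert checksum(data + [c]) == 0

# The corrupted pattern from the example gives a nonzero result.
assert checksum([0b10101111, 0b11111001, 0b00011101]) == 0b00111001
```

The final assertion reproduces the nonzero complement 00111001 computed in the corrupted-pattern example, which tells the receiver to reject the data.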
Error Correction method
Error Correction can be handled in two ways.
In the first, when an error is discovered, the receiver asks the sender to retransmit the entire data unit; this is known as retransmission.
In the other, the receiver uses an error-correcting code, which automatically corrects certain errors. One such technique is the Hamming code.
To correct an error, one must know the exact position of the error, i.e., exactly which bit is in error. For this, additional redundant bits are needed. To calculate the number of redundant bits (r) required to correct m data bits, we must find the relationship between the two. With m bits of data and r bits of redundancy added to them, the length of the resulting code is m + r. The number of redundant bits for m data bits is the smallest r satisfying:
2^r >= m + r + 1
For example, if m is 7, the smallest r that satisfies this equation is 4:
2^4 = 16 >= 7 + 4 + 1 = 12
The following table shows some possible m values and the corresponding r values.
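The relationship 2^r >= m + r + 1 can be evaluated directly:

```python
# Smallest number of redundant bits r for m data bits,
# per the condition 2**r >= m + r + 1.

def redundant_bits(m: int) -> int:
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

assert redundant_bits(7) == 4                       # the example above
assert [redundant_bits(m) for m in (1, 2, 4, 11)] == [2, 3, 3, 4]
```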
Also, it is important to know the locations of the r bits in the m bits data. The r bits are placed in position 1,2,4,8,… (power of 2).
Suppose if m =7, then r must be 4 bits and total number of bits becomes 11(m+r), in which the r bits are placed in the locations 1, 2, 4, and 8 as shown below
In the resulting 11-bit unit, the combinations of locations used to calculate the r values are as follows:
r1: locations 1, 3, 5, 7, 9, 11
r2: locations 2, 3, 6, 7, 10, 11
r4: locations 4, 5, 6, 7
r8: locations 8, 9, 10, 11
Calculating r values
1. We place r bits in its appropriate location in the m+r length data unit.
2. We calculate the even parities for the various bit combinations
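A sketch of Hamming encoding and single-bit correction for m = 7 and r = 4, using the parity positions (1, 2, 4, 8) and location groups listed above. The data word is arbitrary.

```python
# Hamming(11,7) sketch: parity bits at positions 1, 2, 4, 8 give even
# parity over the location groups r1, r2, r4, r8 listed above.

def hamming_encode(data7: str) -> str:
    code = [0] * 12                       # index 1..11 used; index 0 unused
    d = iter(data7)
    for pos in range(1, 12):
        if pos not in (1, 2, 4, 8):
            code[pos] = int(next(d))      # data fills positions 3,5,6,7,9,10,11
    for p in (1, 2, 4, 8):
        group = [i for i in range(1, 12) if i & p and i != p]
        code[p] = sum(code[i] for i in group) % 2   # even parity over group
    return "".join(str(b) for b in code[1:])

def hamming_correct(code11: str) -> str:
    bits = [0] + [int(b) for b in code11]
    syndrome = sum(p for p in (1, 2, 4, 8)
                   if sum(bits[i] for i in range(1, 12) if i & p) % 2)
    if syndrome:
        bits[syndrome] ^= 1               # the syndrome names the bad position
    return "".join(str(b) for b in bits[1:])

word = hamming_encode("1001101")
assert hamming_correct(word) == word                       # no error
corrupted = word[:5] + str(1 - int(word[5])) + word[6:]    # flip position 6
assert hamming_correct(corrupted) == word                  # error corrected
```

The failing parity checks sum to the position of the flipped bit (here 2 + 4 = 6), which is exactly why the parity bits sit at powers of two.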
Even when error-correcting codes are used some errors will be too severe to be corrected. As a result, some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must somehow recover from these discarded (lost) frames. This is usually accomplished using a combination of two fundamental mechanisms—
• An acknowledgment (ACK for short) is a small control frame that a protocol sends back to its peer saying that it has received an earlier frame. By control frame we mean a header without any data.
• If the sender does not receive an acknowledgment after a reasonable amount of time, then it retransmits the original frame. This action of waiting a reasonable amount of time is called a timeout.
Piggybacking: To improve the use of network bandwidth, an acknowledgment method known as piggybacking is often used. In piggybacking, instead of sending a separate acknowledgment frame, the receiver waits until it has data frame to send to the sender and embeds the acknowledgment in that data frame.
Thus the link bandwidth is utilized better, and the overall speed of data transmission increases.
Propagation delay is defined as the delay between transmission and receipt of packets between hosts. Propagation delay can be used to estimate timeout period
The general strategy of using acknowledgments and timeouts to implement reliable delivery is sometimes called automatic repeat request (ARQ).
There are four different ARQ algorithms:
1. Stop-and-Wait ARQ
2. Sliding Window ARQ
3. Go back N ARQ
4. Selective Repeat ARQ
1. Stop and Wait ARQ
• The sender does not send the next frame until it is sure the receiver has received the previous one
• Data frames and ACK frames each carry a 1-bit sequence number, alternating between 0 and 1, which enables reliability
• Sequence numbers help avoid the problem of duplicate frames
• If the sender does not receive an acknowledgment within a reasonable amount of time, a timeout occurs and the sender retransmits the original frame
a) Normal operation b) The Original frame is lost c) The ACK is lost d) Timeout occurs
• The link capacity cannot be utilized effectively, since only one data frame or ACK frame can be sent at a time
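Stop-and-wait with an alternating 1-bit sequence number can be sketched as a small in-process simulation. Frame loss here stands in for a timeout; the loss rate and random seed are arbitrary choices for illustration.

```python
# Stop-and-wait sketch: 1-bit sequence numbers, a lossy link, and
# retransmission on (simulated) timeout. No real timers are used.

import random

random.seed(7)

def lossy_send(frame, loss_rate=0.3):
    """Return the frame, or None to model a lost frame (-> timeout)."""
    return None if random.random() < loss_rate else frame

def stop_and_wait(messages):
    delivered, seq, expected = [], 0, 0
    for msg in messages:
        while True:
            frame = lossy_send((seq, msg))        # send the data frame
            if frame is None:
                continue                          # timeout -> retransmit
            if frame[0] == expected:              # receiver: new frame
                delivered.append(frame[1])
                expected ^= 1
            # duplicates (frame[0] != expected) are discarded, but the
            # receiver still acknowledges them
            ack = lossy_send(frame[0])            # send the ACK frame
            if ack == seq:
                seq ^= 1                          # ACK received: next frame
                break
    return delivered

msgs = ["frame0", "frame1", "frame2"]
assert stop_and_wait(msgs) == msgs    # delivered exactly once, in order
```

Note how a lost ACK causes a retransmission that the receiver recognizes as a duplicate via the sequence number, which is the scenario (c) above.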
2. Sliding Window
The sliding window algorithm works as follows.
1. The sender assigns a sequence number, denoted SeqNum, to each frame.
2. The sender maintains three variables:
a. The send window size, denoted SWS, gives the upper bound on the number of outstanding (unacknowledged) frames that the sender can transmit;
b. LAR denotes the sequence number of the last acknowledgment received; and
c. LFS denotes the sequence number of the last frame sent.
3. The sender maintains the following invariant:
LFS − LAR ≤ SWS
This situation is illustrated in below figure
When an acknowledgment arrives, the sender moves LAR to the right, thereby allowing the sender to transmit another frame. The sender associates a timer with each frame it transmits, and it retransmits the frame should the timer expire before an ACK is received.
4. The receiver maintains the following three variables:
a. The receive window size, denoted RWS, gives the upper bound on the number of out-of-order frames that the receiver is willing to accept;
b. LAF denotes the sequence number of the largest acceptable frame;
c. LFR denotes the sequence number of the last frame received.
5. The receiver also maintains the following invariant:
LAF − LFR ≤ RWS
6. When a frame with sequence number SeqNum arrives, the receiver accepts it if LFR < SeqNum ≤ LAF and discards it otherwise. Note that it makes no sense to choose RWS > SWS, since it is impossible for more than SWS frames to arrive out of order.
1. Reliable transmission: The algorithm can be used to reliably deliver messages across an unreliable network
2. Frame order: The sliding window algorithm can also preserve the order in which frames are transmitted, since each frame carries a sequence number.
3. Flow control: The receiver not only acknowledges frames it has received, but also informs the sender of how many frames it has room to receive
4. The link capacity can be utilized effectively since multiple frames can be transmitted at a time
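The sender-side variables and invariant above can be sketched in Python; the class and method names here are illustrative, not from any protocol library:

```python
# A minimal sketch of the sliding-window sender state, assuming a send
# window size (SWS) of 4. Transmission and ACK arrival are simulated.

class SlidingWindowSender:
    def __init__(self, sws):
        self.SWS = sws   # send window size
        self.LAR = -1    # last acknowledgment received
        self.LFS = -1    # last frame sent

    def can_send(self):
        # Invariant after sending must hold: LFS - LAR <= SWS
        return self.LFS - self.LAR < self.SWS

    def send_frame(self):
        assert self.can_send()
        self.LFS += 1
        return self.LFS  # sequence number of the frame just sent

    def ack_received(self, seq_num):
        # Moving LAR to the right opens the window for more frames
        if seq_num > self.LAR:
            self.LAR = seq_num

sender = SlidingWindowSender(sws=4)
sent = [sender.send_frame() for _ in range(4)]   # frames 0..3
assert not sender.can_send()                      # window is full
sender.ack_received(0)
assert sender.can_send()                          # one slot opened
```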
Selective Repeat ARQ: upon encountering a faulty frame, the receiver requests the retransmission of that specific frame. Since additional frames may have followed the faulty frame, the receiver needs to be able to temporarily store these frames until it has received a corrected version of the faulty frame, so that frame order is maintained.
Go Back N ARQ
A simpler method, Go-Back-N, has the receiver request the retransmission of the faulty frame as well as all succeeding frames (i.e., all frames transmitted after the faulty frame).
The advantage of Selective Reject over Go-Back-N is that it leads to better throughput, because only the erroneous frames are retransmitted.
Go-Back-N has the advantage of being simpler to implement and requiring less memory.
Medium access – CSMA – Ethernet – Token ring – FDDI – Wireless LAN – Bridges and Switches
Ethernet is the most successful local area networking technology; it is a working example of the more general Carrier Sense Multiple Access with Collision Detect (CSMA/CD) approach.
CSMA/CD: Ethernet’s Media Access Control (MAC) policy
o CS = carrier sense
Send only if medium is idle
o MA = multiple access
o CD = collision detection
Stop sending immediately if collision is detected
Most popular packet-switched LAN technology
–unique, 48-bit unicast address assigned to each adapter
–an address of all 1s is used for broadcast
–the address is multicast if the first bit is 1
Bandwidths: 10Mbps, 100Mbps, 1Gbps
Max bus length: 2500m
500m segments with 4 repeaters
Bus and Star topologies are used to connect hosts
Hosts attach to network via Ethernet transceiver or hub or switch
Detects line state and sends/receives signals
Hubs are used to facilitate shared connections
All hosts on an Ethernet are competing for access to the medium
A transceiver—a small device directly attached to the tap—detects when the line is idle and drives the signal when the host is transmitting. It also receives incoming signals. The transceiver is, in turn, connected to an Ethernet adaptor, which is plugged into the host.
Repeater: Multiple Ethernet segments can be joined together by repeaters. A repeater is a device that forwards digital signals, much like an amplifier forwards analog signals. Any signal placed on the Ethernet by a host is broadcast over the entire network
10Base2 can be constructed from a thinner cable called thin-net; segments are limited to 200 m in length
10Base5 can be constructed from a thick cable called thick-net; segments are limited to 500 m in length
10BaseT can be constructed from twisted-pair cable; “T” stands for twisted pair, and segments are limited to under 100 m in length
The “10” in 10Base2 means that the network operates at 10 Mbps, “Base” refers to the fact that the cable is used in a baseband system, and the “2” means that a given segment can be no longer than 200 m
Access Method: CSMA/CD
Carrier Sense: This protocol is applicable to a bus topology. Before a station can transmit, it listens to the channel to see whether any other station is already transmitting. If the station finds the channel idle, it attempts to transmit; otherwise, it waits for the channel to become idle. If two or more stations find the channel idle and simultaneously attempt to transmit, a collision occurs. When a collision occurs, each station suspends its transmission and re-attempts after a random period of time.
Use of a random wait period reduces the chance of the collision recurring. The following flow chart depicts this technique.
If line is idle…
–transmit immediately
–upper bound message size of 1500 bytes
–minimum frame is 64 bytes (header + 46 bytes of data)
If line is busy…
–wait until idle and then transmit immediately
If collision is detected…
–send jam signal, and then stop transmitting frame
–delay for exponential back off time and try again
Exponential Back off
–1st time: 0 or 51.2us
–2nd time: 0, 51.2, 102.4, or 153.6us
–3rd time: 0, 51.2, …, or 358.4us (k = 0..7)
–nth time: k x 51.2us, for randomly selected k = 0..2^n – 1
–give up after several tries (usually 16)
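The back off schedule can be sketched in Python; the function name is illustrative, and the cap of k at 1023 (2^10 − 1) follows the 802.3 practice of not growing the interval past ten doublings:

```python
# A sketch of Ethernet's exponential backoff: after the nth collision the
# adaptor picks k uniformly from 0..2^n - 1 (capped at 1023) and waits
# k slot times, where one slot time is 51.2 us.

import random

SLOT_TIME_US = 51.2   # one Ethernet slot time, in microseconds
MAX_ATTEMPTS = 16

def backoff_slots(attempt):
    """Pick the random slot count k after the given (1-based)
    collision attempt; give up after 16 attempts."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("giving up after 16 attempts")
    return random.randrange(0, 2 ** min(attempt, 10))

# 1st collision: wait 0 or 51.2 us; 2nd: 0, 51.2, 102.4, or 153.6 us
assert backoff_slots(1) in (0, 1)
assert backoff_slots(2) in (0, 1, 2, 3)
delay_us = backoff_slots(3) * SLOT_TIME_US   # actual delay in microseconds
```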
Suppose host A begins transmitting a frame at time t, as shown in figure a.
It takes one link latency d for the frame to reach host B; thus it arrives at B at time t+d, as shown in figure b.
Suppose that an instant before host A’s frame arrives, B begins transmitting its own frame, which collides with A’s frame as shown in figure c.
This collision is detected by host B. Host B sends a jamming signal known as a ‘runt frame’, which consists of a 64-bit preamble followed by a 32-bit jamming sequence. B’s runt frame arrives at A at time t+2d; that is, A sees the collision at time t+2d.
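The t+2d bound explains Ethernet’s 64-byte minimum frame: at 10 Mbps a sender must still be transmitting when the worst-case round trip of 51.2 us elapses, or it could miss the collision. A quick check:

```python
# Why the Ethernet minimum frame is 64 bytes: a sender must still be
# transmitting at time t + 2d (worst case, bounded at 51.2 us) so that
# it can itself detect the collision.

BANDWIDTH_BPS = 10_000_000   # 10 Mbps
ROUND_TRIP_NS = 51_200       # worst-case 2d = 51.2 us, in nanoseconds

# Bits placed on the wire during one worst-case round trip:
bits_on_the_wire = BANDWIDTH_BPS * ROUND_TRIP_NS // 1_000_000_000
print(bits_on_the_wire // 8)  # minimum frame size in bytes
```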
Exponential Back off strategy
Once the adaptor has detected the collision, it stops transmission, and it waits a certain amount of time and tries again transmission. Each time it tries to transmit bit fails, then the adapter doubles the amount of time it waits before trying again. This strategy of doubling the delay interval between each transmission attempt is known as ‘exponential back off’.
Preamble allows the receiver to synchronize with the signal. It is 10101010.
Src addr and Dest addr are the addresses of the source and destination, each 48 bits long.
Type is a demultiplexing key; it identifies the higher-level protocol to which the frame should be delivered.
Body is the field that holds the data. A maximum of 1500 bytes of data can be carried; the minimum data length is 46 bytes so that collisions can be detected.
CRC is a 32-bit field used for error detection
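As an illustration of the 32-bit CRC in the trailer, Python’s zlib.crc32 implements the same CRC-32 polynomial used by 802.3; flipping a single bit changes the checksum, which is how the receiver detects corruption:

```python
# Illustrating the 32-bit CRC carried in the Ethernet trailer using
# zlib.crc32, which uses the same CRC-32 polynomial as 802.3.

import zlib

payload = b"hello, ethernet" + b"\x00" * 31   # padded to 46 bytes
crc = zlib.crc32(payload)

corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]  # flip one bit
assert zlib.crc32(corrupted) != crc   # receiver detects the error
assert zlib.crc32(payload) == crc     # intact frame passes the check
```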
1. It works better under lightly loaded conditions
2. No flow control in Ethernet; this is handled by the upper layers
3. New host can be added easily to the network
4. Very easy to administer and maintain
5. Relatively very cheap
6. Typical round-trip delays are closer to 5us than to the worst-case 51.2us
Token Ring (IEEE 802.5)
• Specified by the IEEE 802.5 standard.
• Set of nodes are connected in a ring.
• Data always flows in one direction
• Node receiving frames from upstream neighbor passes it to downstream neighbor.
• Distributed algorithm dictates when each node can transmit.
• All nodes see all frames
• Destination saves a copy of the frame when it flows past.
• Token used to control who transmits.
• Electromechanical relays are used for protection against failures: a single node failure should not cause the entire ring to fail.
– One approach: change one bit in the token, which transforms it into a “start-of-frame sequence”, and append the frame for transmission.
– Second approach: station claims token by removing it from the ring.
• Frame circles the ring and is removed by the transmitting station.
• Each station interrogates passing frame, if destined for station, it copies the frame into local buffer.
• After station has completed transmission of the frame, the sending station should release the token on the ring.
Access control: Token passing
• Token circulates around the ring.
• The token allows a host to transmit — contains a special sequence of bits.
• When a node that wishes to send sees the token, it
– picks up the token
– inserts its own frame instead on the ring.
• When frame traverses ring and returns, the sender takes frame off and reinserts token.
• Transmitted frame contains dest addr.
• Each node looks at the frame — if frame meant for the node, copy frame onto buffer, otherwise just forwards it.
• Sending node responsible for removal of frame from ring and releases the token back on the ring
Token Holding Time
How long can a node hold onto the token? This is dictated by the token holding time or THT.
• If lightly loaded network you may allow a node to hold onto it as long as it wants — very high utilization.
• In 802.5, THT is specified to be 10 milliseconds.
Token Holding Time (THT)
–defined as upper limit on how long a station can hold the token
Token Rotation Time (TRT)
–defined as how long it takes the token to traverse the ring.
Claim tokens are used to elect the monitor; if more than one station sends a claim token, the node with the highest address wins.
Maintaining the token by the monitor
• The monitor ensures the presence of the token, which may get corrupted or lost.
If no token is seen within
TRT = Num_stations X THT + Ring Latency
the monitor assumes the token has been lost and creates a new one.
Other Monitor Functions
• Check for corrupted frames
• Check for orphaned frames. An orphaned frame is a frame inserted by a node that died while the frame was circulating.
– The monitor sets a monitor bit to 1 in each passing frame; if it sees a frame with the bit already set, it knows the frame keeps circulating and removes it.
• It bypasses malfunctioning stations
Frame Format in 802.5
Token Frame Format
Start of frame: indicates the start of the frame
Control: it identifies the frame type i.e. token or a data frame.
Dest addr: contains the physical address of the destination
Src addr: contains the physical address of the source.
Body: Each data frame carries up to 4500 bytes.
CRC: is used for error detection
End of frame: This represents end of the Token.
Status: the frame status (FS) field; it is included only in data/command frames and consists of one and a half bytes. (FDDI’s FS field is similar to Token Ring’s.)
• Fiber Distributed Data Interface
• Similar to Token ring but uses — optical fiber. The copper version is known as CDDI.
• In FDDI, token is absorbed by station and released as soon as it completes the frame transmission
• FDDI uses a ring topology of multimode optical fiber transmission links operating at 100 Mbps, spanning up to 200 km and permitting up to 500 stations.
• Data is encoded using a 4B/5B encoder
• Two rings instead of one; second used if first fails.
• Two independent rings transmitting data in opposite direction
o FDDI can tolerate single node or link failures.
In case of failure of a node or a fiber link, the ring is restored by wrapping the primary ring to the secondary ring as shown in Figure b. If a station on the dual ring fails or is powered down, or if the cable is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single ring. When the ring is wrapped, the dual-ring topology becomes a single-ring topology. Data continues to be transmitted on the FDDI ring without performance impact during the wrap condition. Network operation continues for the remaining stations on the ring. When two or more failures occur, the FDDI ring segments into two or more independent rings that are incapable of communicating with each other.
Single Access Stations
• Connecting every node to two cables is expensive, so FDDI also allows nodes to attach using a single cable
– called Single Access Stations or SAS.
as shown in below figure.
• A Concentrator is used to attach several SASs to a ring.
Access control: Timed Token Algorithm for FDDI
• Each node measures TRT.
• If measured TRT > TTRT,
– then the token is late, so the station does not transmit data
• If measured TRT < TTRT,
– then the token is early; the station holds the token for the difference between TTRT and the measured TRT and can transmit data.
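The timed-token rule above can be sketched as follows (names illustrative; times in arbitrary units):

```python
# A sketch of the FDDI timed-token rule: a station may transmit
# asynchronous data only while the token is early.

def token_holding_time(measured_trt, ttrt):
    """Return how long this station may hold the token for
    asynchronous traffic: zero if the token is late, otherwise
    the difference between TTRT and the measured TRT."""
    if measured_trt >= ttrt:
        return 0                 # token is late: no async data
    return ttrt - measured_trt   # token is early: use the slack

assert token_holding_time(measured_trt=12, ttrt=10) == 0   # late token
assert token_holding_time(measured_trt=6, ttrt=10) == 4    # early token
```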
Division into traffic classes
• Traffic is divided into two classes
1. Synchronous traffic
– Traffic is delay sensitive
– station transmits data whether token is late or early
– But synchronous cannot exceed one TTRT in one TRT
2. Asynchronous traffic
– Station transmits only if token is early
• Every node monitors ring for valid token. If operations are correct, a node must observe a token or a data frame every so often.
• The greatest idle time equals the ring latency plus the frame transmission time.
If nothing is seen during this interval, a node suspects something is wrong on the ring and sends a “claim frame”.
• Then, node bid (propose) for the TTRT using the claim frame.
The bidding process
• A node can send a claim frame without having the token when it suspects failure
• If its claim frame comes back, the node knows that its TTRT bid was the lowest, and it is now responsible for inserting the token on the ring.
• When a node receives a claim frame, it checks whether the TTRT bid is lower than its own.
• If yes, it resets its local definition of TTRT and simply forwards the claim frame.
• Otherwise, it removes the claim frame, enters the bidding process, and puts its own claim frame on the ring.
• When there are ties, highest address wins.
• In the worst case:
– First async. traffic use one TTRT worth of time.
– Next sync. traffic use one TTRT worth of time.
So, TRT at a node = 2 * TTRT
• It is important to note that if Sync. traffic was transmitted first and used TTRT, no async. traffic can be sent.
Difference between Token Ring and FDDI
Token Ring                          FDDI
• Shielded twisted pair             • Optical fiber
• 4, 16 Mbps                        • 100 Mbps
• No reliability specified          • Reliability specified (dual ring)
• Differential Manchester encoding  • 4B/5B encoding
• Centralized clock                 • Distributed clocking
• Access control: Token Passing     • Access control: Timed Token algorithm
• Delayed release of token          • Early release of token
Wireless LAN – IEEE 802.11
802.11 was designed to run over three different physical media—two based on spread spectrum radio and one based on diffused infrared.
The idea behind spread spectrum is to spread the signal over a wider frequency band than normal, so as to minimize the impact of interference from other devices
Frequency Hopping Spread Spectrum (FHSS)
Frequency hopping is a spread spectrum technique that involves transmitting the signal over a random sequence of frequencies; that is, first transmitting at one frequency, then a second, then a third, and so on.
The receiver uses the same algorithm as the sender, initialized with the same seed, and hence is able to hop along in sync with the transmitter and correctly receive the frame.
Direct Sequence Spread Spectrum (DSSS)
The DSSS encoder spreads the data across a broad range of frequencies using a mathematical key.
The receiver uses the same key to decode the data.
It sends redundant copies of the encoded data to ensure reception.
The infrared physical layer uses infrared light to transmit binary data with a modulation technique known as 16-pulse position modulation (PPM).
Access control: CSMA/CA (Carrier Sense Multiple Access / Collision Avoidance)
One might expect a wireless protocol to follow exactly the same algorithm as Ethernet: wait until the link becomes idle before transmitting, and back off should a collision occur.
Consider the situation depicted in figure, where each of four nodes is able to send and receive signals that reach just the nodes to its immediate left and right. For example, B can exchange frames with A and C but it cannot reach D, while C can reach B and D but not A.
• a carrier sensing scheme is used.
• a node wishing to transmit data has to first listen to the channel for a predetermined amount of time to determine whether or not another node is transmitting on the channel within the wireless range. If the channel is sensed “idle,” then the node is permitted to begin the transmission process. If the channel is sensed as “busy,” the node defers its transmission for a random period of time. This is the essence of both CSMA/CA and CSMA/CD. In CSMA/CA, once the channel is clear, a station sends a signal telling all other stations not to transmit, and then sends its packet.
Assume that node A has data to transfer to node B. Node A initiates the process by sending a Request to Send frame (RTS) to node B. The destination node (node B) replies with a Clear to Send frame (CTS).
After receiving CTS, node A sends data. After successful reception, node B replies with an acknowledgement frame (ACK).
If node A has to send more than one data fragment, it has to wait a random time after each successful data transfer and compete with adjacent nodes for the medium using the RTS/CTS mechanism.
To sum up, a successful data transfer (A to B) consists of the following sequence of frames:
• “Request To Send” frame (RTS) from A to B
• “Clear To Send” frame (CTS) from B to A
• “Data frame” (Data) from A to B
• Acknowledgement frame (ACK) from B to A.
The following flow graph explains the CSMA/CA technique
Hidden nodes problem
Suppose both A and C want to communicate with B, so they each send it a frame. A and C are unaware of each other, since their signals do not carry that far. These two frames collide with each other at B, but A and C are not aware of this collision. A and C are said to be hidden nodes with respect to each other.
Solution for the hidden node problem
When node A wants to send a packet to node B
Node A first sends a Request-to-Send (RTS) to B
On receiving RTS
Node B responds by sending Clear-to-Send (CTS)
provided node B is able to receive the packet
When a node C sees a CTS, it should keep quiet for the duration of the transfer
Exposed node problem
A related problem, called the exposed node problem, occurs under the following circumstances.
B talks to A
C wants to talk to D
C senses channel and finds it to be busy
So, C stays quiet
B is sending to A in figure. Node C is aware of this communication because it hears B’s transmission. It would be a mistake for C to conclude that it cannot transmit to anyone just because it can hear B’s transmission.
Solution for Exposed Terminal Problem
Sender transmits Request to Send (RTS)
Receiver replies with Clear to Send (CTS)
See CTS – Stay quiet
See RTS, but no CTS – then O.K to transmit
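The RTS/CTS visibility rule above can be sketched as a predicate (a toy model; real 802.11 also uses a duration field in the RTS/CTS frames to set the quiet period):

```python
# A sketch of the RTS/CTS rule: a node that overhears a CTS is within
# range of the receiver and must stay quiet; a node that heard only the
# RTS (no CTS) is outside the receiver's range and may transmit.

def may_transmit(heard_rts, heard_cts):
    """Return True if a bystander node may transmit during the
    ongoing exchange."""
    return not heard_cts

# C overhears B's RTS but not A's CTS (exposed node): OK to transmit.
assert may_transmit(heard_rts=True, heard_cts=False)
# A node that overhears the CTS is near the receiver: must stay quiet.
assert not may_transmit(heard_rts=False, heard_cts=True)
```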
When node B receives a data packet from node A, node B sends an Acknowledgement (ACK)
If node A fails to receive an ACK
Retransmit the packet
The frame contains the source and destination node addresses, each of which is 48 bits long; up to 2312 bytes of data; and a 32-bit CRC. The Control field contains three subfields of interest (not shown), including a 6-bit Type field that indicates whether the frame carries data or is an RTS or CTS frame. Addr1 identifies the target node, Addr2 identifies the source node, and Addr3 identifies the intermediate destination.
–forwards packets from input port to output port
–port selected based on address in packet header
–adding more hosts does not degrade existing connections
–it covers a large geographic area (tolerates latency)
–it supports large numbers of hosts (scalable bandwidth)
I. Datagram switching
II. Virtual Circuit switching
III. Source Routing switching
I Datagram Switching
No connection setup phase
Each packet forwarded independently
Sometimes called connectionless model
Packets may follow different paths to reach their destination
Receiving station may need to reorder
Switches decide the route based on the destination address in the packet header
Analogy: postal system
Each switch maintains a forwarding table
Switching table for switch2
II Virtual Circuit Switching
Sometimes called connection-oriented model
Relationship between all packets in a message or session is preserved
Single route is chosen between sender and receiver at beginning of session
Call setup establishes virtual circuit; call teardown deletes the virtual circuit
All packets travel the same route in order
Approach is used in WANs, Frame Relay, and ATM
Each switch maintains a VC table
The VC table in a single switch contains
o a Virtual circuit identifier (VCI)
o an incoming interface on which packets for this VC arrives
o an outgoing interface on which packets for this VC leave
o potentially different VCI for outgoing packets
III Source Routing
A third approach to switching, which uses neither virtual circuits nor conventional datagrams, is known as source routing. The name derives from the fact that all the information about network topology required to switch a packet across the network is provided by the source host. The idea is to assign a number to each output of each switch and to place those numbers in the header of the packet. The switching function is then very simple: for each packet that arrives on an input, the switch reads the port number in the header and transmits the packet on that output. There will generally be more than one switch in the path between the sending and the receiving host; in such a case the header needs to contain enough information to allow every switch in the path to determine which output the packet should be placed on.
In this example, the packet needs to traverse three switches to get from host A to host B. At switch 1, it needs to exit on port 1, at the next switch it needs to exit at port 0, and at the third switch it needs to exit at port 3. Thus, the original header when the packet leaves host A contains the list of ports (3, 0, 1), where we assume that each switch reads the rightmost element of the list. To make sure that the next switch gets the appropriate information, each switch rotates the list after it has read its own entry. Thus, the packet header as it leaves switch 1 en route to switch 2 is now (1, 3, 0); switch 2 performs another rotation and sends out a packet with (0, 1, 3) in the header.
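The header rotation in this example can be traced in a few lines of Python (the forward function is illustrative):

```python
# Source-routing header rotation: each switch reads the rightmost port
# number, transmits on that port, and rotates the list so the next
# switch's entry becomes rightmost.

def forward(header):
    """Return (output_port, rotated_header) for one switch hop."""
    port = header[-1]                  # read the rightmost entry
    rotated = (port,) + header[:-1]    # rotate it to the front
    return port, rotated

header = (3, 0, 1)                # as the packet leaves host A
port, header = forward(header)
assert (port, header) == (1, (1, 3, 0))   # switch 1: exit on port 1
port, header = forward(header)
assert (port, header) == (0, (0, 1, 3))   # switch 2: exit on port 0
port, header = forward(header)
assert port == 3                          # switch 3: exit on port 3
```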
Bridges and Extended LANs
A class of switches is used to forward packets between shared-media LANs such as Ethernets. Such switches are sometimes known as LAN switches; historically they have also been referred to as bridges. Bridges operate in both the physical and data link layers.
LANs have physical limitations (e.g 2500m).
A bridge is used to connect two or more LANs, as shown below. It uses the ‘store and forward’ technique.
A collection of LANs connected by one or more bridges is usually said to form an extended LAN.
Learning Bridges/Transparent bridge
A bridge maintains a forwarding table to decide where to send the frames it receives. The forwarding table contains two fields: a host address field, and the number of the bridge port on which that host can be reached. For example,
Each frame carries a global address, and the bridge decides which output to send a frame on by looking up that address in the table. The bridge inspects the source address of every frame it receives and records, in the table, the port on which the frame arrived. When a frame is received, the bridge examines the destination address and checks the forwarding table:
• If the destination address is in the table, the bridge forwards the frame on the output port recorded for that host.
• If the destination address is not in the table, the bridge floods the frame on all of its other output ports.
In either case, the bridge uses the frame’s source address to update the table. Thus the bridge learns its table entries and uses them to decide whether to forward or discard each frame.
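A minimal sketch of this learning and forwarding behavior, with illustrative names:

```python
# A sketch of a learning bridge: the table maps host addresses to bridge
# ports; frames to unknown destinations are flooded on all other ports.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}   # host address -> port

    def receive(self, src, dst, in_port):
        # Learn: the source must be reachable via the arrival port.
        self.table[src] = in_port
        if dst in self.table:
            out = self.table[dst]
            # Never forward a frame back onto the port it arrived on.
            return [] if out == in_port else [out]
        # Unknown destination: flood on every other port.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
assert bridge.receive(src="A", dst="B", in_port=1) == [2, 3]  # flood
assert bridge.receive(src="B", dst="A", in_port=2) == [1]     # A learned
assert bridge.receive(src="C", dst="B", in_port=1) == [2]     # B learned
```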
Learning bridge example
• It ensures the topology has no loops
• Spanning tree
• Sub-graph that covers all vertices but contains no cycles
• Links not in the spanning tree do not forward frames
Constructing a Spanning Tree
• Need a distributed algorithm
– Switches cooperate to build the spanning tree
Key ingredients of the algorithm
• Switches need to elect a “root”
– The switch with the smallest identifier
• Each switch identifies if its interface is on the shortest path from the root
– and excludes that interface from the tree if not
• Messages (Y, d, X)
– From node X
– Claiming Y is the root
– And the distance is d
Steps in Spanning Tree Algorithm
• Initially, each switch thinks it is the root
– Switch sends a message out every interface
– … identifying itself as the root with distance 0
– Example: switch X announces (X, 0, X)
• Switches update their view of the root
– Upon receiving a message, check the root id
– If the new id is smaller, start viewing that switch as root
• Switches compute their distance from the root
– Add 1 to the distance received from a neighbor
– Identify interfaces not on a shortest path to the root
– … and exclude them from the spanning tree
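The message-processing rule above can be sketched as follows; tie-breaking on sender address and the actual interface pruning are omitted for brevity:

```python
# A sketch of how one switch processes a spanning-tree message (Y, d, X):
# adopt a new root if Y is smaller, or a shorter path to the same root.

class Switch:
    def __init__(self, ident):
        self.ident = ident
        self.root = ident      # initially each switch thinks it is root
        self.distance = 0

    def handle(self, claimed_root, dist, sender):
        # 'sender' would be used for tie-breaking (omitted here).
        better = (claimed_root < self.root or
                  (claimed_root == self.root and dist + 1 < self.distance))
        if better:
            self.root = claimed_root
            self.distance = dist + 1   # one hop farther than the sender
        return better

s4 = Switch(4)
assert s4.handle(2, 0, sender=2)        # (2, 0, 2): adopt 2 as root
assert (s4.root, s4.distance) == (2, 1)
assert not s4.handle(2, 1, sender=7)    # (2, 1, 7): longer path, ignore
assert s4.handle(1, 2, sender=2)        # (1, 2, 2): adopt 1 as root
assert (s4.root, s4.distance) == (1, 3)
```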
Example From Switch #4’s Viewpoint
• Switch #4 thinks it is the root
– Sends (4, 0, 4) message to 2 and 7
• Then, switch #4 hears from #2
– Receives (2, 0, 2) message from 2
– … and thinks that #2 is the root
– And realizes it is just one hop away
• Then, switch #4 hears from #7
– Receives (2, 1, 7) from 7
– And realizes this is a longer path
– So, prefers its own one-hop path
– And removes 4-7 link from the tree
• Switch #2 hears about switch #1
– Switch 2 hears (1, 1, 3) from 3
– Switch 2 starts treating 1 as root
– And sends (1, 2, 2) to neighbors
• Switch #4 hears from switch #2
– Switch 4 starts treating 1 as root
– And sends (1, 3, 4) to neighbors
• Switch #4 hears from switch #7
– Switch 4 receives (1, 3, 7) from 7
– And realizes this is a longer path
– So, prefers its own three-hop path
– And removes 4-7 link from the tree
Circuit switching vs. packet switching / Packet switched networks – IP – ARP – RARP – DHCP – ICMP – Queuing discipline – Routing algorithms – RIP – OSPF – Subnetting – CIDR – Interdomain routing – BGP – Ipv6 – Multicasting – Congestion avoidance in network layer
Circuit switching Vs Packet switching
The phases of circuit switching
• Dedicated communication path between two stations
• Three phases: circuit establishment, data transfer, circuit disconnect
• Must have switching capacity and channel capacity to establish connection
• Must have intelligence to work out routing
The pros and cons of circuit switching
— Channel capacity dedicated for duration of connection
— If no data, capacity wasted
• Set up (connection) takes time
• Once connected, transfer is transparent
• Developed for voice traffic (phone)
Types of packet switching techniques
• Datagram
• Virtual circuit
Datagram Packet Switching
• Each packet treated independently
• Packets can take any practical route
• Packets may arrive out of order
• Packets may go missing
• Up to receiver to re-order packets and recover from missing packets
• no call setup at network layer
Virtual Circuit Packet Switching
• Preplanned route established before any packets sent
• Call request and call accept packets establish connection (handshake)
• Each packet contains a virtual circuit identifier instead of destination address
• No routing decisions required for each packet
• Clear request to drop circuit
• Not a dedicated path
• used to setup, maintain teardown VC
• used in ATM, frame-relay, X.25
• not used in today’s Internet
Compare and contrast datagram and virtual circuit approaches for packet switching
• Virtual circuits
— Network can provide sequencing and error control
— Packets are forwarded more quickly
• No routing decisions to make
— Less reliable
• Loss of a node loses all circuits through that node
• Datagram
— No call setup phase
• Better if few packets
— More flexible
• Routing can be used to avoid congested parts of the network
An internetwork is an interconnected collection of networks. An internetwork is often referred to as a “network of networks” because it is made up of lots of smaller networks. In this figure, we see Ethernets, an FDDI ring, and a point-to-point link.
Each of these is a single-technology network. The nodes that interconnect the networks are called routers. They are also sometimes called gateways. The Internet Protocol is the key tool used today to build scalable, heterogeneous internetworks. The IP datagram is fundamental to the Internet Protocol. A datagram is a type of packet that happens to be sent in a connectionless manner over a network. Every datagram carries enough information to let the network forward the packet to its correct destination; there is no need for any advance setup mechanism to tell the network what to do when the packet arrives.
To build an internetwork, we first define its service model, that is, the host-to-host services we want to provide. The IP service model can be thought of as having two parts:
• an addressing scheme, which provides a way to identify all hosts in the internetwork,
• a datagram (connectionless) model of data delivery.
This service model is sometimes called best effort because, although IP makes every effort to deliver datagrams, it makes no guarantees.
Datagram model of Delivery
The IP datagram is fundamental to the Internet Protocol; it is a connectionless model of data delivery. Every datagram carries enough information for the network to forward the packet to its correct destination; there is no need for any advance setup mechanism to tell the network what to do when the packet arrives. If something goes wrong and the packet gets lost, corrupted, or otherwise fails to reach its intended destination, the network does nothing; it made its best effort. For this reason, IP is sometimes called an unreliable service.
Packet Format IPv4
Version: 4 bits
The Version field indicates the format of the internet header. This document describes version 4.
HLen: 4 bits
Internet Header Length is the length of the internet header in 32 bit words, and thus points to the beginning of the data. Note that the minimum value for a correct header is 5.
Type of Service: 8 bits
The Type of Service provides an indication of the abstract parameters of the quality of service desired.
Total Length: 16 bits
Total Length is the length of the datagram, measured in octets, including internet header and data
Identification: 16 bits
An identifying value assigned by the sender to aid in assembling the fragments of a datagram.
Flags: 3 bits
Various Control Flags.
Fragment Offset: 13 bits
This field indicates where in the datagram this fragment belongs.
Time to Live: 8 bits
This field indicates the maximum time the datagram is allowed to remain in the internet system. If this field contains the value zero, then the datagram must be destroyed.
Protocol: 8 bits
This field indicates the next level protocol used in the data portion of the internet datagram
Source Address: 32 bits
The source address
Destination Address: 32 bits
The destination address
Header Checksum: 16 bits
A checksum on the header only, used for error detection.
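For reference, the IPv4 header checksum is the 16-bit one’s complement of the one’s-complement sum of the header’s 16-bit words. A minimal sketch (the sample header bytes are illustrative):

```python
# A sketch of the Internet checksum used for the IPv4 header: sum the
# header as 16-bit words with end-around carry, then complement.

def internet_checksum(data):
    if len(data) % 2:
        data += b"\x00"   # pad to a 16-bit boundary
    total = sum((data[i] << 8) | data[i + 1]
                for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry
    return ~total & 0xFFFF

# A 20-byte header with the checksum field zeroed out:
header = bytes.fromhex("45000073000040004011" "0000" "c0a80001" "c0a800c7")
checksum = internet_checksum(header)

# Verification: a header carrying its correct checksum sums to zero.
patched = header[:10] + checksum.to_bytes(2, "big") + header[12:]
assert internet_checksum(patched) == 0
```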
Options: may or may not appear in datagrams.
Padding: internet header padding is used to ensure that the internet header ends on a 32-bit boundary.
IP Fragmentation and Reassembly
In internetwork, each hardware technology specifies the maximum amount of data that a frame can carry. This is called the Maximum Transmission Unit (MTU).
IP uses a technique called fragmentation to solve the problem of heterogeneous MTUs. When a datagram is larger than the MTU of the network over which it must be sent, it is divided into smaller fragments which are each sent separately. This process is illustrated in Figure
Each fragment becomes its own datagram and is routed independently of any other datagrams. At the final destination, the process of re-constructing the original datagram is called reassembly
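Fragmentation and reassembly can be sketched as follows (a toy model: real IP measures fragment offsets in 8-byte units and carries them in the header fields described above):

```python
# A sketch of IP fragmentation: a payload larger than the outgoing MTU
# is split into fragments, each carrying an offset so the destination
# can reassemble. This toy uses plain byte offsets for clarity.

def fragment(payload, mtu):
    """Split payload into (offset, more_fragments, data) triples."""
    frags = []
    for off in range(0, len(payload), mtu):
        data = payload[off:off + mtu]
        more = off + mtu < len(payload)   # MF flag: False on the last one
        frags.append((off, more, data))
    return frags

def reassemble(frags):
    # Sort by offset, so reassembly works even if fragments arrive
    # out of order.
    return b"".join(data for _, _, data in sorted(frags))

datagram = b"x" * 1400
frags = fragment(datagram, mtu=532)
assert len(frags) == 3
assert frags[-1][1] is False            # last fragment has MF = 0
assert reassemble(frags) == datagram    # reassembly restores the data
```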
Datagram Forwarding in IP
The forwarding of IP datagrams works as follows:
• Every IP datagram contains the IP address of the destination host.
• The “network part” of an IP address uniquely identifies a single physical network that is part of the larger Internet.
• All hosts and routers that share the same network part of their address are connected to the same physical network and can thus communicate with each other by sending frames over that network.
• Every physical network that is part of the Internet has at least one router that, by definition, is also connected to at least one other physical network; this router can exchange packets with hosts or routers on either network.
We describe the datagram forwarding algorithm in the following way:
if (NetworkNum of destination = NetworkNum of one of my interfaces) then
    deliver packet to destination over that interface
else if (NetworkNum of destination is in my forwarding table) then
    deliver packet to NextHop router
else
    deliver packet to default router
For a host with only one interface and only a default router in its forwarding table, this simplifies to
if (NetworkNum of destination = my NetworkNum) then
    deliver packet to destination directly
else
    deliver packet to default router
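The algorithm can be written in Python as a sketch (the network names and next-hop labels are illustrative):

```python
# A sketch of IP datagram forwarding. Network numbers are plain strings
# here; real routers compare masked address prefixes.

def forward(dest_net, my_interfaces, forwarding_table, default_router):
    if dest_net in my_interfaces:
        # Directly connected network: deliver over that interface.
        return ("direct", my_interfaces[dest_net])
    if dest_net in forwarding_table:
        # Known remote network: hand off to the listed next-hop router.
        return ("next-hop", forwarding_table[dest_net])
    # Everything else goes to the default router.
    return ("next-hop", default_router)

interfaces = {"net1": "if0", "net2": "if1"}
table = {"net3": "R2", "net4": "R3"}

assert forward("net2", interfaces, table, "R1") == ("direct", "if1")
assert forward("net4", interfaces, table, "R1") == ("next-hop", "R3")
assert forward("net9", interfaces, table, "R1") == ("next-hop", "R1")
```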
One of the things the IP service model provides is an addressing scheme. If you want to be able to send data to any host on any network, there needs to be a way of identifying all the hosts. Thus, we need a global addressing scheme in which no two hosts have the same address.
An address is needed to uniquely and universally identify every device and allow global communication
Internet address or IP address is used in the network layer of the Internet model
Consists of a 32-bit binary address
IP Address Representation
Binary notation – IP address is displayed as 32 bits
Dotted-decimal notation – more compact and easier to read form of an IP address
o Each number is between 0 and 255
Classful Addressing
In classful addressing, the address space is divided into five classes: A, B, C, D, and E. We can find the class of an address when given the address in binary notation or dotted-decimal notation. If the address is given in binary notation, the first few bits can immediately tell us the class of the address. If the address is given in decimal-dotted notation, the first byte defines the class. Both methods are shown in figures
Find the class of each address:
a.The first byte is 227 (between 224 and 239); the class is D.
b. The first byte is 193 (between 192 and 223); the class is C.
c.The first byte is 14 (between 0 and 127); the class is A.
d.The first byte is 252 (between 240 and 255); the class is E.
e.The first byte is 134 (between 128 and 191); the class is B.
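The first-byte rule in the examples above can be sketched as a small helper. The full dotted-decimal addresses below are illustrative; only their first bytes (227, 193, 14, 252, 134) come from the examples:

```python
# Determine the class of an IPv4 address from its first byte, using the
# decimal ranges quoted in the examples above.

def address_class(addr):
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"          # 240-255

for a in ["227.12.14.87", "193.14.56.22", "14.23.120.8",
          "252.5.15.111", "134.11.78.56"]:
    print(a, "->", address_class(a))   # D, C, A, E, B
```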
Classes and Blocks
One problem with classful addressing is that each class is divided into a fixed number
of blocks with each block having a fixed size as shown in Table
Netid and Hostid
In classful addressing, an IP address in class A, B, or C is divided into netid and hostid.
These parts are of varying lengths as shown in below.
The network address is an address that defines the network itself; it cannot be assigned to a host. A network address has several properties:
All hostid bytes are 0
The network address is the first address in the set of addresses (the block)
Given the network address, we can find the class of address.
Given the address 23.56.7.91, find the network address.
The class is A. Only the first byte defines the netid; the remaining 3 bytes define the hostid in class A. We can find the network address by replacing the hostid bytes with 0s. Therefore, the network address is 23.0.0.0.
Given the address 132.6.17.85, find the network address.
The class is B. The first 2 bytes define the netid in class B. We can replace the last 2 bytes with 0s. So the network address is 132.6.0.0.
Given the network address 17.0.0.0, find the class.
The class is A because the netid is only 1 byte
Given the network address 17.0.0.0, find the class, the block, and the range of the addresses.
The class is A because the first byte is between 0 and 127. The block has a netid of 17. The addresses range from 17.0.0.0 to 17.255.255.255.
Given the network address 132.21.0.0, find the class, the block, and the range of the addresses.
The class is B because the first byte is between 128 and 191. The block has a netid of 132.21. The addresses range from 132.21.0.0 to 132.21.255.255.
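The two steps used in these examples (find the netid length from the class, then zero the hostid bytes) can be sketched as follows. The sample addresses are illustrative:

```python
# Derive the default netid length from an address's class, then compute
# the network address (hostid bytes = 0) and the block's last address
# (hostid bytes = 255).

def network_address(addr):
    octets = [int(x) for x in addr.split(".")]
    first = octets[0]
    # Class A: 1 netid byte, class B: 2, class C: 3.
    netid_bytes = 1 if first <= 127 else 2 if first <= 191 else 3
    net = octets[:netid_bytes] + [0] * (4 - netid_bytes)
    last = octets[:netid_bytes] + [255] * (4 - netid_bytes)
    return (".".join(map(str, net)), ".".join(map(str, last)))

print(network_address("132.6.17.85"))  # ('132.6.0.0', '132.6.255.255')
print(network_address("23.56.7.91"))   # ('23.0.0.0', '23.255.255.255')
```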
Unicast, Multicast, and Reserved Addresses
Unicast address – identifies a specific device
Multicast address – identifies a group of hosts; a host may belong to one or more groups (used only as a destination address)
Reserved addresses – class E addresses; only used in special cases
Class A Addresses
Numerically the lowest
Use only one byte to identify the class type and netid
Three bytes are available for hostid numbers
126 usable class A networks (netids 0 and 127 are reserved) with a maximum of 16,777,214 computers on each network
Designed for large organizations with a large number of hosts or routers
Many addresses are wasted
Class B Addresses
First two octets are the network number and the last two octets are the host number
16,382 possible blocks for assignment to organizations with a maximum of 65,534 computers on each network
Designed for mid-size organizations that may have tens of thousands of hosts or routers
Many addresses are wasted
Class C Addresses
The first three octets are the network number and the last octet is the host number
2,096,896 blocks for assignment to organizations
First three bytes (netid) are the same
Each block only contains 256 addresses, which may be smaller than what many organizations need
Class D and Class E Addresses
Class D – reserved for multicast addresses
o Multicasting – transmission method which allows copies of a single packet to be sent to a selected group of receivers
Class E – reserved for future use
IP addressing is hierarchical
First it reaches network using its netid
Then it reaches the host itself using the second portion (hostid)
Since an organization may not have enough addresses, subnetting may be used to divide the network into smaller networks, or subnetworks
Subnetting creates an intermediate level of hierarchy
IP datagram routing then involves three steps: delivery to the network, delivery to the subnetwork, and delivery to the host
A single IP class A, B, or C network is further divided into a group of hosts to form an IP sub-network. Sub-networks are created for manageability, performance, and security of hosts and networks and to reduce network congestion. The host ID portion of an IP address is further divided into a sub-network ID part and a host ID part. The sub-network ID is used to uniquely identify the different sub-networks within a network.
A mask is a 32-bit binary number that gives the first address in the block (the network address) when bitwise ANDed with an address in the block. The masks for classes A, B, and C are shown in table.
The last column of table shows the mask in the form /n where n can be 8, 16, or 24 in classful addressing. This notation is also called slash notation or Classless Interdomain Routing (CIDR) notation. The notation is used in classless addressing.
The network address is the beginning address of each block. It can be found by applying the default mask to any of the addresses in the block (including itself).
It retains the netid of the block and sets the hostid to zero.
Masking – a process that extracts the address of the physical network (network/subnetwork portion) from an IP address
Determining the network ID, sub-network ID, and host ID, given the IP address and the subnet mask
The network class (A, B, or C) of a given IP address can be determined from the leading bits of the first byte. From the network class, the number of bytes used to represent the network, and hence the network ID, can be determined. By performing a logical AND of the IP address and the subnet mask, the sub-network ID can be determined: in the result of the AND operation, after removing the bytes used for the network ID, the remaining bits for which the corresponding subnet-mask bits are 1 represent the sub-network ID.
Finding the Subnet Mask Address
o Given an IP address, we can find the subnet address the same way we found the network address: we apply the mask to the address.
We write both the address and the mask in binary notation and then apply the AND operation to find the subnet address.
What is the subnetwork address if the destination address is 200.45.34.56 and the subnet mask is 255.255.240.0? Applying the mask octet by octet (34 AND 240 = 32) gives the subnetwork address 200.45.32.0.
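The octet-by-octet AND can be sketched as follows; the destination address and mask here (200.45.34.56 with 255.255.240.0) are an illustrative pair:

```python
# Extract the subnetwork address by bitwise-ANDing each octet of the
# destination address with the corresponding octet of the subnet mask.

def subnet_address(addr, mask):
    return ".".join(str(a & m) for a, m in
                    zip((int(x) for x in addr.split(".")),
                        (int(x) for x in mask.split("."))))

# 34 AND 240 = 32, 56 AND 0 = 0:
print(subnet_address("200.45.34.56", "255.255.240.0"))  # 200.45.32.0
```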
Classless Addressing
An addressing mechanism in which the IP address space is not divided into classes
IP address block sizes are variable, as long as they are a power of 2 (2, 4, 8, ...)
Masking is used, as is subnetting
Limitations of IPv4 address classes
The limitations of IPv4 address classes are:
1. A large number of IP addresses are wasted because of using IP address classes.
2. The routing tables will become large. A separate routing table entry is needed for each network resulting in a large number of routing table entries.
Classless Inter Domain Routing (CIDR) is a method for assigning IP addresses without using the standard IP address classes like Class A, Class B or Class C. In CIDR, depending on the number of hosts present in a network, IP addresses are assigned.
Representation of IP address in CIDR notation
In CIDR notation, an IP address is represented as A.B.C.D /n, where “/n” is called the IP prefix or network prefix. The IP prefix identifies the number of significant bits used to identify a network. For example, 192.168.64.0/18 means the first 18 bits are used to represent the network and the remaining 14 bits are used to identify hosts.
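The A.B.C.D/n interpretation can be checked with Python's standard `ipaddress` module; 192.168.64.0/18 is an illustrative prefix:

```python
# A /18 prefix leaves 32 - 18 = 14 host bits, i.e. 2**14 addresses
# in the block.

import ipaddress

net = ipaddress.ip_network("192.168.64.0/18")
print(net.network_address)   # 192.168.64.0
print(net.prefixlen)         # 18
print(net.num_addresses)     # 16384 (= 2**14)
```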
Advantages of CIDR
The advantages of CIDR over the classful IP addressing are:
1. CIDR can be used to effectively manage the available IP address space.
2. CIDR can reduce the number of routing table entries.
The difference between classful IP addressing and classless IP addressing
The difference between classful IP addressing and classless IP addressing is in selecting the number of bits used for the network ID portion of an IP address. In classful IP addressing, the network ID portion can take only the predefined number of bits 8, 16, or 24. In classless addressing, any number of bits can be assigned to the network ID.
The Address Resolution Protocol (ARP) is used to resolve IP addresses to MAC addresses. This is important because on a network, devices find each other using the IP address, but communication between devices requires the MAC address.
When a computer wants to send data to another computer on the network, it must know the MAC address of the destination system. To discover this information, ARP sends out a discovery packet to obtain the MAC address. When the destination computer is found, it sends its MAC address to the sending computer. The ARP-resolved MAC addresses are stored temporarily on a computer system in the ARP cache. Inside this ARP cache is a list of matching MAC and IP addresses. This ARP cache is checked before a discovery packet is sent on to the network to determine if there is an existing entry. Entries in the ARP cache are periodically flushed so that the cache doesn’t fill up with unused entries.
A Media Access Control (MAC) address is an identifier assigned to most network adapters or Network Interface Cards by the manufacturer for the purpose of identification. The MAC address is used in the MAC protocol sublayer.
The Need for ARP
ARP is a protocol for mapping an IP address to a physical machine address that is recognized in the local network. In IP version 4, addresses are 32 bits long and hardware independent, but they depend on the network to which a device is connected: the IP address of a device changes when the device is moved. In an Ethernet local area network, the addresses of attached devices are 48 bits long. So we need a mapping mechanism to resolve IP addresses to Ethernet addresses.
ARP Packet format for mapping IP address into Ethernet address
• a HardwareType field, which specifies the type of physical network (e.g., Ethernet)
• a ProtocolType field, which specifies the higher-layer protocol (e.g., IP)
• HLen (“hardware” address length) and PLen (“protocol” address length) fields, which specify the length of the link-layer address and higher-layer protocol address, respectively
• an Operation field, which specifies whether this is a request or a response
• the source and target hardware (Ethernet) and protocol (IP) addresses
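The fixed fields listed above can be decoded with a short parser. This is an illustrative sketch for the IP-over-Ethernet case (HardwareType 1, ProtocolType 0x0800), not a full ARP implementation:

```python
# Parse the fixed 8-byte prefix of an ARP packet: HardwareType,
# ProtocolType, HLen, PLen, and Operation, in network byte order.

import struct

def parse_arp(pkt):
    htype, ptype, hlen, plen, oper = struct.unpack("!HHBBH", pkt[:8])
    return {"HardwareType": htype, "ProtocolType": ptype,
            "HLen": hlen, "PLen": plen,
            "Operation": "request" if oper == 1 else "reply"}

# A hand-built header: Ethernet (1), IPv4 (0x0800), 6-byte hardware
# address, 4-byte protocol address, operation = request.
hdr = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
print(parse_arp(hdr))
```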
RARP is used to resolve an Ethernet MAC address to an IP address. All the mappings between the hardware MAC addresses and the IP addresses of the hosts are stored in a configuration file in a host in the network. This host is called the RARP server, and it responds to all RARP requests. Normally, the IP address of a system is stored in a configuration file on the local disk, and when the system is started, it determines its IP address from this file. In the case of a diskless workstation, its IP address cannot be stored in the system itself; RARP can then be used to get the IP address from a RARP server. RARP uses the same packet format as ARP.
The differences between ARP and RARP:
• Address Resolution Protocol is utilized for mapping IP network address to the hardware address that uses data link protocol.
• Reverse Address Resolution Protocol is a protocol by which a physical machine in a LAN can request its IP address from a gateway server’s ARP table or cache.
• IP address of destination to physical address conversion is done by ARP, by broadcasting in LAN.
• Physical address of source to IP address conversion is done by RARP.
• ARP associates 32 bit IP address with 48 bit physical address.
As its name indicates, DHCP provides dynamic IP address assignment. For a device to connect to the Internet, among other configuration it must have an Internet Protocol (IP) address — the device’s address on the Internet. The Bootstrap Protocol (BOOTP) was the first Transmission Control Protocol/Internet Protocol (TCP/IP) network configuration tool used to avoid manually assigning IP addresses by automating the process. The Dynamic Host Configuration Protocol (DHCP) is an improvement over BOOTP.
How DHCP Works
DHCP relies on the existence of a DHCP server that is responsible for providing configuration information to hosts; there is at least one DHCP server for an administrative domain. The DHCP server maintains a pool of available addresses that it hands out to hosts on demand. When a device is newly added to the network, it contacts the DHCP server to get an IP address: the newly attached host sends a DHCPDISCOVER message to a special IP address (255.255.255.255), the IP broadcast address, which means it will be received by all hosts and routers on that network. There is at least one DHCP relay agent on each network, configured with the IP address of the DHCP server. When a relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP server and awaits the response, which it then sends back to the newly added host. The process of relaying a message from a host to a remote DHCP server is shown in the figure.
The Internet Control Message Protocol (ICMP) is a helper protocol that supports IP with facilities for
– Error reporting
– Simple queries
ICMP messages are encapsulated as IP datagrams.
ICMP message format
• Type (1 byte): type of ICMP message
• Code (1 byte): subtype of ICMP message
• Checksum (2 bytes): similar to IP header checksum.
Example of ICMP Queries
8/0 Echo Request
0/0 Echo Reply
13/0 Timestamp Request
14/0 Timestamp Reply
10/0 Router Solicitation
9/0 Router Advertisement
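The 4-byte message format and the Type/Code pairs listed above can be sketched together as a small decoder (an illustrative sketch, not a full ICMP stack):

```python
# Decode the ICMP header: Type (1 byte), Code (1 byte), Checksum
# (2 bytes), all in network byte order, and name the query types
# listed above.

import struct

QUERIES = {(8, 0): "Echo Request", (0, 0): "Echo Reply",
           (13, 0): "Timestamp Request", (14, 0): "Timestamp Reply",
           (10, 0): "Router Solicitation", (9, 0): "Router Advertisement"}

def decode_icmp(header):
    t, c, csum = struct.unpack("!BBH", header[:4])
    return QUERIES.get((t, c), "other"), csum

print(decode_icmp(struct.pack("!BBH", 8, 0, 0xF7FF)))  # ('Echo Request', 63487)
```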
ICMP Error message
o ICMP error messages report error conditions
o Typically sent when a datagram is discarded
o Error message is often passed from ICMP to the application program
ICMP Error message
ICMP error messages include the complete IP header and the first 8 bytes of the payload
Frequent ICMP Error message
Routing tables are used to store information identifying the location of nodes on the network
Several techniques are used to make the size of the routing table manageable and to handle issues such as security
Static vs. Dynamic Routing
Static – contains information entered manually; cannot be updated automatically
Dynamic – updated periodically using dynamic routing protocols such as RIP, OSPF, or BGP
Default router – assigned to receive all packets with no match in the routing table
The success of dynamic routing depends on two basic router functions:
1. Maintenance of a routing table
2. Timely distribution of knowledge, in the form of routing updates, to other routers.
Dynamic routing relies on a routing protocol. Routing protocols can be distance-vector or link-state.
Distance-Vector Routing (RIP)
Each node constructs a vector containing the distances (costs) to all other nodes and distributes that vector to its immediate neighbors.
1. The starting assumption for distance-vector routing is that each node knows the cost of the link to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost
Initial distances stored at each node(global view)
We can represent each node’s knowledge about the distances to all other nodes as a table like the one given in Table 1.
Note that each node only knows the information in one row of the table.
1. Every node sends a message to its directly connected neighbors containing its personal list of distances. (For example, A sends its information to its neighbors B, C, E, and F.)
2. If any of the recipients of the information from A find that A is advertising a path shorter than the one they currently know about, they update their list to give the new path length and note that they should send packets for that destination through A. ( node B learns from A that node E can be reached at a cost of 1; B also knows it can reach A at a cost of 1, so it adds these to get the cost of reaching E by means of A. B records that it can reach E at a cost of 2 by going through A.)
3. After every node has exchanged a few updates with its directly connected neighbors, all nodes will know the least-cost path to all the other nodes.
4. In addition to updating their list of distances when they receive updates, the nodes need to keep track of which node told them about the path that they used to calculate the cost, so that they can create their forwarding table. ( for example, B knows that it was A who said ” I can reach E in one hop” and so B puts an entry in its table that says ” To reach E, use the link to A.)
In practice, each node’s forwarding table consists of a set of triples of the form:
( Destination, Cost, NextHop).
For example, Table 3 shows the complete routing table maintained at node B for the network in figure1.
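The update step described in points 2 and 4 can be sketched as follows, with each node keeping (Destination, Cost, NextHop) triples as above. The node names and costs mirror the B-learns-about-E example:

```python
# A minimal distance-vector update: merge a neighbor's advertised
# vector into our table, recording the neighbor as NextHop whenever
# it offers a cheaper path.

def dv_update(table, neighbor, neighbor_cost, neighbor_vector):
    changed = False
    for dest, cost in neighbor_vector.items():
        new_cost = neighbor_cost + cost   # cost to neighbor + its cost
        if dest not in table or new_cost < table[dest][0]:
            table[dest] = (new_cost, neighbor)   # (Cost, NextHop)
            changed = True
    return changed

# B knows: A at cost 1. A advertises: "I can reach E at cost 1, C at 1."
table_B = {"A": (1, "A")}
dv_update(table_B, "A", 1, {"E": 1, "C": 1})
print(table_B["E"])   # (2, 'A') -> "to reach E, use the link to A"
```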
Link-State Routing
Link-state routing is a class of intradomain routing protocols.
Each node is assumed to be capable of finding out the state of the link to its neighbors (up or down) and the cost of each link.
1. Advertise about the neighborhood: instead of sending its entire routing table, a router sends information about its neighborhood only
2. Flooding: each router sends this information to every router on the internetwork, not just to its neighbors. It does so by a process called flooding
3. Each router sends out information about the neighbor when there is a change in the table
4. Find and use the shortest path to reach any point in the network.
Two mechanisms needed
I. Reliable flooding
II. Route calculation using Dijkstra’s algorithm
Each router creates a packet called link state packet which contains the following information:
id of the node that created the LSP
cost of link to each directly connected neighbor
sequence number (SEQNO)
time-to-live (TTL) for this packet
Link State Database
Every router receives every LSP and then prepares a database, which represents a complete network topology. This Database is known as Link State Database
I. Reliable Flooding
As the term “flooding” suggests, the basic idea is for a node to send its link-state information out on all of its directly connected links, with each node that receives this information forwarding it out on all of its links. This process continues until the information has reached all the nodes in the network.
–store most recent LSP from each node
–forward LSP to all nodes but one that sent it
–generate new LSP periodically
–start SEQNO at 0 when reboot
–decrement TTL of each stored LSP
–discard when TTL=0
Flooding works in the following way. Consider a node X that receives a copy of an LSP that originated at some other node Y. Note that Y may be any other router in the same routing domain as X. X checks to see if it has already stored a copy of an LSP from Y. If not, it stores the LSP. If it already has a copy, it compares the sequence numbers; if the new LSP has a larger sequence number, it is assumed to be the more recent, and that LSP is stored, replacing the old one. A smaller (or equal) sequence number would imply an LSP older (or not newer) than the one stored, so it would be discarded and no further action would be needed. If the received LSP was the newer one, X then sends a copy of that LSP to all of its neighbors except the neighbor from which the LSP was just received. Since X passes the LSP on to all its neighbors, who then turn around and do the same thing, the most recent copy of the LSP eventually reaches all nodes.
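The acceptance test X applies to each received LSP can be sketched as follows (node IDs and sequence numbers are illustrative):

```python
# Keep an LSP only if it is the first seen from that node or carries a
# larger sequence number; otherwise discard it as old or duplicate.

def accept_lsp(store, lsp):
    """store maps node id -> newest LSP seen; lsp = (node, seqno, links)."""
    node, seqno, _ = lsp
    old = store.get(node)
    if old is None or seqno > old[1]:
        store[node] = lsp
        return True          # store it, then flood to all other neighbors
    return False             # smaller or equal seqno: discard

store = {}
print(accept_lsp(store, ("Y", 3, {})))   # True  (first LSP from Y)
print(accept_lsp(store, ("Y", 2, {})))   # False (older)
print(accept_lsp(store, ("Y", 4, {})))   # True  (newer)
```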
II. Route Calculation using Dijkstra’s algorithm
Each node maintains two lists, known as Tentative and Confirmed. Each of these lists contains a set of entries of the form (Destination, Cost, NextHop).
The algorithm works as follows:
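The forward search over the Tentative and Confirmed lists can be sketched as follows; the three-node graph and its costs are illustrative:

```python
# Dijkstra-style forward search: repeatedly move the cheapest Tentative
# entry to Confirmed, then relax that node's links. graph maps
# node -> {neighbor: link cost}.

import heapq

def shortest_paths(graph, source):
    confirmed = {}                       # dest -> (Cost, NextHop)
    tentative = [(0, source, source)]    # (cost, dest, next_hop)
    while tentative:
        cost, dest, hop = heapq.heappop(tentative)
        if dest in confirmed:
            continue                     # already confirmed via a cheaper path
        confirmed[dest] = (cost, hop)
        for nbr, link_cost in graph[dest].items():
            if nbr not in confirmed:
                # NextHop is inherited from the confirmed path, except
                # for the source's own neighbors.
                nxt = nbr if dest == source else hop
                heapq.heappush(tentative, (cost + link_cost, nbr, nxt))
    return confirmed

g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
print(shortest_paths(g, "A")["C"])   # (2, 'B'): reach C via B at cost 2
```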
Properties of Link State Routing
Keeps routing control traffic low
Responds rapidly to topology changes
The amount of information stored in each node is large.
Difference between DV and LS
Distance vector:
– Talk only to your directly connected neighbors
– Tell them everything you have learned
Link state:
– Talk to every node (by flooding)
– Tell them only the state of your own directly connected links
Open Shortest Path First (OSPF)
OSPF is an open protocol: its specification is in the public domain, meaning anyone can implement it without paying license fees.
OSPF is based on Dijkstra’s algorithm
The basic building block of link-state messages in OSPF is known as the link-state advertisement (LSA).
OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs) to all other routers within the same hierarchical area.
The figure shows the packet format for a link-state advertisement. LSAs advertise the cost of links between routers. The LS Age is the equivalent of a time to live. The Type field tells us that this is a type 1 LSA, for which the Link-state ID and the Advertising router fields are identical: each carries a 32-bit identifier for the router that created this LSA. The LS sequence number is used to detect old or duplicate LSAs. The LS checksum is used to verify that the data has not been corrupted. Length is the length in bytes of the complete LSA. The Link ID, Metric, and Link Data fields are used to identify the link; TOS is type-of-service information.
OSPF specifies that all the exchanges between routers must be authenticated.
OSPF provides Load Balancing. When several equal-cost routes to a destination exist, traffic is distributed equally among them
OSPF allows sets of networks to be grouped together. Such a grouping is called an Area. Each Area is self-contained
OSPF uses different message formats to distinguish the information acquired from within the network (internal sources) with that which is acquired from a router outside (external sources).
Network address translation (NAT)
is the process by which a network device assigns a public address to a host inside a private network. To separate the addresses used inside the private network from the ones used on the public network (the Internet), the Internet authorities have reserved three sets of addresses as private addresses, shown in the table.
The private addressing scheme works well for computers that only have to access resources inside the network. However, to access resources outside the network, like the Internet, these computers have to have a public address in order for responses to their requests to return to them. This is where NAT comes into play.
All the outgoing packets go through the NAT router, which replaces the source address in the packet with the global NAT address. All incoming packets also pass through the NAT router, which replaces the destination address in the packet with the appropriate private address
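The rewriting done by the NAT router can be sketched with a translation table. This sketch uses a port-based variant (NAPT), and all names and numbers (the public IP 203.0.113.7, the starting port 5000) are illustrative:

```python
# A toy NAT: outgoing packets have their private (address, port)
# replaced by the router's public address and an allocated port;
# incoming packets are mapped back through the reverse table.

PUBLIC_IP = "203.0.113.7"

class NAT:
    def __init__(self):
        self.table = {}          # (private_ip, port) -> public port
        self.reverse = {}        # public port -> (private_ip, port)
        self.next_port = 5000

    def outgoing(self, src_ip, src_port):
        key = (src_ip, src_port)
        if key not in self.table:            # allocate a new public port
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (PUBLIC_IP, self.table[key])

    def incoming(self, dst_port):
        return self.reverse[dst_port]        # restore the private address

nat = NAT()
print(nat.outgoing("10.0.0.5", 1234))   # ('203.0.113.7', 5000)
print(nat.incoming(5000))               # ('10.0.0.5', 1234)
```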
Intradomain and interdomain routing protocols are commonly known as interior gateway protocols and exterior gateway protocols, respectively.
Interior gateway protocols:
In a small and slowly changing network, the network administrator can establish or modify routes by hand: the administrator keeps a table of networks and updates the table whenever a network is added to or deleted from the autonomous system. The disadvantage of the manual approach is obvious; such systems are neither scalable nor adaptable to change, so automated methods must be used to improve reliability and response to failure. To automate this task, interior routers (within an autonomous system) communicate with one another, exchanging network routing information from which reachability can be deduced. These routing methods are known as Interior Gateway Protocols (IGPs).
Two interior gateway protocols are widely used:
1. Routing Information Protocol (RIP)
2. Open Shortest Path First (OSPF)
Interdomain routing protocols (BGP)
An autonomous system (AS) is a network or group of networks under a common administration and with common routing policies. BGP is used to exchange routing information for the Internet and is the protocol used between Internet service providers (ISPs), which are different ASes. The basic idea behind autonomous systems is to provide an additional way to hierarchically aggregate routing information in a large internet, thus improving scalability.
One feature of the autonomous system idea is that it enables some ASs to dramatically reduce the amount of routing information they need to care about by using default routes. For example, if a corporate network is connected to the rest of the Internet by a single router (this router is typically called a border router since it sits at the boundary between the AS and the rest of the Internet), then it is easy for a host or router inside the autonomous system to figure out where it should send packets that are headed for a destination outside of this AS—they first go to the AS’s border router. This is the default route. Similarly, a regional Internet service provider can keep track of how to reach the networks of all its directly connected customers and can have a default route to some other provider (typically a backbone provider) for everyone else.
We now divide the routing problem into two parts: routing within a single autonomous system and routing between autonomous systems. Since another name for autonomous systems in the Internet is routing domains, we refer to the two parts of the routing problem as interdomain routing and intradomain routing. The interdomain routing problem is one of having different ASes share information with each other.
There have been two major interdomain routing protocols in the recent history of the Internet:
1. Exterior Gateway Protocol (EGP).
2. Border Gateway Protocol (BGP)
The Border Gateway Protocol (BGP) is an inter-autonomous system routing protocol and the replacement for EGP. Today’s Internet consists of an interconnection of multiple backbone networks (usually called service provider networks). The following figure illustrates the BGP model of the Internet.
Given this rough sketch of the Internet, if we define
local traffic as traffic that originates at or terminates on nodes within an AS, and
transit traffic as traffic that passes through an AS
Classification of AS:
we can classify ASs into three types:
1. Stub AS: has only one connection to another AS; it carries only local traffic.
2. Multihomed AS: has connections to multiple ASes but refuses to carry transit traffic.
3. Transit AS: has connections to multiple ASes and carries both local and transit traffic.
One of the most important characteristics of BGP is its flexibility.
BGP Routing Challenges:
–Domains are autonomous and can assign arbitrary metrics to its internal paths,
–ASs can only cooperate if they can trust one another to publicize accurate routing information and to carry out their promises,
–Policies should be flexible to allow ASs freedom of action. This choice may override optimal paths and determine the use of paths that are “good enough”.
• Speaker for AS2 advertises reachability to P and Q
o network 128.96, 192.4.153, 192.4.32, and 192.4.3, can be reached directly from AS2
• Speaker for backbone advertises
o –networks 128.96, 192.4.153, 192.4.32, and 192.4.3 can be reached along the path (AS1, AS2).
• Speaker can cancel previously advertised paths
Few characteristics of BGP
Inter-Autonomous System Configuration: BGP’s primary role is to provide communication between two autonomous systems.
Next-Hop paradigm: Like RIP, BGP supplies next hop information for each destination.
Coordination among multiple BGP speakers within the autonomous system: If an Autonomous system has multiple routers each communicating with a peer in other autonomous system, BGP can be used to coordinate among these routers, in order to ensure that they all propagate consistent information.
Path information: BGP advertisements also include path information along with the reachable-destination and next-destination pair, which allows a receiver to learn the series of autonomous systems along the path to the destination.
Runs over TCP: BGP uses TCP for all communication. So the reliability issues are taken care by TCP.
Support for CIDR: BGP supports classless addressing (CIDR); it provides a way to send the network mask along with the advertised addresses.
Security: BGP allows a receiver to authenticate messages, so that the identity of the sender can be verified.
Integrating Interdomain and Intradomain Routing
• Stub AS: The border router injects a default route into the intradomain routing protocol.
• Multihomed and Transit ASs: The border routers inject routes that they have learned from outside the AS.
• Transit ASs: The information learned from BGP may be “too much” to inject into the intradomain protocol: if a large number of prefixes is injected, large link-state packets will be circulated and path calculations will become very complex.
IP Version 6 (IPV6)
–128-bit addresses (classless)
–authentication and security
–40-byte “base” header
–extension headers (fixed order, mostly fixed length)
Address Space Allocation
IPv6 addresses do not have classes, but the address space is still subdivided in various ways based on the leading bits. Rather than specifying different address classes, the leading bits specify different uses of the IPv6 address. The current assignment of prefixes is listed in Table
“Link local use” addresses are useful for autoconfiguration; “site local use” addresses are intended to allow valid addresses to be constructed on a site; and the multicast address space is for multicast, serving the same role as class D addresses in IPv4.
• When there are many consecutive 0s, they can be omitted. For example, the address
47CD:0000:0000:0000:0000:0000:A456:0124 becomes 47CD::A456:0124 (the double colon stands for a run of 0s)
• Two types of IPv6 address can contain embedded IPv4 addresses. For example, the IPv4 host address 128.96.33.81 becomes
::FFFF:128.96.33.81 (the last 32 bits are an IPv4 address)
This notation facilitates the extraction of an IPv4 address from an IPv6 address
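Both notational rules can be checked with Python's standard `ipaddress` module; the addresses below are illustrative:

```python
# Zero-compression and IPv4-mapped IPv6 addresses.

import ipaddress

# A run of zero groups collapses to "::" (leading zeros also drop).
a = ipaddress.ip_address("47CD:0000:0000:0000:0000:0000:A456:0124")
print(a.compressed)       # 47cd::a456:124

# An IPv4-mapped address embeds the IPv4 address in the last 32 bits,
# making it easy to extract.
m = ipaddress.ip_address("::FFFF:128.96.33.81")
print(m.ipv4_mapped)      # 128.96.33.81
```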
Aggregatable Global Unicast Addresses
Subscriber is: non-transit AS (stub and multihomed)
Provider is : transit AS
o Direct: connected directly to subscribers
o Indirect or Backbone network: connected to other providers
Plan: Aggregate routing information to reduce the burden on intradomain routers. Assign a prefix to a direct provider; within the provider assign longer prefixes that reach its subscribers.
Congestion control in network layer(Refer: Andrew S. Tanenbaum, “Computer Networks”)
In a packet switching network, packets are introduced in the nodes (i.e. offered load), and the nodes in-turn forward the packets (i.e. throughput) into the network. When the “offered load” crosses certain limit, then there is a sharp fall in the throughput. This phenomenon is known as congestion. In other words, when too much traffic is offered, congestion sets in and performance degrades sharply.
Congestion affects two vital parameters of the network performance, namely throughput and delay. The throughput can be defined as the percentage utilization of the network capacity.
Congestion Control Techniques
Congestion control refers to the mechanisms and techniques used to control congestion and keep the traffic below the capacity of the network. Congestion control techniques can be broadly classified into two categories:
Open loop: Protocols to prevent or avoid congestion, ensuring that the system never enters a Congested State.
Close loop: Protocols that allow system to enter congested state, detect it, and remove it.
1 Admission Control is one such closed-loop technique, used in virtual-circuit subnets, where action is taken once congestion is detected in the network. Different approaches can be followed:
The first approach is, once congestion has been signaled, not to set up any new connections. This approach is often used in normal telephone networks: when the exchange is overloaded, no new calls are established.
The second approach is to allow new virtual connections, but to route them carefully so that no congested router or problem area is part of the route.
The third approach is to negotiate parameters between the host and the network when the connection is set up. During setup, the host specifies the volume and shape of the traffic, quality of service, maximum delay and other parameters related to the traffic it will offer to the network. Once the host specifies its requirements, the resources needed are reserved along the path before the actual packets follow.
2 The Choke Packet technique is a closed-loop control technique that can be applied in both virtual-circuit and datagram subnets.
Each router monitors the utilization of its outgoing lines
Whenever the utilization rises above some threshold, a warning flag is set for that link
When a newly arriving packet is to be routed over that link, the router extracts the packet’s source address and sends a “choke packet” back to the source. The choke packet contains the destination address
The original data packet is tagged so that it will not generate any more choke packets, and is then forwarded
When the source host gets the choke packet, it is required to reduce the traffic sent to that destination by X%; it then ignores further choke packets for the same destination for a fixed time interval
The following figure depicts the functioning of choke packets.
Figure depicts the functioning of choke packets, (a) Heavy traffic between nodes P and Q, (b) Node Q sends the Choke packet to P, (c) Choke packet reaches P, (d) P reduces the flow and send a reduced flow out, (e) Reduced flow reaches node Q
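The monitoring-and-choke logic above can be sketched as follows. The smoothing constant, threshold, and the Line/route names are illustrative assumptions, not taken from any real router implementation; the exponential average follows the form given by Tanenbaum.

```python
ALPHA = 0.9        # smoothing constant for the utilization average (assumed)
THRESHOLD = 0.8    # utilization level that sets the warning flag (assumed)

class Line:
    """One outgoing line whose utilization the router monitors."""
    def __init__(self):
        self.util = 0.0       # smoothed utilization estimate
        self.warning = False

    def sample(self, instantaneous):
        # u_new = a * u_old + (1 - a) * f : exponential average of samples
        self.util = ALPHA * self.util + (1 - ALPHA) * instantaneous
        self.warning = self.util > THRESHOLD

def route(packet, line, send_choke):
    """Forward a packet over 'line', emitting a choke packet if needed."""
    if line.warning and not packet.get("choked"):
        # Choke packet goes back to the source and names the destination.
        send_choke(packet["src"], packet["dst"])
        packet["choked"] = True  # tag so it generates no more choke packets
    return packet                # the data packet is still forwarded
```

Tagging the packet is what prevents every downstream router on a congested path from bombarding the source with duplicate choke packets.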
3 Hop-by-Hop Choke Packets
This technique is an advancement over the choke packet method. In this approach, the choke packet takes effect at every intermediate router it passes through: as soon as the choke packet reaches a router on its path back to the source, that router reduces the traffic it is sending toward the congested node. The following figure illustrates this.
Figure depicts the functioning of Hop-by-Hop choke packets, (a) Heavy traffic between nodes P and Q, (b) Node Q sends the choke packet to P, (c) Choke packet reaches R, and the flow between R and Q is reduced, (d) Choke packet reaches P, and P reduces the flow out
4 Load Shedding
Another closed-loop technique is Load Shedding; it is one of the simplest and most effective. In this method, whenever a router finds that there is congestion in the network, it simply starts dropping packets. One such technique is Random Early Detection.
Random Early Detection (RED): There are different methods by which a router can decide which packets to drop. The simplest is to choose the packets to drop at random. More effective methods exist, but they require some cooperation from the sender. For many applications, some packets are more important than others, so the sender can mark packets with priority classes to indicate how important they are. If such a priority policy is implemented, intermediate nodes can drop packets from the lower priority classes and use the available bandwidth for the more important packets.
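A minimal sketch of the RED decision rule: drops begin early, with a probability that grows as the averaged queue length rises between two thresholds. The thresholds, maximum drop probability, and averaging weight below are assumed example values.

```python
import random

MIN_TH, MAX_TH = 5, 15   # queue-length thresholds (assumed values)
MAX_P = 0.1              # drop probability at the upper threshold (assumed)
WEIGHT = 0.002           # weight for the queue-length average (assumed)

class RedQueue:
    def __init__(self):
        self.queue = []
        self.avg = 0.0   # exponentially weighted average queue length

    def enqueue(self, pkt):
        """Accept or drop a packet; returns True if the packet was queued."""
        self.avg = (1 - WEIGHT) * self.avg + WEIGHT * len(self.queue)
        if self.avg < MIN_TH:
            self.queue.append(pkt)        # lightly loaded: always accept
            return True
        if self.avg >= MAX_TH:
            return False                  # heavily loaded: always drop
        # In between: drop probability grows linearly with the average.
        p = MAX_P * (self.avg - MIN_TH) / (MAX_TH - MIN_TH)
        if random.random() < p:
            return False
        self.queue.append(pkt)
        return True
```

Averaging the queue length (rather than reacting to its instantaneous value) is what lets RED tolerate short bursts while still signaling persistent congestion early.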
5 Jitter Control
Jitter is the variation in the arrival times of packets belonging to the same flow. For example, in a real-time video broadcast, delivering 99% of the packets with a delay in the range of 24.5 msec to 25.5 msec might be acceptable. The chosen range must be feasible.
When a packet arrives at a router, the router checks how far the packet is behind or ahead of its schedule. If the packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind schedule, the router tries to get it out quickly.
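The hold-or-forward rule above can be sketched as a small function; the function name, parameter names, and millisecond units are illustrative, not from any real router API:

```python
def dejitter(arrival_time, expected_time, hold):
    """Hold an early packet just long enough to put it back on schedule.

    arrival_time / expected_time are in milliseconds; 'hold' is a callback
    that delays the packet by the given amount before forwarding it.
    """
    if arrival_time < expected_time:
        # Ahead of schedule: absorb the slack so departures stay even.
        hold(expected_time - arrival_time)
    # Behind (or exactly on) schedule: forward immediately, no extra delay.
```

Holding early packets trades a little average delay for a much smaller variation in delay, which is what real-time audio and video care about.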
UDP – TCP – Adaptive Flow Control – Adaptive Retransmission – Congestion control – Congestion avoidance – QoS
TCP
– Stands for Transmission Control Protocol
– TCP provides a connection-oriented, reliable, byte stream service. The term connection-oriented means the two applications using TCP must establish a TCP connection with each other before they can exchange data. It is a full duplex protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each direction. TCP includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit. TCP also implements a congestion-control mechanism.
TCP segment format
TCP data is encapsulated in an IP datagram. The figure shows the format of the TCP header. Its normal size is 20 bytes unless options are present. Each of the fields is discussed below:
– Source port (16 bits) – identifies the sending port
– Destination port (16 bits) – identifies the receiving port
– Sequence number (32 bits) – identifies the byte in the stream of data from the sending TCP to the receiving TCP that the first byte of data in this segment represents.
– Acknowledgment number (32 bits) – if the ACK flag is set then the value of this field is the next sequence number that the receiver is expecting.
– Header length (4 bits) – specifies the size of the TCP header in 32-bit words.
– Reserved (4 bits) – for future use and should be set to zero
– URG (1 bit) – indicates that the Urgent pointer field is significant
– ACK (1 bit) – indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set.
– PSH (1 bit) – Push function. Asks to push the buffered data to the receiving application.
– RST (1 bit) – Reset the connection
– SYN (1 bit) – Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags change meaning based on this flag, and some are only valid for when it is set, and others when it is clear.
– FIN (1 bit) – No more data from sender
– Window (16 bits) – the size of the receive window, which specifies the number of bytes (beyond the sequence number in the acknowledgment field) that the receiver is currently willing to receive
– Checksum (16 bits) – The 16-bit checksum field is used for error-checking of the header and data
– Urgent pointer (16 bits) – if the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte
– Options (Variable 0-320 bits, divisible by 32) – The length of this field is determined by the data offset field. Options 0 and 1 are a single byte (8 bits) in length
– The data portion of the TCP segment is optional.
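The fixed 20-byte header layout described above can be unpacked with Python's struct module. This is only a sketch: parse_tcp_header is a hypothetical helper (not a standard API), and it decodes just the fields listed in the text.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (options, if any, follow it)."""
    # ! = network byte order; H = 16 bits, I = 32 bits.
    (src, dst, seq, ack,
     off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH",
                                                          segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # header length in 32-bit words
        "flags": off_flags & 0x3F,            # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# Build a SYN segment (flags = 0x02, header length = 5 words) and parse it.
syn = struct.pack("!HHIIHHHH", 1234, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))
```

A header length greater than 20 would indicate that options are present between the fixed header and the data.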
TCP vs UDP
TCP:
– Transmission Control Protocol
– Slower than UDP
– Connection-oriented
– Provides flow control and error control
– Offers error correction and guaranteed delivery
– Sends larger packets
– Ordered message delivery
– Heavyweight: when segments arrive in the wrong order, resend requests have to be sent, and all the out-of-sequence parts have to be put back together
– If any segments are lost, the receiver’s ACKs indicate the missing data so the sender can retransmit
UDP:
– User Datagram Protocol
– Faster than TCP
– Connectionless
– No flow control
– Offers no error correction or guaranteed delivery
– Sends smaller packets
– Unordered message delivery
– Lightweight: no ordering of messages, no tracking of connections, etc.
– No acknowledgements
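At the socket API level, the contrast above reduces to the socket type chosen at creation time; a minimal Python illustration:

```python
import socket

# Stream (TCP) socket: connection-oriented, reliable byte stream.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Datagram (UDP) socket: connectionless, no delivery guarantee.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The differing guarantees listed above come from the transport protocol
# behind each socket type, not from anything extra in the API call.
print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()
```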
TCP Connection establishment and Termination
The algorithm used by TCP to establish and terminate a connection is called a three-way handshake.
Three way handshake for connection establishment
It involves exchange of three messages between client and server as illustrated by the timeline given in figure
1. The client sends the starting sequence number (x) it plans to use.
2. The server responds with a SYN+ACK segment that acknowledges the client’s sequence number (acknowledgment number x+1) and starts its own sequence number (y).
3. Finally, the client responds with a third segment that acknowledges the server’s sequence number (acknowledgment number y+1).
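The three messages can be traced with a small sketch, where x and y are the initial sequence numbers chosen by the client and server respectively:

```python
def three_way_handshake(x, y):
    """Return the (segment, seq, ack) triples exchanged during setup."""
    trace = []
    trace.append(("SYN", x, None))        # 1. client -> server: seq = x
    trace.append(("SYN+ACK", y, x + 1))   # 2. server -> client: seq = y, ack = x+1
    trace.append(("ACK", x + 1, y + 1))   # 3. client -> server: ack = y+1
    return trace

for segment in three_way_handshake(100, 300):
    print(segment)
```

Each acknowledgment is one more than the sequence number it confirms, because a SYN consumes one sequence number.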
Three way handshake for connection termination
Three-way handshaking for connection termination is shown in the figure.
1. The client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment. The FIN segment consumes one sequence number (x).
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a FIN+ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction. The FIN+ACK segment consumes one sequence number (y).
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP server. This segment contains the acknowledgment number (y+1), which is 1 plus the sequence number received in the FIN segment from the server.
TCP’s State Transition Diagram
TCP uses a finite state machine to keep track of all the events happening during the connection establishment, termination and data transfer. The following figure shows that.
CLOSED No connection
LISTEN The server is waiting for call from client
SYN-SENT A connection request is sent, waiting for ACK
SYN-RECD A connection request is received
ESTABLISHED A connection is established
FIN-WAIT1 The application has requested closing the connection
FIN-WAIT2 The other side has accepted the close request
TIME-WAIT Waiting for retransmitted segments to die out
CLOSE-WAIT The server is waiting for the application to close
LAST-ACK The server is waiting for the last ACK
The client program can be in one of the following states:
CLOSED, SYN-SENT, ESTABLISHED, FIN-WAIT1, FIN-WAIT2 and TIME-WAIT
1. The client TCP starts at CLOSED-STATE
2. The client TCP can receive an active open request from the client application program. It sends SYN segment to the server TCP and goes to the SYN-SENT state.
3. While in this state, the client TCP can receive a SYN+ACK segment from the other TCP; it sends an ACK and goes to the ESTABLISHED state. This is the data transfer state. The client remains in this state as long as it is sending and receiving data.
4. While in this state, the client can receive a close request from the client application program. It sends FIN segment to the other TCP and goes to FIN-WAIT1 state.
5. While in this state, the client TCP waits to receive an ACK from the server TCP. When the ACK is received, it goes to the FIN-WAIT2 state. It does not send anything. Now the connection is closed in one direction.
6. The client remains in this state, waiting for the server TCP to close the connection from the other end. If the client TCP receives a FIN segment from the other end, it sends an ACK and goes to the TIME-WAIT state.
7. When the client is in this state, it starts the time-waited timer and waits until the timer expires. After the timeout, the client goes to the CLOSED state.
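Steps 1-7 above can be summarized as a (state, event) -> state transition table; the event labels below are informal shorthand for the segments sent and received, not protocol field names:

```python
# Client-side TCP state transitions, as described in steps 1-7 above.
CLIENT_FSM = {
    ("CLOSED", "active_open/send_SYN"): "SYN-SENT",
    ("SYN-SENT", "recv_SYN+ACK/send_ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/send_FIN"): "FIN-WAIT1",
    ("FIN-WAIT1", "recv_ACK"): "FIN-WAIT2",
    ("FIN-WAIT2", "recv_FIN/send_ACK"): "TIME-WAIT",
    ("TIME-WAIT", "timeout"): "CLOSED",
}

def step(state, event):
    """Advance the client FSM by one event; raises KeyError if invalid."""
    return CLIENT_FSM[(state, event)]
```

Walking the table from CLOSED through all six events returns the client to CLOSED, matching the full open-transfer-close lifecycle.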
The server program can be in one of the following states:
CLOSED, LISTEN, SYN-RECD, ESTABLISHED, CLOSE-WAIT, LAST-ACK
1. The server TCP starts in the CLOSED-STATE.
2. While in this state, the server TCP can receive a passive open request from the server application program. It goes to LISTEN state.
3. While in this state, the server TCP can receive a SYN segment from the client TCP. It sends a SYN+ACK segment to the client TCP and goes to the SYN-RECD state.
4. While in this state, the server TCP can receive an ACK segment from the client TCP. It then goes to the ESTABLISHED state. This is the data transfer state. The server remains in this state as long as it is receiving and sending data.
5. While in this state, the server TCP can receive a FIN segment from the client, which means that the client wishes to close the connection. It can send an ACK segment to the client and goes to the CLOSE-WAIT state.
6. While in this state, the server waits until it receives a close request from the server application program. It then sends a FIN segment to the client and goes to LAST-ACK state.
7. While in this state, the server waits for the last ACK segment. It then goes to the CLOSED state.
TCP Flow control
– Flow control defines the amount of data a source can send before receiving an ACK from the receiver.
– Sliding window protocol is used by TCP for flow control.
Sliding window for flow control by TCP
The sliding window serves several purposes:
(1) It guarantees the reliable delivery of data
(2) It ensures that the data is delivered in order,
(3) It enforces flow control between the sender and the receiver.
For reliable and ordered delivery
The sending and receiving sides of TCP interact in the following manner to implement reliable and ordered delivery:
Each byte has a sequence number.
ACKs are cumulative.
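The cumulative-ACK rule can be sketched as follows: the acknowledgment always names the next in-order byte expected, i.e. the first gap in what has been received, regardless of any later bytes that arrived out of order.

```python
def cumulative_ack(received_seqs):
    """Return the cumulative ACK given the set of byte sequence numbers
    received so far (sequence numbers assumed to start at 0 here)."""
    ack = 0
    while ack in received_seqs:
        ack += 1
    return ack  # next byte expected in order

# Bytes 0-2 and 4-5 received; byte 3 is missing, so the ACK stays at 3.
print(cumulative_ack({0, 1, 2, 4, 5}))
```

This is why a single lost segment causes repeated ACKs for the same number: the receiver keeps acknowledging the first gap until it is filled.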