UNIT 3
NETWORK LAYER and TRANSPORT LAYER
In large networks, there can be multiple paths from sender to receiver. The switching technique decides the best route for data transmission.
A switching technique connects systems so that one-to-one communication is possible.
Classification of Switching Techniques
Circuit Switching
- Circuit switching is a switching technique that establishes a dedicated path between sender and receiver.
- Once the connection is established, the dedicated path remains in place until the connection is terminated.
- Circuit switching in a network operates much like the traditional telephone network.
- A complete end-to-end path must exist before communication can take place.
Space Division Switches
- Space Division Switching is a circuit-switching technology in which each transmission path through the switch uses a physically separate set of crosspoints.
- Space Division Switching can be achieved with a crossbar switch: a grid of metallic crosspoints or semiconductor gates that a control unit can enable or disable.
- Crossbar switches are typically built from semiconductors; for example, Xilinx implements crossbar switches using FPGAs.
- Space Division Switching offers high speed, high capacity, and nonblocking switches.
Message Switching
- Message Switching is a switching technique in which a message is transferred as a complete unit and routed through intermediate nodes at which it is stored and forwarded.
- In Message Switching technique, there is no establishment of a dedicated path between the sender and receiver.
- The destination address is appended to the message. Message Switching provides dynamic routing, since each intermediate node routes the message using the information it carries.
- Message switches are programmed so that they choose the most efficient routes.
Packet Switching
- Packet switching is a switching technique in which the message is not sent as a single unit; it is divided into smaller pieces that are sent individually.
- These pieces, known as packets, are each given a unique number so that their order can be re-established at the receiving end.
- Every packet contains header information such as the source address, destination address, and sequence number.
- Packets travel across the network, each taking the shortest available path.
- All the packets are reassembled at the receiving end in the correct order.
- If any packet is missing or corrupted, a message is sent back asking the sender to retransmit.
- Once all packets have arrived in the correct order, an acknowledgement message is sent.
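The split-number-reassemble idea above can be sketched in a few lines of Python. This is purely illustrative: the packet format here is ad hoc, whereas real networks use headers defined by protocols such as IP and TCP.

```python
# Sketch: a message is split into numbered packets, which may arrive out of
# order and are reassembled by sequence number at the receiver.
import random

def to_packets(message: bytes, size: int):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and join the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"packet switching splits a message into pieces"
packets = to_packets(message, 8)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, the receiver needs no knowledge of the paths the packets took.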
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was the primary version brought into action for production within the ARPANET in 1983.
IPv4 addresses are 32-bit integers, usually expressed in dotted-decimal notation.
Example: 192.0.2.126 is an IPv4 address.
Parts of IPv4
Network part
The network part is the unique number assigned to the network. It also identifies the class of the network.
Host part
The host part uniquely identifies a machine on the network. This part of the IPv4 address is assigned to every host.
For every host on the same network the network part is identical, but the host part must differ.
Subnet number
This is an optional part of IPv4. Local networks with large numbers of hosts are divided into subnets, and subnet numbers are assigned to them.
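The split between network part and host part can be made concrete with a subnet mask. The sketch below uses Python's standard `ipaddress` module; the address and /24 mask are illustrative values.

```python
# Sketch: extracting the network and host parts of an IPv4 address by
# AND-ing the 32-bit address with a subnet mask.
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.126")
mask = ipaddress.IPv4Address("255.255.255.0")   # /24: first 24 bits = network part

network_part = ipaddress.IPv4Address(int(addr) & int(mask))
host_part = int(addr) & ~int(mask) & 0xFFFFFFFF

print(network_part)  # 192.0.2.0
print(host_part)     # 126
```

Every host on this subnet shares the network part 192.0.2.0, while the host part (here 126) differs per machine.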
IPv6
Internet Protocol Version 6 (IPv6) is a network layer protocol that enables data communications over a packet switched network. Packet switching involves the sending and receiving of data in packets between two nodes in a network. The working standard for the IPv6 protocol was published by the Internet Engineering Task Force (IETF) in 1998. The IETF specification for IPv6 is RFC 2460. IPv6 was intended to replace the widely used Internet Protocol Version 4 (IPv4) that is considered the backbone of the modern Internet. IPv6 is often referred to as the "next generation Internet" because of its expanded capabilities and its growth through recent large-scale deployments.
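IPv6's expanded address space (128-bit addresses instead of IPv4's 32 bits) can be seen with Python's `ipaddress` module. The address below is from the documentation prefix and is purely illustrative.

```python
# Sketch: an IPv6 address is a 128-bit value written as eight 16-bit hex
# groups; runs of zero groups can be compressed to "::".
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")

print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)   # 2001:db8::1
print(len(addr.packed))  # 16 bytes = 128 bits
```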
Address Resolution Protocol (ARP) is a communication protocol used to find the MAC (Media Access Control) address of a device from its IP address. This protocol is used when a device wants to communicate with another device on a Local Area Network or Ethernet.
Types of ARP
- Proxy ARP
- Gratuitous ARP
- Reverse ARP (RARP)
- Inverse ARP
Proxy ARP - Proxy ARP is a method by which a Layer 3 device responds to ARP requests on behalf of a target that is in a different network from the sender. A router configured for proxy ARP answers the request by mapping its own MAC address to the target IP address, so the sender believes it has reached the destination directly.
Gratuitous ARP - Gratuitous ARP is an ARP request that a host broadcasts for its own IP address, typically to detect duplicate addresses. If no ARP response is received, the host can assume no other node is using the address. If a response does arrive, another node is already using the IP address, indicating a conflict.
Reverse ARP (RARP) - A networking protocol used by a client system in a local area network (LAN) to request its IPv4 address from a RARP server. The network administrator maintains a table on the gateway router that maps each MAC address to its corresponding IP address.
Inverse ARP (InARP) - Inverse ARP is the inverse of ARP: it finds the IP address of a node from its data-link-layer address. It is mainly used in Frame Relay and ATM networks, where Layer 2 virtual-circuit addresses are obtained from Layer 2 signalling; when using these virtual circuits, the corresponding Layer 3 addresses must then be discovered.
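The ARP request described above has a fixed 28-byte body (per the RFC 826 field layout). The sketch below builds one with Python's `struct` module; the MAC and IP values are illustrative, and the Ethernet framing around the body is omitted.

```python
# Sketch: building the 28-byte body of an ARP request
# ("who has target_ip? tell sender_ip").
import struct

def arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware / protocol address lengths
        1,            # operation: 1 = request, 2 = reply
        sender_mac, sender_ip,
        b"\x00" * 6,  # target MAC unknown - that is what we are asking for
        target_ip,
    )

pkt = arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                  bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
assert len(pkt) == 28
```

A reply would carry operation code 2 and fill in the previously unknown MAC address.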
BOOTP
The Bootstrap Protocol is a networking protocol used by a client to obtain an IP address from a server. It was originally defined in RFC 951 and was designed to replace the Reverse Address Resolution Protocol (RARP), defined in RFC 903. The Bootstrap Protocol was intended to allow computers to find what they need to function properly after booting up. BOOTP uses a relay agent, which forwards packets from the local network using standard IP routing, allowing one BOOTP server to serve hosts on multiple subnets.
A DHCP Server is a network server that automatically provides and assigns IP addresses, default gateways and other network parameters to client devices. It relies on the standard protocol known as Dynamic Host Configuration Protocol or DHCP to respond to broadcast queries by clients.
Unicast Routing
Unicast means the transmission from a single sender to a single receiver. It is a point to point communication between sender and receiver. There are various unicast protocols such as TCP, HTTP, etc.
- TCP is the most commonly used unicast protocol. It is a connection-oriented protocol that relies on acknowledgements from the receiver side.
- HTTP stands for Hyper Text Transfer Protocol. It is an application-layer request-response protocol that runs on top of TCP.
There are three major protocols for unicast routing:
- Distance Vector Routing
- Link State Routing
- Path-Vector Routing
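The first of these, distance-vector routing, can be sketched as repeated Bellman-Ford relaxation: each node updates its distance estimates from its neighbours' advertised distances until nothing changes. The 4-node topology below is an illustrative example, not from the text.

```python
# Sketch of distance-vector routing on a small illustrative topology.
# cost[u][v] = link cost between directly connected nodes u and v
cost = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
INF = float("inf")
nodes = list(cost)

# dist[u][x] = u's current estimate of its distance to x
dist = {u: {x: (0 if x == u else cost[u].get(x, INF)) for x in nodes}
        for u in nodes}

changed = True
while changed:                      # iterate until no estimate improves
    changed = False
    for u in nodes:
        for x in nodes:
            # best distance via any neighbour v: cost(u,v) + dist_v(x)
            best = min([dist[u][x]] +
                       [cost[u][v] + dist[v][x] for v in cost[u]])
            if best < dist[u][x]:
                dist[u][x] = best
                changed = True

print(dist["A"]["D"])  # 4: the cheapest A->D path is A-B-C-D = 1+2+1
```

In a real network each node only exchanges its vector with direct neighbours; the loop above simulates those exchanges until the tables converge.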
TCP provides process to process communication, i.e., the transfer of data takes place between individual processes executing on end systems. This is done using port numbers or port addresses.
The role of the Transport layer (Layer 4) is to establish a logical end-to-end connection between two systems in a network. The protocols used at the Transport layer are TCP and UDP.
All transport-layer protocols provide a multiplexing/demultiplexing service. A transport protocol may also provide other services, such as reliable data transfer, bandwidth guarantees, and delay guarantees. Each application in the application layer can send a message using either TCP or UDP.
User Datagram Protocol (UDP) – a communications protocol that facilitates the exchange of messages between computing devices in a network. It's an alternative to the transmission control protocol (TCP). In a network that uses the Internet Protocol (IP), it is sometimes referred to as UDP/IP.
UDP is commonly used for applications that are “lossy” (can handle some packet loss), such as streaming audio and video. It is also used for query-response applications, such as DNS queries.
The most common UDP packets, DNS registrations and name-resolution queries, are sent to port 53. TCP, in contrast, is used by connection-oriented protocols: the network endpoints must establish a channel between them before they transmit messages.
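UDP's connectionless style is visible directly in the standard socket API: the sender simply transmits a datagram to an address, with no handshake. The loopback address and an OS-chosen port make this sketch self-contained.

```python
# Sketch: a one-datagram UDP exchange over loopback.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"query", ("127.0.0.1", port))   # no connection setup needed

data, addr = recv.recvfrom(1024)
print(data)   # b'query'
recv.close()
send.close()
```

Note that nothing here guarantees delivery; if the datagram were lost, the application itself would have to retry, which is exactly the trade-off query-response protocols like DNS accept.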
Transmission Control Protocol (TCP) – a connection-oriented communications protocol that facilitates the exchange of messages between computing devices in a network. It is the most common protocol in networks that use the Internet Protocol (IP); together they are sometimes referred to as TCP/IP.
TCP allows for transmission of information in both directions. This means that computer systems that communicate over TCP can send and receive data at the same time, similar to a telephone conversation. The protocol uses segments (packets) as the basic units of data transmission.
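The connection setup and bidirectional exchange described above can be sketched with the standard socket API over loopback; the echo behaviour is an illustrative choice, not part of TCP itself.

```python
# Sketch: a TCP client/server pair over loopback; connect() must complete
# before any data flows, and both sides can send on the same connection.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()            # completes the three-way handshake
    conn.sendall(conn.recv(1024).upper())   # echo back, uppercased
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello tcp")
reply = client.recv(1024)
print(reply)   # b'HELLO TCP'
client.close()
t.join()
server.close()
```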
Stream Control Transmission Protocol (SCTP) is a transport-layer protocol that ensures reliable, in-sequence transport of data. SCTP provides multihoming support where one or both endpoints of a connection can consist of more than one IP address. This enables transparent failover between redundant network paths.
The SCTP sender splits user messages into DATA chunks and sends them to the receiver. The SCTP receiver acknowledges incoming data with SACK chunks. With an appropriate protocol extension, the sender can distinguish whether a SACK acknowledges the originally sent DATA chunk or a retransmitted one; this requires support from both the sender and the receiver.
A mobile network operator's most common use cases for SCTP security are roaming security and radio access network (RAN) security; GTP deployments also include roaming security and RAN security. The best practice is to configure both GTP and SCTP security when you have a roaming or RAN security use case.
QoS is a set of technologies that work on a network to guarantee its ability to dependably run high-priority applications and traffic under limited network capacity. QoS technologies accomplish this by providing differentiated handling and capacity allocation to specific flows in network traffic.
QoS or Quality of Service in networking is the process of managing network resources to reduce packet loss as well as lower network jitter and latency. QoS technology can manage resources by assigning the various types of network data different priority levels.
Quality of Service (QoS) also refers to certain characteristics of a data-link connection as observed between the connection endpoints. In this sense QoS describes the specific aspects of a data-link connection that are attributable to the DLS provider, and it is defined in terms of QoS parameters.
The leaky bucket algorithm is a method of temporarily storing a variable number of requests and organizing them into a set-rate output of packets in an asynchronous transfer mode (ATM) network. The leaky bucket is used to implement traffic policing and traffic shaping in Ethernet and cellular data networks.
The token bucket is an algorithm used in packet switched computer networks and telecommunications networks. It can be used to check that data transmissions in the form of packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow).
In the token bucket, when the bucket is full, arriving tokens are discarded, while in the leaky bucket, packets are discarded. The token bucket can send large bursts at a faster rate, while the leaky bucket always sends packets at a constant rate.
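The token bucket's behaviour can be sketched in a few lines: tokens accumulate at a fixed rate up to the bucket capacity, and a packet conforms only if enough tokens are available. The rate and capacity values are illustrative.

```python
# Sketch of a token bucket policer: a full bucket permits a burst up to
# `capacity`; sustained traffic is limited to `rate` tokens per second.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # bucket size: bounds the largest burst
        self.tokens = capacity
        self.last = 0.0

    def allow(self, size: float, now: float) -> bool:
        # refill tokens for the elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size   # conforming packet: spend tokens
            return True
        return False              # nonconforming: drop (police) or queue (shape)

bucket = TokenBucket(rate=100.0, capacity=300.0)
print(bucket.allow(300, now=0.0))  # True: a full bucket permits a large burst
print(bucket.allow(100, now=0.5))  # False: only 50 tokens accrued in 0.5 s
print(bucket.allow(100, now=1.0))  # True: 100 tokens accrued by t = 1.0
```

A leaky bucket differs only in the output discipline: instead of spending tokens on bursts, it drains queued packets at a single constant rate.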