Sachidananda Kangovi , in Peering Carrier Ethernet Networks, 2017
3.1 Ethernet Protocol for Data Link Layer
Metcalfe and Boggs started work on the development of Ethernet around 1973 and published the Ethernet protocol in 1976 11 to connect computers in a building or on a campus to form a LAN using coaxial cables. In 1979 they collaborated with a consortium of Digital Equipment Corporation, Intel, and Xerox Corporation, known in short as the DIX consortium, to promote and standardize this Ethernet protocol. The original design, signed by Metcalfe himself, is shown in Fig. 3.1A. This network was originally called the Alto Aloha Network. Its name was changed to Ethernet to make it clear that the system could support any computer and not just Alto computers: just as "ether" carried electromagnetic waves to all radio stations, coaxial cables carried bits to all computers. The DIX specification was submitted to the IEEE in 1980, and a second version, designated Ethernet II, was submitted in 1982. 32 This implementation is shown in Fig. 3.1B, which is widely available on the Internet and is presumed to have been drawn by Metcalfe, although despite our efforts that could not be established with certainty. The Ethernet II frame format is shown in Fig. 3.1C.
Figure 3.1. Original Ethernet design and Ethernet II frame format. (A) Sketch of original Ethernet design drawn by Metcalfe. (B) Schematic of Ethernet II implementation. (C) Ethernet II frame format.
Image in Fig. 3.1A is courtesy of PARC, a Xerox Company (http://www.parc.com/content/news/media-library/historical_ethernet_composite_sketch_1973withcc_1.2x8.7_parc.jpg). As shown in Fig. 3.1B, in this implementation the end stations were connected by tapping the shared medium and using transceivers at the taps. Each transceiver was connected by an interface cable to an interface on the end station, behind which sat a controller. The bits coming from the end station were arranged into Ethernet frames by the controller. The Ethernet II frame format, shown in Fig. 3.1C, has the address of the receiving end station in the 6-byte–long destination address (DA) field and the address of the transmitting end station in the 6-byte–long source address (SA) field. A 2-byte–long field called Type identifies the type of application using the frame. This is followed by the payload field, which carries the actual data to be transmitted. The minimum length of the payload is 46 bytes, and the maximum is 1500 bytes; if the actual payload is shorter than 46 bytes, padding is added so that the minimum payload size of 46 bytes is maintained. Finally, there is a 4-byte–long frame check sequence (FCS) field generated by a special algorithm. These FCS bits are checked by the receiving end station to assure the integrity of the received frame. The Ethernet II frame is preceded by a 64-bit–long preamble consisting of 62 bits of alternating 1s and 0s followed by a 2-bit–long sync character of "11."
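The field layout just described can be sketched in a few lines of Python. This is a minimal illustration, not a real networking library: the function name and the zero-byte padding convention are assumptions, and `zlib.crc32` stands in for the FCS generator (it uses the same CRC-32 polynomial as the Ethernet FCS, though real hardware handles bit ordering and complementing in its own way).

```python
# Illustrative sketch of Ethernet II frame assembly (names are hypothetical).
import struct
import zlib

MIN_PAYLOAD = 46    # bytes; shorter payloads are padded up to this
MAX_PAYLOAD = 1500  # bytes

def build_ethernet_ii_frame(dst: bytes, src: bytes, ethertype: int,
                            payload: bytes) -> bytes:
    """Assemble DA (6 B) + SA (6 B) + Type (2 B) + payload + FCS (4 B)."""
    assert len(dst) == 6 and len(src) == 6
    assert len(payload) <= MAX_PAYLOAD
    if len(payload) < MIN_PAYLOAD:                 # pad to the 46-byte minimum
        payload = payload + b"\x00" * (MIN_PAYLOAD - len(payload))
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))  # stand-in for FCS
    return header + payload + fcs

frame = build_ethernet_ii_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                                0x0800, b"hello")
print(len(frame))  # 64: the minimum frame size (6 + 6 + 2 + 46 + 4 bytes)
```

Note how a 5-byte payload still yields a 64-byte frame because of the padding rule.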
The controller shown in Fig. 3.1B was named the data link layer in the DIX specification, and the interface was called the physical layer. The DIX specification also divides the data link layer into four functional components, namely transmit (Tx) data encapsulation, Tx link management, receive (Rx) link management, and Rx data decapsulation. Similarly, the physical layer has four functional components, namely Tx data encoding, Tx channel access, Rx channel access, and Rx data decoding. For the sake of brevity, the following description covers only the data link layer and the physical layer. For additional details, please refer to the DIX specification. 32
The process starts when the transmitting end station requests the transmission of a frame and the data link layer of the transmitting end station constructs the frame from the data supplied by the end station and appends a frame check sequence to provide for error detection. The data link layer then attempts to avoid contention with other traffic on the channel by monitoring the carrier sense signal and deferring to passing traffic. When the channel is clear, frame transmission is initiated after a brief delay called interframe gap (IFG) of 9.6 μs to provide recovery time for other data link layers and for the physical channel. The data link layer then provides a serial stream of bits to the physical layer for transmission.
The physical layer performs the task of actually generating the electrical signals on the medium which represent the bits of the frame. The physical layer also monitors the medium and generates the collision detect signal, which, in the contention-free case, remains off for the duration of the frame. It is the physical layer that provides the clock to the data link layer for transmitting bits, and it is the physical layer that converts bits into electrical signal and puts that signal on the interface cable to be transmitted to the transceiver. The transceiver provides the functional electrical and mechanical interface to the shared medium.
The physical layer, before sending the actual bits of the frame, sends the encoded first 62 bits of the preamble to allow the receivers and repeaters along the channel to synchronize their clocks and other circuitry. The preamble is then followed by 2-bit–long sync character to indicate that the Ethernet frame would follow next. It then begins translating the bits of the frame into encoded form and passes them to the transceiver for actual transmission over the medium. When transmission has completed without contention, the data link layer informs the transmitting end station and awaits the next request for frame transmission.
At the receiving end station, the arrival of a frame is first detected by the physical layer, which responds by synchronizing with the incoming preamble and by turning on the carrier sense signal. As the encoded bits arrive from the medium, they are decoded in order to translate the signal back into binary data. The leading bits, up to and including the end of the preamble, are discarded. The receiving end station's physical layer then passes remaining bits to the data link layer.
Meanwhile, the receiving data link layer, having seen carrier sense go on, has been waiting for the incoming bits to be delivered. Receiving data link layer collects bits from the physical layer as long as the carrier sense signal remains on. When the carrier sense signal goes off, the frame is decapsulated for processing.
After decapsulation, the receiving data link layer checks the frame's DA field to decide whether the frame should be received by this station. If so, it passes the contents of the frame to the end station along with an appropriate status code. The status code is generated by inspecting the frame check sequence to detect any damage to the frame en route and by checking for proper octet-boundary alignment of the end of the frame.
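The receiver's two checks, octet alignment and the FCS, can be sketched as follows. The function name and status strings are illustrative, and `zlib.crc32` again stands in for the FCS computation (same CRC-32 polynomial; real hardware differs in bit-level details).

```python
# Hypothetical sketch of the receiver's status-code generation.
import struct
import zlib

def frame_status(bit_count: int, frame: bytes) -> str:
    """Check octet-boundary alignment, then the trailing 4-byte FCS."""
    if bit_count % 8 != 0:
        return "alignment error"        # frame did not end on an octet boundary
    body, fcs = frame[:-4], frame[-4:]
    if struct.unpack("<I", fcs)[0] != zlib.crc32(body):
        return "FCS error"              # frame was damaged en route
    return "ok"

body = bytes(60)                        # DA + SA + Type + padded payload
frame = body + struct.pack("<I", zlib.crc32(body))
print(frame_status(len(frame) * 8, frame))                 # ok
print(frame_status(len(frame) * 8, b"\x01" + frame[1:]))   # FCS error
```

Corrupting any single byte of the body changes the CRC-32, which is exactly why the FCS catches in-flight damage.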
If multiple stations attempt to transmit at the same time, it is possible for their transmitting data link controllers to interfere with each other's transmissions, in spite of their attempts to avoid this by deferring. When two stations' transmissions overlap, the resulting contention is called a collision. A given station can experience a collision during the initial part of its transmission called the "collision window," before its transmitted signal has had time to propagate to all parts of the Ethernet channel. Once the collision window has passed, the end station is said to have acquired the channel and subsequent collisions are avoided, since all other properly functioning end stations can be assumed to have noticed the signal via carrier sense and to be deferring to it. The time to acquire the channel is thus based on the round-trip propagation time of the physical channel.
In the event of a collision, the transmitting station's physical layer first notices the interference on the channel and turns on the collision detect signal. This is noticed in turn by the transmitting data link layer, and collision handling begins. The transmitting data link layer enforces the collision by transmitting a bit sequence called the jam, typically 32 bits long, before initiating a backoff algorithm. The jam is just the part of a frame that the end station managed to transmit before the collision occurred. If the collision occurs during the preamble, the jam is appended to the 64-bit preamble, making a total length of 96 bits, which takes 9.6 μs at 10 Mbps; this is where the IFG of 9.6 μs comes from. If needed, the transmitting data link layer can use a higher IFG with a corresponding decrease in maximum throughput; however, the IFG cannot exceed 10.6 μs. This ensures that the duration of the collision is sufficient to be noticed by the other transmitting station(s) involved in the collision. After the jam is sent, the transmitting data link layer terminates the transmission and schedules a retransmission attempt for a randomly selected time in the near future. Since collisions indicate a busy channel, the transmitting data link layer attempts to adjust to the channel load by voluntarily delaying its own retransmissions, expanding the interval from which the random retransmission time is selected on each attempt. The retransmission interval is computed using the truncated binary exponential backoff algorithm. Here, the station always waits for some multiple "k" of a 51.2-μs time interval known as a slot time. After the first collision, k is chosen at random from the set (0, 1), and the station waits for that number of slot times.
If there is another collision, it waits again, but this time for a number k chosen from the set (0, 1, 2, 3). After three collisions, k is chosen from the set (0, 1, 2, 3, 4, 5, 6, 7). In general, after n collisions on the same transmission, the station chooses its number randomly from 0 to 2^n − 1, until n = 10, when the set is frozen. Eventually, either the transmission succeeds or the attempt is abandoned after 15 unsuccessful attempts, the so-called attempt limit, and the transmitting data link layer gives up and reports a failure to the transmitting end station, because either the channel has failed or it has become overloaded.
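The truncated binary exponential backoff described above can be sketched in a few lines; the function name is illustrative, and the slot-time constant comes from the text.

```python
# Sketch of truncated binary exponential backoff (names are illustrative).
import random

SLOT_TIME_US = 51.2                     # one slot time = 512 bit times at 10 Mbps

def backoff_slots(n_collisions: int) -> int:
    """After n collisions, pick k uniformly from 0 .. 2**n - 1,
    with the interval frozen once n reaches 10."""
    n = min(n_collisions, 10)
    return random.randint(0, 2 ** n - 1)

# After the 3rd collision the wait is drawn from 0..7 slot times:
waits = {backoff_slots(3) for _ in range(2000)}
print(all(0 <= w <= 7 for w in waits))  # True
print(7 * SLOT_TIME_US)                 # worst case after 3 collisions: 358.4 us
```

The doubling of the interval is what lets stations adapt to channel load without any central coordination.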
At the receiving end station, the bits resulting from a collision are received and decoded by the receiving physical layer just as are the bits of a valid frame. The receiving physical layer is not required to assert the collision detect signal during frame reception, although an assertion of the collision detect signal does indicate a true collision in the physical layer. Rather, the fragmentary frames received during collisions are distinguished from valid frames by the receiving data link layer, because a collision fragment is always smaller than the shortest valid frame. Such fragments are discarded by the receiving data link layer.
This DIX proposal 32 was approved with minor changes as the IEEE 802.3 standard 25 in 1983, and the standard was designated 10Base5 because it provided 10 Mbps over a thick coaxial cable, the same as used in Ethernet II, where the signal could be driven up to 500 m. It is important to understand this nomenclature, as it will be used for designating the variations of the Ethernet standard that have evolved since 1983. The "10" in 10Base5 stands for 10-Mbps bandwidth; "Base" stands for baseband, meaning the entire medium is used for transmission, in contrast to broadband, where the medium is divided into several channels at different frequencies for simultaneous transmission of signals; and "5" indicates that the signal could travel up to 500 m before attenuating, which is why the limit was 500 m. If a greater distance was required, regeneration was needed, and up to four repeaters could be used to regenerate and extend the Ethernet, resulting in a maximum distance of 2500 m. The standard is designated IEEE 802.3 because IEEE's subcommittee dealing with LANs began its work on LAN standards in February 1980; therefore, all standards coming out of this subcommittee are numbered starting with 802. The work of this subcommittee was further subdivided into three categories: all LAN-related specifications started with 802.1, all logical link control (LLC) sublayer–related specifications started with 802.2, and all media access control (MAC) sublayer–related specifications started with 802.3. In 1984, when the OSI seven-layer model was published, Ethernet was recognized as one of the protocols for layers 1 and 2 of LANs. Since then, it has become the most dominant protocol for these two layers.
Based on the DIX proposal, the IEEE 802.3 standard required that the entire frame length be from a minimum of 64 bytes to a maximum of 1518 bytes. To understand how the minimum frame size of 64 bytes was arrived at, let us examine the end-to-end one-way delay time and then arrive at the round-trip end-to-end delay time using the topology of a 10Base5 Ethernet LAN shown in Fig. 3.2.
Figure 3.2. Topology of a 10Base5 Ethernet LAN. (A) Topology for explaining end-to-end one way delay time. (B) Topology of 10Base5 for end-to-end round-trip delay time audit.
In order to understand the end-to-end delay time, a simplified topology is shown in Fig. 3.2A. Here, the end-to-end delay time is the sum of the serialization time in the interface cable at the source; the processing, queuing, and serialization times at repeater 1; the propagation time in the long coaxial cable plus any point-to-point link; the serialization time at repeater 2; and finally the serialization time on the interface cable at the destination. The Ethernet II specification recommended that the slot time be twice the one-way delay time, in other words, equal to the round-trip delay time, to ensure that the source can detect a collision while it is still transmitting. To get this round-trip delay time, the topology 32 of 10Base5 shown in Fig. 3.2B was used. This topology is based on an interface cable length of 50 m and 500 m of medium in each LAN segment. Repeaters connected the LAN segments so that the signal was not attenuated between sending end station 1 and receiving end station n. The round-trip delay time was obtained by adding the time taken, as described previously, to construct a frame, the time taken for serialization of the frame, and the propagation time of the electrical signal. The audit shown in Table 3.1 is from the DIX specification 32 for Ethernet II and was based on the assumption that the electrical signal travels at 0.77 times the speed of light, which comes to about 0.77 × 300,000,000 = 231,000,000 m/s. The DIX specification 32 also specifies a minimum propagation speed of 0.65 times the speed of light, which comes to about 195,000,000 m/s. Based on this audit, the total round-trip delay for the worst-case scenario comes to 46.38 μs. At 10-Mbps bandwidth, this translates to about 464 bits, which was rounded up to 512 bits to be safe. A total of 512 bits is equal to 64 bytes. That is how the minimum size of 64 bytes, or 512 bits, was chosen.
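The arithmetic above is easy to verify; this is a back-of-the-envelope check using the values quoted in the text, not an independent derivation.

```python
# Checking the minimum-frame-size arithmetic from the DIX worst-case audit.
ROUND_TRIP_US = 46.38        # worst-case round-trip delay from Table 3.1
BIT_RATE = 10_000_000        # 10 Mbps

bits_in_flight = ROUND_TRIP_US * 1e-6 * BIT_RATE
print(round(bits_in_flight))         # 464 bits must still be "on the wire"
print(512 // 8)                      # rounded up to 512 bits = 64-byte minimum
```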
Table 3.1. Round-Trip Propagation Delay Time
| Item | Round-Trip Delay Time (μs) |
|---|---|
| Encoder | 2.0 |
| Transceiver cable | 3.08 |
| Transceiver transmit path | 2.1 |
| Transceiver receive path | 1.95 |
| Transceiver collision path | 2.7 |
| Coaxial cable | 12.99 |
| Point-to-point link | 10.26 |
| Point-to-point link driver | 0.40 |
| Point-to-point link receiver | 0.40 |
| Repeater path | 0.80 |
| Repeater collision path | 0.80 |
| Carrier sense | 1.0 |
| Collision detect | 1.0 |
| Signal rise time | 6.3 |
| Collision fragment time tolerance | 0.2 |
| Total worst case delay | 46.38 |
That minimum frame size at 10 Mbps gives a slot time of 51.2 μs, which ensures that the tail of the Ethernet frame has still not left the sending end station when the head reaches the receiving end station; the receiver can then detect a collision and send a collision detect signal that reaches the sending end station while it is still transmitting, so that it can stop and invoke the backoff algorithm. The maximum frame size of 1518 bytes, on the other hand, was determined by two factors. The first was that overly long packets introduce extra delays for other traffic using the shared Ethernet medium. The second was a safety device built into the early shared-cable transceivers: an antibabble system. If the device connected to a transceiver developed a fault and started transmitting continuously, it would effectively block any other traffic from using that Ethernet cable segment. To protect against this, the early transceivers were designed to shut off automatically if a transmission exceeded about 1.25 ms, which equates to a data content of just over 12,500 bits, or about 1563 bytes, at 10 Mbps. However, because the transceiver used a simple analog timer to shut off the transmission when babbling was detected, a limit of 1518 bytes, which includes 1500 bytes of payload and 18 bytes of Ethernet header and FCS, was selected as a safe approximation to the maximum size that would not trigger the safety device. A detailed analysis of the measured capacity can be found in a report 33 by Boggs et al.
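The slot-time and antibabble numbers above can be checked in the same back-of-the-envelope style, using only the figures quoted in the text.

```python
# Checking the slot-time and antibabble arithmetic (values from the text).
BIT_RATE_MBPS = 10               # 10 Mbps = 10 bits per microsecond

print(512 / BIT_RATE_MBPS)       # 51.2 us slot time for the 512-bit minimum frame
print(1250 * BIT_RATE_MBPS)      # 12,500 bits fit in the ~1.25 ms shut-off window
print(1250 * BIT_RATE_MBPS // 8) # about 1562 bytes: the antibabble upper bound
print(1500 + 6 + 6 + 2 + 4)      # 1518-byte limit chosen (payload + DA + SA + Type + FCS)
```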
The IEEE 802.3 design and frame format is shown in Fig. 3.3 along with mapping to the original design as well as to the layer 1 and 2 of the OSI seven-layer model.
Figure 3.3. IEEE 802.3-approved design and frame format. (A) IEEE 802.3 design of Ethernet interface. (B) IEEE 802.3-approved Ethernet frame format.
This standard divided the data link layer into two sublayers, called the LLC layer and the MAC layer. LLC allows many higher level protocols defined by the IEEE 802.1 standard to share and use the same MAC layer. The IEEE 802.2 standard defines the LLC layer. LLC uses service access points (SAPs), which represent the higher network layer protocols being supported by the frames assembled by the MAC layer. There are a destination SAP (DSAP) and a source SAP (SSAP), and together the DSAP and SSAP are called the link SAP. Information stored in the SAP fields also determines the type of connection: (1) unacknowledged connectionless, (2) connection oriented, or (3) acknowledged connectionless. In other words, LLC has four main functions:
1. indicates the higher layer protocol using frames at layer 2,
2. provides flow control,
3. provides error recovery, and
4. provides recovery from loss of connection.
In Ethernet II, the SAP fields were referred to by the EtherType or Type field (Fig. 3.1). There is another variation of the LLC protocol in which SAP is replaced by the subnetwork access protocol (SNAP), designed to support the extended addressing capabilities of the transmission control protocol/internet protocol (TCP/IP) and AppleTalk protocol stacks. The LLC field is 1 to 2 bytes long, and its length depends on the higher layer protocol. The IEEE 802.3 standard moved the LLC header into the payload field and replaced the Type field with a Length field in the Ethernet frame, as shown in Fig. 3.3, to indicate the total length of the payload.
The MAC layer portion of the layer 2 (data link layer) constructs frames for transmission and analyzes the received frames. MAC layer also determines when the end station can access physical medium and how to access it. MAC functionality is specified by IEEE 802.3 standard. This layer provides a means for the network interface card (NIC) on the end station to access the physical medium.
The physical layer, which is OSI layer 1, on the other hand, is responsible for clocking, encoding the bits into electrical signals, and pinouts. Fig. 3.3 shows that the IEEE 802.3 standard divided the physical layer into an upper physical layer signaling (PLS) sublayer and a lower physical medium attachment (PMA) sublayer. Between the PMA and the medium, there is a medium-dependent interface (MDI) sublayer, which includes the connectors. The PMA and MDI sublayers together are known as the medium attachment unit (MAU). The MAU attaches directly to the medium, transmits and receives signals from the medium, and identifies collisions. The PLS sublayer is responsible for generating and detecting the Manchester code, which ensures that clocking information is transmitted along with the data. The interface between the MAU and PLS sublayers is known as the attachment unit interface (AUI). The AUI in the 10Base5 implementation is an interface cable up to 50 m long carrying five twisted pairs, connecting the station's NIC (which implements the MAC and PLS) to the MAU. In the 10Base2 standard, which evolved in 1985 for thin coaxial cable, and in 10Base-T, which came out in 1990 for Ethernet over twisted-wire pairs, the MAU and AUI are integrated into the NIC, which connects directly to the medium. The standard ensured that the MAC sublayer is unchanged in all variations of 10-Mbps 802.3, and its PDUs, or frames, have a simple structure, shown in Fig. 3.3.
The IEEE 802.3 standard replaced the 2-bit–long sync with a 1-byte (8-bit)–long start frame delimiter (SFD) while still keeping the overall preamble at 8 bytes. The preamble in this standard consists of 7 bytes of the form 10101010 and is used by the receiver to establish bit synchronization, because there is no clocking information on the medium when nothing is being sent. The SFD is a single byte, 10101011, which is a frame flag indicating the start of a frame.
The MAC addresses used in 802.3 are always 48 bits (6 bytes) long. Each NIC has its own unique address embedded into a read-only memory chip on the NIC itself. Although the address is hard coded on the NIC, there is also an option for locally administered addresses. The embedded address has two components: 3 bytes assigned by the IEEE to identify the NIC manufacturer and 3 bytes assigned by the manufacturer to each individual NIC. Within the first component, individual bits of the first byte are reserved to indicate broadcast or multicast transmission and local administration, and the remaining bits identify the manufacturer. As a result of these two components, each NIC has a unique address. By normal convention, Ethernet addresses are quoted as a sequence of 6 bytes in hexadecimal, with each byte written in normal order but transmitted least significant bit first; this arrangement is driven by the transmission order. The mechanism to distinguish universally administered and locally administered addresses is based on the second least significant bit of the most significant byte of the address. This bit is referred to as the U/L bit, short for universal/local bit, which identifies how the address is administered: if the bit is 0, the address is universally administered; if it is 1, the address is locally administered. For example, in the address 06-00-00-00-00-00, the most significant byte is 06 (hexadecimal), the binary form of which is 00000110, where the second least significant bit is 1; therefore, it is a locally administered address. There is also a mechanism to distinguish unicast, multicast, and broadcast frames. If the least significant bit of the most significant octet of a DA is set to 0 (zero), the frame is meant to reach only one receiving NIC, and this type of transmission is called unicast. If the least significant bit of the most significant octet is set to 1, then the frame is a multicast transmission.
If all 48 bits of the DA are set to 1 (FF-FF-FF-FF-FF-FF), then the frame is a broadcast to all stations on the local network.
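The bit tests above can be sketched as follows; the helper name is illustrative, not from any real library.

```python
# Hypothetical sketch of the I/G and U/L bit checks described above.
def describe_mac(addr: bytes) -> tuple:
    """Classify a 6-byte MAC address by the two low bits of its first octet."""
    first = addr[0]
    cast = "multicast" if first & 0x01 else "unicast"   # I/G bit (LSB)
    admin = "local" if first & 0x02 else "universal"    # U/L bit (second LSB)
    if addr == b"\xff" * 6:
        cast = "broadcast"                              # all 48 bits set to 1
    return cast, admin

print(describe_mac(bytes.fromhex("060000000000")))  # ('unicast', 'local')
print(describe_mac(bytes.fromhex("010000000000")))  # ('multicast', 'universal')
print(describe_mac(b"\xff" * 6)[0])                 # broadcast
```

The 06-00-00-00-00-00 example from the text comes out as a locally administered unicast address, matching the binary analysis above.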
The Length field is the only one that differs between the 802.3 and Ethernet II specifications. In 802.3, it indicates the number of bytes of data in the frame's payload, which can be anything from 0 to 1500 bytes. Frames must be at least 64 bytes long, not including the preamble, so if the data field is shorter than 46 bytes, it must be compensated by padding. The reason for specifying a minimum length, as explained before, lies in the collision detection mechanism. Referring to Fig. 3.3, the minimum frame includes a minimum payload of 46 bytes plus Ethernet headers and FCS of 18 bytes, giving a total of 46 + 18 = 64 bytes = 512 bits.
The last field in the IEEE 802.3 Ethernet frame format is the FCS field, which is 4 bytes long and is based on a cyclic redundancy check (CRC-32) polynomial code.
Connecting end stations to a shared coaxial cable was not easy, and if one end station was removed, the whole LAN would become inaccessible to the other end stations. In 1985, IEEE issued specification 802.3c for 10-Mbps repeaters, or hubs, to address these issues with the coaxial cable–based shared medium. With the release of the 10Base-T standard in 1990, it became possible to implement Ethernet-based LANs on commonly available unshielded twisted-pair (UTP) telephone wires, connecting end stations to a hub using RJ45-style connectors. This new standard made Ethernet LAN implementation much simpler and faster, which led to a rapid increase in Ethernet adoption in LANs. Fig. 3.4 shows the schematic of both the shared medium and hub-based LANs.
Figure 3.4. Schematic diagram of (A) shared medium and (B) hub-based LAN.
A hub is a multiport device that functions like a shared medium, but it is easier to deploy and improves signal quality over greater distances. It has no layer 2 functionality, in the sense that it does not look at layer 2 frames. It is basically a layer 1 device that transmits bits and simply broadcasts them through a shared electrical bus, or backplane, to all end stations connected to it; in effect, it is a multiport repeater. Although the use of separate wire pairs allowed separation of the transmitted signal from the received signal, the hub's backplane acted as a shared medium, so collisions were still an issue that the MAC layer had to deal with by monitoring and implementing the backoff algorithm. Fig. 3.4 shows that the use of a hub did not change the collision and broadcast domains.
As Ethernet LANs grew in size, collision issues became critical. This led to breaking up larger LANs into smaller ones, which in turn led to the desire to interconnect these smaller LANs. The problem was resolved by the development of the bridge. The bridge was built by implementing software on a generic hardware platform. Because of this software-driven intelligence, a bridge is a layer 2 device in the sense that it is able to analyze Ethernet frames and forward them to the destination end stations only. Also, because of the RJ45-based connection using UTP wire pairs, a bridge supports full-duplex (FDX) mode (ignoring the first 4 years or so of bridges) rather than the half-duplex (HDX) mode used with the shared medium. For these reasons, the MAC layer on the end stations is not required to run carrier sense multiple access/collision detection (CSMA/CD) for collision detection. A bridge is also capable of buffering frames, which allows multiple end stations to transmit at the same time. A bridge connects two or more hubs to create a larger LAN; end stations can also be directly connected to a bridge. Each subnetwork of the larger, or aggregate, LAN is called a LAN segment. Every port on a bridge has a MAC address, but unlike an end station, which only accepts frames addressed to it, a port on the bridge accepts all frames even if they are not addressed to it. A bridge implements MAC address learning, which allows it to gradually build a forwarding database consisting of the MAC addresses of all the end stations connected on the LAN and the ports on the bridge by which they can be reached. It builds this table by examining each Ethernet frame it receives and storing the source MAC address and the port on which the frame arrived. When the bridge receives an Ethernet frame for a destination that is in the forwarding database, it sends the frame to the port from which the destination can be reached. Such frames are known as known unicast frames.
If a destination is reachable on the same port as the source, then the bridge discards the frame, because it would have already been broadcast by the hub to which both the source and destination are attached. On the other hand, if the bridge receives an Ethernet frame for a destination that is not in the forwarding database, it broadcasts the frame to all end stations except the one from which it received the frame, to avoid a loop back. This process of flooding unknown unicast frames allows the bridge to determine the port that reaches the destination.
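The learn-filter-forward-flood behavior described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the class and method names are hypothetical, and a fixed three-port bridge is assumed.

```python
# Minimal sketch of bridge MAC learning and forwarding (names are illustrative).
class Bridge:
    def __init__(self, n_ports: int = 3):
        self.ports = list(range(1, n_ports + 1))
        self.fdb = {}                     # forwarding database: MAC -> port

    def receive(self, port: int, src: str, dst: str) -> list:
        """Return the list of ports the frame is sent out on."""
        self.fdb[src] = port              # learn: src is reachable via this port
        out = self.fdb.get(dst)
        if out is None:                   # unknown unicast: flood all but in-port
            return [p for p in self.ports if p != port]
        if out == port:                   # dst on same segment: filter (discard)
            return []
        return [out]                      # known unicast: forward to learned port

b = Bridge()
print(b.receive(1, "A", "B"))  # [2, 3]  B unknown, so flood
print(b.receive(2, "B", "A"))  # [1]     A was learned on port 1
print(b.receive(1, "C", "A"))  # []      A is on the arrival port, so filter
```

The three calls walk through flooding, known-unicast forwarding, and local-traffic filtering, the three cases the text describes.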
By implementation of LAN segments, MAC learning and intelligently forwarding Ethernet frames, bridge improved the performance of the aggregate LAN by filtering traffic that is local to a LAN segment and forwarding nonlocal traffic to only the correct segment.
The bridge later evolved into the Ethernet switch, which improved performance further through dedicated hardware instead of software and a larger number of ports, so that each end station can be on its own port. Costs also fell with time, so that the Ethernet switch became an inexpensive device. The lines of distinction between a switch and a bridge have now gone away, and it is common to use the terms switch and bridge interchangeably. Fig. 3.5 shows a LAN arrangement based on a bridge and another based on an Ethernet switch; they are functionally the same. As seen from Fig. 3.5, the collision domains have now been broken up and are confined to individual LAN segments; however, the broadcast domain remains intact and is not broken up.
Figure 3.5. Schematic of (A) Bridge and (B) Ethernet switch–based LAN.
When Ethernet was HDX, data could be transmitted in only one direction at a time. With the development of FDX, this situation changed, and switches made it easier to exploit FDX mode. In a fully switched network, each node communicates only with the switch, not directly with other nodes. Information travels from node to switch and from switch to node simultaneously. Fully switched networks employ either twisted-pair or fiber optic cabling, both of which use separate media for transmitting and receiving data. In this type of environment, Ethernet end stations need not implement the collision detection process, since each station and the switch are the only devices that can access their link. In other words, traffic flowing in each direction has a lane to itself. This allows nodes to transmit to the switch as the switch transmits to them in a collision-free environment. Transmitting in both directions can effectively double the apparent speed of the network when two nodes are exchanging information: if, for example, the speed of the network is 10 Mbps, then each node can transmit simultaneously at 10 Mbps.
A switch is also capable of buffering frames. The switch establishes a connection between two segments just long enough to send the current frames. Incoming Ethernet frames are saved to a temporary memory area or buffer in the switch; the MAC address contained in the frame's header is read and then compared to a list of addresses maintained in the switch's forwarding database. For routing traffic, Ethernet switch uses one of three methods:
1. cut-through,
2. store-and-forward, and
3. fragment-free.
Cut-through switch reads the MAC address as soon as a frame is detected by the switch. After storing the 6 bytes that make up the address information, the switch immediately begins sending the frame to the destination node, even as the rest of the frame is coming into the switch. A switch using store-and-forward will save the entire frame to the buffer and check it for CRC errors or other problems before sending. If the frame has an error, it is discarded. Otherwise, the switch looks up the MAC address and sends the frame on to the destination node. Many switches combine the two methods, using cut-through until a certain error level is reached and then changing over to store-and-forward. Very few switches are strictly cut-through since this provides no error correction. A less common method is fragment-free. It works like cut-through except that it stores the first 64 bytes of the frame before sending it on. The reason for this is that most errors, and all collisions, occur during the initial 64 bytes of a frame.
Ethernet LAN switches vary in their physical design. Currently, there are three popular configurations in use:

1. A shared-memory switch stores all incoming frames in a common memory buffer shared by all the switch's input and output ports, and then sends each frame out via the correct output port to the destination node.
2. A matrix switch has an internal grid with the input ports and the output ports crossing each other. When a frame is detected on an input port, the MAC DA is compared to the lookup table in the forwarding database to find the appropriate output port; the switch then makes a connection on the grid where these two ports intersect so that the frame is sent to the destination node.
3. A bus-architecture switch, instead of using a grid, uses an internal transmission path consisting of a bus shared by all the ports using time-division multiple access. A switch based on this bus architecture has a dedicated memory buffer for each port and an application-specific integrated circuit (ASIC) to control internal bus access.
The development of the Ethernet switch was a big improvement, due to its implementation of learning, filtering, and intelligent forwarding, and it broke up collision domains by confining them to LAN segments; however, it still had issues stemming from one large broadcast domain. First, as the number of end stations increased, bandwidth was consumed by broadcast traffic, multicast traffic, and unknown unicast traffic. Second, a failure in any link would break the aggregate LAN, and communication between end stations connected by that failed link would stop. So there was a need for network protection, or redundancy, provided in a way that did not result in loops and the resultant broadcast storms.
The issue of redundancy was resolved by the Spanning Tree Protocol (STP), later enhanced as the Rapid STP (RSTP). The working of this protocol is shown in Fig. 3.6. Here, the aggregate LAN is divided into seven LAN segments. Let us assume that in the beginning, switches 2 and 3 are not connected by LAN segment 4. If node 1 wants to transmit frames to node 2, the only way is to send them through switch 1 via LAN segments 1 and 2.
Figure 3.6. Spanning Tree Protocol for redundancy in Ethernet LAN.
Now consider a situation where switch 1 has failed. In this case, there is no way for node 1 to transmit to node 2. To avoid this situation, we connect switches 2 and 3 with LAN segment 4. Now, even if switch 1 fails, frames can go through LAN segment 4. However, this causes an issue. To understand it, let us assume that switches 1, 2, and 3 are not aware of node 2. Once the frame comes from node 1, its source address is added to the forwarding database in each of the switches. Since these switches do not know about node 2, they will broadcast the frame to all the LAN segments connected to them. Since switch 3 gets these broadcasts from both switches 1 and 2, it will forward the broadcast coming from switch 1 to switch 2 and the broadcast coming from switch 2 to switch 1. This loop-back causes a broadcast storm as the frames are broadcast, received, and rebroadcast by each switch, resulting in potentially severe network congestion.
To avoid this broadcast storm while providing redundancy, the STP was developed by Digital Equipment Corporation and has since been standardized as the IEEE 802.1D specification. Essentially, a spanning tree uses the spanning tree algorithm (STA), which senses that the switch has more than one way to communicate with a node, determines which way is best, and blocks out the other path(s). It also keeps track of the other path(s), in case the primary path becomes unavailable. In this protocol, each switch is assigned a group of IDs, one for the switch itself and one for each port on the switch. The switch's identifier, called the bridge ID (BID), is 8 bytes long and contains a 2-byte bridge priority along with one of the switch's MAC addresses (6 bytes). Each port ID is 16 bits long, with two parts: a 6-bit priority setting and a 10-bit port number. Next, a path cost value is given to each port. The cost is typically based on a guideline established as part of IEEE 802.1D. According to the original specification, the cost is 1000 Mbps (1 gigabit per second) divided by the bandwidth of the segment connected to the port; therefore, a 10-Mbps connection would have a cost of 1000/10 = 100. To compensate for network speeds increasing beyond the gigabit range, the standard costs have been slightly modified. The new cost values are given in Table 3.2.
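The BID layout described above can be sketched as follows; because the 2-byte priority occupies the high-order bytes, plain byte-wise comparison ranks BIDs exactly as the election rule requires, lower priority first, then lower MAC address (the helper name and addresses are illustrative):

```python
def make_bid(priority: int, mac: bytes) -> bytes:
    """Build an 8-byte bridge ID: 2-byte priority followed by a 6-byte MAC."""
    assert 0 <= priority <= 0xFFFF and len(mac) == 6
    return priority.to_bytes(2, "big") + mac

bid_a = make_bid(32768, bytes.fromhex("00AABBCCDDEE"))
bid_b = make_bid(4096, bytes.fromhex("00AABBCCDDFF"))

# bid_b wins despite its higher MAC address, because its priority is lower.
assert min(bid_a, bid_b) == bid_b
```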
Table 3.2. Spanning Tree Cost Values
| Bandwidth | Spanning Tree Protocol Cost Value |
|---|---|
| 4 Mbps | 250 |
| 10 Mbps | 100 |
| 16 Mbps | 62 |
| 45 Mbps | 39 |
| 100 Mbps | 19 |
| 155 Mbps | 14 |
| 622 Mbps | 6 |
| 1 Gbps | 4 |
| 10 Gbps | 2 |
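Table 3.2 can be expressed as a simple lookup keyed by segment bandwidth in Mbps; a sketch:

```python
# Revised IEEE 802.1D spanning tree cost values from Table 3.2 (Mbps -> cost).
STP_COST = {4: 250, 10: 100, 16: 62, 45: 39, 100: 19,
            155: 14, 622: 6, 1_000: 4, 10_000: 2}

assert STP_COST[10] == 100     # the 10-Mbps example from the text
assert STP_COST[10_000] == 2   # 10 Gbps
```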
The path cost can also be an arbitrary value assigned by the network administrator, instead of one of the standard cost values. Each switch begins a discovery process to choose which network paths it should use for each segment. This information is shared between all the switches by way of special network frames called bridge protocol data units (BPDUs). The parts of a BPDU are as follows:
1. Root BID: This is the BID of the current root bridge.
2. Path cost to root bridge: This indicates how far away the root bridge is located. For example, if the data has to travel over three 1-Gbps segments to reach the root bridge, then the cost from Table 3.2 is 4 + 4 + 0, which comes to 8; the segment attached to the root bridge will normally have a path cost of 0.
3. Sender BID: This is the BID of the switch that sends the BPDU.
4. Port ID: This is the actual port on the switch that the BPDU was sent from.
All the switches initially send BPDUs to all their neighbor switches, trying to determine the best path between the various segments. When a switch receives a BPDU from another switch that is better than the one it is broadcasting for the same segment, it stops broadcasting its own BPDU for that segment. Instead, it stores the other switch's BPDU for reference and for broadcasting out to inferior segments, such as those that are farther away from the root bridge.

A root bridge is chosen based on the results of this BPDU process between the switches. Initially, every switch considers itself the root bridge. When a switch first powers up on the network, it sends out a BPDU with its own BID as the root BID. When the other switches receive the BPDU, they compare the BID to the one they already have stored as the root BID. If the new root BID has a lower value, they replace the saved one. But if the saved root BID is lower, a BPDU is sent to the new switch with this BID as the root BID. When the new switch receives this BPDU, it realizes that it is not the root bridge and replaces the root BID in its table with the one it just received. The result is that the switch with the lowest BID is elected by the other switches as the root bridge.

Based on the location of the root bridge, the other switches determine which of their ports has the lowest path cost to the root bridge. These ports are called root ports, and each switch (other than the current root bridge) must have one. Note that a bridge/switch has two or more ports: the port connected on the side where the root resides is the root port, while a port not facing the root but forwarding traffic at lowest cost from another segment is called a designated port. Next, the switches determine the designated ports. A designated port is the connection used to send and receive packets on a specific segment, and designated ports are selected based on the lowest path cost to the root bridge for that segment.
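The "better BPDU" ordering used throughout this process, lower root BID first, then lower path cost to the root, then lower sender BID, then lower port ID, maps naturally onto tuple comparison. A sketch, with made-up field values:

```python
def better_bpdu(a, b):
    """Each BPDU is (root_bid, root_path_cost, sender_bid, port_id);
    Python's tuple comparison checks the fields in exactly that order."""
    return a if a < b else b

bpdu_far = (100, 8, 200, 1)    # same claimed root, path cost 8
bpdu_near = (100, 4, 300, 2)   # same root, lower path cost -> better
assert better_bpdu(bpdu_far, bpdu_near) == bpdu_near

# A BPDU claiming a lower root BID beats any path cost advantage.
assert better_bpdu((99, 20, 400, 3), bpdu_near) == (99, 20, 400, 3)
```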
Since the root bridge has a path cost of 0, any ports on it that are connected to segments become designated ports. For the other switches, the path cost is compared for a given segment. If one port is determined to have a lower path cost, it becomes the designated port for that segment; if two or more ports have the same path cost, the switch with the lowest BID is chosen. Once the designated port for a network segment has been chosen, any other ports that connect to that segment become nondesignated ports. They block network traffic from taking that path, so the segment can be accessed only through the designated port. By having only one designated port per segment, all looping issues are resolved.
Each switch has a table of BPDUs that it continually updates. The network is now configured as a single spanning tree, with the root bridge as the trunk and all the other switches as branches. Each switch communicates with the root bridge through its root port and with each segment through the designated port, thereby maintaining a loop-free network. In the event that the root bridge begins to fail or has network problems, STP allows the other switches to immediately reconfigure the network with another switch acting as the root bridge. This STP process makes it possible to have a complex network that is fault tolerant and yet fairly easy to maintain.
This STP was enhanced in IEEE 802.1w and is called RSTP. In this enhancement, the reconfiguration time after a failure was reduced to 10 s from the 50 s of STP. RSTP also supports virtual LANs (VLANs).
The other problem, related to one large broadcast domain, was resolved by VLANs. Before VLANs, the only way to separate broadcast domains was to use routers; but routers are layer 3 devices, and processing at layer 3 increases latency. In addition to breaking up broadcast domains, VLANs provided other benefits as well. A VLAN is a logical broadcast domain: frames broadcast on a specific VLAN are forwarded only to nodes belonging to that VLAN. The initial implementations of VLANs on switches were proprietary and not based on standards, which hindered adoption; once VLAN implementation was standardized, adoption grew rapidly.
Standardization of VLANs traces its origin to the IEEE 802.1D standard, which describes a LAN that includes all end stations physically connected to it. In this standard, multicast frames are forwarded to all ports; there was no capability for a switch to determine whether the end station connected to a port needed the multicast frames or not. The IEEE 802.1p extension (the lowercase "p" indicates that this standard is only an extension of 802.1D and not a standalone standard) gave the Ethernet switch the capability to dynamically update the filtering database so that multicast frames are sent only to those ports whose attached end stations need them. This extension also provided a capability to prioritize frames, to expedite transmission of frames required by time-critical applications like voice communications and video conferencing. The specification that standardized VLANs was IEEE 802.1Q–2005, later enhanced by IEEE 802.1Q–2011. It extended the concepts of IEEE 802.1p to provide capabilities to define and support VLANs by defining VLAN tags for identification of VLAN membership and associated priority defined by class of service (CoS). This specification also defined an approach to extend VLANs between switches using trunk lines and to multiplex VLANs over these trunk lines using VLAN tagging. After this specification was issued, the Ethernet frame format was amended in 1998 by IEEE 802.3ac to account for the IEEE 802.1Q–defined VLAN tags. Fig. 3.7 shows the VLAN tag as defined by IEEE 802.1Q and the Ethernet frame that includes this VLAN tag. The VLAN tag is 4 bytes long. The first 2 bytes are called the tag protocol identifier (TPID), and as per 802.1Q–2005, it was set to hexadecimal 8100, represented as 0x8100, or could be set to 0x88a8 (the TPID of 0x88a8 was added in the 2011 version of the standard as a result of the 802.1ad amendment; it was not in the 2005 version).
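A receiver recognizes a tagged frame by the 2 bytes that follow the source address: a TPID of 0x8100 or 0x88a8 appears where an ordinary Length/Type value would otherwise be. A minimal sketch (the frame bytes are fabricated for illustration):

```python
def is_vlan_tagged(frame: bytes) -> bool:
    """Check the 2 bytes after DA (6 bytes) + SA (6 bytes) for a VLAN TPID."""
    tpid = int.from_bytes(frame[12:14], "big")
    return tpid in (0x8100, 0x88A8)

untagged = bytes(12) + (0x0800).to_bytes(2, "big")           # IPv4 EtherType
tagged = bytes(12) + (0x8100).to_bytes(2, "big") + bytes(2)  # C-tag after SA
assert is_vlan_tagged(tagged) and not is_vlan_tagged(untagged)
```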
Commonly, a VLAN tag with TPID set to 0x8100 is called a Customer VLAN tag, or C-tag, and one with TPID equal to 0x88a8 is called a Service Provider tag, or S-tag. It should be noted that an Ethernet frame can carry both a C-tag and an S-tag, as we will see in the next chapter. The 2-byte field following the TPID is called the Tag Control Information (TCI) field.
Figure 3.7. VLAN tag and the modified Ethernet frame format. (A) IEEE 802.1Q defined VLAN tag (B) IEEE 802.3ac defined Ethernet frame format to include VLAN tag.
This TCI is divided into three fields. The first, the Priority Code Point (PCP) field, is commonly known as the P-bits. It is 3 bits long, giving 2³, or eight, possible values, and is used to set the CoS value from 0 to 7, with 0 the lowest priority and 7 the highest, as defined in IEEE 802.1p. Next, there is a 1-bit canonical format identifier (CFI) field (this CFI field was eliminated from the 0x8100 tag in the 2011 standard and never existed in the 0x88a8 tag; in both cases, it is now designated the Drop Eligible Indicator (DEI), which we will cover in more detail in Chapters 4 and 5). When it is 0, it indicates an Ethernet frame format, and when it is 1, a Token Ring frame format; since in the majority of cases the format is Ethernet, it is set to 0. After the CFI field is the VLAN identifier (VID) field, which is 12 bits long and gives 2¹², or 4096, possible VIDs, from 0 to 4095, identifying which VLAN the Ethernet frame belongs to. This VLAN tag, as specified by the IEEE 802.1Q standard, is inserted in the Ethernet frame after the SA and before the Length/Type field, as shown in Fig. 3.7; to account for this VLAN tag, the Ethernet frame format was modified by the IEEE 802.3ac standard. A network administrator can create a VLAN using most switches simply by logging into the switch via Telnet and entering the parameters for the VLAN, including name, domain, and port assignments. Once an Ethernet frame, with or without a C-tag, arrives at a switch port, the switch assigns the S-tag to the frame with the port VID specified in the VLAN configuration for that port. This use of the S-tag for switching is known as Provider Bridging, and we will cover it in more detail in Chapters 4 and 5. After this, the switch applies the filtering rule, so that the frame is not sent back out the port on which it just arrived.
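Packing the tag just described, a 2-byte TPID followed by a TCI holding PCP (3 bits), CFI/DEI (1 bit), and VID (12 bits), can be sketched as follows (the helper name is illustrative):

```python
import struct

def make_vlan_tag(vid: int, pcp: int = 0, dei: int = 0,
                  tpid: int = 0x8100) -> bytes:
    """Build a 4-byte 802.1Q tag: 2-byte TPID, then the 2-byte TCI."""
    assert 0 <= vid <= 0xFFF and 0 <= pcp <= 7 and dei in (0, 1)
    tci = (pcp << 13) | (dei << 12) | vid  # PCP | CFI/DEI | VID, high to low
    return struct.pack("!HH", tpid, tci)

# VLAN 10 with CoS 5: TPID 0x8100, TCI = (5 << 13) | 10 = 0xA00A.
assert make_vlan_tag(vid=10, pcp=5) == bytes.fromhex("8100A00A")
```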
The filtering process also examines the DA field in the Ethernet frame and checks whether the destination MAC address is in its forwarding database, to determine the port through which the destination can be reached. The switch also ensures that the output port is part of the VLAN as defined by the VID; this check ensures that the frame is not transmitted across a VLAN boundary. The switch then examines the CoS value for the VLAN and, based on that CoS value as determined from the PCP field, assigns the frame to an output queue, or buffer, for the output port. The switch also determines whether the S-tag is to be retained or removed on egress from the output port: the S-tag is retained when egressing to a trunk link and removed when egressing to an access link going to an end station. This process is better explained by an example. Topologies of port-based and extended port-based VLANs are shown in Fig. 3.8. In the case of a port-based VLAN, four nodes are connected to four ports on the Ethernet switch, where nodes 1 and 3 are made members of VLAN 10 and nodes 2 and 4 are members of VLAN 12. So when node 1 sends a broadcast message, it is sent only to members of VLAN 10, that is, only to node 3. These port-based VLANs are created manually by the network administrator, who logs on to the switch interface and configures ports 1 and 3 to be members of VLAN 10 and ports 2 and 4 to be members of VLAN 12. Typically, VLANs are identified by numeric assignment as per the VID.
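The VLAN-aware forwarding decision described above, known unicast delivered to one port if its VLAN matches, everything else flooded only to the other ports of the same VLAN, can be sketched with hypothetical tables (the port numbers and MAC addresses are made up to mirror the Fig. 3.8A example):

```python
fdb = {"00:00:00:00:00:03": 3}            # learned MAC -> output port
port_vlan = {1: 10, 2: 12, 3: 10, 4: 12}  # access port -> VLAN membership

def forward(dst_mac: str, vid: int, in_port: int) -> list[int]:
    """Known unicast goes to one port if its VLAN matches; everything else
    floods to the VLAN's other ports, never crossing the VLAN boundary."""
    out = fdb.get(dst_mac)
    if out is not None and out != in_port and port_vlan[out] == vid:
        return [out]
    return [p for p, v in port_vlan.items() if v == vid and p != in_port]

assert forward("00:00:00:00:00:03", 10, 1) == [3]  # node 1 -> node 3, VLAN 10
assert forward("ff:ff:ff:ff:ff:ff", 12, 2) == [4]  # broadcast stays in VLAN 12
```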
Figure 3.8. Port-based VLAN topology. (A) Port-based VLAN. (B) Extended VLAN.
The real advantage of a VLAN is derived when it is extended to more than one switch, as shown in Fig. 3.8B. This is illustrated by two Ethernet switches connected by a trunk line. Nodes 1 and 2 are connected to switch 1, and nodes 3 and 4 to switch 2; as in case (A), nodes 1 and 3 are part of VLAN 10 and nodes 2 and 4 are part of VLAN 12. Although frames for both VLANs 10 and 12 travel through the same trunk line between switches 1 and 2, they are kept separate by their respective VLAN tags. Also, explicit tagging of the frames on the trunk line with a VLAN identifier (VID) reduces processing at the switch, because a receiving switch does not have to inspect the frame further to determine VLAN membership. It is important to note that S-tags are relevant only to ports on the switch and not to the end stations, which do not know which VLAN they belong to.
We considered the example of port-based VLANs here because they are the most popular VLANs. However, there are other types of VLANs as well, including MAC address-based, protocol-based, and policy-based VLANs. As mentioned before, a network administrator can create a VLAN using most switches simply by logging into the switch via Telnet and entering the parameters for the VLAN, including name, domain, and port assignments. After creation of the VLAN, any network segments or end stations connected to the assigned ports become part of that VLAN. Some of the common benefits of VLANs are as follows.
1. Security—separating systems that have sensitive data from the rest of the network increases security and decreases unauthorized access.
2. Projects/special applications—a VLAN allows better management of a project because it brings all the required nodes together.
3. Performance/bandwidth—careful monitoring of network use allows the network administrator to create VLANs that implement a "switch many, route once" strategy, reducing the number of router hops and increasing the apparent bandwidth for network users.
4. Broadcasts/traffic flow—since an important element of a VLAN is that it does not pass broadcast traffic to nodes that are not part of the VLAN, it automatically reduces broadcasts.
5. Access list creation—an access list is a table, created by the network administrator, that lists which addresses have access to the network. It gives the network administrator a way to control who sees what network traffic, and it is easy to create.
6. Departments/specific job types—companies may want VLANs set up for departments that are heavy network users (such as multimedia or engineering), or a VLAN across departments that is dedicated to specific types of employees (such as managers or salespeople).
It should be noted that an Ethernet switch allows only intra-VLAN communications; if there is a need for inter-VLAN communications, then a router is needed. Routers are layer 3 devices; however, they still use Ethernet at layers 1 and 2 owing to the availability of Ethernet not only in the LAN but also in the MAN. Additionally, the availability of Ethernet over dense wavelength-division multiplexing (DWDM) technology in optical packet networks has now extended the reach of Ethernet into the RAN and WAN as well.
With these fundamental developments involving Ethernet frame format definitions, CSMA/CD process for carrier sensing and collision detection, evolution from shared medium to hub to bridge to switch-based LANs, provision of redundancy based on STP and RSTP, and finally the capability of VLAN creation and handling by Ethernet switches, the adoption of Ethernet for LANs grew rapidly and led to other important enhancements mostly related to increasing bandwidth and distance. This will be covered in the next section.
Source: https://www.sciencedirect.com/topics/computer-science/logical-link-control