Unit-3 Medium Access Sublayer

Structure:

3.0 Objectives

3.1 LAN and WAN

3.2 ALOHA Protocols

3.3 LAN Protocols

3.4 IEEE 802 Standards for LANs

3.5 Fiber Optic Networks

3.6 Summary

3.7 Self Assessment Questions

3.8 Terminal Questions

3.9 Answers to Self Assessment Questions

3.10 Answers to Terminal Questions

3.0 Objectives

This unit provides the reader with the necessary theory for understanding the Medium Access Control (MAC) sublayer of the data link layer.

After completion of this unit you will be able to:

· Define LAN and MAN

· Describe the channel allocation mechanisms used in various LANs and MANs

· Describe ALOHA protocols

· Compare and Contrast various LAN protocols

· Explain various IEEE standards for LANs

3.1 LAN and WAN

i) Static Channel Allocation in LAN and MAN

ii) Dynamic Channel Allocation in LAN and MAN

Because the data link layer is overloaded, it is split into the MAC and LLC sublayers. The MAC sublayer is the bottom part of the data link layer. Medium access control is often used as a synonym for multiple access protocol, since the MAC sublayer provides the protocol and control mechanisms required for a particular channel access method. This unit deals with broadcast networks and their protocols.

In any broadcast network, the key issue is determining who gets to use the channel when there is competition for it. When only a single channel is available, deciding who should get access to it for transmission is a complex task. Many protocols for solving this problem are known, and they form the contents of this unit.

This unit provides an insight into the channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multipoint network. The MAC layer is especially important in local area networks (LANs), many of which use a multi-access channel as the basis for communication. WANs, in contrast, use point-to-point links.

To get a head start, let us define LANs and MANs.

Definition: A Local Area Network (LAN) is a network of systems spread over small geographical area, for example a network of computers within a building or small campus.

The owner of a LAN is usually the single organization within which the LAN is set up. A LAN offers high data rates, on the scale of Mbps (the rate at which data are transferred from one system to another), because the systems it spans are in close proximity to each other.

Definition: A WAN (Wide Area Network) typically spans a set of countries and offers data rates of less than 1 Mbps because of the distances involved.

A WAN may be owned by multiple organizations, since the distance it spans covers several countries.

i) Static Channel Allocation in LAN and MAN

Before going into the exact theory behind the methods of channel allocation, we need to understand the basis of this theory, which is given below:

The channel allocation problem

Channels can be classified as static or dynamic. A channel is static when the number of users is stable and the traffic is not bursty. When the number of users on the channel keeps varying, the channel is considered dynamic, and the traffic on such channels also keeps varying. For example, in most computer systems the data traffic is extremely bursty; peak-traffic to mean-traffic ratios of 1000:1 are common.

· Static channel allocation

The usual way of allocating a single channel among multiple users is frequency division multiplexing (FDM). If there are N users, the bandwidth is split into N equal-sized portions. FDM is a simple and efficient technique for a small number of users. However, when the number of senders is large and continuously varying, or the traffic is bursty, FDM is not suitable.

The same arguments that apply to FDM also apply to TDM. Since none of the static channel allocation methods work well with bursty traffic, we explore dynamic channel allocation.

· Dynamic channel allocation in LANs and MANs

Before discussing the channel allocation problem, that is, the multiple access methods, we state the assumptions we use so that the analysis becomes simpler.

Assumptions:

1. The Station Model:

The model consists of N independent stations (users). Stations are sometimes called terminals. The probability of a frame being generated in an interval of length Δt is λΔt, where λ is a constant that defines the arrival rate of new frames. Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.

2. Single Channel Assumption:

A single channel is available for all communication. All stations can transmit on this channel and all can receive from it. As far as the hardware is concerned, all stations are equivalent, although the software or protocols used may assign priorities to them.

3. Collisions:

If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. We assume that all stations can detect collisions. A collided frame must be retransmitted later. Here we consider no errors other than those generated by collisions.

4. Continuous Time

By the continuous time assumption we mean that frame transmission on the channel can begin at any instant of time. There is no master clock dividing the time into discrete intervals.

5. Slotted Time

Under the slotted time assumption, time is divided into discrete slots or intervals, and frame transmission can begin only at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.

6. Carrier Sense

With this facility the stations can sense the channel, i.e. they can tell whether the channel is in use before trying to use it. If the channel is sensed as busy, no station will attempt to transmit until it goes idle.

7. No Carrier Sense:

This assumption means that the carrier sense facility is not available to the stations, i.e. they cannot tell whether the channel is in use before trying to use it. They just go ahead and transmit. Only after transmitting the frame do they determine whether the transmission was successful.

The first assumption states that stations are independent and that work is generated at a constant rate. It also assumes that each station has only one program or user, so when a station is blocked no new work is generated. The single channel assumption is the heart of this station model and of this unit. The collision assumption is also basic. Two alternative assumptions about time are given; for any particular system only one of them holds, i.e. the channel is treated as either continuous time or slotted time. Similarly, a channel may or may not be sensed by the stations. Generally, stations on LANs can sense the channel, but stations on wireless networks cannot sense it effectively. Stations on wired carrier sense networks can also terminate their transmission prematurely if they discover a collision, whereas in wireless networks collision detection is rarely done.
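As a small illustration of the station model (our own sketch, not part of any standard), the following Python fragment checks numerically that for a Poisson arrival process with rate λ the probability of a frame being generated in a short interval Δt is approximately λΔt; the values of lam and dt are arbitrary illustrative choices.

import math
import random

lam, dt = 0.5, 0.01                      # illustrative arrival rate and interval length
exact = 1 - math.exp(-lam * dt)          # exact Poisson probability of at least one arrival in dt
approx = lam * dt                        # the lambda * dt approximation used in the station model
print(round(exact, 6), approx)           # the two values agree closely for small dt

# Monte-Carlo check: the time to the first arrival is exponentially distributed with rate lam.
rng = random.Random(42)
trials = 100_000
hits = sum(rng.expovariate(lam) < dt for _ in range(trials))
print(hits / trials)                     # again close to lam * dt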

3.2 ALOHA Protocols

In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then. Their system, called the ALOHA system, used ground-based radio broadcasting, but the basic idea is applicable to any system in which uncoordinated users compete for the use of a shared channel.

Pure or Unslotted ALOHA

The ALOHA network was created at the University of Hawaii in 1970 under the leadership of Norman Abramson. The Aloha protocol is an OSI layer 2 protocol for LAN networks with broadcast topology.

The first version of the protocol was basic:

· If you have data to send, send the data

· If the message collides with another transmission, try resending it later

Figure 3.1: Pure ALOHA

Figure 3.2: Vulnerable period for a frame

A user is assumed to be in one of two states: typing or waiting. The station transmits a frame and checks the channel to see whether the transmission was successful. If so, the user sees the reply and continues typing. If the frame transmission is not successful, the user waits and the station retransmits the frame over and over until it has been sent successfully.

Let the frame time denote the amount of time needed to transmit a standard fixed-length frame. We assume that there is an infinite population of users generating new frames according to a Poisson distribution with mean N frames per frame time.

· If N > 1, the users are generating frames at a higher rate than the channel can handle, so nearly every frame will suffer a collision.

· Hence the range for N is

0<N<1

· Even for N < 1 collisions occur, so retransmitted frames are added to the new frames offered for transmission.

Let us consider the probability of k transmission attempts per frame time. Here the transmitted frames include both new frames and frames being retransmitted. This total traffic is also Poisson distributed, with mean G per frame time, where G ≥ N.

· At low load (N ≈ 0): there will be few collisions, hence few retransmissions, so G ≈ N

· At high load: N >>1, many retransmissions and hence G>N.

· Under all loads: throughput S is just the offered load G times the probability of successful transmission P0

S = G*P0

The probability that k frames are generated during a given frame time is given by the Poisson distribution:

P[k] = (G^k * e^(-G)) / k!

So the probability of zero frames is just e^(-G). In pure ALOHA the vulnerable period is two frame times, so the number of arrivals that matters follows a Poisson distribution with mean 2G per two frame times; the lambda parameter in the Poisson distribution therefore becomes 2G.

Hence P0 = e^(-2G)

Hence the throughput S = G*P0 = G*e^(-2G)

The maximum occurs at G = 0.5, resulting in a maximum throughput of 1/(2e) ≈ 0.184, i.e. 18.4%.

Pure Aloha had a maximum throughput of about 18.4%. This means that about 81.6% of the total available bandwidth was essentially wasted due to losses from packet collisions.
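The relationship S = G*e^(-2G) can be verified numerically. The short Python sketch below (our own illustration) evaluates the throughput over a range of offered loads and confirms that the peak occurs at G = 0.5 with S ≈ 0.184.

import math

def pure_aloha_throughput(G: float) -> float:
    # Throughput of pure ALOHA: S = G * e^(-2G)
    return G * math.exp(-2 * G)

loads = [i / 100 for i in range(1, 301)]                 # G from 0.01 to 3.00
best_G = max(loads, key=pure_aloha_throughput)
print(best_G, round(pure_aloha_throughput(best_G), 3))   # 0.5 and 0.184 (18.4%)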

Slotted ALOHA

An improvement to the original ALOHA protocol was slotted ALOHA. In 1972, Roberts published a method to double the throughput of pure ALOHA by using discrete time slots. His proposal was to divide time into discrete slots, each corresponding to one frame time. This approach requires the users to agree on the slot boundaries. To achieve synchronization, one special station emits a pip at the start of each interval, like a clock. With this change the capacity of slotted ALOHA rises to a maximum throughput of 36.8%.

The throughput for the pure and slotted ALOHA systems is shown in figure 3.3. A station may begin sending only at the beginning of a time slot, so the vulnerable period is halved and collisions are reduced. In this case the average number of aggregate arrivals during the vulnerable period of one frame time is G, so the lambda parameter in the Poisson distribution becomes G. The throughput is therefore S = G*e^(-G), and the maximum throughput of 1/e ≈ 0.368 is reached at G = 1.

Figure 3.3: Throughput versus offered load traffic

With slotted ALOHA, a centralized clock sends out small clock-tick packets to the outlying stations, which are allowed to send their packets only immediately after receiving a clock tick. If there is only one station with a packet to send, this guarantees that there will never be a collision for that packet. On the other hand, if two stations have packets to send at the same time, the algorithm guarantees a collision, and the whole of the slot period up to the next clock tick is wasted. With some mathematics, it can be shown that this protocol improves the overall channel utilization, since the vulnerable period, and with it the chance of a collision, is halved.
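The slot-based behaviour can also be checked with a small simulation (our own sketch with arbitrary parameters): a slot counts as a success only when exactly one station transmits in it, and the measured throughput is close to the analytical value S = G*e^(-G).

import math
import random

def slotted_aloha(num_stations=50, p=0.02, slots=50_000, seed=7):
    # Monte-Carlo sketch of slotted ALOHA: each station transmits in a slot
    # with probability p; a slot succeeds only if exactly one station transmits.
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < p for _ in range(num_stations))
        if transmitters == 1:
            successes += 1
    return successes / slots

G = 50 * 0.02                # offered load: one attempt per slot on average
print(slotted_aloha())       # measured throughput, roughly 0.37
print(G * math.exp(-G))      # analytical S = G * e^(-G) = 1/e, about 0.368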

It should be noted that ALOHA's characteristics are still not much different from those experienced today by Wi-Fi and similar contention-based systems that have no carrier sense capability. There is a certain amount of inherent inefficiency in these systems. It is typical to see the throughput of such networks break down significantly as the number of users and message burstiness increase. For these reasons, applications which need highly deterministic load behavior often use token-passing schemes (such as token ring) instead of contention systems.

For instance, ARCNET is very popular in embedded applications. Nonetheless, contention-based systems also have significant advantages, including ease of management and speed in initial communication. Slotted ALOHA is used on low-bandwidth tactical satellite communications networks by the US military, on subscriber-based satellite communications networks, and in contactless RFID technologies.

3.3 LAN Protocols

With slotted ALOHA, the best channel utilization that can be achieved is 1 / e. This is hardly surprising since with stations transmitting at will, without paying attention to what other stations are doing, there are bound to be many collisions. In LANs, it is possible to detect what other stations are doing, and adapt their behavior accordingly. These networks can achieve a better utilization than 1 / e.

CSMA Protocols:

Protocols in which stations listen for a carrier (a transmission) and act accordingly are called Carrier Sense Protocols. "Multiple Access" describes the fact that multiple nodes send and receive on the medium. Transmissions by one node are generally received by all other nodes using the medium. Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared physical medium, such as an electrical bus, or a band of electromagnetic spectrum.

The following three protocols discuss the various implementations of the above discussed concepts:

i) Protocol 1. 1-persistent CSMA:

When a station has data to send, it first listens to the channel to see if any one else is transmitting. If the channel is busy, the station waits until it becomes idle. When the station detects an idle channel, it transmits a frame. If a collision occurs, the station waits a random amount of time and starts retransmission.

The protocol is so called because the station transmits with a probability of 1 whenever it finds the channel idle.

ii) Protocol 2. Non-persistent CSMA:

In this protocol, a conscious attempt is made to be less greedy than in the 1-persistent CSMA protocol. Before sending, a station senses the channel. If no one else is sending, the station begins doing so itself. However, if the channel is already in use, the station does not continuously sense it in order to seize it immediately upon detecting the end of the previous transmission. Instead, it waits for a random period of time and then repeats the algorithm. Intuitively, this algorithm should lead to better channel utilization but longer delays than 1-persistent CSMA.


iii) Protocol 3. p - persistent CSMA

This protocol applies to slotted channels and works as follows:

When a station becomes ready to send, it senses the channel. If it is idle, it transmits with a probability p. With a probability of q = 1 – p, it defers until the next slot. If that slot is also idle, it either transmits or defers again, with probabilities p and q. This process is repeated until either the frame has been transmitted or another station has begun transmitting. In the latter case, it acts as if there had been a collision. If the station initially senses the channel busy, it waits until the next slot and applies the above algorithm.
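A minimal sketch of the p-persistent rule is given below (our own illustration; channel_idle, transmit, wait_one_slot and random_backoff are hypothetical callbacks supplied by the station, not defined here). Setting p = 1 gives 1-persistent behaviour.

import random

def p_persistent_send(channel_idle, transmit, wait_one_slot, random_backoff,
                      p=0.5, rng=random.random):
    # Sketch of p-persistent CSMA on a slotted channel.
    while True:
        while not channel_idle():
            wait_one_slot()              # busy: wait for the next slot and sense again
        if rng() < p:
            transmit()                   # idle: transmit with probability p
            return
        wait_one_slot()                  # defer to the next slot with probability q = 1 - p
        if not channel_idle():           # another station has started: act as on a collision
            random_backoff()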

CSMA/CD Protocol

In computer networking, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network control protocol in which a carrier sensing scheme is used. A transmitting data station that detects another signal while transmitting a frame, stops transmitting that frame, transmits a jam signal, and then waits for a random time interval. The random time interval also known as "backoff delay" is determined using the truncated binary exponential backoff algorithm. This delay is used before trying to send that frame again. CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).

Collision detection is used to improve CSMA performance by terminating transmission as soon as a collision is detected, thereby reducing the probability of a second collision on retry. Methods for collision detection are media dependent, but on an electrical bus such as Ethernet, collisions can be detected by comparing transmitted data with received data. If they differ, another transmitter is overlaying the first transmitter's signal (a collision), and transmission terminates immediately. The collision recovery algorithm is a truncated binary exponential backoff algorithm that determines the waiting time before retransmission. If the number of collisions for a frame reaches 16, the frame is considered unrecoverable and is discarded.
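The backoff delay described above can be sketched as follows (an illustrative Python fragment; the 51.2 microsecond slot time is the classic 10 Mbps Ethernet value and is an assumption of this example).

import random

SLOT_TIME_US = 51.2          # classic 10 Mbps Ethernet slot time, in microseconds
MAX_ATTEMPTS = 16            # after 16 collisions the frame is dropped

def backoff_delay(collision_count: int, rng=random.randint) -> float:
    # Truncated binary exponential backoff: after the n-th collision wait a
    # random number of slot times drawn from [0, 2^k - 1], where k = min(n, 10).
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("frame dropped: too many collisions")
    k = min(collision_count, 10)
    slots = rng(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

for n in (1, 2, 3, 10, 15):
    print(n, backoff_delay(n))           # delays grow, on average, as collisions accumulate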

CSMA/CD can be in any one of the following three states, as shown in figure 3.4:

1. Contention period

2. Transmission period

3. Idle period

Figure 3.4: States of CSMA / CD: Contention, Transmission, or Idle

A jam signal is sent which will cause all transmitters to back off by random intervals, reducing the probability of a collision when the first retry is attempted. CSMA/CD is a layer 2 protocol in the OSI model. Ethernet is the classic CSMA/CD protocol.

Collision Free Protocols

Although collisions do not occur with CSMA/CD once a station has unambiguously seized the channel, they can still occur during the contention period. These collisions adversely affect system performance, especially when the cable is long and the frames are short. Moreover, CSMA/CD is not universally applicable. In this section, we examine some protocols that resolve contention for the channel without any collisions at all, not even during the contention period.

In the protocols to be described, we assume that there are exactly N stations, each with a unique address from 0 to N-1 “wired” into it. We also assume that the propagation delay is negligible.

i) A Bit Map Protocol

In this method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No other station is allowed to transmit during this slot. Regardless of what station 0 is doing, station 1 gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N stations have passed by, each station has complete knowledge of which stations wish to transmit. At that point, they begin transmitting in a numerical order.

Since everyone agrees on who goes next, there will never be any collisions. After the last ready station has transmitted its frame, an event all stations can monitor, another N bit contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a chance and the bit map has come around again.

Protocols like this in which the desire to transmit is broadcast before the actual transmission are called Reservation Protocols.
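One contention period of the bit-map protocol can be sketched as follows (our own illustration; transmit is a hypothetical callback that sends station j's frame). Each ready station sets its own bit, and the ready stations then transmit in numerical order without collisions.

def bitmap_round(ready, transmit):
    # One contention period of the basic bit-map protocol for N stations.
    # ready[j] is True if station j has a frame queued.
    n = len(ready)
    bitmap = [1 if ready[j] else 0 for j in range(n)]   # station j sets only bit j, in slot j
    for j in range(n):                                  # everyone saw the same bitmap, so the
        if bitmap[j]:                                   # ready stations transmit in numerical
            transmit(j)                                 # order with no collisions
    return bitmap

# Example: stations 1 and 3 (out of 4) have frames queued.
bitmap_round([False, True, False, True], transmit=lambda j: print("station", j, "sends"))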

ii) Binary Countdown

A problem with the basic bit-map protocol is that the overhead is 1 bit per station, so it does not scale well to networks with thousands of stations. We can do better by using binary station addresses.

A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be of the same length. The bits in each address position from different stations are Boolean ORed together. This protocol is called binary countdown, and it was used in Datakit. It implicitly assumes that transmission delays are negligible, so that all stations see asserted bits essentially simultaneously.

To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up.

Example: Suppose stations 0010, 0100, 1001, and 1010 are all trying to get the channel. In the first bit position the transmitted bits are 0, 0, 1, and 1; these are ORed together to form a 1. Stations 0010 and 0100 see the 1, know that a higher-numbered station is competing for the channel, and give up for the current round. Stations 1001 and 1010 continue.

The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts.

This protocol has the property that higher numbered stations have a higher priority than lower numbered stations, which may be either good or bad depending on the context.
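Binary countdown can be sketched bit by bit, as below (our own illustration, using the addresses from the example): at each position the asserted bits are ORed together, and a station drops out when it sees a 1 where its own address has a 0.

def binary_countdown(addresses, width=4):
    # Sketch of binary countdown arbitration: the highest address wins.
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):                      # high-order bit first
        wired_or = max((a >> bit) & 1 for a in contenders)    # the channel ORs the asserted bits
        if wired_or == 1:
            contenders = {a for a in contenders if (a >> bit) & 1 == 1}   # stations with a 0 give up
    return contenders.pop()                                   # exactly one station remains

print(bin(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010])))   # 0b1010 wins, as in the example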

iii) Limited Contention Protocols

Until now we have considered two basic strategies for channel acquisition in a cable network: Contention as in CSMA, and collision – free methods. Each strategy can be rated as to how well it does with respect to the two important performance measures, delay at low load, and channel efficiency at high load.

Under conditions of light load, contention (i.e. pure or slotted ALOHA) is preferable due to its low delay. As the load increases, contention becomes increasingly less attractive, because the overhead associated with channel arbitration becomes greater. Just the reverse is true for collision free protocols. At low load, they have high delay, but as the load increases, the channel efficiency improves.

It would be more beneficial if we could combine the best features of contention and collision free protocols and arrive at a protocol that uses contention at low load to provide low delay, but uses a collision free technique at high load to provide good channel efficiency. Such protocols can be called Limited Contention protocols.

iv) Adaptive Tree Walk Protocol

A simple way of performing the necessary channel assignment is to use the algorithm devised by the US Army for testing soldiers for syphilis during World War II. The Army took a blood sample from N soldiers. A portion of each sample was poured into a single test tube. This mixed sample was then tested for antibodies. If none were found, all the soldiers in the group were declared healthy. If antibodies were present, two new mixed samples were prepared, one from soldiers 1 through N/2 and one from the rest. The process was repeated recursively until the infected soldiers were identified.

For the computerized version of this algorithm, let us assume that the stations are arranged as the leaves of a binary tree, as shown in figure 3.5 below:

Figure 3.5: A tree for four stations

In the first contention slot following a successful frame transmission, slot 0, all stations are permitted to acquire the channel. If one of them does so, fine. If there is a collision, then during slot 1 only the stations falling under node 2 in the tree may compete. If one of them acquires the channel, the slot following its frame is reserved for the stations under node 3. If, on the other hand, two or more stations under node 2 want to transmit, there will be a collision during slot 1, in which case it is node 4’s turn during slot 2.

In essence, if a collision occurs during slot 0, the entire tree is searched, depth first to locate all ready stations. Each bit slot is associated with some particular node in a tree. If a collision occurs, the search continues recursively with the node’s left and right children. If a bit slot is idle or if only one station transmits in it, the searching of its node can stop because all ready stations have been located.

When the load on the system is heavy, it is hardly worth the effort to dedicate slot 0 to node 1, because that makes sense only in the unlikely event that precisely one station has a frame to send.

At what level in the tree should the search begin? Clearly, the heavier the load, the farther down the tree the search should begin.
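The depth-first search itself can be sketched recursively (our own illustration): each call corresponds to one contention slot, and the set of stations enabled in that slot produces either an idle slot, a single successful transmission, or a collision that causes the node's two children to be searched.

def tree_walk(stations, ready):
    # Depth-first adaptive tree walk. `stations` lists the station ids under the
    # current node; `ready` is the set of stations with a frame queued.
    enabled = [s for s in stations if s in ready]
    if len(enabled) == 0:
        return []                    # idle slot: nobody under this node is ready
    if len(enabled) == 1:
        return enabled               # exactly one ready station: it transmits successfully
    mid = len(stations) // 2         # collision: search the left and right children in turn
    return tree_walk(stations[:mid], ready) + tree_walk(stations[mid:], ready)

# Example with eight stations, of which C, F, and H are ready.
print(tree_walk(list("ABCDEFGH"), ready={"C", "F", "H"}))   # ['C', 'F', 'H']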

3.4 IEEE 802 standards for LANs

IEEE has standardized a number of LANs and MANs under the name IEEE 802. A few of the standards are listed in figure 3.6. The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN). These two standards have different physical layers and different MAC sublayers but converge on the same logical link control sublayer, so they have the same interface to the network layer.

IEEE No    Name        Title
802.3      Ethernet    CSMA/CD Networks (Ethernet)
802.4      -           Token Bus Networks
802.5      -           Token Ring Networks
802.6      -           Metropolitan Area Networks
802.11     WiFi        Wireless Local Area Networks
802.15.1   Bluetooth   Wireless Personal Area Networks
802.15.4   ZigBee      Wireless Sensor Networks
802.16     WiMAX       Wireless Metropolitan Area Networks

Figure 3.6: List of IEEE 802 Standards for LAN and MAN

Ethernets

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. Star LAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network.

Above the physical layer, Ethernet stations communicate by sending each other data packets, small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.

The most common Ethernets originally operated at a data rate of 10 Mbps. Table 3.1 gives details of the medium used, the maximum segment length, the number of nodes supported per segment, and the typical application of each.

Table 3.1 Different 10Mbps Ethernets used

Name       Cable Type     Max Segment Length   Nodes per Segment   Advantages
10Base5    Thick coax     500 m                100                 Original cable; now obsolete
10Base2    Thin coax      185 m                30                  No hub needed
10Base-T   Twisted pair   100 m                1024                Cheapest system
10Base-F   Fiber optics   2000 m               1024                Best between buildings

Fast Ethernet

Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100-megabit Ethernet standards, 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Full-duplex Fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat misleading, as that level of improvement is achieved only if traffic patterns are symmetrical. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by gigabit Ethernet.

A Fast Ethernet adapter can be logically divided into a medium access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the PHY by a 4-bit, 25 MHz synchronous parallel interface known as the MII (media independent interface). Repeaters (hubs) are also allowed and connect to multiple PHYs for their different interfaces.

· 100BASE-T is any of several Fast Ethernet standards for twisted pair cables.

· 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable),

· 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct),

· 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct).

The segment length for a 100BASE-T cable is limited to 100 meters. Most networks had to be rewired for 100-megabit speed whether or not they had supposedly been CAT3 or CAT5 cable plants. The vast majority of common implementations or installations of 100BASE-T are done with 100BASE-TX.

100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of category 5 or above cable. A typical category 5 cable contains 4 pairs and can therefore support two 100BASE-TX links. Each network segment can have a maximum distance of 100 metres. In its typical configuration, 100BASE-TX uses one pair of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full-duplex).

The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area network, the devices on the network are typically connected to a hub or switch, creating a star network. Alternatively it is possible to connect two devices directly using a crossover cable.

In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol. First, a 4 bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift register.

100BASE-FX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive (RX) and transmit (TX). Maximum length is 400 metres for half-duplex connections or 2 kilometers for full-duplex.

100BASE-SX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive and transmit. It is a lower cost alternative to using 100BASE-FX, because it uses short wavelength optics which are significantly less expensive than the long wavelength optics used in 100BASE-FX. 100BASE-SX can operate at distances up to 300 meters.

100BASE-BX is a version of Fast Ethernet over a single strand of optical fiber (unlike 100BASE-FX, which uses a pair of fibers). Single-mode fiber is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.

Gigabit Ethernet

Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet packets at a rate of a gigabit per second, as defined by the IEEE 802.3-2005 standard. Half duplex gigabit links connected through hubs are allowed by the specification but in the marketplace full duplex with switches is the norm.

Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for gigabit Ethernet was standardized by the IEEE in June 1998 as IEEE 802.3z. 802.3z is commonly referred to as 1000BASE-X (where -X refers to either -CX, -SX, -LX, or -ZX).

IEEE 802.3ab, ratified in 1999, defines gigabit Ethernet transmission over unshielded twisted pair (UTP) category 5, 5e, or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, gigabit Ethernet became a desktop technology as organizations could utilize their existing copper cabling infrastructure.

Initially, gigabit Ethernet was deployed in high-capacity backbone network links (for instance, on a high-capacity campus network). Fiber gigabit Ethernet has since been overtaken by 10 gigabit Ethernet, which was ratified by the IEEE in 2002 and provides data rates 10 times that of gigabit Ethernet. Work on copper 10 gigabit Ethernet over twisted pair has been completed, but as of July 2006 the only available adapters for 10 gigabit Ethernet over copper required specialized cabling that uses InfiniBand connectors and is limited to 15 m. The 10GBASE-T standard, however, specifies the use of traditional RJ-45 connectors and longer maximum cable lengths. The different gigabit Ethernets are listed in table 3.2.

Table 3.2 Different Gigabit Ethernets

Name          Medium
1000BASE-T    Unshielded twisted pair
1000BASE-SX   Multi-mode fiber
1000BASE-LX   Single-mode fiber
1000BASE-CX   Balanced copper cabling
1000BASE-ZX   Single-mode fiber

IEEE 802.3 Frame format

Preamble | SOF | Destination Address | Source Address | Length | Data | Pad | Checksum

Figure 3.7: Frame format of IEEE 802.3

· Preamble field

Each frame starts with a preamble of 8 bytes, each containing the bit pattern 10101010. The preamble is encoded using Manchester encoding, so this bit pattern produces a 10 MHz square wave for 6.4 μsec, allowing the receiver’s clock to synchronize with the sender’s.

· Address field

The frame contains two addresses, one for the destination and another for the sender. The length of address field is 6 bytes. The MSB of destination address is ‘0’ for ordinary addresses and ‘1’ for group addresses. Group addresses allow multiple stations to listen to a single address. When a frame is sent to a group of users, all stations in that group receive it. This type of transmission is referred to as multicasting. The address consisting of all ‘1’ bits is reserved for broadcasting.

· SOF: This field is 1 byte long and is used to indicate the start of the frame.

· Length:

This field is 2 bytes long. It specifies the length, in bytes, of the data present in the frame. The combination of the SOF and length fields is thus used to locate the end of the frame.

· Data :

The length of this field ranges from zero to a maximum of 1500 bytes. This is the place where the actual message bits are to be placed.

· Pad:

When a transceiver detects a collision, it truncates the current frame, which means that stray bits and pieces of frames appear on the cable all the time. To make it easier to distinguish valid frames from garbage, Ethernet specifies that a valid frame must be at least 64 bytes long, from the destination address to the checksum, inclusive. This means that the data field must be at least 46 bytes long. If there is no data to be transmitted, or only a short acknowledgement, the frame would otherwise be shorter than a valid frame, so the pad field is used: if the data field is shorter than 46 bytes, padding is added so that data plus pad together are at least 46 bytes. If the data field is 46 bytes or more, no pad is used.

· Checksum:

It is 4 bytes long and holds a 32-bit hash code of the data. If some data bits are received in error, the checksum will be wrong and the error will be detected. It uses the CRC method and is used only for error detection, not for forward error correction.
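The padding rule and the checksum can be illustrated together. The Python sketch below (an illustration only, not the 802.3 reference implementation) pads short payloads to the 46-byte minimum and appends a 32-bit CRC computed with zlib; zlib's CRC-32 uses the same polynomial as Ethernet, but the exact bit and byte ordering on the wire is defined by the standard itself.

import struct
import zlib

MIN_PAYLOAD = 46                     # minimum data + pad length in an 802.3 frame

def build_payload_with_fcs(dst: bytes, src: bytes, data: bytes) -> bytes:
    # Pad the data to 46 bytes and append a CRC-32 over addresses, length, data and pad.
    pad = b"\x00" * max(0, MIN_PAYLOAD - len(data))
    length = struct.pack("!H", len(data))                    # 2-byte length field
    body = dst + src + length + data + pad
    fcs = struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)   # 4-byte checksum
    return body + fcs

frame = build_payload_with_fcs(b"\xff" * 6, b"\x02" * 6, b"hello")
print(len(frame))                    # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum valid frame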

IEEE 802.4 Standard - Token Bus

This standard was proposed by Dirvin and Miller in 1986.

In this standard, physically the token bus is a linear or tree-shaped cable onto which the stations are attached. Logically, the stations are organized into a ring, with each station knowing the address of the station to its “left” or “right”. When the logical ring is initialized, the highest numbered station may send the first frame. After it is done, it passes permission to its immediate neighbor by sending the neighbor a special control frame called a token. The token propagates around the logical ring, with only the token holder being permitted to transmit frames. Since only one station at a time holds the token, collisions do not occur.

Note: The physical order in which the stations are connected to the cable is not important.

Since the cable is inherently a broadcast medium, each station receives each frame, discarding those not addressed to it. When a station passes the token, it sends a token frame specifically addressed to its logical neighbor in the ring, irrespective of where the station is physically located on the cable.

Figure 3.8: Token Passing
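Token passing around the logical ring can be sketched as a simple rotation over the station addresses, independent of physical position (our own illustration; has_frame and send_frame are hypothetical callbacks, and the one-frame-per-token limit is an assumption of this sketch).

def token_bus_round(logical_ring, has_frame, send_frame, max_frames_per_token=1):
    # One full rotation of the token around the logical ring: only the token
    # holder may transmit, so no collisions can occur.
    for station in logical_ring:
        sent = 0
        while has_frame(station) and sent < max_frames_per_token:
            send_frame(station)
            sent += 1
        # the token is then passed to the next logical neighbour

# Example: the logical ring order need not match the physical order on the cable.
queue = {30: 2, 10: 0, 50: 1}        # frames waiting at each station

def send(station):
    queue[station] -= 1
    print("station", station, "sends a frame")

token_bus_round([30, 50, 10], has_frame=lambda s: queue[s] > 0, send_frame=send)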

IEEE 802.5 Standard - Token Ring

A ring is really not a broadcast medium, but a collection of individual point-to-point links that happen to form a circle. Ring engineering is almost entirely digital. A ring is also fair and has a known upper bound on channel access.

A major issue in the design and analysis of any ring network is the “physical length” of a bit. If the data rate of the ring is R Mbps, a bit is emitted every 1/R μsec. With a typical propagation speed of about 200 m/μsec, each bit occupies 200/R meters on the ring. This means, for example, that a 1-Mbps ring whose circumference is 1000 meters can contain only 5 bits on it at once.
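The worked figure in this paragraph can be reproduced directly (a small illustrative calculation, assuming the 200 m/μsec propagation speed quoted above).

def bits_on_ring(ring_length_m: float, data_rate_mbps: float,
                 propagation_m_per_us: float = 200.0) -> float:
    # Each bit occupies (propagation speed / data rate) metres of ring,
    # so the ring holds ring_length divided by that many bits at once.
    metres_per_bit = propagation_m_per_us / data_rate_mbps
    return ring_length_m / metres_per_bit

print(bits_on_ring(1000, 1))      # 5.0 bits on a 1-Mbps, 1000 m ring, as in the text
print(bits_on_ring(1000, 100))    # 500.0 bits at 100 Mbps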

A ring really consists of a collection of ring interfaces connected by point-to-point lines. Each bit arriving at an interface is copied into a 1-bit buffer and then copied out onto the ring again. While in the buffer, the bit can be inspected and possibly modified before being written out. This copying step introduces a 1-bit delay at each interface.

In a token ring a special bit pattern, called the token, circulates around the ring whenever all stations are idle. When a station wants to transmit a frame, it is required to seize the token and remove it from the ring before transmitting. Since there is only one token, only one station can transmit at a given instant, thus solving the channel access problem the same way the token bus solves it.

3.5 Fiber Optic Networks

Fiber optics is becoming increasingly important, not only for wide area point-to-point links, but also for MANs and LANs. Fiber has high bandwidth, is thin and lightweight, is not affected by electromagnetic interference from heavy machinery, power surges or lightning, and has excellent security because it is nearly impossible to wiretap without detection.

FDDI (Fiber Distributed Data Interface)

It is a high performance fiber optic token ring LAN running at 100 Mbps over distances up to 200 km with up to 1000 stations connected. It can be used in the same way as any of the 802 LANs, but with its high bandwidth, another common use is as a backbone to connect copper LANs.

FDDI – II is a successor of FDDI modified to handle synchronous circuit switched PCM data for voice or ISDN traffic, in addition to ordinary data.

FDDI uses multimode fibers. It also uses LEDs rather than lasers because FDDI may sometimes be used to connect directly to workstations.

The FDDI cabling consists of two fiber rings, one transmitting clockwise and the other transmitting counter clockwise. If any one breaks, the other can be used as a backup.

FDDI defines two classes of stations A and B. Class A stations connect to both rings. The cheaper class B stations only connect to one of the rings. Depending on how important fault tolerance is, an installation can choose class A or class B stations, or some of each.

S/NET

It is another kind of fiber optic network with an active star for switching. It was designed and implemented at Bell laboratories. The goal of S/NET is very fast switching.

Each computer in the network has two 20-Mbps fibers running to the switch, one for input and one for output. The fibers terminate in a BIB (Bus Interface Board). The CPUs each have an I/O device register that acts like a one-word window into BIB memory. When a word is written to that device register, the interface board in the CPU transmits the bits serially over the fiber to the BIB, where they are reassembled as a word in BIB memory. When the whole frame to be transmitted has been copied to BIB memory, the CPU writes a command to another I/O device register to cause the switch to copy the frame to the memory of the destination BIB and interrupt the destination CPU.

Access to this bus is done by a priority algorithm. Each BIB has a unique priority. When a BIB wants access to the bus it asserts a signal on the bus corresponding to its priority. The requests are recorded and granted in priority order, with one word transferred (16 bits in parallel) at a time. When all requests have been granted, another round of bidding is started and BIBs can again request the bus. No bus cycles are lost to contention, so switching speed is 16 bits every 200 nsec, or 80 Mbps.
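The round-based priority arbitration can be sketched as follows (our own illustration): every request recorded in a round is granted in priority order before a new round of bidding begins, so no bus cycles are lost to contention.

def arbitration_rounds(request_rounds):
    # Sketch of S/NET-style bus arbitration. request_rounds is a list of sets;
    # each set holds the priorities of the BIBs requesting the bus in that round.
    grants = []
    for requests in request_rounds:
        for priority in sorted(requests, reverse=True):   # highest priority first,
            grants.append(priority)                       # one word transferred per grant
    return grants

# Two rounds of bidding: BIBs 3, 7 and 1 first, then BIBs 2 and 7 again.
print(arbitration_rounds([{3, 7, 1}, {2, 7}]))            # [7, 3, 1, 7, 2]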

3.6 Summary

This unit discusses the medium access sublayer. It describes LANs and WANs, covers the ALOHA protocols and other basic LAN protocols, explains the IEEE 802 standards for LANs, and discusses the importance of fiber optic networks and the cabling used as a backbone for LAN connectivity.

3.7 Self Assessment Questions

1. The Data Link Layer of the ISO OSI model is divided into ______ sublayers

a) 1 b) 4 c) 3 d) 2

2. The ______ layer is responsible for resolving access to the shared media or resources.

a) physical b) MAC sublayer c) Network d) Transport

3. A WAN typically spans a set of countries that have data rates less than _______ Mbps

a) 2 b) 1 c) 4 d) 100

4. The ________ model consists of N users or independent stations.

5. The Aloha protocol is an OSI _______ protocol for LAN networks with broadcast topology

6. In ______ method, each contention period consists of exactly N slots

3.8 Terminal Questions

1. Discuss ALOHA protocols

2. Discuss various LAN protocols

3. Discuss IEEE 802 standards for LANs

3.9 Answers to Self Assessment Questions

1. d

2. b

3. b

4. Station

5. layer 2

6. A Bit Map Protocol

3.10 Answers to Terminal Questions

1. Refer to section 3.2

2. Refer to section 3.3

3. Refer to section 3.4