Senior Acquisitions Editor: Kenyon Brown Development Editor: Kim Wimpsett


Should I Replace My Existing 10/100 Mbps Switches?



Todd Lammle CCNA Routing and Switching



Let’s say you’re a network administrator at a large company. The boss

comes to you and says that he got your requisition to buy a bunch of

new switches but he’s really freaking out about the price tag! Should

you push it—do you really need to go this far?

Absolutely! Make your case and go for it because the newest switches

add really huge capacity to a network that older 10/100 Mbps

switches just can’t touch. And yes, five-year-old switches are

considered pretty Pleistocene these days. But in reality, most of us

just don’t have an unlimited budget to buy all new gigabit switches;

however, 10/100 switches are just not good enough in today’s

networks.

Another good question: Do you really need low-latency 1 Gbps or

better switch ports for all your users, servers, and other devices? Yes,

you absolutely need new higher-end switches! This is because servers

and hosts are no longer the bottlenecks of our internetworks; our

routers and switches are—especially legacy ones. We now need gigabit

on the desktop and on every router interface; 10 Gbps is now the

minimum between switch uplinks, so go to 40 or even 100 Gbps as

uplinks if you can afford it.

Go ahead. Put in that requisition for all new switches. You’ll be a hero

before long!

Okay, so now that you’ve gotten a pretty thorough introduction to

internetworking and the various devices that populate an internetwork,

it’s time to head into exploring the internetworking models.



Internetworking Models

First a little history: When networks first came into being, computers

could typically communicate only with computers from the same

manufacturer. For example, companies ran either a complete DECnet

solution or an IBM solution, never both together. In the late 1970s, the

Open Systems Interconnection (OSI) reference model was created by the

International Organization for Standardization (ISO) to break through

this barrier.

The OSI model was meant to help vendors create interoperable network



devices and software in the form of protocols so that different vendor

networks could work in peaceable accord with each other. Like world

peace, it’ll probably never happen completely, but it’s still a great goal!

Anyway, the OSI model is the primary architectural model for networks. It

describes how data and network information are communicated from an

application on one computer through the network media to an

application on another computer. The OSI reference model breaks this

approach into layers.

Coming up, I’ll explain the layered approach to you plus how we can use

it to help us troubleshoot our internetworks.

Goodness! ISO, OSI, and soon you’ll hear about IOS! Just

remember that the ISO created the OSI and that Cisco created the

Internetworking Operating System (IOS), which is what this book is

all about.



The Layered Approach

Understand that a reference model is a conceptual blueprint of how

communications should take place. It addresses all the processes

required for effective communication and divides them into logical

groupings called layers. When a communication system is designed in

this manner, it’s known as a hierarchical or layered architecture.

Think of it like this: You and some friends want to start a company. One

of the first things you’ll do is sort out every task that must be done and

decide who will do what. You would move on to determine the order in

which you would like everything to be done with careful consideration of

how all your specific operations relate to each other. You would then

organize everything into departments (e.g., sales, inventory, and

shipping), with each department dealing with its specific responsibilities

and keeping its own staff busy enough to focus on their own particular

area of the enterprise.

In this scenario, departments are a metaphor for the layers in a

communication system. For things to run smoothly, the staff of each

department has to trust in and rely heavily upon those in the others to do



their jobs well. During planning sessions, you would take notes, recording

the entire process to guide later discussions and clarify standards of

operation, thereby creating your business blueprint—your own reference

model.


And once your business is launched, your department heads, each armed

with the part of the blueprint relevant to their own department, will

develop practical ways to implement their distinct tasks. These practical

methods, or protocols, will then be compiled into a standard operating

procedures manual and followed closely because each procedure will

have been included for different reasons, delimiting their various degrees

of importance and implementation. All of this will become vital if you

form a partnership or acquire another company because then it will be

really important that the new company’s business model is compatible

with yours!

Models happen to be really important to software developers too. They

often use a reference model to understand computer communication

processes so they can determine which functions should be accomplished

on a given layer. This means that if someone is creating a protocol for a

certain layer, they only need to be concerned with their target layer’s

function. Software that maps to another layer’s protocols and is

specifically designed to be deployed there will handle additional

functions. The technical term for this idea is binding. The communication

processes that are related to each other are bound, or grouped together,

at a particular layer.



Advantages of Reference Models

The OSI model is hierarchical, and there are many advantages that can be

applied to any layered model, but as I said, the OSI model’s primary

purpose is to allow different vendors’ networks to interoperate.

Here’s a list of some of the more important benefits of using the OSI

layered model:

It divides the network communication process into smaller and

simpler components, facilitating component development, design,

and troubleshooting.

It allows multiple-vendor development through the standardization of

network components.


It encourages industry standardization by clearly defining what

functions occur at each layer of the model.

It allows various types of network hardware and software to

communicate.

It prevents changes in one layer from affecting other layers to expedite

development.



The OSI Reference Model

One of the best gifts the OSI specifications give us is paving the way for

data transfer between disparate hosts running different operating

systems, like Unix hosts, Windows machines, Macs, smartphones, and so

on.

And remember, the OSI is a logical model, not a physical one. It’s



essentially a set of guidelines that developers can use to create and

implement applications to run on a network. It also provides a framework

for creating and implementing networking standards, devices, and

internetworking schemes.

The OSI has seven different layers, divided into two groups. The top three

layers define how the applications within the end stations will

communicate with each other as well as with users. The bottom four

layers define how data is transmitted end to end.

Figure 1.7 shows the three upper layers and their functions.



FIGURE 1.7

The upper layers

When looking at Figure 1.7, understand that users interact with the

computer at the Application layer and also that the upper layers are



responsible for applications communicating between hosts. None of the

upper layers knows anything about networking or network addresses

because that’s the responsibility of the four bottom layers.

In Figure 1.8, which shows the four lower layers and their functions, you

can see that it’s these four bottom layers that define how data is

transferred through physical media like wire, cable, fiber optics, switches,

and routers. These bottom layers also determine how to rebuild a data

stream from a transmitting host to a destination host’s application.



FIGURE 1.8

The lower layers

The following network devices operate at all seven layers of the OSI

model:


Network management stations (NMSs)

Web and application servers

Gateways (not default gateways)

Servers


Network hosts

Basically, the ISO is pretty much the Emily Post of the network protocol

world. Just as Ms. Post wrote the book setting the standards—or

protocols—for human social interaction, the ISO developed the OSI

reference model as the precedent and guide for an open network protocol

set. Defining the etiquette of communication models, it remains the most

popular means of comparison for protocol suites today.

The OSI reference model has the following seven layers:



Application layer (layer 7)

Presentation layer (layer 6)

Session layer (layer 5)

Transport layer (layer 4)

Network layer (layer 3)

Data Link layer (layer 2)

Physical layer (layer 1)

Some people like to use a mnemonic to remember the seven layers, such

as All People Seem To Need Data Processing.
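
The layer numbers and names above can be sketched as a small lookup table; this is just an illustrative aid, not anything from the OSI specification itself, and it also shows how the mnemonic's initials line up with the layers from the top (7) down to the bottom (1).

```python
# Illustrative sketch: the seven OSI layers, numbered as in the text,
# plus a check that the "All People Seem To Need Data Processing"
# mnemonic lines up with them from layer 7 down to layer 1.
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

MNEMONIC = "All People Seem To Need Data Processing"

# Each mnemonic word starts with the same letter as its layer name.
for word, layer in zip(MNEMONIC.split(), range(7, 0, -1)):
    assert word[0] == OSI_LAYERS[layer][0]
    print(f"Layer {layer}: {OSI_LAYERS[layer]}")
```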

Figure 1.9 shows a summary of the functions defined at each layer of the OSI model.



FIGURE 1.9

OSI layer functions

I’ve separated the seven-layer model into three different functions: the

upper layers, the middle layers, and the bottom layers. The upper layers

communicate with the user interface and application, the middle layers

do reliable communication and routing to a remote network, and the

bottom layers communicate to the local network.

With this in hand, you’re now ready to explore each layer’s function in

detail!


The Application Layer

The Application layer of the OSI model marks the spot where users

actually communicate to the computer and comes into play only when it’s

clear that access to the network will be needed soon. Take the case of

Internet Explorer (IE). You could actually uninstall every trace of

networking components like TCP/IP, the NIC, and so on and still

use IE to view a local HTML document. But things would get ugly if you

tried to do things like view a remote HTML document that must be

retrieved because IE and other browsers act on these types of requests by

attempting to access the Application layer. So basically, the Application

layer is working as the interface between the actual application program

and the next layer down by providing ways for the application to send

information down through the protocol stack. This isn’t actually part of

the layered structure, because browsers don’t live in the Application

layer, but they interface with it as well as the relevant protocols when

asked to access remote resources.

Identifying and confirming the communication partner’s availability and

verifying the required resources to permit the specified type of

communication to take place also occurs at the Application layer. This is

important because, like the lion’s share of browser functions, computer

applications sometimes need more than desktop resources. It’s more

typical than you would think for the communicating components of

several network applications to come together to carry out a requested

function. Here are a few good examples of these kinds of events:

File transfers

Email


Enabling remote access

Network management activities

Client/server processes

Information location

Many network applications provide services for communication over

enterprise networks, but for present and future internetworking, the need

is fast developing to reach beyond the limits of current physical

networking.



The Application layer works as the interface between actual

application programs and the next layer down. This means end-user programs like Microsoft

Word don't reside at the Application layer; they interface with the

Application layer protocols. Later, in Chapter 3, “Introduction to

TCP/IP,” I’ll talk in detail about a few important programs that

actually reside at the Application layer, like Telnet, FTP, and TFTP.



The Presentation Layer

The Presentation layer gets its name from its purpose: It presents data to

the Application layer and is responsible for data translation and code

formatting. Think of it as the OSI model’s translator, providing coding

and conversion services. One very effective way of ensuring a successful

data transfer is to convert the data into a standard format before

transmission. Computers are configured to receive this generically

formatted data and then reformat it back into its native state to read it.

An example of this type of translation service occurs when translating old

Extended Binary Coded Decimal Interchange Code (EBCDIC) data to

ASCII, the American Standard Code for Information Interchange (often

pronounced “askee”). So just remember that by providing translation

services, the Presentation layer ensures that data transferred from the

Application layer of one system can be read by the Application layer of

another one.
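
This translation idea is easy to see with Python's built-in codecs. The sketch below uses code page 500, one of several EBCDIC variants (the choice of `cp500` is just for illustration; real systems use various EBCDIC code pages): the same text has different byte values in EBCDIC and ASCII, yet decoding restores identical characters on either side.

```python
# Presentation-layer-style translation: the same text encoded in an
# EBCDIC variant (code page 500) and in ASCII. The bytes on the wire
# differ, but decoding restores the original characters, so the
# receiving Application layer can read the data either way.
text = "HELLO"

ebcdic_bytes = text.encode("cp500")   # EBCDIC representation
ascii_bytes = text.encode("ascii")    # ASCII representation

print(ebcdic_bytes.hex())  # c8c5d3d3d6  (EBCDIC byte values)
print(ascii_bytes.hex())   # 48454c4c4f  (ASCII byte values)

# Translating between the two preserves the data itself:
assert ebcdic_bytes.decode("cp500") == ascii_bytes.decode("ascii") == text
```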

With this in mind, it follows that the OSI would include protocols that

define how standard data should be formatted, so key functions like data

compression, decompression, encryption, and decryption are also

associated with this layer. Some Presentation layer standards are

involved in multimedia operations as well.



The Session Layer

The Session layer is responsible for setting up, managing, and

dismantling sessions between Presentation layer entities and keeping

user data separate. Dialog control between devices also occurs at this

layer.

Communication between hosts’ various applications at the Session layer,



as from a client to a server, is coordinated and organized via three

different modes: simplex, half-duplex, and full-duplex. Simplex is simple

one-way communication, kind of like saying something and not getting a

reply. Half-duplex is actual two-way communication, but it can take place

in only one direction at a time, preventing the interruption of the

transmitting device. It’s like when pilots and ship captains communicate

over their radios, or even a walkie-talkie. But full-duplex is exactly like a

real conversation where devices can transmit and receive at the same

time, much like two people arguing or interrupting each other during a

telephone conversation.
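
The three dialog modes can be modeled with a toy "channel" object; the `Channel` class and its rules below are purely illustrative assumptions, not anything from the OSI standard, but they capture the distinctions just described.

```python
# Toy sketch of the three Session-layer dialog modes. A shared
# "channel" tracks who may transmit; the rules differ per mode.
class Channel:
    def __init__(self, mode):
        assert mode in ("simplex", "half-duplex", "full-duplex")
        self.mode = mode
        self.busy_with = None  # who currently holds a half-duplex channel

    def can_send(self, sender):
        if self.mode == "simplex":
            return sender == "A"           # one fixed direction only
        if self.mode == "half-duplex":
            # Two-way, but only one talker at a time (walkie-talkie style).
            return self.busy_with in (None, sender)
        return True                        # full-duplex: anyone, any time

radio = Channel("half-duplex")
radio.busy_with = "A"                      # A keys the mic
print(radio.can_send("A"))                 # True
print(radio.can_send("B"))                 # False until A releases
```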

The Transport Layer

The Transport layer segments and reassembles data into a single data

stream. Services located at this layer take all the various data received

from upper-layer applications, then combine it into the same, concise

data stream. These protocols provide end-to-end data transport services

and can establish a logical connection between the sending host and

destination host on an internetwork.

A pair of well-known protocols called TCP and UDP are integral to this

layer, but no worries if you’re not already familiar with them because I’ll

bring you up to speed later, in Chapter 3. For now, understand that

although both work at the Transport layer, TCP is known as a reliable

service but UDP is not. This distinction gives application developers more

options because they have a choice between the two protocols when they

are designing products for this layer.
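
That choice between the two protocols shows up directly in the standard sockets API. A minimal sketch: requesting a stream socket gets you TCP, while requesting a datagram socket gets you UDP; no traffic is sent by merely creating either one.

```python
import socket

# The Transport-layer choice is visible in the sockets API:
# SOCK_STREAM requests TCP (reliable, connection-oriented), while
# SOCK_DGRAM requests UDP (connectionless, no delivery guarantee).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP

# Nothing has been sent yet; even the TCP socket only builds its
# virtual circuit when connect() triggers the three-way handshake.
print(tcp_sock.type == socket.SOCK_STREAM)  # True
print(udp_sock.type == socket.SOCK_DGRAM)   # True

tcp_sock.close()
udp_sock.close()
```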

The Transport layer is responsible for providing mechanisms for

multiplexing upper-layer applications, establishing sessions, and tearing

down virtual circuits. It can also hide the details of network-dependent

information from the higher layers as well as provide transparent data

transfer.

The term reliable networking can be used at the Transport

layer. Reliable networking requires that acknowledgments,

sequencing, and flow control will all be used.

The Transport layer can be either connectionless or connection-oriented,

but because Cisco really wants you to understand the connection-



oriented function of the Transport layer, I’m going to go into that in more

detail here.



Connection-Oriented Communication

For reliable transport to occur, a device that wants to transmit must first

establish a connection-oriented communication session with a remote

device—its peer system—known as a call setup or a three-way



handshake. Once this process is complete, the data transfer occurs, and

when it’s finished, a call termination takes place to tear down the virtual

circuit.

Figure 1.10 depicts a typical reliable session taking place between sending

and receiving systems. In it, you can see that both hosts’ application

programs begin by notifying their individual operating systems that a

connection is about to be initiated. The two operating systems

communicate by sending messages over the network confirming that the

transfer is approved and that both sides are ready for it to take place.

After all of this required synchronization takes place, a connection is fully

established and the data transfer begins. And by the way, it’s really

helpful to understand that this virtual circuit setup is often referred to as

overhead!



FIGURE 1.10

Establishing a connection-oriented session

Okay, now while the information is being transferred between hosts, the

two machines periodically check in with each other, communicating

through their protocol software to ensure that all is going well and that

the data is being received properly.

Here’s a summary of the steps in the connection-oriented session—that

three-way handshake—pictured in Figure 1.10:

The first “connection agreement” segment is a request for



synchronization (SYN).

The next segments acknowledge (ACK) the request and establish

connection parameters—the rules—between hosts. These segments


request that the receiver’s sequencing is synchronized here as well so

that a bidirectional connection can be formed.

The final segment is also an acknowledgment, which notifies the

destination host that the connection agreement has been accepted and

that the actual connection has been established. Data transfer can

now begin.
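
The three steps above can be sketched as a toy simulation. The sequence numbers here are made up for illustration (real TCP picks random initial sequence numbers), but the exchange of segment types matches the handshake just described.

```python
# Toy simulation of the three-way handshake described above.
# Sequence and acknowledgment numbers are illustrative only.
def three_way_handshake():
    exchange = []

    # 1. The sender requests synchronization.
    exchange.append(("SYN", {"seq": 100}))

    # 2. The receiver acknowledges the request and asks that its own
    #    sequencing be synchronized too, so a bidirectional
    #    connection can be formed.
    exchange.append(("SYN-ACK", {"seq": 300, "ack": 101}))

    # 3. The final acknowledgment: the connection agreement has been
    #    accepted, and data transfer can now begin.
    exchange.append(("ACK", {"ack": 301}))
    return exchange

for segment, fields in three_way_handshake():
    print(segment, fields)
```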

Sounds pretty simple, but things don’t always flow so smoothly.

Sometimes during a transfer, congestion can occur because a high-speed

computer is generating data traffic a lot faster than the network itself can

process it! And a whole bunch of computers simultaneously sending

datagrams through a single gateway or destination can also jam things up

pretty badly. In the latter case, a gateway or destination can become

congested even though no single source caused the problem. Either way,

the problem is basically akin to a freeway bottleneck—too much traffic for

too small a capacity. It’s not usually one car that’s the problem; it’s just

that there are way too many cars on that freeway at once!

But what actually happens when a machine receives a flood of datagrams

too quickly for it to process? It stores them in a memory section called a



buffer. Sounds great; it’s just that this buffering action can solve the

problem only if the datagrams are part of a small burst. If the datagram

deluge continues, eventually exhausting the device’s memory, its flood

capacity will be exceeded and it will dump any and all additional

datagrams it receives, just like an overflowing bucket!
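
A bounded queue makes this behavior concrete; the buffer size and burst length below are arbitrary values chosen for illustration. The buffer absorbs the start of the burst, and once it's full, every additional datagram is simply dumped.

```python
from collections import deque

# Sketch of the buffering behavior described above: a bounded buffer
# absorbs a small burst, but once memory is exhausted, every further
# datagram is dropped.
BUFFER_SIZE = 4
buffer = deque()
dropped = []

for datagram in range(10):          # a burst of 10 datagrams
    if len(buffer) < BUFFER_SIZE:
        buffer.append(datagram)     # room left: queue it
    else:
        dropped.append(datagram)    # buffer full: dump it

print(list(buffer))   # [0, 1, 2, 3]
print(dropped)        # [4, 5, 6, 7, 8, 9]
```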

Flow Control

Since floods and losing data can both be tragic, we have a fail-safe

solution in place known as flow control. Its job is to ensure data integrity

at the Transport layer by allowing applications to request reliable data

transport between systems. Flow control prevents a sending host on one

side of the connection from overflowing the buffers in the receiving host.

Reliable data transport employs a connection-oriented communications

session between systems, and the protocols involved ensure that the

following will be achieved:

The segments delivered are acknowledged back to the sender upon

their reception.

Any segments not acknowledged are retransmitted.



Segments are sequenced back into their proper order upon arrival at

their destination.

A manageable data flow is maintained in order to avoid congestion,

overloading, or worse, data loss.

The purpose of flow control is to provide a way for the

receiving device to control the amount of data sent by the sender.

Because of the transport function, network flood control systems really

work well. Instead of dumping and losing data, the Transport layer can

issue a “not ready” indicator to the sender, or potential source of the

flood. This mechanism works kind of like a stoplight, signaling the

sending device to stop transmitting segment traffic to its overwhelmed

peer. After the peer receiver processes the segments already in its

memory reservoir—its buffer—it sends out a “ready” transport indicator.

When the machine waiting to transmit the rest of its datagrams receives

this “go” indicator, it resumes its transmission. The process is pictured in Figure 1.11.
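
The stoplight mechanism can be sketched as a short simulation; the buffer size and the shape of the "ready"/"not ready" signals are illustrative assumptions, not a real protocol. Unlike the overflowing-bucket scenario, nothing is dumped: the sender pauses, the receiver drains its buffer, and transmission resumes.

```python
from collections import deque

# Sketch of the "stoplight" flow control described above: the receiver
# buffers segments, signals "not ready" when its buffer fills, drains
# the buffer, then signals "ready" so the sender resumes. No data is lost.
BUFFER_SIZE = 3

def transfer(segments):
    buffer, delivered, signals = deque(), [], []
    pending = deque(segments)
    while pending or buffer:
        if len(buffer) == BUFFER_SIZE:
            signals.append("not ready")      # sender stops transmitting
            while buffer:                    # receiver processes its buffer
                delivered.append(buffer.popleft())
            signals.append("ready")          # sender resumes
        elif pending:
            buffer.append(pending.popleft())
        else:                                # nothing left to send: drain
            delivered.append(buffer.popleft())
    return delivered, signals

delivered, signals = transfer(range(7))
print(delivered)  # [0, 1, 2, 3, 4, 5, 6]
print(signals)    # ['not ready', 'ready', 'not ready', 'ready']
```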

