TCP/IP Reference Model Continued

This model is divided into four layers:

  1. Application Layer
  2. Transport Layer
  3. Internet Layer
  4. Host-to-Network Layer/Network Access Layer/Link Layer

 

Network Access Layer:

This is the first of the four layers of the TCP/IP model. Not much is formally specified about this layer, except that it was designed so that a host can connect to the network using some protocol. We will see what is available here as the discussion proceeds. This layer defines how data is sent across a network: practically, how bits are transferred as signals across the network by the hardware, over coaxial cable, optical fiber, twisted-pair cable and so on.

The protocols included in this layer are varied: Ethernet, Token Ring, X.25, FDDI, Frame Relay, RS-232 and V.35. We will discuss these protocols as we proceed further.

Internet Layer:

One major requirement from the DoD was a network where connections would remain intact as long as the source and destination machines were up, even if some machines or lines in between were knocked out. This necessity led to the design of a packet-switching network based on a connectionless internetwork layer. This is the second layer, above the Network Access Layer. Its job is to split data into packets, called IP datagrams, each carrying a source and a destination address. These packets travel independently to the destination and may arrive there in any order; it is the destination's job to put them back in order (we will take up how later on). The IP protocol and its packet format are defined at this layer. The protocols used in this layer are IP, ICMP, ARP and RARP. We will talk about this layer in detail later.
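As a rough illustration of what "a source and a destination address on every datagram" means in practice, here is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header from raw bytes. The field layout follows the standard IPv4 header; the sample packet is hand-made for the example, not captured traffic.

    import socket
    import struct

    def parse_ipv4_header(packet: bytes) -> dict:
        """Unpack the fixed 20-byte IPv4 header and return a few interesting fields."""
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "version": ver_ihl >> 4,
            "header_len": (ver_ihl & 0x0F) * 4,
            "total_length": total_len,
            "ttl": ttl,
            "protocol": proto,                   # 1 = ICMP, 6 = TCP, 17 = UDP
            "src": socket.inet_ntoa(src),        # source address carried in every datagram
            "dst": socket.inet_ntoa(dst),        # destination address carried in every datagram
        }

    # A hand-made header purely for demonstration.
    sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                         socket.inet_aton("192.0.2.1"), socket.inet_aton("198.51.100.7"))
    print(parse_ipv4_header(sample))

Because each datagram carries both addresses, every router (and the destination) can handle it in isolation, which is exactly what lets packets take independent paths and arrive out of order.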

Published in: on November 18, 2015 at 2:32 pm  Comments (1)  

Resurrection

Well, it has been years since I started this blog, and I feel it is now time to give it a new life. I am starting to write about networks again, and in addition I will be sharing my opinions and views on Testing, Open Source and SAN.

Hope to post an update soon.

Published in: on November 22, 2011 at 9:40 am  Leave a Comment  

TCP/IP Reference Model

From the start, the major design goal was to connect multiple networks in a seamless way, and the resulting design later came to be known as the "TCP/IP Reference Model". The internet protocol suite came from work done by DARPA in the early 1970s. Kahn and Cerf soon worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. DARPA then contracted with BBN, Stanford, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4, the standard protocol still in use on the Internet today.

Now we will have a look at the layers that our reference model consists of. Compared with the OSI structure, we have only four layers in the TCP/IP reference model; we can consider several OSI layers as merged together to form the TCP/IP layers. The IP suite uses encapsulation to provide abstraction of protocols and services. Generally a protocol at a higher level uses a protocol at a lower level to help accomplish its aims. The layers near the top are logically closer to the user, while those near the bottom are logically closer to the physical transmission of the data. Each layer has an upper-layer protocol and a lower-layer protocol (except the top and bottom, of course) that either uses the layer's service or provides it a service, respectively. Viewing layers as providing or consuming a service is a method of abstraction that isolates upper-layer protocols from the nitty-gritty detail of transmitting bits over, say, Ethernet with collision detection, while the lower layers avoid having to know the details of each and every application and its protocol.

This abstraction also allows upper layers to provide services that the lower layers cannot, or choose not to, provide. For example, IP is designed not to be reliable; it is a best-effort delivery protocol. This means that every transport-layer protocol must decide whether to provide reliability and to what degree. UDP provides data integrity (via a checksum, sketched after the list below) but does not guarantee delivery; TCP provides both data integrity and a delivery guarantee (by retransmitting until the receiver acknowledges the packet). This model is in some ways lacking:

  1. For multipoint links with their own addressing systems (e.g. Ethernet) an address-mapping protocol is needed. Such protocols can be considered to sit below IP but above the existing link system.
  2. ICMP & IGMP operate on top of IP but do not transport data the way UDP or TCP do.
  3. The SSL/TLS library operates above the transport layer (it uses TCP) but below application protocols.
  4. The link layer is treated like a black box here. This is fine for discussing IP (since the whole point of IP is that it will run over virtually anything) but is less helpful when considering the network as a whole.
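Coming back to the reliability trade-off mentioned before the list: here is a minimal sketch of the Internet checksum, the 16-bit ones'-complement sum (in the style of RFC 1071) that UDP, TCP and the IPv4 header use for data integrity. This is illustrative only, not an optimized implementation.

    def internet_checksum(data: bytes) -> int:
        """16-bit ones'-complement Internet checksum (RFC 1071 style)."""
        if len(data) % 2:
            data += b"\x00"                           # pad odd-length data with a zero byte
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]     # add each 16-bit word
            total = (total & 0xFFFF) + (total >> 16)  # fold any carry back into the low 16 bits
        return ~total & 0xFFFF                        # final ones' complement

    payload = b"hello, network"
    print(hex(internet_checksum(payload)))
    # A receiver that sums the received data together with the transmitted checksum
    # gets 0xFFFF if nothing was corrupted in transit.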

Note: The upper-layer protocol (ULP) refers to the more abstract protocol when performing encapsulation, whereas the lower-layer protocol (LLP) refers to the more specific protocol. For example, in the internet protocol suite, IP is the lower-layer protocol to UDP and TCP; likewise, UDP and TCP are two upper-layer protocols for IP.
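To make the ULP/LLP note concrete, here is a hedged sketch of encapsulation: application data is wrapped in a UDP-style header, and that segment is then wrapped in a deliberately simplified, IP-like header. The "IP" header here is trimmed down for illustration (it is not a full, standards-complete header) and the addresses are documentation examples.

    import struct

    def udp_encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
        # Real UDP header layout: source port, destination port, length, checksum (0 = none).
        header = struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), 0)
        return header + payload

    def ip_encapsulate(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
        # Reduced "IP-like" header: total length, protocol number (17 = UDP), then the two addresses.
        src = bytes(int(x) for x in src_ip.split("."))
        dst = bytes(int(x) for x in dst_ip.split("."))
        header = struct.pack("!HB", 11 + len(segment), 17) + src + dst
        return header + segment

    app_data = b"GET /index.html"                                   # application-layer data
    segment = udp_encapsulate(app_data, 50000, 80)                  # transport layer wraps it
    packet = ip_encapsulate(segment, "192.0.2.1", "198.51.100.7")   # internet layer wraps that
    print(len(app_data), len(segment), len(packet))                 # each lower layer adds its own header

Each call hands its output down to the next lower layer unchanged, which is the whole point of the ULP/LLP relationship.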

Published in: on May 2, 2006 at 1:34 pm  Comments (4)  

OSI Reference Model

So we have seen the different kinds of networks available (not all; others we will cover later), and we have a rough idea of what each type of network means. As we proceed we will learn more about their structure and standards. In order to understand these variations in structure and standards, we will start with the layers in the OSI and TCP/IP models and whether these layers depend on each other. To begin with, we will take the OSI layer model.

OSI Reference Model

This model was developed from a proposal by the International Organization for Standardization (ISO) and was, in a sense, the first effort to standardize the protocols used in the various layers. It is well known that OSI has seven layers. Earlier networking was completely vendor-developed and proprietary, with protocol standards such as SNA and DECnet. OSI was a new industry effort, attempting to get everyone to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to talk to other devices because of a lack of common protocols between them. Many of the protocols and specifications in the OSI stack are long gone or have been superseded, such as token-bus media, CLNP packet delivery, FTAM file transfer, and X.400 e-mail. Some are still alive, like the X.500 directory structure, because the original unwieldy protocol has been stripped away and effectively replaced with LDAP.

 

The principles that were applied to arrive at the seven layers can be briefly summarized as follows:

  1. A layer should be created where a different abstraction is needed.
  2. Each layer should perform a well-defined function.
  3. The function of each layer should be chosen with an eye towards defining internationally standardized protocols.
  4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
  5. The number of layers should be large enough that distinct functions need not be thrown together in the same layer out of necessity and small enough that the architecture does not become unwieldy.

     

    Note: OSI model itself is not a network architecture because it does not specify the exact services and protocols to be used in each layer. It just tells what each layer should do.

    The collapse of the OSI project in 1996 severely damaged the reputation and legitimacy of the organizations involved, especially ISO. This model is now overshadowed by the TCP/IP model.

    In the next sections we will cover the layers in the network and their functionalities, but this will be in reference to the TCP/IP model.

Published in: on May 1, 2006 at 2:23 pm  Leave a Comment  

WAN and Internetworks

Wide Area Networks: 

A WAN spans a large geographical area, often a country or continent. A large number of machines are connected on this network; they are generally called hosts. These hosts are connected by a communication subnet, or just subnet for short. The subnet carries messages from one host to another. There are two distinct components in a WAN:

  1. Transmission Lines: They move bits between machines and can be made of copper wire, optical fibre or even radio links; a transmission line can be defined as the material medium or structure that forms all or part of a path from one place to another for directing the transmission of energy.
  2. Switching Elements: They are specialized computers that connect three or more transmission lines. When data arrives on an incoming line, the switching element must decide the outgoing line on which to forward it; routers, for example, are switching elements. (A small sketch of this forwarding decision follows the list.)
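Here is the small sketch promised in point 2: a toy longest-prefix-match lookup over a hypothetical routing table, deciding which outgoing line a packet should be forwarded on. Real switching elements use far more elaborate data structures, so treat this as an illustration only.

    import ipaddress

    # Hypothetical routing table: destination prefix -> outgoing transmission line.
    routing_table = {
        ipaddress.ip_network("10.0.0.0/8"): "line-A",
        ipaddress.ip_network("10.1.0.0/16"): "line-B",
        ipaddress.ip_network("0.0.0.0/0"): "line-default",   # default route
    }

    def forward(dst: str) -> str:
        """Pick the outgoing line using longest-prefix match, as a router would."""
        dst_addr = ipaddress.ip_address(dst)
        matches = [net for net in routing_table if dst_addr in net]
        best = max(matches, key=lambda net: net.prefixlen)    # the most specific prefix wins
        return routing_table[best]

    print(forward("10.1.2.3"))      # -> line-B (more specific than 10.0.0.0/8)
    print(forward("203.0.113.9"))   # -> line-default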

WANs are used to connect local area networks (LANs) together, so that users and computers in one location can communicate with users and computers in other locations. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay. Academic research into wide area networks can be broken down into three areas: Mathematical models, network emulation and network simulation.

Internetworks:

There has always been a desire among people on different networks to communicate with each other. This desire to connect different networks was accomplished by joining them through routers (earlier called gateways), machines that provide the connections and the necessary translation, both in hardware and software. A collection of interconnected networks is called an internetwork or internet; alternatively, it can be defined as the connecting of two or more distinct computer networks into an internetwork using devices called routers, allowing traffic to flow back and forth between them. Subnet, network and internetwork are commonly confused terms. The subnet refers to the collection of routers and communication lines owned by the network operator; the combination of a subnet and its hosts forms a network, and an internetwork is formed when distinct networks are interconnected.

Published in: on April 28, 2006 at 2:21 pm  Leave a Comment  

LAN and MAN

Local Area Network: These are privately owned networks within a single building or campus of up to a few kilometers in size. In the days before personal computers, a site might have just one central computer, with users accessing it via computer terminals over simple low-speed cabling. Networks such as IBM's SNA (Systems Network Architecture) were aimed at linking terminals or other mainframes at remote sites over leased lines, so these were wide area networks. The first LANs were created in the late 1970s and used to create high-speed links between several large central computers at one site. The growth of the LAN came with the introduction of CP/M- and DOS-based personal computers, which created the need to connect dozens of computers at a site. The initial attraction of networking then was generally to share disk space and printers, which were both very expensive at the time. There was much enthusiasm for the concept, and for several years from about 1983 onward computer industry gurus would regularly declare the coming year to be "the year of the LAN". LANs are distinguished from other kinds of networks by three characteristics: 1. their size, 2. their transmission technology and 3. their topology. They are restricted in size, which means that the worst-case transmission time is bounded and known in advance. There are various IEEE standards for LANs, for example IEEE 802.3, popularly known as Ethernet.

Metropolitan Area Network: 

Metropolitan Area Networks or MANs are large computer networks usually spanning a campus or a city. The best way to picture this type of network is the cable television network available in many cities. Universities and colleges may also have many LANs connected among themselves to form a MAN on a site of a few square kilometers. These MANs can in turn be linked to form a WAN. The technologies used for the purpose are ATM, FDDI and SMDS. The IEEE standard for this kind of network is IEEE 802.6, popularly known as Distributed Queue Dual Bus (DQDB). Using this standard the network can extend up to 30 miles and operate at speeds of 34 to 155 Mbps.

Published in: on April 24, 2006 at 2:42 pm  Comments (3)  

Overview of Transmission Technology

So now we have seen how this network all started and what it has gone through. We have already listed a few important dates, and more will come along the way. Let us now proceed; to start with, there are two types of transmission technology in use, namely:

  1. Broadcast Links
  2. Point-to-Point Links

Now the question arises what these links are and so we go ahead to the answer of these questions.

Broadcast networks have a single communication channel that is shared by all the machines on the network. Packets (short messages, the fundamental unit of data transfer) sent by one machine are received by all the others. There is an address field in the packet, and when a machine on the network receives a packet it checks this address field. If the packet is intended for that machine it is processed; otherwise it is ignored. Such systems usually also allow a packet to be addressed to all machines on the network by using a special code in the address field. When a packet with this code is received, every machine on the network processes it. This is called "broadcasting".
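A minimal sketch of that receive-side decision, assuming a made-up frame format with a destination address field and an Ethernet-style all-ones broadcast address:

    BROADCAST = "ff:ff:ff:ff:ff:ff"   # the special "everyone" address, as on Ethernet

    def should_process(frame_dst: str, my_addr: str) -> bool:
        """A station keeps a frame addressed to it or to the broadcast address, else ignores it."""
        return frame_dst == my_addr or frame_dst == BROADCAST

    my_addr = "02:00:00:00:00:01"
    frames = [
        {"dst": "02:00:00:00:00:01", "data": "for me"},
        {"dst": "02:00:00:00:00:99", "data": "for somebody else"},
        {"dst": BROADCAST,           "data": "for everyone"},
    ]
    for frame in frames:
        action = "process" if should_process(frame["dst"], my_addr) else "ignore"
        print(action, "-", frame["data"])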

Note: Broadcasting is not supported by all computer networks; for example, neither X.25 nor Frame Relay supplies a broadcast capability, nor is there any form of Internet-wide broadcast. Broadcasting is largely confined to local area network (LAN) technologies, most notably Ethernet and Token Ring, where the performance impact of broadcasting is not as large as it would be in a wide area network.

The second type, called a point-to-point network, consists of many connections between individual pairs of machines. To get from source to destination, a packet on such a network may first have to visit one or more intermediate machines. There are often many different routes to the destination, of different lengths, so finding a good path is a necessity in such a network. A point-to-point transmission with exactly one sender and one receiver is called "unicasting".
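Finding a good path is a classic shortest-path problem. Below is a minimal sketch using Dijkstra's algorithm over a hypothetical point-to-point topology; the link costs could stand for hop counts, delays or anything else the operator cares about.

    import heapq

    def shortest_path(graph, src, dst):
        """Dijkstra's algorithm over a weighted graph given as {node: {neighbour: cost}}."""
        queue = [(0, src, [src])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph.get(node, {}).items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return None

    # Hypothetical topology: four machines joined by point-to-point links.
    topology = {
        "A": {"B": 1, "C": 4},
        "B": {"C": 1, "D": 5},
        "C": {"D": 1},
        "D": {},
    }
    print(shortest_path(topology, "A", "D"))   # -> (3, ['A', 'B', 'C', 'D'])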

Now that we have seen the most widespread transmission technologies used on networks, we need to discuss the classification of networks. Networks are basically classified by their physical size (the distance they span). Here are some classifications:

Distance              Example scope               Type
1 m                   Square meter                Personal Area Network
10 m - 1 km           Room, building, campus      Local Area Network
10 km                 City                        Metropolitan Area Network
100 km - 1000 km      Country, continent          Wide Area Network
10,000 km             Planet                      The Internet

We can now start our discussion based on this classification of networks.

Published in: on April 18, 2006 at 1:09 pm  Comments (6)  

Network Basic (contd.)

Without going deeper into Fourier transforms (that is too big a subject to deal with here), we will now look at theoretical and practical networks. We now know how the foundation stone of communication was laid, but when and how this network developed is a different story.

Carrying instructions between calculating machines and early computers was done by human users. In September 1940, George Stibitz used a teletype machine to send instructions for a problem set from his Model K at Dartmouth College in New Hampshire to his Complex Number Calculator in New York and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET. In 1964 researchers at Dartmouth developed a time-sharing system for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a computer (DEC's PDP-8) to route and manage telephone connections. In 1968 Paul Baran proposed a network system consisting of datagrams or packets that could be used in a packet-switching network between computer systems. In 1969 the University of California at Los Angeles, SRI, the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET, using 50 kbit/s circuits. This was the beginning of what we today call the Internet. It has been a long time since then, and many technologies have evolved that are now part of this network. Some of the important dates in computer network history:

  1. Timesharing, the concept of linking a large number of users to a single computer via remote terminals, is developed at MIT in the late 50s and early 60s.

  2. 1962: Paul Baran of RAND develops the idea of distributed, packet-switching networks.
  3. ARPANET goes online in 1969.
  4. Bob Kahn and Vint Cerf develop the basic ideas of the Internet in 1973.
  5. In 1974 BBN opens the first public packet-switched network – Telenet.
  6. A UUCP link between the University of North Carolina at Chapel Hill and Duke University establishes USENET in 1979. The first MUD is also developed in 1979, at the University of Essex.
  7. TCP/IP (Transmission Control Protocol and Internet Protocol) is established as the standard for ARPANET in 1982.
  8. 1987: the number of network hosts breaks 10,000.
  9. 1989: the number of hosts breaks 100,000.
  10. Tim Berners-Lee develops the World Wide Web. CERN releases the first Web server in 1991.

  11. 1992: the number of hosts breaks 1,000,000.
  12. The World Wide Web sports a growth rate of 341,634% in service traffic in its third year, 1993.
  13. The main U.S. Internet backbone traffic begins routing through commercial providers as NSFNET reverts to a research network in 1994.
  14. The Internet 1996 World Exposition is the first World's Fair to be held on the internet.
Published in: on April 14, 2006 at 4:16 pm  Leave a Comment  

Network Basic

Well, it has been a long time since I first came across the word "network". It seems to be a simple word with a general definition: "an interconnected system of things or people". But has anyone ever thought what the definitive, final meaning of this word would be? To date I have not come across a definition that would limit the meaning of this word.

I have tried hard to find the exact meaning of the word "network", but in vain, because its domain expands to everything we can imagine, be it computers, humans and so on.

What I am trying to deal with here are computer networking concepts, their solutions, and the security issues that haunt us the most.

A general definition of a computer network could be "a system for communication between computers" or "a data communications system that interconnects computer systems at various different sites". The question now is how this data communication started, and what the basis for this kind of communication was; here is the answer.

Theoretical Basis for Data Communication

Communication has been of prime importance in human life; without it, none of us could perform even our daily activities. As the saying goes, "necessity is the mother of invention", and so came the idea that information can be transmitted on wires by varying some physical property such as voltage or current. By representing the value of this voltage or current as a single-valued function of time, f(t), we can model the behaviour of the signal and analyze it mathematically.

Fourier analysis provided the answer to this question and is considered the theoretical basis of data communication.

Fourier Analysis:  

It says that any reasonably well-behaved periodic function, g(t), with period T can be constructed as the sum of a (possibly infinite) number of sines and cosines:

g(t) = c/2 + Σ (n = 1 to ∞) an sin(2πnft) + Σ (n = 1 to ∞) bn cos(2πnft), where f = 1/T is the fundamental frequency and an, bn are the sine and cosine amplitudes of the nth harmonic.

So this was the starting point of communication over electronic media. Here is a brief idea of the relation between Fourier analysis and computers.

 

Fourier Analysis with Computers

There have been several techniques (tried and tested) that enable a computer to calculate the Frequency Spectrum of a signal. The first step in all cases is to convert the signal to a set of numbers for the computer to use. This is done by sampling the signal at a regular interval so that a table of values is created. Each sample value is separated from the next by a fixed period of time.

The number of points obtained and the time between samples together determine the length of time over which we look at the signal. We take the following definitions into consideration:

fs = sample rate in Hz

dT = 1/fs = interval between samples

N = number of samples taken

T = N x dT = total time period

f1 = 1/T = frequency of the first harmonic in Hz

For example, suppose we are interested in the AC power line, and we choose a sample rate (fs) of 6000 Hz and collect 100 samples (N = 100). This gives dT = 0.1667 msec and T = 16.667 msec, so the first harmonic is f1 = 60 Hz.
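Written out as a quick check (this is just the arithmetic from the paragraph above):

    fs = 6000        # sample rate in Hz
    N = 100          # number of samples
    dT = 1 / fs      # interval between samples -> 0.16667 ms
    T = N * dT       # total observation time   -> 16.667 ms
    f1 = 1 / T       # first harmonic           -> 60.0 Hz
    print(dT * 1000, T * 1000, f1)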

The traditional mathematical approach to Fourier analysis was based on approximating continuous waveforms, but computer techniques can only deal with a set of samples. This does not change the basic idea of harmonic analysis, but we now keep the following in mind:

1. The spectrums based on sampled waveforms can generate only N/2 harmonics.

2. If the original signal contains more than N/2 harmonics, the higher frequency harmonics will cause errors in the magnitude spectrum.

This type of error is referred to as aliasing. Aliasing can be prevented by filtering the input signal before performing the Fourier analysis so that it contains no frequency components above f1 × N/2, which works out to fs/2 (the Nyquist frequency).
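A small demonstration of aliasing, assuming NumPy is available: a 900 Hz tone sampled at 1000 Hz (well above fs/2 = 500 Hz) shows up in the spectrum at 100 Hz.

    import numpy as np

    fs, N = 1000, 1000                        # 1 kHz sampling, one second of data
    t = np.arange(N) / fs
    samples = np.sin(2 * np.pi * 900 * t)     # 900 Hz is above the 500 Hz limit, so it must alias
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(N, d=1/fs)
    print(freqs[np.argmax(spectrum)])         # prints 100.0: the tone folds back to fs - 900 Hz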

The most popular computer algorithm for generating a frequency spectrum is the FFT or Fast Fourier Transform. As the name implies, the FFT is very efficient but it does have one quirk that affects the way it is used. The FFT can only process a sampled waveform where N (number of samples) is a power of 2. Acceptable values of N include 128, 256, 512 and 1024.
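As a worked example, here is the power-line signal from earlier run through NumPy's FFT. The sample rate is adjusted to 6144 Hz so that N = 1024 (a power of two) gives f1 = 6 Hz and 60 Hz falls exactly on a bin; the 2/N scaling at the end recovers trigonometric-series amplitudes, which ties in with the note below.

    import numpy as np

    fs = 6144                                 # chosen so N = 1024 samples give f1 = 6 Hz exactly
    N = 1024                                  # a power of two, as the classic radix-2 FFT requires
    t = np.arange(N) / fs
    signal = np.sin(2 * np.pi * 60 * t)       # a 60 Hz "power line" test tone

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(N, d=1/fs)
    magnitude = 2 * np.abs(spectrum) / N      # scale by 2/N to get trigonometric-series amplitudes

    print(freqs[np.argmax(magnitude)], magnitude.max())   # ~60.0 Hz with amplitude close to 1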

 Note: 

The FFT, like most computer algorithms, generates an Exponential Fourier Series instead of a Trigonometric Fourier Series. The two series are identical except that the magnitudes generated by the exponential series are half the values of the trigonometric series. Most application software automatically compensates for this and presents the magnitude spectrum as a Trigonometric series.

Published in: on April 14, 2006 at 12:15 pm  Comments (4)