
The primary goal of the Internet is to provide an abstract view of the complexities involved in it. The Internet must appear as a single network of computers. At the same time, network administrators and users must be free to choose hardware and internetworking technologies such as Ethernet, Token Ring, etc. Different networking technologies have different physical addressing mechanisms, so identifying a computer on the Internet is a challenge. To provide uniform addressing for computers over the Internet, IP software defines an IP address, which is a logical address. Now, when a computer wants to communicate with another computer on the Internet, it can use the logical address and is not bothered with the physical address of the destination, and hence with the format and size of data packets on the underlying network.

Ans : The Internet layer is an important layer in the protocol suite. At this layer, TCP/IP supports the Internetworking Protocol (IP). IP is a host-to-host protocol. This layer is responsible for the format of datagrams as defined by IP, and for routing a datagram or packet to the next hop, but it is not responsible for the accurate and timely delivery of datagrams to the destination in proper sequence. IP provides a raw transmission function, allowing the user to add only the functionalities necessary for a given application, thus ensuring maximum efficiency. TCP/IP supports four other protocols at this layer: ARP, RARP, ICMP and IGMP.

• Address Resolution Protocol (ARP)
On a LAN, each machine is identified with a unique physical address imprinted
on the network interface card. ARP is used to find the physical address of a
machine when its IP address is known.

• Reverse Address Resolution Protocol (RARP)
It is used to find the IP address of a machine when its physical address is
known. It is used when a diskless computer is booted or a computer is
connected to the network for the first time.

• Internet Control Message Protocol (ICMP)
IP is an unreliable, best-effort delivery protocol. In case of failures, ICMP is used to send notifications to the sender about packet problems. It sends error-reporting and query messages.

• Internet Group Management Protocol (IGMP)
It is used for multicasting, which is the transmission of a single message to a group
of recipients.
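
The ARP mechanism described above can be sketched as a lookup table mapping IP addresses to physical (MAC) addresses. The addresses and the `resolve` helper below are hypothetical, for illustration only; a real ARP implementation broadcasts a request on the LAN and caches the reply:

```python
# Toy ARP-style cache: IP address -> physical (MAC) address.
# Entries are made-up illustration values, not real hosts.
arp_cache = {
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.11": "aa:bb:cc:dd:ee:02",
}

def resolve(ip_address):
    """Return the cached MAC for an IP, as ARP would after a lookup."""
    mac = arp_cache.get(ip_address)
    if mac is None:
        # A real stack would broadcast "who has <ip>?" and wait for a reply.
        raise LookupError(f"no ARP entry for {ip_address}")
    return mac

print(resolve("192.168.1.10"))  # aa:bb:cc:dd:ee:01
```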

Transport Layer
At this layer, TCP/IP supports two protocols: TCP and UDP. IP is a host-to-host protocol, which can deliver a packet from one physical device to another physical device. TCP and UDP are transport-level protocols, responsible for delivering a packet from a process on one device to a process on the other device.

User Datagram Protocol (UDP)
It is the simpler of the two protocols. It does not provide reliability. It is therefore faster, and is used for applications in which delay is intolerable (as in the case of audio and video).

Transmission Control Protocol (TCP)
TCP is a reliable, connection-oriented protocol. By connection-oriented, we mean that a connection must be established between both ends before either can transmit data. It ensures that communication is error-free and in sequence.
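
The distinction between the two transport protocols is visible directly in the standard socket API. The sketch below (Python's `socket` module, assuming a loopback interface is available) creates both socket types and sends one UDP datagram to itself with no connection set-up:

```python
import socket

# UDP: connectionless datagram socket; no handshake, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# TCP: connection-oriented stream socket; connect() would perform the
# handshake before any data could be exchanged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Minimal loopback round trip over UDP (no connection established):
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))     # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()
udp.sendto(b"hello", addr)          # fire-and-forget datagram
data, _ = receiver.recvfrom(1024)
print(data)                         # b'hello'

udp.close(); tcp.close(); receiver.close()
```

The TCP socket above is created but never connected; doing the same round trip over TCP would require `listen()`, `accept()` and `connect()` first, which is exactly the connection set-up the text describes.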

Application Layer
As said earlier, it corresponds to the combined session, presentation, and application layers of the OSI model. It allows the user to run various applications on the Internet. These applications include File Transfer Protocol (FTP), remote login (TELNET), email (SMTP), and the WWW (HTTP). The session layer of the OSI model is almost entirely dropped in TCP/IP.

The main reason is that each computer network is designed with a specific purpose. For example, a LAN is used to connect computers in a smaller area, and it provides fast communication. As a result, networks became specialized. In many cases, these networks do not use the same hardware and software technology, which means that a computer can communicate only with computers attached to the same network, because the networks are incompatible. As more and more organizations had multiple computer networks, this became a major issue. As a result, the concept of internetworking (internet) came into being: there should be a network of all physically separate networks.


Communication is the process of sharing ideas, information, and messages with others
at a particular time and place. Communication is a vital part of personal life and is
also important in business, education, and any other situation where people encounter
each other. Communication between two people is an outgrowth of methods
developed over centuries of expression. Gestures, the development of language, and
the necessity to engage in joint action all play a part. Communication, as we see it
today, has evolved a long way. We will discuss the primitive modes of
communication briefly.

i) Early Methods
Early societies developed systems for sending simple messages or signals that could
be seen or heard over a short distance, such as drumbeats, fire and smoke signals, or
lantern beacons. Messages were attached to the legs of carrier pigeons that were
released to fly home (this system was used until World War I, which started in 1914).
Semaphore systems (visual codes) of flags or flashing lights were employed to send
messages over relatively short but difficult-to-cross distances, such as from hilltop to
hilltop, or between ships at sea.

ii) Postal Services
The postal system is a system by which written documents, normally enclosed in envelopes, and also small packages containing other matter, are delivered to destinations around the world. Anything sent through the postal system is called post. In India, the East India Company introduced the postal system in Mumbai, Chennai and Calcutta in 1766; later, these postal services became available to the general public. Even after the introduction of various electronic communication media, the postal system is still one of the popular communication systems available.

iii) Telegraph
The first truly electronic medium for communication was the telegraph, which sent
and received electrical signals over long-distance wires. The first practical
commercial systems were developed by the physicist Sir Charles Wheatstone and the
inventor Sir William F. Cooke in Great Britain, and by the artist and inventor Samuel
F. B. Morse in the United States. Morse demonstrated the first telegraph system in
New York in 1837. But regular telegraph service, relaying Morse code (system of
code using on and off signals), was not established until 1844. Telegraphers would
translate the letters of the alphabet into Morse code, tapping on an electrical switch,
or key. The telegrapher at the other end of the line would decode the tapping as it
came in, write down the message, and send it to the recipient by messenger. The
telegraph made it possible for many companies to conduct their business globally for
the first time.

iv) Telephone
Early devices capable of transmitting sound vibrations and even human speech
appeared in the 1850s and 1860s. The first person to patent and effectively
commercialize an electric telephone was Scottish-born American inventor Alexander
Graham Bell. Originally, Bell thought that the telephone would be used to transmit
musical concerts, lectures, or sermons.
The telephone network has also provided the electronic network for new computer-based systems like the Internet, facsimile transmission, and the World Wide Web. The memory and data-processing power of individual computers can be linked together to exchange data over telephone lines, by connecting computers to the telephone network through devices called modems (modulator-demodulators).

v) Computers and Internet
The earliest computers were machines built to make repetitive numerical calculations
that had previously been done by hand. While computers continued to improve, they
were used primarily for mathematical and scientific calculations, and for encoding
and decoding messages. Computer technology was finally applied to printed
communication in the 1970s when the first word processors were created.

At the same time computers were becoming faster, more-powerful and smaller, and
networks were developed for interconnecting computers. In the 1960s, the Advanced
Research Projects Agency (ARPA) of the U.S. Department of Defense, along with
researchers working on military projects at research centers and universities across
the country, developed a network called the ARPANET for sharing data and the processing time of connected computers over specially equipped telephone lines and satellite links. The network was designed to survive the attack or destruction of some of its parts and continue to work.

Soon, however, scientists using the ARPANET realized that they could send and
receive messages as well as data and programs over the network. The ARPANET
became the first major electronic-mail network; soon thousands of researchers all
over the world used it. Later on, the National Science Foundation (NSF) helped connect more universities and non-military research sites to the ARPANET, and the network came to be called the Internet because it was a network of networks.

TCP/IP Protocols
Today, the Internet is the most widely known computer network. It interconnects computer systems by both wired and wireless links. Smaller networks of computers, called Local Area Networks (LANs), can be installed in a single building or for a whole organization. Wide Area Networks (WANs) can be used to span a large geographical area. LANs and WANs use telephone lines, computer cables, and microwave and laser beams to carry digital information, whether around a smaller area, such as a single college campus, or across the world. The Internet can carry any digital signal, including video images, sounds, graphics, animations, and text; therefore it has become a very popular communication tool.

 Ans : Originally, the IP address space was divided into a few fixed-length structures called

address classes. The three main address classes are class A, class B, and class C. By
examining the first few bits of an address, IP software can quickly determine the
class, and therefore the structure, of an address. IP follows these rules to determine
the address class:

• Class A: If the first bit of an IP address is 0, it is the address of a class A
network. The first bit of a class A address identifies the address class. The next 7
bits identify the network, and the last 24 bits identify the host. There are fewer
than 128 class A network numbers, but each class A network can be composed of
millions of hosts.

• Class B: If the first 2 bits of the address are 1 0, it is a class B network address.
The first 2 bits identify class; the next 14 bits identify the network, and the last 16
bits identify the host. There are thousands of class B network numbers and each
class B network can contain thousands of hosts.

• Class C: If the first 3 bits of the address are 1 1 0, it is a class C network
address. In a class C address, the first 3 bits are class identifiers; the next 21 bits
are the network address, and the last 8 bits identify the host. There are millions of
class C network numbers, but each class C network is composed of fewer than
254 hosts.

• Class D: If the first 4 bits of the address are 1 1 1 0, it is a multicast address.
These addresses are sometimes called class D addresses, but they don’t really
refer to specific networks. Multicast addresses are used to address groups of
computers all at one time. Multicast addresses identify a group of computers that
share a common application, such as a video conference, as opposed to a group of
computers that share a common network.

• Class E: If the first four bits of the address are 1 1 1 1, it is a special reserved
address. These addresses are called class E addresses, but they don’t really refer
to specific networks. No numbers are currently assigned in this range.
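
The bit-pattern rules above can be sketched as a short function. The helper name is illustrative; the comparisons mirror the leading-bit tests exactly:

```python
def ip_class_from_bits(first_byte):
    """Determine the address class from the leading bits of the first byte,
    mirroring the rules IP software applies."""
    if first_byte >> 7 == 0b0:      # leading bit 0
        return "A"
    if first_byte >> 6 == 0b10:     # leading bits 1 0
        return "B"
    if first_byte >> 5 == 0b110:    # leading bits 1 1 0
        return "C"
    if first_byte >> 4 == 0b1110:   # leading bits 1 1 1 0
        return "D"
    return "E"                      # leading bits 1 1 1 1

print(ip_class_from_bits(0b00001010))  # A  (first byte 10)
print(ip_class_from_bits(0b10101100))  # B  (first byte 172)
print(ip_class_from_bits(0b11000000))  # C  (first byte 192)
print(ip_class_from_bits(0b11100000))  # D  (first byte 224)
print(ip_class_from_bits(0b11110000))  # E  (first byte 240)
```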

IP addresses are usually written as four decimal numbers separated by dots (periods).
Each of the four numbers is in the range 0-255 (the decimal values possible for a
single byte). Because the bits that identify class are contiguous with the network bits
of the address, we can lump them together and look at the address as composed of full
bytes of network address and full bytes of host address. If the value of the first byte is:

• Less than 128, the address is class A; the first byte is the network number, and the
next three bytes are the host address.

• From 128 to 191, the address is class B; the first two bytes identify the network,
and the last two bytes identify the host.

• From 192 to 223, the address is class C; the first three bytes are the network
address, and the last byte is the host number.

• From 224 to 239, the address is multicast. There is no network part. The entire
address identifies a specific multicast group.

• Greater than 239, the address is reserved.
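
Equivalently, the first-byte ranges above translate into a simple classifier for dotted-decimal addresses. This is a sketch for classful addressing only, with a hypothetical helper name:

```python
def ip_class(address):
    """Classify a dotted-decimal IPv4 address by its first byte,
    following the range rules above."""
    first = int(address.split(".")[0])
    if first < 128:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(ip_class("10.1.2.3"))     # A
print(ip_class("172.16.0.1"))   # B
print(ip_class("192.168.1.1"))  # C
print(ip_class("224.0.0.1"))    # D (multicast)
```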

IP Address Classes:

Class | Leading Bits | Format  | Range                        | Network Bits | Host Bits | Hosts per Network | Purpose
A     | 0            | N.H.H.H | 1.0.0.0 to 127.255.255.255   | 7            | 24        | 2^24 - 2          | Few large networks
B     | 1,0          | N.N.H.H | 128.1.0.0 to 191.255.255.255 | 14           | 16        | 2^16 - 2          | Medium-size networks
C     | 1,1,0        | N.N.N.H | 192.0.1.0 to 223.255.255.255 | 21           | 8         | 2^8 - 2           | Relatively small networks
D     | 1,1,1,0      | N/A     | 224.0.0.0 to 239.255.255.255 | N/A          | N/A       | N/A               | Multicast groups (RFC 1112)
E     | 1,1,1,1      | N/A     | 240.0.0.0 to 255.255.255.255 | N/A          | N/A       | N/A               | Future use

The IP address, which provides universal addressing across all of the networks of the
Internet, is one of the great strengths of the TCP/IP protocol suite. However, the
original class structure of the IP address has weaknesses. The TCP/IP designers did
not envision the enormous scale of today’s network. When TCP/IP was being

designed, networking was limited to large organizations that could afford substantial
computer systems. The idea of a powerful UNIX system on every desktop did not
exist. At that time, a 32-bit address seemed so large that it was divided into classes to
reduce the processing load on routers, even though dividing the address into classes
sharply reduced the number of host addresses actually available for use. For example,
assigning a large network a single class B address, instead of six class C addresses,
reduced the load on the router because the router needed to keep only one route for
that entire organization. However, an organization that was given the class B address
probably did not have 64,000 computers, so most of the host addresses available to
the organization were never assigned.
The class-structured address design was critically strained by the rapid growth of the
Internet. At one point it appeared that all class B addresses might be rapidly
exhausted. To prevent this, a new way of looking at IP addresses without a class
structure was developed.
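
The classless scheme that replaced this (CIDR) attaches an explicit prefix length to the address, so the network/host split no longer depends on the leading bits. Python's standard `ipaddress` module can illustrate this; the /22 network below is an arbitrary example, sized between a classful /24 and /16:

```python
import ipaddress

# Classless (CIDR) addressing: the network/host boundary is stated
# explicitly as a prefix length, not implied by the address class.
net = ipaddress.ip_network("172.20.0.0/22")

print(net.num_addresses)                          # 1024
print(net.netmask)                                # 255.255.252.0
print(ipaddress.ip_address("172.20.3.9") in net)  # True
print(ipaddress.ip_address("172.20.4.1") in net)  # False
```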

 The incompatibility issues are handled at two levels:

i) Hardware Issues
At the hardware level, an additional component called a router is used to connect physically distinct networks, as shown in Figure 1. A router connects to the network in the same way as any other computer. Any computer connected to the network has a Network Interface Card (NIC), which has the address (network id + host id) hard-coded into it. A router is a device with more than one NIC. A router can connect incompatible networks because it has the necessary hardware (NICs) and protocols.

ii) Software Issues
The routers must agree about the way information is transmitted to the destination computer on a different network. Since the information is likely to travel through different routers, there must be a predefined standard to which routers must conform. Packet formats and addressing mechanisms used by the networks may differ. One approach could be to perform conversion and reconversion for each network, but this approach is difficult and cumbersome. Therefore, Internet communication follows one protocol suite, TCP/IP. The basic idea is that it defines the packet size, routing algorithms, error control and flow control methods universally.

It would be unwise to club all these features into a single piece of software ─ it would make it very bulky. Therefore, all these features are logically sub-grouped, and the sub-groups are further combined into groups called layers. Each layer has an interface with the adjacent layers, and performs specific functions.

Need for Layering
Since it is difficult to deal with the complex set of rules and functions required for computer networking, these rules and functions are divided into logical groups called layers. Each layer can be implemented independently, with an interface to the adjacent layers that provide services to it or take its services. For example, flow control and error control functions are grouped together into a layer called the data link layer. In a telephone conversation, speech is translated into electrical signals and vice-versa; similarly, in a computer system, data or bit patterns are converted into signals before transmission and converted back on reception. These functions and rules are grouped together in a layer called the physical layer.

We have seen the positive benefits of an SRS in the previous section. Let us look at a scenario wherein the SRS is not properly defined, and at its impact on the project. This will enable us to understand the importance of the SRS to the project:

Impact on cost and schedule: Without a complete and accurate SRS, it would be difficult to properly estimate and plan the overall cost of the project. This would have a ripple effect on resource staffing, milestone planning and the overall project budget. As a result, the entire project schedule will be in jeopardy.

Quality impact: An incomplete requirements specification would manifest itself in an incomplete test plan and impact the quality of all project deliverables. This negatively impacts the project through re-testing, re-coding and re-design efforts, leading to cost and effort overruns.

Impact on overall customer/user satisfaction: Improperly translated user requirements would damage the customer's confidence in the software product and reduce its usability and the overall satisfaction index.

Impact on maintenance: Without proper traceability, it would be difficult to extend the software, enhance it and fix issues.

• Correct: The SRS should specify the functionality correctly from all aspects. It should also be continually updated to reflect all software updates.

• Unambiguous: As the SRS is written in natural language, it is possible for it to be interpreted in multiple ways based on context, cultural background, etc. So the SRS should take these factors into account, and define and refine the requirements in as unambiguous a fashion as possible. This includes providing references, elaborating any abstract requirement with example scenarios, etc. It is a good practice to have the SRS proof-read by another person to weed out any ambiguous descriptions.

• Precise: The description should not contain fuzzy words, so that the requirement remains precise.

• Complete: The SRS should provide all the details required by software designers for the design and implementation of the intended software.

• Consistent: The terminology, definitions and notation used throughout the SRS should be consistent. It is a good practice to pre-define all definitions and abbreviations, and to refer to them consistently throughout the SRS.

• Verifiable: This supplements the unambiguous characteristic. All
requirements should be quantified with exact and verifiable numbers. For
instance “The home page should load quickly” is non-verifiable as “quickly”
is subjective; it is also not mentioned if the page should load quickly across
all geographies. Instead of these subjective terms the requirement should
quantify it with the exact response time: “The home page should load within
2 seconds in North America region”.
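
A verifiable requirement of this kind lends itself directly to an automated check. The sketch below is purely illustrative: `load_home_page` is a hypothetical stub standing in for the real page request, and the 2-second limit comes from the example requirement above:

```python
import time

# Limit taken from the example requirement: "load within 2 seconds".
RESPONSE_LIMIT_SECONDS = 2.0

def load_home_page():
    """Hypothetical stub standing in for a real HTTP request."""
    time.sleep(0.1)  # simulate a fast page load
    return "<html>home</html>"

def meets_response_requirement(load_fn, limit=RESPONSE_LIMIT_SECONDS):
    """Measure one invocation of load_fn against the response-time limit."""
    start = time.perf_counter()
    load_fn()
    elapsed = time.perf_counter() - start
    return elapsed <= limit

print(meets_response_requirement(load_home_page))  # True for the stub
```

Because the requirement is stated as an exact number, a test like this can pass or fail objectively, which is precisely what "verifiable" demands; "should load quickly" admits no such test.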

• Modifiable: Each requirement should be detailed only once throughout the document, so that it is easy to modify and maintain the document in the long run. To ensure that the SRS is modifiable, it should:
1. Be coherent, well-organized and contain cross-referencing
2. Avoid redundancy
3. State each requirement separately

• Traceable: SRS should map the requirements to other business/user
requirement documents so that it is possible to trace the requirements. It
should also support backward-traceability and forward traceability.

• Ranked for importance/stability: The requirements should be ranked based on their deemed business/user importance. Ranking is done based on:
1. Degree of stability: Stability is related to number of changes required
for implementing functionality.
2. Degree of importance: In this case, the requirements are classified
into categories such as essential, conditional and optional.

A few other characteristics of a good SRS are that it should be understandable by people of varied backgrounds, and that it should be design independent, that is, without favoring any particular design.

Functionality: Complete details of the software.

External Interfaces: Details of how the software interacts with external
systems, and end users.

Performance: Provides details of transaction speed, software availability,
response time, failover conditions, disaster recovery scenarios, etc.

Attributes: Provides details about portability, correctness, maintainability,
security, extensibility, flexibility, etc.

Constraints: All applicable architecture and design constraints including
the maximum load supported, supported browsers, JavaScript dependency
and others should be detailed.

• Forms the basis of agreement between customers and suppliers about the software functionality: The SRS serves as a structured contract between these parties, specifying all functionalities along with constraints, and describing the behavior of the intended software. The end user/customer can verify whether the intended software meets all the needs and requirements stated in the user requirements document.

• Optimizes development effort:
As the requirements are fully specified beforehand, the implementation team can design the system accurately
thereby reducing the effort in re-design, re-work, re-testing and defect fixing.

• Forms basis for cost and schedule estimation: Using the functional and non-functional requirements specified in the SRS, the project management team can estimate the overall project cost and schedule in a more accurate fashion and make informed decisions about risk identification and mitigation.

• Forms basis for verification and validation: Quality team can design the validation and testing strategy including various kinds of test cases based on the requirements specified in SRS.

• Helps software portability and installation: The software usability information contained in SRS helps to transfer the software across various locations including multiple inter-company departments and other external customers.

• Helps in enhancement: As the SRS specifies each requirement in the fullest detail, it is easier to assess the impact of any planned enhancement and to provide a cost and schedule estimate for it.