This poster illustrates the main components of a unicast IPv6 address, along with some interesting facts about it.
To help design your IPv6 addressing plan, make sure you download our IP Address Calculator.
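As a rough sketch of how those components fit together, here is a short Python example using the standard ipaddress module. It assumes the common global unicast layout of a 48-bit global routing prefix, a 16-bit subnet ID, and a 64-bit interface ID, and it uses a documentation-range address rather than anything taken from the poster.

```python
import ipaddress

# A documentation-range address; the /48 routing prefix below is assumed.
addr = ipaddress.IPv6Interface("2001:db8:abcd:0042::1/64")

routing_prefix = ipaddress.ip_network("2001:db8:abcd::/48")       # assumed /48 allocation
subnet_id = (int(addr.network.network_address) >> 64) & 0xFFFF    # bits 48-63
interface_id = int(addr) & ((1 << 64) - 1)                        # low 64 bits

print(f"Routing prefix: {routing_prefix}")
print(f"Subnet ID:      {subnet_id:#06x}")       # 0x0042
print(f"Interface ID:   {interface_id:#018x}")   # 0x0000000000000001
```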
Back in the eighties, John Gage of Sun Microsystems coined the phrase, “The network is the computer.” Today, the services made possible by the Internet and our use of them as individuals and organizations affirm John’s statement, perhaps in ways that were not even imaginable three decades ago. Technology trends such as the Cloud, Big Data, and the Internet of Things make the Internet seem less like a data transport mechanism and more like a global computer that drives the world’s economy.
In the basic computer architectural model, known as the Von Neumann architecture, an input device is used to enter data into the computer’s memory. The central processing unit (CPU) processes the data following precise instructions (programs) that are also stored in memory. The processing results are sent to an output device. All these components are connected by a data bus (a set of wires and connectors) that carries the data back and forth between them. The computer also needs power, cooling, and housing.
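As a toy illustration of that flow, here are a few lines of Python; the functions are made up purely for the sake of the analogy and stand in for real hardware.

```python
# Input device -> memory -> CPU (program applied to data) -> output device.
def input_device():
    return [3, 1, 2]            # data entering memory

def cpu(data, program):
    return program(data)        # instructions applied to data held in memory

def output_device(result):
    print("result:", result)

memory = {"data": input_device(), "program": sorted}   # data and program share memory
output_device(cpu(memory["data"], memory["program"]))  # prints: result: [1, 2, 3]
```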
The Internet allows its users to generate massive amounts of data and share it: emails, blogs, social media posts, digital photos, online orders, and much more. Connected machines also generate data: event logs, communications protocols, sensor readings, GPS tracking, activity and health monitors, and so on. The ability to connect everything that produces data to the network, not only desktops and network devices, gives rise to the concept of the Internet of Things (IoT). Extending the Internet’s connectivity beyond traditional computers to a vast number of other devices is expected to generate even larger amounts of data in different formats, using multiple technologies. The devices of the IoT are the Input in the computer model.
People also consume large amounts of information from the Internet: news, music, videos, books, weather forecasts, maps, chat, and social network notifications. Traditional “stationary” computers have long lost their status as the dominant technology used to access the Internet. Nowadays, laptops, tablets, smartphones, smart watches, entertainment devices (including smart TVs), and game consoles are extensively used at home, at work, and on the road to consume information in various forms from the Internet. People’s attachment to these devices and their need to stay connected anytime and anywhere created a demand for mobile connectivity to the Internet. This assortment of devices and the technologies that support their mobile connectivity are the Output in the global computer model.
A software application is a set of instructions that resides on computers and processes data to turn it into useful information. Since the dawn of the World Wide Web (WWW) and the browser, applications have shifted from running locally on a computer to running somewhere in the Internet. Web applications allow the capture, processing, storage, and transmission of all forms of data supplied by users. The results are then presented to the user within their browser, to be downloaded to the local machine or kept stored remotely. Web applications opened the door for businesses, schools, and governments to provide services to users in a way that had never been possible before. Another significant advantage of web applications is that they function regardless of the type of device the user has, which means the user can run them on virtually any device and anywhere.
Organizations that need to run web applications also need to maintain sufficient processing, storage, and environmental resources to handle the growing demands on these applications. Such demands, along with others such as security, are difficult for most organizations to meet. The Cloud computing model and virtualization technologies offer ubiquitous processing power and storage that satisfy organizations’ needs for their applications. Cloud services move the servers and storage that an organization relies on to host its applications and store its data to a third-party provider. The absence of on-premises servers also eliminates the need for supporting infrastructure such as housing space, power, and air-conditioning.
Whether it is within a single building or covering the entire world, the network provides connectivity and enables data transport between independent nodes. The Internet, however, is not just the data transport mechanism between the different components; it is what makes all these components possible. Hence, the Internet is the Computer.
The Internet may not be an exact Von Neumann machine as it is portrayed here, and the individual computer did not disappear either. In fact, all user devices remain fundamentally computers of the same basic architecture. But the need for an individual computing device has diminished tremendously in favour of the mobile, distributed, scalable, (and possibly fractal) computing platform that the Internet represents.
The Internet is made of light.
That is because the Internet’s backbone is mostly made of optical fibre links that guide light pulses representing data streams. Optical fibre technology permits the transmission of data over longer distances and at higher rates than is possible with copper cables. Fibres are also immune to electromagnetic interference, a common problem in other communications media.
The light that traverses an optical link is emitted by lasers in the infrared range of wavelengths. Because laser light can be confined to a single wavelength, multiple data streams can travel through the same fibre at the same time, each on its own wavelength. This technique, known as wavelength division multiplexing (WDM), allows the transmission of hundreds of gigabits of data per second over a single fibre.
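As a rough illustration, the aggregate capacity of a WDM link is simply the number of wavelengths multiplied by the data rate carried on each. The channel count and per-channel rate below are assumptions for the sake of the arithmetic, not figures from any particular system.

```python
# Illustrative WDM capacity arithmetic.
channels_per_fibre = 40      # e.g. a DWDM system carrying 40 wavelengths
gbps_per_channel = 10        # 10 Gbps per wavelength

total_gbps = channels_per_fibre * gbps_per_channel
print(f"Aggregate capacity: {total_gbps} Gbps per fibre")   # 400 Gbps
```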
The optical and metal cables used to construct the networks of the Internet, the mechanical connectors needed to join them together, and all related specifications are known as the physical layer. This layer is the first of a seven-layer model (known as the OSI model) that experts in the field use to study and build networks. The technologies responsible for moving packets of data across the Internet’s links are known as link-layer technologies.
Several technologies have been used to facilitate the transmission of data across the Internet, including SONET, ISDN, Frame Relay, and ATM. However, the technology most prevalent in the Internet today is Ethernet. Ethernet was originally developed in the 1970s to connect devices in a local area network, but the technology has been modified and enhanced over the years to enable transmission of data at higher rates, over longer distances, and over various media.
Ethernet today is used to construct the networks of small organizations and large ISPs alike. It may be used to connect devices in the same room or hundreds of kilometres apart. The common data transmission rates that Ethernet provides today are 100Mbps (a hundred million bits per second), 1Gbps, and 10Gbps (enough to transfer a 2-hour HD movie of roughly 4 GB in about three seconds). The use of 40Gbps and 100Gbps links is also on the rise. This huge amount of data can be sent over copper wires or optical fibre cables. Wireless technologies such as Wi-Fi are not technically Ethernet, but they are based on it and use similar concepts.
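The back-of-the-envelope arithmetic behind that movie comparison looks like this; the 4 GB movie size is an assumption, and real transfers will be slower once protocol overhead is counted.

```python
# How long a ~4 GB HD movie takes at common Ethernet rates (ideal conditions).
movie_gigabytes = 4                       # assumed size of a 2-hour HD movie
movie_gigabits = movie_gigabytes * 8      # 32 Gb

for rate_gbps in (0.1, 1, 10, 100):       # 100 Mbps up to 100 Gbps
    seconds = movie_gigabits / rate_gbps
    print(f"{rate_gbps:>5} Gbps -> {seconds:.1f} s per movie")
# At 10 Gbps this comes to about 3.2 seconds per movie.
```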
For more information about the use of Ethernet technologies in the backbone see my post Connectivity with Dark Fibre and Carrier Ethernet Services.
The Internet is a network of networks.
Every machine that is connected to the Internet is part of a network. At home, you are likely connected to an Internet Service Provider (ISP). At work, your computer is part of the organization’s local area network (LAN), a network that is owned and managed by the organization you work for. This LAN must be connected to an ISP for your computer to communicate with others outside your organization.
ISPs are communication companies with extensive network infrastructure spanning a wide geographic area. ISPs also need to connect to each other to be able to reach regional, national, and global customers. This is not different from the way traditional phone companies connect to each other in order to facilitate national and international calls. In fact, many ISPs today are commercial phone companies that use their extensive infrastructure to reach a large customer base. The Internet therefore consists of a vast number of ISPs connected in a hierarchical structure in which smaller ISPs connect to larger upstream ISPs. ISPs may also enter business and technical arrangements with other ISPs, called peering, to exchange data traffic without charging fees. ISPs that require no upstream links and connect only to customers and peers are called Tier 1 ISPs.
An ISP’s infrastructure typically consists of a number of Points of Presence (POPs) covering several regions. Each POP serves as a hub that connects local customers to the ISP’s network. The POPs are interconnected via a web of backbone links capable of carrying large amounts of data over long distances. Technologies such as SONET, ATM, or Ethernet over optical fibre are typically used in the backbone.
Backbone technologies are not suited for connecting individual homes and the majority of organizations, due to their cost and limited availability. Instead, homes and most businesses connect to the nearest POP of their chosen ISP using what are known as last-mile technologies. Today, the most common last-mile technologies used for data communications include:
Several characteristics are common among all last-mile technologies, including:
Not all backbones are part of commercial ISPs. Government, academic, and community organizations also build backbone infrastructure to promote economic development, advance research, or support specific applications. Examples of these networks in the region include:
It started when my cousin asked, “So what exactly DO you do for a living?”
“Do you have a computer tethered by a data cable to the wall?” I responded.
“Well my job is to build what goes on behind the wall.”
This conversation happened just a few years ago, but the technologies that make up our networks have changed so much that I doubt the explanation I gave my cousin would still work today, when most user devices are mobile.
This post is the first in a series in which I will explain what goes on behind the wall, from the simple cable to the Internet itself. I will describe the Internet, explain what the Cloud means, and cover concepts such as Big Data and the Internet of Things. I will not be diving into too much technical detail, but the posts will include some technology jargon, and hopefully there will be some useful information for everyone.
The Internet is an interconnected mesh of networks operated by private, public, academic, business, and government organizations. The Internet has neither a single owner nor a single governing body, but it is glued together by rules that all these organizations agree to follow. Among these rules are the so-called TCP/IP protocols, IP addresses, and the structure of the networks.
In order for all the machines connected to the Internet to talk to each other, they have to use the same set of rules, known as communication protocols. The Internet’s main set of protocols is known as the TCP/IP suite. TCP/IP’s origins date back to 1972, and its current form splits the communication functions in two: the Internet Protocol (IP) is responsible for giving every machine a unique identifier, its address, and for finding communication paths from one machine to another no matter how far apart they are on the Internet. The Transmission Control Protocol (TCP) is responsible for ensuring that the machines at both ends of the “conversation” talk reliably, regardless of the conditions of the network.
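As a small illustration of this division of labour, here is a minimal sketch using the socket library that ships with Python. The host name is just an example; IP (via the resolved address) identifies the remote machine, while TCP delivers the request and reply reliably over whatever path the network finds.

```python
import socket

# Open a TCP connection to port 80 of an example host and ask for its headers.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # TCP ensures these bytes arrive intact and in order, or the call fails.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = conn.recv(1024)
    print(reply.decode(errors="replace").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"
```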
For the billions of machines that make up the Internet to be able to find and communicate with each other, each must have a unique address. The widely used IP version 4 (IPv4) allows more than 4 billion machines to be connected to the Internet. IPv4 addresses are presented to human users as a sequence of four decimal numbers separated by dots (e.g. 192.168.20.5). Since no human can possibly remember the IP addresses of all the machines they need to contact, another system, the Domain Name System (DNS), is used to translate human-friendly host names such as “adhocnode.com” into IPv4 addresses such as “126.96.36.199”.
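For example, a quick sketch using the resolver built into Python’s standard library shows the translation in action; the host name is taken from the text, and the address returned will be whatever DNS holds at the time you run it.

```python
import socket

# Ask DNS for the IPv4 address(es) behind a human-friendly host name.
host = "adhocnode.com"
addresses = {info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)}
print(f"{host} resolves to: {', '.join(sorted(addresses))}")
```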
The administration and coordination of Internet-related activities, such as the assignment of addresses and domain names to different organizations, fall under various bodies. One of these is the Internet Corporation for Assigned Names and Numbers (ICANN), which coordinates the assignment of IP addresses through five Regional Internet Registries (RIRs). Another well-known organization is the Internet Engineering Task Force (IETF), which is responsible for developing and promoting Internet standards. These organizations consist of individuals from across the Internet’s technical, business, academic, and other non-commercial communities who are interested in the evolution of the Internet.
The Internet is often represented in technical drawings by a cloud symbol, to hide the complexities of the networks that make it up. The “Cloud” has also found its way into everyday vocabulary with the rise of social media and online services such as cloud storage. In my next post I will uncover some of the details hidden in the cloud and describe the structure of the Internet.
Connectivity to the Internet through more than one upstream ISP (Internet Service Provider) is referred to as multi-homing (or dual-homing in the case of two ISPs). Multi-homing is generally used to increase the reliability of the Internet connection by reducing reliance on a single provider and eliminating single points of failure in the IP network. Dual- or multi-homing can also be used to load-balance Internet traffic and improve performance.
While there are some techniques that can be used to achieve dual-homing for special applications, the use of BGP routing to connect to multiple providers is the only effective technique for general dual-homing of IPv4 networks. This report focuses exclusively on the use of BGP to connect to multiple providers.
BGP allows network traffic going to or coming from the Internet to be forwarded through any of the available ISPs. Unlike internal routing, BGP does not select routes based on the shortest path to the destination but on the number of ASs (Autonomous Systems) that represent the networks between source and destination. BGP may also be configured to implement other routing policies, for example to prefer some routes over others.
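A toy sketch of that selection logic might look like the following: prefer the highest local preference (a policy knob), then the shortest AS path. Real BGP applies several more tie-breakers, and the routes shown here are made up for illustration.

```python
# Simplified BGP-style route selection: local preference first, then AS-path length.
routes = [
    {"via": "ISP-A", "local_pref": 100, "as_path": [64501, 64510, 64620]},
    {"via": "ISP-B", "local_pref": 100, "as_path": [64502, 64620]},
    {"via": "ISP-C", "local_pref": 90,  "as_path": [64503]},
]

best = max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))
print(f"Best route via {best['via']}, AS path length {len(best['as_path'])}")
# -> Best route via ISP-B, AS path length 2
```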
To improve the reliability of its Internet connection, an organization may choose to connect to two or more ISPs and split its Internet traffic equally among them. If one provider’s link fails, outgoing traffic is automatically routed to the remaining link(s). Other networks are notified of the failed link through BGP updates, and incoming traffic is routed through another ISP link as well. In this architecture, there must be enough capacity in the remaining active links to carry all the traffic from the failed link without causing congestion, which would result in dropped packets and degradation of service. This means that in a dual-homing scenario, each link must be able to carry the organization’s entire Internet traffic volume.
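The capacity requirement can be illustrated with some simple arithmetic; the 2 Gbps peak traffic figure below is an assumption made for the example.

```python
# Capacity each link needs to survive a single link failure without congestion.
peak_traffic_gbps = 2.0                    # assumed peak Internet traffic volume

for link_count in (2, 3, 4):
    # The remaining links together must still carry the full peak load.
    per_link_gbps = peak_traffic_gbps / (link_count - 1)
    print(f"{link_count} links -> each needs at least {per_link_gbps:.2f} Gbps")
# With 2 links, each must be sized for the entire 2 Gbps.
```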
The organization may instead find an advantage in connecting to two ISPs of unequal bandwidth. BGP may be configured to use one ISP as the main route through which all outgoing and incoming traffic is directed. The lower-bandwidth backup ISP is activated only if the main ISP fails, and only selected traffic is routed through this link while the main link is being repaired. The advantage of this approach is reducing the expense of establishing a second full-capacity link.
Dual- or multi-homing can also be used to improve the performance of Internet connectivity through the careful choice of ISPs and the proper configuration of BGP. For an organization that serves customers in diverse geographic locations, or has branches both locally and abroad, BGP peering with multiple ISPs can ensure that traffic to each geographic location takes the best route. This configuration reduces the latency experienced by users in each geographic region.
To enable multi-homing using BGP, an organization must have its own public IP address block and a public Autonomous System (AS) number before connections to two or more separate ISPs are established. Generally, ISPs do not accept or announce IPv4 address blocks smaller than a /24 (256 addresses) through BGP. The organization must obtain its public ASN from its Regional Internet Registry (ARIN in North America). The IPv4 block can be obtained directly from the regional registry or from one of the ISPs; in the latter case, the other ISPs must agree to announce the block in BGP.
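As a quick sanity check of that /24 rule of thumb, here is a short sketch using Python’s standard ipaddress module; the prefixes are examples from the documentation range, and the acceptance test simply encodes the common filtering practice described above.

```python
import ipaddress

# Blocks of /24 or shorter prefixes (256 addresses or more) are generally accepted.
for prefix in ("203.0.113.0/24", "203.0.113.0/25"):
    net = ipaddress.ip_network(prefix)
    accepted = net.prefixlen <= 24
    print(f"{prefix}: {net.num_addresses} addresses, "
          f"{'likely accepted' if accepted else 'likely filtered'} by ISPs")
```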
A key problem to avoid in multi-homing is creating two apparently independent links to completely different ISPs that nevertheless share common infrastructure, such as a single link or router in the organization’s network. This forms a single point of failure and considerably reduces the reliability benefits of multi-homing. Another problem to watch for is connecting to two ISPs that, in turn, both connect to a third, common ISP. The failure of that distant ISP may result in a simultaneous outage or degradation of service on both links.