How the Internet Works — Packets, DNS, and Global Network Infrastructure


The Most Complex Machine Humanity Has Ever Built

When you open Instagram, your phone sends a request that travels through your Wi-Fi router, to your ISP's local exchange, through fiber-optic cables buried under streets and oceans, to a data center that might be 3,000 miles away, finds your feed among 2 billion user accounts, assembles it, sends it back through all those hops, and renders it on your screen. Total time: about 200 milliseconds. That is one-fifth of a second for a round trip that might cross an ocean floor, pass through a dozen routing facilities, and touch three different corporate networks.

This is the most complex machine humanity has ever built, and it works so reliably that you notice the one time it doesn't. You do not marvel at the 10,000 flawless page loads. You rage at the single timeout. That reliability is not accidental. It is the product of design decisions made in the 1960s and 1970s that turned out to be so resilient that the same fundamental architecture handles traffic loads its creators could not have imagined. Understanding how this system actually works - not the hand-wavy "it's in the cloud" version, but the real mechanics - gives you a foundation that makes every other technical concept click into place.

~200ms: Typical round-trip time for a web request across a continent
500+: Submarine fiber-optic cables carrying 95% of intercontinental data
13: Root DNS server identities (~1,700 actual instances worldwide)
5.4 billion: People connected to the internet as of 2024 (67% of the planet)

Packets, Not Streams: The Core Design Decision

Before the internet, long-distance communication used circuit switching. When you made a phone call, a dedicated physical circuit connected you to the other person for the entire call. That circuit was yours and nobody else could use it, even during the silences. It worked, but it was wildly inefficient.

The internet does something fundamentally different. It uses packet switching. Instead of reserving a dedicated path, your data gets chopped into small chunks called packets - typically between 1,000 and 1,500 bytes each. Each packet is stamped with the destination address and sent independently. Different packets from the same message can take completely different routes through the network. At the destination, they get reassembled in the correct order.

Think of it this way. Circuit switching is like renting an entire highway lane for your single car trip from New York to Los Angeles. Nobody else can use that lane until you arrive. Packet switching is like chopping your car into parts, loading each part onto whatever truck is heading west with available space, and reassembling the car when all the parts arrive in LA. The highway stays full and useful for everyone.

[Diagram: How Packet Switching Works]
A message is split into packets that take independent routes through the network. Routers (R1-R5) forward each packet toward the destination based on current network conditions. The message is reassembled at the other end.
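The split-and-reassemble mechanics can be sketched in a few lines of Python. This is a toy simulation with a 12-byte packet size (real networks use roughly 1,500 bytes); the destination address is illustrative:

```python
import random

MTU = 12  # toy packet payload size in bytes; real networks use ~1,500

def packetize(message: bytes, dest: str) -> list[dict]:
    """Split a message into packets, each stamped with destination and sequence number."""
    return [
        {"dest": dest, "seq": i, "payload": message[i * MTU:(i + 1) * MTU]}
        for i in range((len(message) + MTU - 1) // MTU)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Reorder packets by sequence number and rebuild the original message."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Hello World! This message is longer than one packet."
packets = packetize(message, dest="142.250.80.46")

# Packets may arrive out of order after taking different routes:
random.shuffle(packets)

assert reassemble(packets) == message
```

The sequence numbers are what make out-of-order arrival harmless: the destination sorts before rebuilding, exactly as described above.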

Why does this matter? Resilience. If a router in Chicago goes down, packets automatically route through Dallas instead. No single point of failure can bring the whole network down. This was the original design goal - the US military funded the internet's precursor (ARPANET) in the late 1960s specifically because a decentralized network could survive partial destruction. That Cold War paranoia produced the most robust communication system ever engineered.

Key Insight

Every piece of data you send - a text message, a Netflix frame, a bank transfer - is split into packets, routed independently across the network, and reassembled at the destination. This is why the internet does not "go down" when a single cable is cut or a router fails. Packets simply find another path. The network routes around damage by design.

Each packet carries a header with critical information: the source IP address (where it came from), the destination IP address (where it is going), a sequence number (so they can be reassembled in order), and error-checking data (to detect corruption). A typical web page might generate hundreds of packets. A YouTube video generates millions. The system handles it all using the same basic mechanism.
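A toy version of such a header can be packed with Python's standard struct module. The layout here is simplified for illustration (a real IPv4 header has more fields and a different format), but it carries the same four pieces of information named above:

```python
import socket
import struct
import zlib

# Simplified toy header: source IP, destination IP, sequence number, checksum.
# "!" means network byte order; 4s = 4 raw bytes; I = 32-bit unsigned int.
HEADER = struct.Struct("!4s4sII")

def make_packet(src: str, dst: str, seq: int, payload: bytes) -> bytes:
    header = HEADER.pack(
        socket.inet_aton(src),   # source IP as 4 raw bytes
        socket.inet_aton(dst),   # destination IP as 4 raw bytes
        seq,                     # sequence number for reassembly
        zlib.crc32(payload),     # error-checking data over the payload
    )
    return header + payload

def parse_packet(packet: bytes):
    src, dst, seq, checksum = HEADER.unpack(packet[:HEADER.size])
    payload = packet[HEADER.size:]
    if zlib.crc32(payload) != checksum:
        raise ValueError("corrupted payload")
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), seq, payload

pkt = make_packet("192.168.1.42", "142.250.80.46", 7, b"GET / HTTP/1.1")
src, dst, seq, data = parse_packet(pkt)
assert (src, dst, seq, data) == ("192.168.1.42", "142.250.80.46", 7, b"GET / HTTP/1.1")
```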

IP Addresses and DNS: The Internet's Address System

Every device connected to the internet needs an address. That address is an IP address - a numerical label that identifies where the device sits on the network. The version most people still encounter is IPv4: four numbers between 0 and 255, separated by dots. Google's public DNS server, for example, sits at 8.8.8.8. Your home router might assign your laptop something like 192.168.1.42.

IPv4 allows roughly 4.3 billion unique addresses. That sounded like plenty in the 1980s. It is not. With smartphones, laptops, smart TVs, thermostats, security cameras, and refrigerators all needing addresses, we blew past the limit. The solution is IPv6, which uses 128-bit addresses instead of 32-bit. That gives us 340 undecillion possible addresses - enough to assign a unique IP to every atom on the surface of the earth, with plenty left over. IPv6 adoption is still gradual (roughly 45% of Google traffic uses IPv6 as of 2024), and workarounds like NAT (Network Address Translation) have kept IPv4 functioning by letting multiple devices share a single public IP address through your router.
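The arithmetic is easy to check with Python's standard ipaddress module:

```python
import ipaddress

# IPv4: 32-bit addresses
ipv4_total = 2 ** 32
print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296 - about 4.3 billion

# IPv6: 128-bit addresses
ipv6_total = 2 ** 128
print(f"IPv6 addresses: {ipv6_total:.3e}")  # ~3.4e38 - 340 undecillion

# NAT keeps IPv4 alive: private ranges like this are reused behind every home router
lan = ipaddress.ip_network("192.168.0.0/16")
print(f"{lan} holds {lan.num_addresses:,} private addresses")
assert ipaddress.ip_address("192.168.1.42") in lan
```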

But nobody types 142.250.80.46 into their browser. You type google.com. The system that translates human-readable domain names into IP addresses is the Domain Name System (DNS) - and it is one of the most critical pieces of infrastructure on the internet.

How DNS Resolution Actually Works

When you type "google.com" into your browser, a chain of lookups fires off to find the corresponding IP address. This happens in milliseconds, and it is far more involved than most people realize.

Browser checks its own cache
OS checks local DNS cache
Query goes to ISP's DNS resolver
Resolver asks a root DNS server
Root directs to .com TLD server
TLD server directs to Google's nameserver
Google's nameserver returns the IP

First, your browser checks its own cache - have you visited google.com recently? If so, it already knows the IP and skips every other step. If not, it asks your operating system, which has its own cache. If the OS does not have it, the query goes to your ISP's DNS resolver (or a public one like Cloudflare's 1.1.1.1 or Google's 8.8.8.8). The resolver checks its cache. If it has no answer, it starts the full resolution chain: it queries one of the 13 root DNS server identities, which direct it to the appropriate top-level domain (TLD) server (the one responsible for .com, .org, .net, etc.), which in turn directs it to the authoritative nameserver for google.com. That final server returns the actual IP address.

The whole process typically takes 20-120 milliseconds on a cold lookup. Caching at every level means most lookups resolve almost instantly. Your ISP's resolver handles millions of queries per second and caches aggressively, so the full chain only fires for domains nobody nearby has visited recently.

Those 13 root server identities deserve a note. "Thirteen" sounds fragile. In reality, those 13 logical servers are distributed across roughly 1,700 physical instances worldwide using a technique called anycast, where the same IP address is advertised from multiple locations and your query automatically routes to the nearest one. The root server system has never suffered a complete outage.
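The resolution chain can be modeled as a toy lookup through three levels of server tables. The server names and data here are invented for illustration (real resolvers speak the DNS wire protocol over UDP and TCP); only the delegation logic is mirrored:

```python
# Toy model of the DNS resolution chain: root -> TLD -> authoritative.
ROOT = {"com": "tld-com"}                       # root servers know the TLD servers
TLD = {"tld-com": {"google.com": "ns-google"}}  # TLD servers know authoritative nameservers
AUTHORITATIVE = {"ns-google": {"google.com": "142.250.80.46"}}

cache: dict[str, str] = {}                      # the resolver's cache

def resolve(domain: str) -> str:
    if domain in cache:                  # cached answer: skip the whole chain
        return cache[domain]
    tld = domain.rsplit(".", 1)[-1]      # "google.com" -> "com"
    tld_server = ROOT[tld]               # 1. root directs us to the .com TLD server
    ns = TLD[tld_server][domain]         # 2. TLD directs us to the domain's nameserver
    ip = AUTHORITATIVE[ns][domain]       # 3. authoritative server returns the IP
    cache[domain] = ip                   # cache for next time
    return ip

print(resolve("google.com"))  # full chain: root -> TLD -> authoritative
print(resolve("google.com"))  # second lookup is answered straight from the cache
```

The second call never touches the three server tables, which is exactly why real-world lookups usually resolve almost instantly.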

The Protocol Stack: Layers That Make It All Work

The internet is not one protocol. It is a stack of protocols, each handling a specific job, layered on top of each other. This layered design is what allows the internet to be so flexible - you can swap out any layer without breaking the others. Wi-Fi can replace Ethernet at the bottom layer. HTTP can be replaced by a video streaming protocol at the top. The layers in between do not care.

The TCP/IP Protocol Stack

Application Layer (HTTP, HTTPS, FTP, SMTP, DNS): "I want this webpage." Defines the format of the conversation.
Transport Layer (TCP for reliability, UDP for speed): "I'll make sure every packet arrives." Splits data into segments, ensures delivery.
Internet Layer (IPv4, IPv6, ICMP): "I'll route it to the right address." Adds IP addresses, handles routing.
Network Access Layer (Ethernet, Wi-Fi, Fiber, 5G): "I'll carry the signal physically." Converts data to electrical, light, or radio signals.
Each layer handles one job and passes data down. When sending, your data gets wrapped with headers at each layer (encapsulation). When receiving, each layer strips its header and passes the content up.

What Each Layer Actually Does

The Application Layer is where the protocols you interact with live. HTTP and HTTPS handle web traffic. SMTP sends email. FTP transfers files. DNS translates domain names. These protocols define the structure and meaning of the data being exchanged - "this is a request for a webpage" or "this is an email message." Everything at this layer is about the content of the communication.

The Transport Layer takes that content and ensures it arrives correctly. Two protocols dominate here. TCP (Transmission Control Protocol) guarantees every packet arrives, in order, with no corruption. It does this through a system of acknowledgments - the receiver confirms each chunk, and anything missing gets resent. This is essential for web pages, email, and file transfers where losing data means a broken result. UDP (User Datagram Protocol) skips the guarantees for speed. It fires packets without checking if they arrive. This is perfect for live video calls and online gaming, where a dropped frame matters less than a delayed one. If your Zoom call occasionally gets a blocky frame, that is UDP choosing speed over perfection.

The Internet Layer handles addressing and routing. IP (Internet Protocol) stamps each packet with source and destination addresses and makes routing decisions. It does not care what the data is - a video frame and a banking transaction look identical at this layer. ICMP (Internet Control Message Protocol) is the diagnostic tool here; when you run a "ping" command to test if a server is reachable, you are using ICMP.

The Network Access Layer converts digital data into physical signals and sends them across whatever medium connects you to the next device - electrical pulses through copper Ethernet cables, light pulses through fiber-optic lines, or radio waves through Wi-Fi and cellular. This is where the digital meets the physical.
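Encapsulation, the wrapping described in the diagram caption above, can be sketched as string-wrapping, with simplified stand-in headers at each layer (the addresses and port numbers are illustrative):

```python
# Each layer wraps the data with its own header on the way down,
# and each layer strips one header on the way back up.

def encapsulate(payload: str) -> str:
    http = f"[HTTP GET /]{payload}"                  # Application layer
    tcp = f"[TCP src=50000 dst=443 seq=1]{http}"     # Transport layer
    ip = f"[IP 192.168.1.42 -> 142.250.80.46]{tcp}"  # Internet layer
    frame = f"[ETH aa:bb:cc:dd:ee:ff]{ip}"           # Network access layer
    return frame

def decapsulate(frame: str) -> str:
    for _ in range(4):                # each of the four layers strips one header
        frame = frame.split("]", 1)[1]
    return frame

wire = encapsulate("page content")
print(wire)
assert decapsulate(wire) == "page content"
```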

Real-World Example

Loading a single webpage like nytimes.com triggers 50 to 100+ separate HTTP requests - the HTML document, CSS stylesheets, JavaScript files, images, fonts, analytics scripts, and ad network calls. Each request generates multiple TCP segments, which become IP packets, which become electrical or optical signals. A single page load can produce thousands of packets, all handled in under 2 seconds. The layered design means each protocol only worries about its own job, and the complexity stays manageable.

HTTP and HTTPS: How Your Browser Talks to Servers

When you click a link or type a URL, your browser uses HTTP (HyperText Transfer Protocol) to communicate with the web server. The process is a simple request-response cycle. Your browser sends an HTTP request ("GET me this page"), and the server sends an HTTP response (the HTML content, plus a status code like 200 for success or 404 for "page not found").
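The request and response really are plain text on the wire. A minimal sketch with no network involved, building a GET request by hand and parsing a canned response:

```python
def build_get(host: str, path: str = "/") -> bytes:
    """Construct a raw HTTP/1.1 GET request, exactly as it travels on the wire."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"                    # blank line ends the headers
    ).encode()

def parse_status(response: bytes) -> int:
    """Pull the numeric status code out of a raw HTTP response."""
    status_line = response.split(b"\r\n", 1)[0]  # e.g. b"HTTP/1.1 200 OK"
    return int(status_line.split()[1])

request = build_get("example.com")
print(request.decode())

response = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
assert parse_status(response) == 200
assert parse_status(b"HTTP/1.1 404 Not Found\r\n\r\n") == 404
```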

HTTP by itself has a critical flaw: everything travels in plain text. On an open Wi-Fi network at a coffee shop, anyone with basic tools could read your HTTP traffic - the pages you visit, the passwords you enter, the messages you send. This is where HTTPS enters.

HTTPS is HTTP wrapped in TLS (Transport Layer Security) encryption. When you see the padlock icon in your browser's address bar, it means two things. First, the connection is encrypted - data between your browser and the server is scrambled so interceptors see gibberish. Second, the server's identity has been verified through a digital certificate issued by a trusted Certificate Authority. The server proves it really is google.com and not an impersonator.

As of 2024, over 95% of Chrome page loads use HTTPS. Browsers actively warn users when a site uses unencrypted HTTP. The shift took a decade of effort - Let's Encrypt, launched in 2015, made TLS certificates free and automated, removing the cost barrier that had kept much of the web unencrypted.

HTTP (Unencrypted)

Data travels in plain text. Anyone on the same network can intercept and read it. No identity verification of the server. Increasingly rare - browsers flag HTTP sites as "Not Secure." Still used for non-sensitive local development.

HTTPS (Encrypted)

Data is encrypted with TLS. Interceptors see only scrambled bytes. Server identity verified by a Certificate Authority. Required for login pages, payments, and sensitive data. Now the default for 95%+ of web traffic.

Routing: How Packets Navigate a Global Network

Between your device and the server you are connecting to, there are typically 10 to 20 intermediate devices called routers. Each router examines the destination IP address on incoming packets and decides which direction to forward them. Routers do not know the full path to the destination. They only know the best next hop based on their routing table - a continuously updated map of which directions lead to which networks.
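The core of a routing-table lookup is longest-prefix matching: the most specific route that matches the destination wins. A toy table in Python (the prefixes and next-hop names are invented):

```python
import ipaddress

ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),  # catch-all route
    (ipaddress.ip_network("10.0.0.0/8"), "chicago"),
    (ipaddress.ip_network("10.20.0.0/16"), "dallas"),
]

def next_hop(dest: str) -> str:
    """Pick the next hop whose prefix matches the destination most specifically."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.20.5.1"))   # dallas: the /16 beats the /8
print(next_hop("10.99.0.1"))   # chicago: only the /8 matches
print(next_hop("8.8.8.8"))     # default-gateway

# If the Dallas route is withdrawn, traffic falls back to the broader prefix:
ROUTES.pop()
print(next_hop("10.20.5.1"))   # chicago
```

The withdrawal at the end is the mechanism behind "the network routes around damage": when a route disappears, the next-best prefix takes over automatically.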

You can see this for yourself. The traceroute command (tracert on Windows) shows every router your packets pass through on the way to a destination, along with the time each hop takes. Run it against a server in another country and you will see your data hop through your ISP's local router, then a regional hub, then a backbone router, then an international exchange point, then the destination country's network - each hop adding a few milliseconds.

BGP: The Protocol That Holds the Internet Together

At the largest scale, the internet is not one network. It is a network of networks - roughly 75,000 autonomous systems (AS), each operated by an ISP, corporation, or organization. These autonomous systems connect to each other at Internet Exchange Points (IXPs) and negotiate traffic routes using BGP (Border Gateway Protocol).

BGP is how ISPs tell each other "I can reach these IP addresses, and here is the path." Each autonomous system announces the routes it knows, and other systems decide which paths to use based on policies like cost, speed, and business agreements. BGP is sometimes called the "postal service of the internet" - it does not carry your packets itself, but it decides which roads your packets should take.

BGP is powerful but fragile. It operates largely on trust. When one autonomous system announces a route, other systems generally believe it. This design flaw has caused some of the internet's most dramatic outages.
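The announce-and-withdraw mechanic can be sketched with a table of invented AS numbers and prefixes. Real BGP exchanges full paths and applies routing policies; this sketch only shows how a withdrawal makes a destination vanish for everyone:

```python
import ipaddress

# Each autonomous system advertises which prefixes it can reach.
# AS numbers and prefixes here are invented for illustration.
announcements = {
    "AS15169": ["8.8.8.0/24"],
    "AS32934": ["157.240.0.0/16"],
}

def find_origin(dest: str):
    """Find which AS has announced a route covering this address."""
    addr = ipaddress.ip_address(dest)
    for asn, prefixes in announcements.items():
        if any(addr in ipaddress.ip_network(p) for p in prefixes):
            return asn
    return None  # no announcement: the address is unreachable

print(find_origin("157.240.1.35"))  # AS32934

# A withdrawal makes the prefix vanish from every other network's view:
announcements["AS32934"] = []
print(find_origin("157.240.1.35"))  # None: the rest of the internet "forgot" the route
```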

Real-World Example

On October 4, 2021, Facebook disappeared from the internet for approximately 6 hours. The cause: a routine maintenance command accidentally withdrew all of Facebook's BGP route announcements. Every other network on the internet simultaneously "forgot" how to reach Facebook's servers. Facebook, Instagram, WhatsApp, and Messenger all went dark. The situation was compounded because Facebook's own internal tools relied on the same DNS and routing infrastructure that had gone down, so engineers could not even access their systems remotely. Staff had to physically enter data centers - where electronic badge readers also relied on Facebook's servers - to fix the problem. The outage cost Facebook an estimated $100 million in lost revenue, and the accompanying stock drop cut roughly $6 billion from Mark Zuckerberg's net worth in a single day.

Submarine Cables: The Backbone Nobody Sees

Despite the hype around satellites, approximately 95% of intercontinental internet data travels through submarine fiber-optic cables laid on the ocean floor. There are more than 500 active submarine cables worldwide, some spanning over 20,000 kilometers. A single modern cable like the MAREA (connecting Virginia to Bilbao, Spain) carries over 200 terabits per second - enough to stream roughly 40 million HD videos simultaneously.

These cables are about the diameter of a garden hose. They sit on the ocean floor, sometimes at depths exceeding 8,000 meters. Despite their thin profile, they are engineered to last 25 years. The main threats are fishing trawlers (responsible for roughly two-thirds of cable damage), anchors, and earthquakes.

In 2008, two submarine cable cuts near Alexandria, Egypt - caused by ship anchors - disrupted internet access for 14 countries across the Middle East and South Asia. India lost an estimated 50-60% of its westbound internet capacity. The event demonstrated something crucial: despite the internet's decentralized design, physical geography creates chokepoints. The Strait of Malacca, the Suez Canal area, and the coast of Cornwall are all locations where multiple cables converge, creating potential single points of failure for entire regions.

CDNs and Caching: Why Netflix Doesn't Buffer

If every Netflix viewer in New York had to stream video from a server in California, the cross-country backbone would melt. Instead, Netflix uses a Content Delivery Network (CDN) strategy taken to an extreme: they place custom hardware called Open Connect Appliances directly inside ISP data centers. When you stream a popular movie, the data likely travels less than a few miles, from a Netflix box sitting in your ISP's facility.

CDNs solve the fundamental problem of distance. The speed of light in fiber optic cable is roughly 200,000 kilometers per second - fast, but not instant. A round trip from New York to Tokyo adds about 150 milliseconds of unavoidable latency just from the physical distance. CDNs eliminate this by copying content closer to users.

Cloudflare operates one of the largest CDNs and reverse proxy networks on the internet, handling approximately 20% of all web traffic. That means roughly one in five HTTP requests on the entire internet passes through Cloudflare's network. They operate data centers in over 310 cities across 120+ countries, placing cached content and security services within milliseconds of most internet users on the planet.

Caching operates at every level of the system. Your browser caches images and scripts locally so repeat visits skip the download. Your ISP caches popular DNS lookups so your queries resolve faster. CDN edge servers cache web content so it does not travel back to the origin server for every request. Even your operating system caches DNS responses. This cascade of caching is why the same website loads measurably faster the second time you visit - much of the data is already sitting on your machine or nearby.

1. Browser Cache

Your browser stores recently downloaded files (images, scripts, stylesheets) locally. If the file has not changed, it skips the download entirely. This is why clearing your cache makes websites load slower temporarily.

2. CDN Edge Server

If the browser does not have the file, the request goes to the nearest CDN node - often in your city. The CDN serves a cached copy without contacting the origin server. Cloudflare, AWS CloudFront, and Fastly handle this for millions of websites.

3. Origin Server

Only if the CDN does not have a fresh copy does the request travel to the website's actual server, which might be thousands of miles away. For popular content, this almost never happens.
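Every one of these levels is, at heart, a time-to-live (TTL) cache: entries expire after a set lifetime, forcing a fresh fetch from the next level up. A minimal sketch:

```python
import time

class TTLCache:
    """A minimal time-to-live cache: entries expire after ttl seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.store: dict[str, tuple[float, str]] = {}  # key -> (expiry, value)

    def get(self, key: str):
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]   # still fresh: cache hit
        return None           # missing or expired: cache miss

    def put(self, key: str, value: str):
        self.store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl=0.05)  # a tiny TTL so the example runs fast
cache.put("google.com", "142.250.80.46")
print(cache.get("google.com"))  # hit
time.sleep(0.06)
print(cache.get("google.com"))  # None: expired, would trigger a fetch upstream
```

Real DNS records and CDN objects carry TTLs measured in seconds to days, but the hit-until-expiry logic is the same.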

Wi-Fi, Cellular, and the Physical Last Mile

Everything described so far - packets, protocols, routing, CDNs - eventually needs to travel the final stretch from a nearby network node to your device. This "last mile" is where radio waves take over from fiber for most users.

Wi-Fi uses radio frequencies (primarily 2.4 GHz and 5 GHz, with Wi-Fi 6E adding 6 GHz) to connect your device to a router within roughly 30-50 meters indoors. It is a shared medium - everyone on the same Wi-Fi network competes for the same radio bandwidth. This is why your home internet slows down when four family members are streaming simultaneously; the Wi-Fi itself becomes the bottleneck, not your ISP connection.

Cellular networks (4G LTE and 5G) use a different set of radio frequencies with cell towers spaced across the landscape. 5G comes in three flavors that are worth distinguishing. Low-band 5G covers large areas but offers modest speed improvements over 4G. Mid-band 5G balances coverage and speed and is the most widely deployed. Millimeter-wave (mmWave) 5G delivers extreme speeds exceeding 1 Gbps but only works within a few hundred meters of a tower and cannot penetrate walls. When carriers advertise "5G speeds," they are usually referring to mid-band.

Starlink and other satellite internet constellations address the last mile for areas where running fiber or building cell towers is not economically viable - rural regions, ocean vessels, remote research stations. SpaceX's Starlink operates roughly 6,000 satellites in low Earth orbit (approximately 550 km altitude), delivering 50-200 Mbps with latency around 25-60 milliseconds. That is dramatically better than traditional satellite internet (which used geostationary satellites at 35,000 km with 600+ millisecond latency) but still slower than fiber.

The backbone of the internet remains fiber optic cable. A single modern fiber strand can carry data at speeds measured in terabits per second using wavelength-division multiplexing - a technique that sends multiple colors of light through the same fiber simultaneously, each color carrying its own data stream. The transatlantic MAREA cable uses 8 fiber pairs to achieve a total capacity exceeding 200 Tbps. That single cable on the ocean floor carries more bandwidth than the entire internet had in the early 2000s.

Fiber optic backbone: 100+ Tbps per cable
5G mmWave: 1-4 Gbps
5G mid-band: 100-900 Mbps
Wi-Fi 6 (802.11ax): up to 9.6 Gbps shared
Starlink satellite: 50-200 Mbps
Traditional satellite (GEO): 10-50 Mbps, 600ms+ latency

TCP vs. UDP: Reliability vs. Speed

The Transport Layer choice between TCP and UDP shapes the behavior of every internet application. Understanding the tradeoff explains most of the quirks you experience daily.

TCP establishes a connection before sending data (a "three-way handshake"), numbers every segment, requires the receiver to acknowledge receipt, and retransmits anything lost or corrupted. The result is a reliable, ordered byte stream. Web browsing, email, file downloads, and API calls all use TCP because losing even a single byte of data produces a broken result - an incomplete HTML page, a corrupted image, a failed transaction.

UDP skips the handshake, skips the acknowledgments, and fires packets as fast as possible. If one gets lost, tough luck. The application either does not notice or copes with the loss. This sounds irresponsible until you consider the alternative. On a live Zoom call, if a video frame arrives 200 milliseconds late because TCP was busy retransmitting it, the conversation has already moved on. Displaying the old frame is worse than skipping it. Online gaming has the same constraint - your character's position needs to update 30-60 times per second, and a stale update is more harmful than a dropped one.

TCP (Transmission Control Protocol)

Connection-based. Guarantees delivery and order. Retransmits lost packets. Used for: web pages, email, file transfers, APIs. Slower due to overhead but completely reliable. Every byte arrives correctly or the connection reports an error.

UDP (User Datagram Protocol)

Connectionless. No delivery guarantee. No retransmission. Used for: video calls, live streaming, online gaming, DNS queries. Faster with lower latency. Some data loss is acceptable when timeliness matters more than completeness.
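The difference shows up immediately in code. Using loopback sockets (no real network needed), UDP fires a datagram with no setup at all, while TCP refuses to send anything before a handshake has established a connection:

```python
import socket

# UDP: bind a receiver, fire a datagram at it - no connection setup.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
receiver.settimeout(2)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame 42", addr)  # no handshake, no delivery guarantee
data, _ = receiver.recvfrom(1024)
print(data)

# TCP: sending without a connected peer is an error - the handshake comes first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.send(b"hello")
except OSError as e:
    print("TCP refused:", e)      # not connected yet

sender.close(); receiver.close(); tcp.close()
```

On loopback the datagram always arrives; across a real network, UDP would simply lose it silently, which is the whole point of the trade.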

Modern applications increasingly use QUIC, a protocol developed by Google and now standardized as the foundation of HTTP/3. QUIC runs over UDP but builds in its own reliability and encryption, combining TCP's guarantees with UDP's speed. It eliminates the "head-of-line blocking" problem where TCP stalls all data while waiting to retransmit a single lost packet. As of 2024, QUIC carries over 30% of web traffic, primarily through Google services and Cloudflare's network.

What Happens When You Load a Web Page

Knowing the individual components, here is the complete sequence when you type "nytimes.com" into your browser and press Enter. This entire process takes roughly 1-3 seconds.

1. DNS Resolution

Your browser resolves "nytimes.com" to an IP address by checking its cache, then your OS cache, then querying a DNS resolver. Typically 1-50ms if cached, 50-200ms if a full lookup is needed.

2. TCP Connection + TLS Handshake

Your browser opens a TCP connection to the server's IP address (three-way handshake), then negotiates TLS encryption. This adds 1-2 round trips, typically 50-150ms total.

3. HTTP Request + Response

Your browser sends an HTTP GET request for the page. The server (or CDN edge) responds with the HTML document. The HTML typically arrives in 100-500ms.

4. Parsing and Additional Requests

Your browser parses the HTML and discovers it needs CSS files, JavaScript files, images, and fonts. It fires off 50-100+ additional HTTP requests, many in parallel. Each triggers its own DNS lookup (usually cached) and TCP/TLS connection (often reused).

5. Rendering

As resources arrive, the browser builds the DOM (document structure), applies CSS styles, executes JavaScript, and paints pixels to your screen. The page becomes interactive once critical resources have loaded.

The complexity is staggering. A single page load orchestrates DNS, TCP, TLS, HTTP, HTML parsing, CSS rendering, JavaScript execution, and dozens of network round trips - all in under 3 seconds. The reason it feels instant is that decades of engineering have optimized every layer: DNS caching, connection reuse, parallel requests, content compression, CDN edge caching, and browser preloading.
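Summing illustrative per-step latencies makes the budget concrete. The DNS and handshake ranges come from the steps above; the subresource and rendering figures are assumptions added for illustration, not measurements:

```python
# Illustrative latency budget for a page load, in milliseconds.
steps = {
    "DNS resolution (cached)": (1, 50),
    "TCP + TLS handshake": (50, 150),
    "HTTP request + HTML response": (100, 500),
    "Subresources (parallel, overlapping)": (300, 1500),   # assumption
    "Parsing and rendering": (100, 800),                   # assumption
}

best = sum(lo for lo, hi in steps.values())
worst = sum(hi for lo, hi in steps.values())

for name, (lo, hi) in steps.items():
    print(f"{name:<40} {lo:>5}-{hi} ms")
print(f"\nTotal: roughly {best}-{worst} ms ({best/1000:.1f}-{worst/1000:.1f} s)")
```

The totals land in the same 1-3 second range quoted above, and the table makes clear where optimization pays off: the later steps dominate, which is why caching and CDNs target them first.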

The Internet vs. The Web: A Distinction That Matters

People use "the internet" and "the web" interchangeably. They are not the same thing. The internet is the physical and logical network infrastructure - the cables, routers, protocols (IP, TCP, UDP, BGP), and addressing systems that allow devices to communicate. The World Wide Web is one application that runs on top of the internet, using HTTP/HTTPS to serve hyperlinked documents (web pages).

Email uses the internet (via SMTP and IMAP protocols) but is not the web. Online gaming uses the internet but is not the web. Video calls use the internet but are not the web. The web was invented by Tim Berners-Lee in 1989 at CERN, two decades after the internet's precursor went live. It is the internet's most popular application, but it is not the internet itself.

Understanding this distinction matters practically. When someone says "the internet is down," they might mean their Wi-Fi is disconnected (network access layer), their ISP has an outage (internet layer), DNS is failing (application layer), or a specific website is unreachable (web application). Each problem has a different cause and a different fix.

Security on the Network: What Can Go Wrong

The internet was designed for resilience, not security. The original protocols assumed trust between participants - a reasonable assumption when the network connected a few dozen university research labs, but dangerously naive for a system now used for banking, medical records, and government secrets.

DNS spoofing tricks your device into resolving a domain name to a malicious IP address, sending you to a fake banking site that looks identical to the real one. Man-in-the-middle attacks intercept communication between you and a server, reading or modifying data in transit. DDoS (Distributed Denial of Service) attacks overwhelm a server with so many requests that legitimate users cannot get through - the largest recorded attacks have exceeded 3 Tbps of junk traffic.

HTTPS and TLS address the encryption and authentication gaps. DNSSEC adds cryptographic signatures to DNS responses to prevent spoofing. BGP security extensions (RPKI) are slowly being deployed to prevent the kind of route hijacking that could redirect traffic through malicious networks. These are patches on a system that was not designed with adversaries in mind, but they are effective patches - the internet in 2024 is dramatically more secure than the internet of 2005, even though the threats have also escalated.

Answers to Questions People Actually Ask

What is the difference between the internet and the web? The internet is the network - the physical cables, routers, and protocols that allow devices to communicate. The web is one application running on top of the internet, using HTTP to serve linked documents in a browser. Email, online gaming, and video streaming also use the internet but are not part of "the web."

Can someone see what I am browsing? With HTTPS (the padlock icon), an observer on your network can see which domain you visit (google.com) but not the specific page, search query, or content. Without HTTPS, everything is visible in plain text. Your ISP can always see which domains you connect to, even with HTTPS. A VPN encrypts traffic between you and the VPN provider, hiding your activity from your ISP - but then the VPN provider can see your traffic instead. You are moving trust, not eliminating it.

Why is my internet slow even with fast Wi-Fi? Wi-Fi is only the last 30 feet between your device and your router. The bottleneck could be anywhere else in the chain: your ISP's capacity, congestion at a peering point, the server you are connecting to being overloaded, slow DNS resolution, or the website simply not being optimized. Running a speed test measures your connection to the test server - but the website you are trying to reach might be on a completely different path with different congestion.

What is an IP address, and why are we running out? An IP address is a numerical label that identifies your device on the network, like a mailing address. IPv4 uses 32 bits, giving 4.3 billion possible addresses. We exhausted the pool in 2011. IPv6 uses 128 bits, providing 340 undecillion addresses - effectively unlimited. The transition is happening gradually because every router, firewall, and server needs to support both versions simultaneously during the changeover.

What happens when a submarine cable gets cut? Traffic automatically reroutes through other cables, but with reduced capacity and higher latency. Regions with few cable connections can experience severe degradation. Repairs are done by specialized cable ships that grapple the cable from the ocean floor, splice it, and relay it. A typical repair takes 1-2 weeks. Countries and companies increasingly invest in cable diversity - having multiple cables on different routes - to reduce the impact of any single cut.

How does the internet still work when parts of it break? Packet switching and dynamic routing. When a router or cable fails, routing protocols (particularly BGP) detect the failure and recalculate paths within seconds to minutes. Packets automatically flow through the remaining working paths. This decentralized resilience is the internet's defining architectural feature, inherited directly from its military origins. No central authority needs to intervene. The network heals itself.

Where This Knowledge Takes You Next

The internet is the delivery system for essentially all modern technology. Understanding packets, protocols, DNS, routing, and the physical infrastructure is not trivia - it is foundational knowledge that connects directly to everything else in computing. When you learn about databases, you will understand why latency between your application and the database matters. When you study cloud computing, you will know why AWS regions exist and why putting your server close to your users improves performance. When you encounter security topics, you will understand exactly what HTTPS protects and what it does not.

The system you just learned about carries 5 exabytes of data per day. It connects 5.4 billion people. It runs on protocols designed in the 1970s and cables laid on the ocean floor. No single entity owns it. No central authority controls it. It routes around damage, scales through caching, and handles a traffic load that doubles roughly every two years. The next time your page loads in under a second, you will know what actually happened in that fraction of a moment - and you will appreciate just how many layers of engineering made it look effortless.