Networking

TCP, UDP, and QUIC: When to Use Each One

IP gets a packet from one machine to another. That’s its entire job. It does not care whether the packet arrives in order with the others before it, whether it arrives at all, or whether the receiving application can keep up. Those concerns — reliability, ordering, flow, multiplexing — belong to the layer above IP, called the transport layer. The two main transport protocols on the internet are TCP and UDP. The third, QUIC, is a newer hybrid that’s rapidly becoming the default for the web.

[Image: glowing cyan fiber optic strands carrying data. Photo: gstudio, Pexels.]
Every TCP segment, UDP datagram, and QUIC packet your machine emits ultimately rides photons through fibers like these — the transport layer is the protocol that decides what those photons mean once they arrive.

This is lesson 7 of Networking from Scratch. By now you have packets being addressed (lesson 2), routed (lesson 5), and physically delivered (lesson 6). This article is about how applications actually use the network on top of all that — and which protocol to reach for when you’re building or troubleshooting one.

The two questions a transport protocol answers

Strip every transport protocol down and you find it answering the same two questions:

  1. Do I need ordered, reliable delivery? — Will the application break if some bytes arrive out of order, or if a chunk goes missing entirely?
  2. Do I want the cost of a connection? — Am I willing to pay a round-trip of setup before I can send my first byte of real data?

Different applications answer those questions differently, and the “right” transport protocol falls out of the answers.

TCP — reliable, ordered, connection-oriented

TCP is what you reach for when you need every byte to arrive, in the right order, exactly once. Web pages, file downloads, SSH sessions, database connections, email — all TCP. The reliability and ordering aren’t free, but for most application use they’re worth the cost.

The three-way handshake

Before any actual data crosses, both sides agree to talk to each other:

Client  ----SYN----> Server   "I'd like to start a connection. My initial sequence number is X."
Client  <--SYN/ACK-- Server   "OK. My initial sequence number is Y. I acknowledge yours."
Client  ----ACK----> Server   "Acknowledging yours. Let's go."

That’s one round trip of latency before any payload moves. On a path with a 30 ms round-trip time, every new TCP connection costs 30 ms before you can send anything useful. This is the headline cost of TCP, and the main thing modern protocols (TLS 1.3, QUIC, HTTP/2 connection reuse) try to amortise.
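The handshake isn’t something applications drive by hand — it happens inside the `connect()` call. A minimal loopback sketch (the server thread and port choice are illustrative, not part of any real service):

```python
import socket
import threading
import time

# The three-way handshake happens entirely inside connect(). On loopback
# the round-trip is microseconds; on a 30 ms RTT path, connect() alone
# would take ~30 ms before the first payload byte can move.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the kernel pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()
    conn.close()

threading.Thread(target=accept_one, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
t0 = time.perf_counter()
client.connect(("127.0.0.1", port))    # SYN -> SYN/ACK -> ACK completes here
handshake = time.perf_counter() - t0
print(f"handshake took {handshake * 1000:.3f} ms")
client.close()
server.close()
```

On a real WAN link, the printed figure approximates the path’s round-trip time — which is exactly the setup tax the section describes.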

Sequence numbers and acknowledgements

Once the connection is up, every byte TCP sends has a sequence number, and the receiver acknowledges what it’s gotten. If a packet goes missing — the sender doesn’t see an ACK within an expected window — it retransmits. If packets arrive out of order, TCP buffers them on the receiver side and delivers them to the application in order. The application reading from a TCP socket sees a clean ordered stream of bytes, even if the underlying network was lossy and chaotic.
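A consequence worth seeing in code: TCP hands the application a byte stream, not messages. A loopback sketch (hypothetical payloads) — three separate writes on one side arrive as some arbitrary chunking on the other, but always complete and in order:

```python
import socket
import threading

# TCP presents a byte stream. Three send() calls on one side may arrive
# as one, two, or many recv() chunks on the other -- the chunk boundaries
# are not preserved, but the bytes are complete and in order.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def sender():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("127.0.0.1", port))
    for chunk in (b"alpha-", b"beta-", b"gamma"):
        s.sendall(chunk)               # three writes...
    s.close()                          # ...then FIN so the reader sees EOF

threading.Thread(target=sender, daemon=True).start()

conn, _ = server.accept()
received = b""
while True:
    data = conn.recv(4096)             # chunking here is up to the stack
    if not data:                       # empty read: peer closed the stream
        break
    received += data
conn.close()
server.close()
print(received)                        # b'alpha-beta-gamma'
```

The receiver loop is the idiom every TCP application uses: read until EOF (or until a length prefix in your own framing says stop), because the protocol itself gives you bytes, not messages.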

Flow control: the receive window

TCP also makes sure the sender doesn’t overwhelm the receiver. Each ACK carries a receive window — “here’s how much more data you can send me before I’ve processed what you just sent.” A slow consumer (small window) automatically throttles a fast producer. This is the protocol-level mechanism behind the “buffer is full, slow down” behaviour you sometimes see in throughput graphs.
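The advertised window is bounded by the socket’s receive buffer, which is the knob applications actually get. A small sketch of inspecting and tuning it — note the kernel may round or adjust whatever you request (Linux, for instance, doubles the value to account for bookkeeping overhead):

```python
import socket

# SO_RCVBUF bounds how large a receive window the kernel can advertise.
# The kernel may round or scale the requested value, so always read it
# back rather than assuming your setsockopt() took effect verbatim.

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
tuned = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {default}, after tuning: {tuned}")
s.close()
```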

Congestion control

Beyond not overwhelming the receiver, TCP also tries not to overwhelm the network itself. Each connection probes the available bandwidth by gradually increasing its send rate, then backs off when it sees packet loss (which TCP interprets as “the path is congested”). The classic algorithms are Reno, CUBIC (Linux default for years), and BBR (more modern, used heavily by Google). The details matter for tuning; for an admin, the takeaway is that TCP automatically backs off under congestion and recovers as it eases — that’s why your transfer rate fluctuates rather than failing outright on a busy link.
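On Linux you can read (and, with privileges, set) the congestion-control algorithm per socket. A sketch with a `hasattr` guard, since other platforms don’t expose this option through the socket API:

```python
import socket

# Linux exposes the per-socket congestion-control algorithm via the
# TCP_CONGESTION socket option; the guard keeps this portable to
# platforms where Python doesn't define the constant.

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if hasattr(socket, "TCP_CONGESTION"):
    raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print(raw.split(b"\x00")[0].decode())   # e.g. "cubic" or "bbr"
else:
    print("TCP_CONGESTION not exposed on this platform")
s.close()
```

The system-wide default lives in `net.ipv4.tcp_congestion_control` on Linux, which is usually where an admin would change it.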

Graceful shutdown

TCP also has a four-way teardown: each side sends a FIN, the other ACKs it, and the connection is closed cleanly. This is why a TCP-based application can know “the file is fully transferred” — the protocol guarantees it.
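How an application actually observes that guarantee: the peer’s FIN surfaces as an empty `recv()`. A loopback sketch (payload and thread are illustrative):

```python
import socket
import threading

# close() -- or shutdown(SHUT_WR) -- sends a FIN. On the other side,
# recv() returning b"" is the signal "no more bytes are coming", which
# is the protocol-level basis for "the file is fully transferred".

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    c.sendall(b"last bytes")
    c.shutdown(socket.SHUT_WR)         # send FIN: "I'm done writing"
    c.close()

threading.Thread(target=client, daemon=True).start()

conn, _ = server.accept()
body = b""
while True:
    data = conn.recv(4096)
    if data == b"":                    # FIN received: stream is complete
        break
    body += data
conn.close()
server.close()
print(body)
```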

When TCP is the right answer

  • Anything where missing or reordered bytes break the application: HTTPS, SSH, SMTP, FTP, RDP, database protocols, file transfers, version control.
  • Long-lived sessions where a 1-RTT setup is amortised across thousands of bytes.
  • Anything that needs flow control built-in, so the receiver isn’t expected to handle floods.

UDP — connectionless, fast, no promises

UDP is the opposite philosophy. There’s no handshake, no sequence numbers, no ACKs, no retransmits, no flow control, no congestion control. You hand the kernel a datagram with a destination IP and port, and the kernel sends one IP packet. If it arrives, it arrives. If it doesn’t, your application has to notice and decide what to do.

That sounds like a downgrade until you realise three things:

  1. For some workloads, those guarantees are actively harmful. A live voice call would rather drop a 20 ms audio chunk than wait 200 ms for it to be retransmitted.
  2. The setup cost is zero. UDP’s “connection” is just “the kernel remembers you sent something to that IP and port.” First byte goes out immediately.
  3. For tiny request/response patterns (like a DNS lookup), the overhead of TCP’s handshake would more than double the total time.
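Point 2 is visible in how little code a UDP exchange takes. A loopback sketch (the payload is a made-up stand-in for a DNS-style query): no `listen()`, no `accept()`, no handshake — the first byte of application data is also the first packet on the wire:

```python
import socket

# UDP: no handshake, no connection state. Bind a receiver, fire a
# datagram at it, read it back. Unlike TCP, datagram boundaries are
# preserved: one sendto() is one recvfrom().

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"who has example.com?", ("127.0.0.1", port))  # one packet, zero setup

data, addr = recv_sock.recvfrom(4096)
print(data)
send_sock.close()
recv_sock.close()
```

Loopback delivery is effectively reliable, which hides UDP’s real trade-off: on a real network, nothing retransmits that datagram if it is lost — that’s the application’s job now.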

When UDP is the right answer

  • DNS — single tiny request, single tiny response. Setting up TCP for that would more than double the time.
  • DHCP — initial configuration runs before the host even has an IP. TCP can’t handshake without one.
  • NTP — time-sync packets are tiny and individually self-contained.
  • VoIP, video calls — drops are better than delays. Late audio is worthless.
  • Online gaming — same logic: a game cares about “where is the player now,” not about replaying old packets.
  • SNMP, syslog — lightweight monitoring. Occasional loss is acceptable; overhead must be low.
  • Multicast / broadcast — TCP can’t do one-to-many; UDP can.

If your application can tolerate occasional loss, or it’s small enough that retransmits would be slower than just sending again from the application layer, UDP is probably right.

The TCP and UDP header sizes (worth knowing)

Header size is a real tax on small payloads:

  • TCP: header is 20 bytes minimum, often 32+ with options. Carries sequence number, acknowledgement number, flags, window, checksum, and options.
  • UDP: header is 8 bytes flat. Carries source/destination port, length, and checksum.

For a 50-byte payload, TCP’s overhead is roughly 40%; UDP’s is about 16%. For huge transfers it doesn’t matter. For lots of tiny packets it adds up.
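The arithmetic behind those percentages, using the minimum header sizes from the table:

```python
# Header tax on a 50-byte payload, relative to the payload itself.
payload = 50
tcp_header, udp_header = 20, 8          # minimum header sizes, no options

tcp_overhead = tcp_header / payload     # 0.40 -> "roughly 40%"
udp_overhead = udp_header / payload     # 0.16 -> "about 16%"
print(f"TCP: {tcp_overhead:.0%}  UDP: {udp_overhead:.0%}")
```

With TCP options (timestamps, SACK) the real figure is higher still, which is why chatty small-packet protocols feel the difference.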

QUIC — the modern hybrid

For decades, the internet looked like “use TCP for important things, UDP for the rest.” QUIC, standardised by the IETF in 2021 as RFC 9000, was designed for the modern web and breaks that pattern. It runs on top of UDP (so middleboxes treat it as opaque traffic) but provides reliable, ordered, encrypted delivery like TCP — and adds features TCP can’t.

What QUIC actually does

  • 0-RTT and 1-RTT setup. The first time a client connects to a server, it does a 1-RTT handshake that combines the TCP handshake and the TLS handshake. On subsequent connections to the same server, it can send application data in the very first packet (0-RTT). For a typical web load, that’s a 1–2 round-trip saving compared to TCP+TLS 1.3.
  • Encryption is built in. Every QUIC connection is encrypted by default — not just the payload but most of the headers. There’s no “plaintext QUIC” the way there’s plaintext HTTP over TCP.
  • Multiple streams in one connection. A single QUIC connection can carry many independent streams, each with its own ordering. If one stream is blocked waiting for a lost packet, the others keep going. With TCP+HTTP/2, head-of-line blocking on the TCP layer stalls every multiplexed stream simultaneously; QUIC fixes that.
  • Connection migration. A QUIC connection identifies itself by a connection ID, not by the IP/port four-tuple TCP uses. So when your phone hands off from Wi-Fi to LTE, your QUIC connections survive without dropping. TCP connections die in that scenario every time.
  • Userspace implementation. Because QUIC runs over UDP, the protocol logic lives in the application (browser, server) rather than the kernel. That means new versions can ship at the speed of browser updates instead of OS-kernel updates — the reason QUIC has iterated this fast.

QUIC and HTTP/3

HTTP/3 is just HTTP over QUIC instead of TCP. From the application’s point of view it looks like HTTP/2 (same semantics, multiplexed streams, header compression), but the transport underneath is faster to set up, doesn’t suffer head-of-line blocking, and is encrypted from the first packet. As of 2024, HTTP/3 is supported by every major browser, every major CDN, and every major cloud’s public-facing services. If you load a Google or Cloudflare-hosted site with a recent browser, you’re probably using QUIC and don’t know it.

How to tell which one’s in use

Look at the destination port and (if you’re packet capturing) the IP protocol number:

  • TCP: IP protocol 6; typical ports 80, 443, 22, 25, 3306.
  • UDP: IP protocol 17; typical ports 53, 67, 68, 123, 161.
  • QUIC: IP protocol 17 (it rides UDP), port 443, with the QUIC version field set.

For a quick check from the command line:

# Linux / macOS - what's currently open and on which protocol
ss -tunap            # Linux
lsof -i              # macOS / Linux

# Windows
Get-NetTCPConnection
Get-NetUDPEndpoint

Common confusions, sorted out

  • Ports belong to TCP and UDP separately. TCP port 53 and UDP port 53 are different things. DNS uses both for different reasons (UDP for normal queries, TCP for large responses and zone transfers).
  • “TLS uses TCP” isn’t universally true anymore. Classic TLS does ride on TCP. TLS-equivalent encryption inside QUIC rides on UDP.
  • Reliability isn’t just retransmits. TCP also reorders, retransmits selectively (SACK), and adapts to packet loss with congestion control. Building those in your own UDP-based app is hard; that’s why QUIC was a worthwhile project.
  • UDP isn’t insecure by nature. The protocol just doesn’t add anything; whether the data is encrypted depends on what’s above it. DTLS, WireGuard, and QUIC are all encrypted UDP-based protocols.
  • Some firewalls treat UDP poorly. Stateful firewalls have to invent their own “is this conversation still active?” rules for UDP because the protocol has no built-in concept of a connection. That’s where you sometimes see UDP traffic mysteriously dropped after long idle gaps.

A simple decision tree

  • Needs every byte, in order, no exceptions: TCP (or QUIC if you’re building a new web protocol).
  • Is many short request/response exchanges (DNS-like): UDP, unless responses can be too large.
  • Is real-time media (voice, video, gaming): UDP.
  • Is HTTP for browsers, and you control both ends: QUIC / HTTP/3.
  • Needs to survive network changes (mobile): QUIC.
  • Is one-to-many (multicast / broadcast): UDP — TCP can’t do this at all.

What you can now answer

  • Why does TCP cost a round-trip before any data moves? — The three-way handshake.
  • What does TCP guarantee that UDP doesn’t? — Ordered delivery, reliability via retransmits, flow control, congestion control, clean teardown.
  • Why is DNS UDP? — Tiny request, tiny response — TCP’s setup would more than double the time.
  • Why is QUIC over UDP if it does TCP-like things? — To live in userspace and bypass middleboxes that won’t let new IP-protocol numbers through.
  • Why does QUIC matter for mobile? — Connection migration: the connection survives a Wi-Fi-to-LTE handoff that would kill TCP.

What’s next

You now understand the layer where applications actually meet the network. The next lesson goes further up: DNS, end to end. We’ll trace a single name resolution from your laptop’s resolver, through the recursive server, up to the root, down to the authoritative server, and back — and look at DNSSEC, DoH/DoT, and the tools you actually use to debug DNS when something’s wrong.
