How to Enable TCP Fast Open in Nginx

TCP Fast Open (TFO) reduces connection latency by allowing data to be sent during the initial TCP handshake, eliminating one round-trip for repeat visitors. For high-latency connections, mobile users, or sites with many short-lived requests, enabling TFO in Nginx can measurably improve page load times. This guide walks through kernel configuration, Nginx setup, and verification steps to enable TCP Fast Open on your server.

Understanding TCP Fast Open

RFC 7413 defines TCP Fast Open as a TCP extension that optimizes connection establishment for repeat clients. Before configuring it, understanding how TFO works helps you decide whether it benefits your use case.

TCP Fast Open requires Linux kernel 3.7+ and Nginx 1.5.8+. Any modern distribution released after 2014 comfortably exceeds these requirements, so this is primarily historical context; you almost certainly have support.

The Traditional TCP Handshake

Standard TCP connections require a three-way handshake before any data transfer can begin. First, the client sends a SYN packet. Then, the server responds with SYN-ACK, and the client completes the handshake with an ACK. Only after this exchange can the client send its HTTP request. As a result, high-latency connections experience noticeable delay before any useful work begins.

Traditional TCP Connection (4 steps before response):

  Client                            Server
    |                                  |
    |  ───── 1. SYN ─────────────────► |
    |                                  |
    |  ◄──── 2. SYN-ACK ────────────── |
    |                                  |
    |  ───── 3. ACK ─────────────────► |
    |                                  |
    |  ───── 4. HTTP Request ────────► |  ← Data sent here
    |                                  |
    |  ◄──── 5. HTTP Response ──────── |

How TCP Fast Open Reduces Latency

TCP Fast Open allows the client to send data in the initial SYN packet for subsequent connections. During the first connection, the server generates a cryptographic cookie and sends it to the client. For future connections, the client includes this cookie in the SYN packet along with its HTTP request data. Consequently, the server can validate the cookie and process the request immediately, eliminating one full round-trip.

TCP Fast Open Connection (3 steps before response, data sent immediately):

  Client                            Server
    |                                  |
    |  ─ 1. SYN + Cookie + Data ─────► |  ← Data sent here (saves 1 RTT)
    |                                  |
    |  ◄──── 2. SYN-ACK ────────────── |  (Server already processing)
    |                                  |
    |  ───── 3. ACK ─────────────────► |
    |                                  |
    |  ◄──── 4. HTTP Response ──────── |

When TCP Fast Open Helps Most

TFO benefits are most noticeable in these scenarios:

  • High-latency connections: Users connecting from distant geographic regions or over mobile networks see the biggest improvement.
  • Many short-lived requests: APIs, AJAX calls, and sites with many small resources benefit from reduced connection overhead.
  • Repeat visitors: TFO only helps on subsequent connections after the initial cookie exchange, so returning visitors benefit most.

For sites where most connections come from new visitors on low-latency networks, the improvement may be minimal: TFO saves roughly one round-trip time on each connection that reuses a cached cookie, so a 200 ms mobile round trip gains about 200 ms while a 10 ms local link gains almost nothing. Nevertheless, TFO adds negligible overhead, so there is little downside to enabling it.

Step 1: Enable TCP Fast Open in the Linux Kernel

Before Nginx can use TCP Fast Open, you must enable TFO in the Linux kernel for server sockets. First, check the current kernel setting:

cat /proc/sys/net/ipv4/tcp_fastopen

The returned value is a bitmask with these meanings:

  • 0: TCP Fast Open disabled entirely
  • 1: TFO enabled for outgoing connections only (client mode)
  • 2: TFO enabled for incoming connections only (server mode)
  • 3: TFO enabled for both client and server connections (recommended)

For Nginx to accept TFO connections, you need at least 2 (server mode). The value 3 enables both server and client modes, which is recommended if your server also makes outgoing connections to upstream backends or APIs.

Enable TCP Fast Open Temporarily

To enable TFO immediately without rebooting, write directly to the kernel parameter. Note that this change does not persist across reboots:

sudo bash -c 'echo 3 > /proc/sys/net/ipv4/tcp_fastopen'

Next, verify the change took effect:

cat /proc/sys/net/ipv4/tcp_fastopen
3

Enable TCP Fast Open Permanently

To persist the setting across reboots, create a dedicated sysctl configuration file:

echo "net.ipv4.tcp_fastopen=3" | sudo tee /etc/sysctl.d/99-tcp-fastopen.conf

Apply the setting immediately without rebooting:

sudo sysctl -p /etc/sysctl.d/99-tcp-fastopen.conf
net.ipv4.tcp_fastopen = 3

With this complete, the kernel now accepts TCP Fast Open connections.

Step 2: Configure the Nginx listen Directive

You enable TCP Fast Open in Nginx through the fastopen parameter on the listen directive. This parameter specifies the maximum queue length for pending TFO connections.

Before editing your Nginx configuration, create a backup so you can restore it if needed:

sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup

Add the fastopen Parameter

Edit your server block configuration. On Debian and Ubuntu systems, this is typically in /etc/nginx/sites-available/. On RHEL, Fedora, and other distributions, check /etc/nginx/conf.d/ or the main /etc/nginx/nginx.conf file.

Next, modify the listen directive to include the fastopen parameter:

server {
    listen 80 fastopen=256;
    listen [::]:80 fastopen=256;
    server_name example.com;

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Similarly, for HTTPS servers, add the parameter alongside ssl:

server {
    listen 443 ssl fastopen=256;
    listen [::]:443 ssl fastopen=256;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Understanding the fastopen Queue Size

The number following fastopen= sets the maximum number of connections that can be in the TFO pending queue. This queue holds connections where the SYN packet contained data but the handshake is not yet complete.

  • 256: A reasonable default for most servers handling moderate traffic.
  • 512 or higher: Consider larger values for high-traffic servers with many concurrent TFO connections.
  • 10-50: Sufficient for low-traffic or development servers.

When the queue fills up, new TFO connections simply fall back to the standard three-way handshake, so nothing fails outright. Because of this fallback, you can start with 256 and raise the value only if you see connections overflowing the TFO queue under load.
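
One way to see whether the queue is actually overflowing is the kernel's TCPFastOpenListenOverflow counter, which increments each time a TFO SYN arrives while the queue is full. A minimal check, assuming the nstat utility from iproute2 is installed:

# show the absolute value of the TFO listen-queue overflow counter
nstat -az | grep -i TCPFastOpenListenOverflow

If this counter keeps climbing under real traffic, raise the fastopen= value and reload Nginx.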

Step 3: Test and Apply the Configuration

After modifying your Nginx configuration, test the syntax before reloading:

sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once the test passes, reload Nginx to apply the changes:

sudo systemctl reload nginx

Using reload instead of restart applies the configuration without dropping active connections. Finally, verify Nginx is running:

sudo systemctl status nginx --no-pager
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; preset: enabled)
     Active: active (running) since Sat 2026-01-04 10:30:15 UTC; 2min ago
    Process: 12340 ExecReload=/usr/sbin/nginx -g daemon on; -s reload (code=exited, status=0/SUCCESS)
   Main PID: 12345 (nginx)
      Tasks: 3 (limit: 4567)
     Memory: 5.2M
        CPU: 45ms
     CGroup: /system.slice/nginx.service
             ├─12345 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
             └─12346 "nginx: worker process"

Verify TCP Fast Open Is Working

Testing TCP Fast Open requires a client that supports TFO. The curl command with the --tcp-fastopen flag can send a TFO-enabled request if your client system also has TFO enabled.

Test with curl

From a system with TFO enabled (kernel setting of 1 or 3), run:

curl --tcp-fastopen -I http://your-server-ip/
HTTP/1.1 200 OK
Server: nginx/1.24.0
Date: Sat, 04 Jan 2026 10:35:22 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 01 Jan 2026 00:00:00 GMT
Connection: keep-alive
Accept-Ranges: bytes

When TFO is working, curl completes the request normally. However, curl does not explicitly report whether it used TFO, so you need to confirm it from the kernel on the client or the server.
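
As an additional client-side signal, you can check whether the client's kernel cached a Fast Open cookie for your server. This is only a sketch that assumes a Linux client with the ip utility from iproute2; the exact fields shown can vary by kernel version:

# list cached TCP metrics entries that include a Fast Open cookie
ip tcp_metrics show | grep -i fo_cookie

An entry for your server's address containing an fo_cookie value means the client received and stored a TFO cookie on an earlier connection. For server-side confirmation, check the kernel's TFO counters as described next.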

Check Kernel TFO Statistics

On the server, first verify Nginx is listening and then examine the TCP Fast Open counters in the kernel:

ss -tlnp | grep nginx
LISTEN  0  511  0.0.0.0:80  0.0.0.0:*  users:(("nginx",pid=12346,fd=6))
LISTEN  0  511  [::]:80     [::]:*     users:(("nginx",pid=12346,fd=7))

This output confirms Nginx is accepting connections on port 80. To verify the kernel is generating TFO cookies, monitor the TFO counters after making several requests:

grep -i fastopen /proc/net/netstat

This confirms the kernel exposes the TFO counters, but /proc/net/netstat keeps counter names and values on separate TcpExt: lines, so the raw file is awkward to read. The counters to watch are TCPFastOpenPassive, which counts inbound connections accepted with data in the SYN, and TCPFastOpenCookieReqd, which counts clients requesting a TFO cookie. When TCPFastOpenPassive increases after your test requests, clients are successfully using TFO.
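
For a friendlier view that pairs each counter name with its value, the nstat utility from iproute2 can dump the same statistics, assuming it is installed:

# print absolute values of all Fast Open counters, including zeros
nstat -az | grep -i fastopen

Each matching line shows a counter name such as TcpExtTCPFastOpenPassive followed by its current value.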

Troubleshooting TCP Fast Open

If TCP Fast Open is not working as expected, check these common issues:

Kernel TFO Not Enabled for Servers

If the kernel setting is 0 or 1, Nginx cannot accept TFO connections. Verify and correct the setting:

cat /proc/sys/net/ipv4/tcp_fastopen

If it returns 0 or 1, set it to 3 as shown in Step 1.

Middleboxes or Firewalls Stripping TFO Options

Some network devices (firewalls, load balancers, CDNs) strip TCP options they do not recognize. If TFO works on localhost but not from external networks, a middlebox may be interfering. CDN providers like Cloudflare handle TFO at their edge, so your origin server’s TFO setting may not matter for proxied traffic.
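
To check whether the TFO option is reaching your server at all, you can capture incoming handshake packets and look for the option in their TCP headers. A minimal sketch, assuming tcpdump is installed and your public interface is eth0 (adjust the interface and port to your setup):

# capture ten packets with the SYN flag set on port 80 and print their TCP options
sudo tcpdump -c 10 -ni eth0 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0'

Recent tcpdump versions print the Fast Open option among the SYN's TCP options (for example, tfo cookiereq or tfo cookie followed by a hex value). If clients send the option but it never appears in your capture, a device in the path is likely stripping it.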

Client Does Not Support TFO

TFO requires both server and client support. Most modern Linux distributions and macOS enable TFO by default, while Windows has limited TFO support. If your testing client does not support TFO, the connection silently falls back to the standard handshake.

Nginx Configuration Not Applied

If you edited the wrong configuration file or forgot to reload, TFO will not be active. Verify your configuration is loaded:

sudo nginx -T | grep fastopen

This command dumps the active configuration. If you do not see fastopen in the output, Nginx did not apply your changes.

Additional Nginx Performance Optimizations

TCP Fast Open is one of several performance optimizations available in Nginx. Consider combining it with related settings such as reuseport on the listen directive and the open file cache, sketched below.
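
The fragment below is only an illustrative sketch of how these directives can sit alongside fastopen; the values are assumptions to adapt to your own traffic rather than tuned recommendations:

server {
    # reuseport gives each worker process its own listening socket,
    # letting the kernel spread incoming connections across workers
    listen 443 ssl reuseport fastopen=256;

    # cache open file descriptors and metadata for frequently served files
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;

    # certificates, server_name, and location blocks as in the earlier examples
}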

Conclusion

You have now configured TCP Fast Open on your server, reducing connection latency for repeat visitors by allowing data transmission during the TCP handshake. The sysctl setting enables kernel-level TFO support, while Nginx’s fastopen parameter on the listen directive activates the feature. Going forward, monitor your TFO statistics to confirm the optimization benefits your traffic, and consider combining it with other Nginx performance features like reuseport and open file cache for maximum efficiency.
