
Optimize and tune Nginx for high concurrency



Nginx is a high-performance web server and reverse proxy that can handle a large number of concurrent connections. However, to ensure that Nginx actually sustains high concurrency, it is important to optimize and tune its configuration. In this blog post, we will discuss three key areas of optimization (worker processes, client connections, and caching), then walk through concrete system-level and Nginx-level tuning, and finish with an ab stress test.

Worker processes

Nginx operates using a master process that manages a set of worker processes. The number of worker processes, together with the per-worker connection limit, determines how many concurrent connections Nginx can handle. By default, Nginx starts with one worker process, which may not be sufficient for high concurrency. To optimize Nginx for high concurrency, configure the number of worker processes based on the available CPU cores: a good rule of thumb is one worker process per core.
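For example, in nginx.conf the `auto` value (available since Nginx 1.2.5/1.3.8) sizes the worker count to the detected number of cores:

```nginx
# One worker per CPU core; "auto" detects the core count automatically.
worker_processes auto;
```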

Client connections

Another important factor in optimizing Nginx for high concurrency is how many client connections each worker can handle. By default, Nginx allows 1024 connections per worker process, which may be too low for high concurrency. To raise this limit, adjust the worker_connections directive in the Nginx configuration file. Note that worker_connections is a per-worker limit: the theoretical maximum number of concurrent clients is worker_processes multiplied by worker_connections. When increasing it, make sure the value stays within the process's open-file limit (worker_rlimit_nofile and the system ulimit).
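The per-worker limit lives in the events block; the value below is illustrative (as a reverse proxy, each client may consume two connections, one to the client and one to the upstream, so size accordingly):

```nginx
events {
    # Per-worker connection limit; must not exceed the
    # process's open-file limit (worker_rlimit_nofile).
    worker_connections 10240;
}
```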


Caching

Caching is an important technique for improving website performance and reducing server load. Nginx supports caching via the proxy_cache directive, which caches responses from upstream servers, reducing the load on the backend and improving response times for clients. To optimize caching in Nginx, configure the cache size, cache keys, and cache validity times based on the requirements of the website.
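A minimal proxy-cache sketch might look like the following; the cache path, zone name, sizes, and the `backend` upstream are all illustrative examples, not values from this benchmark:

```nginx
# Illustrative cache setup; paths, zone name, and sizes are examples.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_cache my_cache;
        proxy_cache_key $scheme$host$request_uri;   # cache key per URL
        proxy_cache_valid 200 302 10m;              # cache successes for 10 min
        proxy_cache_valid 404 1m;                   # cache misses briefly
        proxy_pass http://backend;                  # hypothetical upstream
    }
}
```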

System Level

I. Raise the limit on the number of simultaneously open file descriptors (affects the current shell session only):

ulimit -n 20480
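Since ulimit -n only applies to the current shell session, a common way to make the limit persistent across logins is /etc/security/limits.conf (the value below mirrors the example above):

```
# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
*           soft    nofile   20480
*           hard    nofile   20480
```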

II. Raise the maximum TCP listen backlog (somaxconn). Writing to /proc takes effect immediately but does not survive a reboot:

echo 10000 > /proc/sys/net/core/somaxconn

III. Recycle and reuse sockets in TIME_WAIT state (tcp_tw_reuse, tcp_tw_recycle):

echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

Be aware that tcp_tw_recycle is known to break connections from clients behind NAT and was removed entirely in Linux 4.12; on modern kernels, use tcp_tw_reuse only.

IV. Disable TCP SYN cookies. This is only appropriate for a controlled benchmark; in production, SYN cookies should stay enabled as protection against SYN flood attacks:

echo 0 > /proc/sys/net/ipv4/tcp_syncookies

Alternatively, you can use the optimized configuration by adding the following to /etc/sysctl.conf:

net.core.somaxconn = 20480
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072
net.ipv4.tcp_syncookies = 0

Run sysctl -p for the settings to take effect:

sysctl -p
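To confirm the kernel picked up the new values, the tuned parameters can be read back directly from /proc (the values printed depend on your system):

```shell
# Read back a few of the tuned values directly from /proc.
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_tw_reuse
```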

Nginx Level

Modify the Nginx configuration file, nginx.conf:

Increase worker_rlimit_nofile and worker_connections, and disable keep-alive by setting keepalive_timeout to 0. (Disabling keep-alive is useful for this kind of stress test; in production, keep-alive usually improves performance.)

worker_processes  1;
worker_rlimit_nofile 20000;

events {
    use epoll;
    worker_connections 20000;
    multi_accept on;
}

http {
    keepalive_timeout 0;
    # ... rest of the http block unchanged ...
}

Reload Nginx:

/usr/local/nginx/sbin/nginx -s reload

Stress test with ab (10,000 concurrent connections, 150,000 requests in total):

ab -c 10000 -n 150000 <>
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, <>
Licensed to The Apache Software Foundation, <>

Benchmarking (be patient)
Completed 15000 requests
Completed 30000 requests
Completed 45000 requests
Completed 60000 requests
Completed 75000 requests
Completed 90000 requests
Completed 105000 requests
Completed 120000 requests
Completed 135000 requests
Completed 150000 requests
Finished 150000 requests

Server Software:        nginx/1.8.0
Server Hostname:
Server Port:            80

Document Path:          /index.html
Document Length:        612 bytes

Concurrency Level:      10000
Time taken for tests:   19.185 seconds
Complete requests:      150000
Failed requests:        0
Write errors:           0
Total transferred:      131180388 bytes
HTML transferred:       95121324 bytes
Requests per second:    7818.53 [#/sec] (mean)
Time per request:       1279.013 [ms] (mean)
Time per request:       0.128 [ms] (mean, across all concurrent requests)
Transfer rate:          6677.33 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  650 547.9    522    7427
Processing:   212  519 157.4    496     958
Waiting:        0  404 139.7    380     845
Total:        259 1168 572.1   1066    7961

Percentage of the requests served within a certain time (ms)
  50%   1066
  66%   1236
  75%   1295
  80%   1320
  90%   1855
  95%   2079
  98%   2264
  99%   2318
 100%   7961 (longest request)