Maximizing TCP Throughput in Linux: Understanding and Tuning Send and Receive Buffers
--
A standard performance tuning recommendation for distributed systems running on Linux is to adjust the tcp_rmem and tcp_wmem Kernel tunables. But what are these? Why are they important? How do they work?
In today's article, we are going to explore how the tcp_rmem (Receive Buffer) and tcp_wmem (Send Buffer) tunables work and how to adjust them to improve network-based application performance.
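Before tuning anything, it helps to see what these tunables hold. Each file contains three values: the minimum, default, and maximum buffer size in bytes. The following is a minimal C sketch that simply reads and prints both triplets from /proc; it assumes nothing beyond a Linux /proc filesystem.

```c
#include <stdio.h>

/* Print the min/default/max triplet stored in a tcp_rmem- or
 * tcp_wmem-style /proc file. Each file holds three byte counts. */
static void print_tunable(const char *path)
{
    long min, def, max;
    FILE *f = fopen(path, "r");

    if (f == NULL) {
        perror(path);
        return;
    }
    if (fscanf(f, "%ld %ld %ld", &min, &def, &max) == 3)
        printf("%s: min=%ld default=%ld max=%ld bytes\n",
               path, min, def, max);
    fclose(f);
}

int main(void)
{
    print_tunable("/proc/sys/net/ipv4/tcp_rmem"); /* Receive Buffer */
    print_tunable("/proc/sys/net/ipv4/tcp_wmem"); /* Send Buffer */
    return 0;
}
```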
Understanding TCP Buffers
To better understand how TCP Buffers work, let's explore a scenario where "Application A," a Client, wants to send data to "Application B," a Server over TCP (depicted below).
The Sender
In the diagram above, we can see that Application A would like to send 4000 bytes of data to Application B. To do this, Application A writes the data to the socket using either the write() or send() system call. From the application's perspective, the data has been written to the socket, but in reality the data is first appended to a Send Buffer for that socket.
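The sketch below shows what this looks like from Application A's side: a client that connects and hands 4000 bytes to the Kernel with send(). The 127.0.0.1 address and port 9000 are hypothetical placeholders for Application B; the key point is that a successful send() only means the data landed in the Send Buffer.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    char data[4000];            /* the 4000 bytes Application A wants to send */
    struct sockaddr_in srv = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(data, 'A', sizeof(data));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9000);                     /* hypothetical server port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr); /* hypothetical server address */

    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    /* A successful send() only means the Kernel copied the data into the
     * socket's Send Buffer; it says nothing about delivery to the Server. */
    ssize_t n = send(fd, data, sizeof(data), 0);
    printf("send() accepted %zd bytes into the Send Buffer\n", n);

    close(fd);
    return 0;
}
```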
Once the data is available within the Send Buffer, the Kernel will break up the data into a series of TCP packets.
Typically, the default packet size (MTU) on Linux systems is 1500 bytes, with the first 40 bytes taken up by the IP and TCP headers (20 bytes each, without options); this means a single packet can carry 1460 bytes of application data, a value known as the Maximum Segment Size (MSS). To send 4000 bytes of application data (4000 / 1460 ≈ 2.74), the Kernel will need to send three packets.
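If you want to check the segment size the Kernel will actually use, you can query the TCP_MAXSEG socket option. The sketch below does that and rounds our 4000 bytes up to the number of segments required; note that on an unconnected socket the reported MSS is only a default estimate.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int mss = 0;
    socklen_t len = sizeof(mss);

    /* TCP_MAXSEG reports the maximum segment size the Kernel will use.
     * On an unconnected socket this is only a default estimate; after
     * connect() it reflects the MSS negotiated with the peer. */
    if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) == 0 && mss > 0) {
        int to_send = 4000;
        int segments = (to_send + mss - 1) / mss; /* round up */
        printf("MSS %d bytes: sending %d bytes requires %d segments\n",
               mss, to_send, segments);
    }

    close(fd);
    return 0;
}
```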
As an application, we don't need to worry about how much data goes into one packet. Applications write data to a buffer. The Kernel takes the data from that buffer and sends it to the target system in however many packets are required.
The Kernel will also keep the application data within the Send Buffer until the Server has acknowledged the sent packets, so that any packets lost in transit can be retransmitted.
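Because unacknowledged data must stay in the Send Buffer, its size caps how much data can be "in flight" at once. The sketch below queries a socket's current Send Buffer size via SO_SNDBUF; keep in mind that when you set this option yourself, Linux doubles the value you pass to setsockopt() to leave room for bookkeeping overhead.

```c
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 0;
    socklen_t len = sizeof(sndbuf);

    /* SO_SNDBUF reports the Send Buffer size in bytes. Sent-but-
     * unacknowledged data occupies this buffer until ACKs arrive,
     * so this value caps how much data can be in flight at once. */
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("Send Buffer size: %d bytes\n", sndbuf);

    close(fd);
    return 0;
}
```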