How to deal with Dubbo congestion caused by sudden traffic?

osc_yo7hxxom 2020-11-09 13:38:52




author | nxlhero

source | https://blog.51cto.com/nxlhero/2515849

Content and structure of this article

Part 1 describes how the Dubbo service congestion manifested, and Dubbo's official recommendation of a single long connection.

Part 2 walks through Dubbo's communication process under a specific configuration, with code.

Part 3 covers the performance-related parameters involved in the whole call path.

Part 4 observes Dubbo's performance while varying the number of connections and the TCP buffer sizes.

Part 1: Background

Review of the production congestion

During a recent production release, sudden traffic caused congestion. The system's deployment is as follows: clients access the Dubbo consumers over HTTP, and the consumers call the service providers over the Dubbo protocol. Within a single data center there are 8 consumers and 3 providers, and two data centers serve external traffic.

During a release, one data center is taken offline while the other serves all traffic; the offline one is upgraded to the new version, then the two swap, and in the end both data centers serve traffic on the new version. The problem occurred while a single data center, still running the old version, was serving all traffic. We did not anticipate an evening peak, and it turned out to be almost as high as the morning one; a single data center could not carry that much traffic, and congestion set in. The traffic was highly concurrent and some transactions returned large messages, because it was a product-list page and a single click sends multiple transactions to the backend.

When the problem hit, not knowing the cause, we first switched to the other data center, which also became congested, and finally rolled everything back; after a while the system recovered. We observed the following at the time:

(1) Provider CPU and memory were not high. In the first data center the peak CPU was 66% (8-core VMs); in the second, 40% (16-core VMs). Consumer CPU peaked at only about 30% (the two consumer nodes share one VM).

(2) During the congestion, the provider's Dubbo business thread pool (described in detail below) was not full: at most 300 threads out of a maximum of 500. But after the data center was taken offline, i.e. with no external traffic at all, the pool filled up instead, and it took several minutes to work through the accumulated requests.

(3) A monitoring tool counted the number of requests entering the Dubbo business thread pool per second. During the congestion this value was sometimes 0 and sometimes very large; during normal daytime hours it is never 0.

Guessing the cause

No other metric showed anything abnormal at the time, and we took no thread dump. Analyzing these symptoms together with our Dubbo configuration, we guessed the congestion was in the network, and that the key parameter was the number of Dubbo protocol connections. We used the default single connection, and with few consumers we failed to make full use of the network.


By default, each Dubbo consumer establishes a single long-lived connection to each Dubbo provider. Dubbo's official advice is:

The Dubbo default protocol uses a single long connection with NIO asynchronous communication. It is suitable for service calls with small payloads and high concurrency, and for situations where consumer machines far outnumber provider machines.

Conversely, the Dubbo default protocol is not suitable for services carrying large amounts of data, such as file or video transfer, unless the request rate is very low.

(http://dubbo.apache.org/zh-cn/docs/user/references/protocol/dubbo.html)

The following are also Dubbo's official answers to some common questions:

Why should there be more consumers than providers?

Because the dubbo protocol uses a single long connection. Assuming a gigabit NIC, test experience shows each connection can carry at most about 7 MByte/s (this varies by environment; for reference only), so in theory 1 provider needs 20 consumers to saturate its NIC.

Why can't large payloads be transferred?

Because the dubbo protocol uses a single long connection. If each request's payload is 500 KByte, assuming a gigabit NIC and at most 7 MByte/s per connection (varies by environment; for reference only), the maximum TPS (transactions processed per second) of a single provider is 128 MByte/s / 500 KByte = 262, and the maximum TPS of a single consumer calling a single provider is 7 MByte/s / 500 KByte = 14. If that is acceptable, consider using it; otherwise the network will become the bottleneck.

Why an asynchronous single long connection?

Because the typical situation is that services have few providers, usually just a handful of machines, but many consumers: potentially the whole site calls the service. For example, Morgan's provider has only 6 machines but hundreds of consumers, with 150 million calls per day. With a conventional hessian service the providers would easily be overwhelmed. A single connection ensures a single consumer cannot crush the provider; a long connection reduces handshake and verification overhead; and asynchronous IO with a shared thread pool prevents the C10K problem.

Since we have neither many consumers nor many providers, our connection count is probably insufficient, making network transmission the bottleneck. Below we analyze the Dubbo protocol in detail and run experiments to verify this conjecture.

Part 2: The Dubbo communication process in detail

We use an older Dubbo version, 2.5.x, which uses netty 3.2.5. Recent Dubbo versions have modified the thread model; the analysis below is based on 2.5.10.

We illustrate the Dubbo protocol call procedure with diagrams and key parts of the code, under this configuration: netty3, a Dubbo thread pool with no queue, and synchronous invocation. The code below includes both Dubbo and Netty code.

The whole Dubbo call process is as follows:

1. Enqueueing the request

When we call an RPC service through Dubbo, the calling thread actually encapsulates the request and puts it into a queue. This is a netty queue, defined as follows: a linked queue with unlimited length.

class NioWorker implements Runnable {
    ...
    private final Queue<Runnable> writeTaskQueue = new LinkedTransferQueue<Runnable>();
    ...
}

After a series of calls, the main thread finally puts the request into this queue through the NioClientSocketPipelineSink class. The enqueued request contains a request ID, and this ID is very important.
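For reference, this is roughly how that ID is generated in Dubbo 2.5.x's com.alibaba.dubbo.remoting.exchange.Request: a static AtomicLong, so each consumer process numbers its requests from 0. This is a simplified sketch, not the full class:

import java.util.concurrent.atomic.AtomicLong;

public class Request {
    // One counter per JVM: every consumer numbers its requests from 0.
    private static final AtomicLong INVOKE_ID = new AtomicLong(0);

    private final long mId;

    public Request() {
        mId = INVOKE_ID.getAndIncrement(); // the id that later correlates the response
    }

    public long getId() {
        return mId;
    }
}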

2. The calling thread waits

After enqueueing, netty returns a Future to the calling thread, which then waits on it. This Future is defined by Dubbo and named DefaultFuture. The calling thread mainly calls DefaultFuture.get(timeout) and waits to be notified, which is why in Dubbo-related thread dumps we so often see threads parked here: they are waiting for the response.

public class DubboInvoker<T> extends AbstractInvoker<T> {
    ...
    @Override
    protected Result doInvoke(final Invocation invocation) throws Throwable {
        ...
        return (Result) currentClient.request(inv, timeout).get(); // currentClient.request(inv, timeout) returns a DefaultFuture
    }
    ...
}

Let's take a look at the implementation of DefaultFuture:

public class DefaultFuture implements ResponseFuture {
    private static final Map<Long, Channel> CHANNELS = new ConcurrentHashMap<Long, Channel>();
    private static final Map<Long, DefaultFuture> FUTURES = new ConcurrentHashMap<Long, DefaultFuture>();
    // invoke id.
    private final long id; // the Dubbo request id: a long that starts from 0 in every consumer
    private final Channel channel;
    private final Request request;
    private final int timeout;
    private final Lock lock = new ReentrantLock();
    private final Condition done = lock.newCondition();
    private final long start = System.currentTimeMillis();
    private volatile long sent;
    private volatile Response response;
    private volatile ResponseCallback callback;

    public DefaultFuture(Channel channel, Request request, int timeout) {
        this.channel = channel;
        this.request = request;
        this.id = request.getId();
        this.timeout = timeout > 0 ? timeout : channel.getUrl().getPositiveParameter(Constants.TIMEOUT_KEY, Constants.DEFAULT_TIMEOUT);
        // put into waiting map.
        FUTURES.put(id, this); // keyed by id, so when the response comes back the matching Future can be found by id and its thread notified
        CHANNELS.put(id, channel);
    }
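To complete the picture, here is a simplified sketch (not the verbatim Dubbo source, and assuming java.util.concurrent.TimeUnit is imported) of the two sides of the handshake: get(), which parks the calling thread on the lock/done condition shown above, and doReceived(), which step 14 below invokes to wake it up:

    // Simplified sketch, continuing the DefaultFuture class above.
    public Object get(int timeoutMillis) {
        long start = System.currentTimeMillis();
        lock.lock();
        try {
            // Park the calling thread until doReceived() signals or the timeout elapses.
            while (response == null) {
                done.await(timeoutMillis, TimeUnit.MILLISECONDS);
                if (response != null || System.currentTimeMillis() - start > timeoutMillis) {
                    break;
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
        if (response == null) {
            // The real code throws Dubbo's own TimeoutException here.
            throw new IllegalStateException("waiting for server-side response timed out");
        }
        return response.getResult();
    }

    // Called (via the static DefaultFuture.received in step 14) when the matching Response arrives.
    private void doReceived(Response res) {
        lock.lock();
        try {
            response = res;
            done.signal(); // wake the thread parked in get()
        } finally {
            lock.unlock();
        }
    }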

3. The IO thread reads requests from the queue

This work is done by netty's IO thread pool, the NioWorker (the class is literally called NioWorker). It runs select in an endless loop, and in each iteration processes all the write requests in the queue. The select logic is as follows:

public void run() {
    for (;;) {
        ....
        SelectorUtil.select(selector);
        processRegisterTaskQueue();
        processWriteTaskQueue(); // process the write requests in the queue first
        processSelectedKeys(selector.selectedKeys()); // handle select events; both reads and writes may occur
        ....
    }
}

private void processWriteTaskQueue() throws IOException {
    for (;;) {
        final Runnable task = writeTaskQueue.poll(); // the queue the calling thread puts requests into
        if (task == null) {
            break;
        }
        task.run(); // write the data
        cleanUpCancelledKeys();
    }
}

4. The IO thread writes the data to the socket buffer

This is an important step, directly related to the performance problem we hit. Still in NioWorker, this is the task.run() from the previous step, implemented as follows:

void writeFromTaskLoop(final NioSocketChannel ch) {
    if (!ch.writeSuspended) { // very important: if writes are suspended, skip this write
        write0(ch);
    }
}

private void write0(NioSocketChannel channel) {
    ......
    final int writeSpinCount = channel.getConfig().getWriteSpinCount(); // a configurable netty parameter, default 16
    synchronized (channel.writeLock) {
        channel.inWriteNowLoop = true;
        for (;;) {
            for (int i = writeSpinCount; i > 0; i --) { // try at most 16 times
                localWrittenBytes = buf.transferTo(ch);
                if (localWrittenBytes != 0) {
                    writtenBytes += localWrittenBytes;
                    break;
                }
                if (buf.finished()) {
                    break;
                }
            }
            if (buf.finished()) {
                // Successful write - proceed to the next message.
                buf.release();
                channel.currentWriteEvent = null;
                channel.currentWriteBuffer = null;
                evt = null;
                buf = null;
                future.setSuccess();
            } else {
                // Not written fully - perhaps the kernel buffer is full.
                // Key point: 16 attempts were not enough to finish the write, so the
                // kernel buffer may be full; writeSuspended is set to true.
                addOpWrite = true;
                channel.writeSuspended = true;
                ......
            }
            ......
        }
        if (open) {
            if (addOpWrite) {
                setOpWrite(channel);
            } else if (removeOpWrite) {
                clearOpWrite(channel);
            }
        }
        ......
    }
    fireWriteComplete(channel, writtenBytes);
}

Under normal circumstances, write requests in the queue are handled by processWriteTaskQueue, but the same write requests are also registered with the selector: if processWriteTaskQueue writes successfully, the write interest is removed from the selector. If the socket's send buffer is full, NIO returns immediately while BIO would keep waiting. Netty uses NIO, so it tries 16 times, and if it still cannot finish writing it sets writeSuspended to true, causing all subsequent write requests to be skipped. When does writing resume? That is up to the selector: when it finds the socket writable again, it writes the data.


Below is the write path inside processSelectedKeys. Since it runs only when the selector has found the socket writable, it simply sets writeSuspended back to false.

void writeFromSelectorLoop(final SelectionKey k) {
    NioSocketChannel ch = (NioSocketChannel) k.attachment();
    ch.writeSuspended = false;
    write0(ch);
}

5. Data moves from the consumer's socket send buffer to the provider's receive buffer

This is done by the operating system and the NIC. A successful write at the application layer does not mean the peer has received the data, although TCP's retransmission mechanism does its best to ensure delivery.

6. The server-side IO thread reads the request data from the buffer

This is done by the server-side NIO thread, in processSelectedKeys.

public void run() {
    for (;;) {
        ....
        SelectorUtil.select(selector);
        processRegisterTaskQueue();
        processWriteTaskQueue();
        processSelectedKeys(selector.selectedKeys()); // handle select events; both reads and writes may occur
        ....
    }
}

private void processSelectedKeys(Set<SelectionKey> selectedKeys) throws IOException {
    for (Iterator<SelectionKey> i = selectedKeys.iterator(); i.hasNext();) {
        SelectionKey k = i.next();
        i.remove();
        try {
            int readyOps = k.readyOps();
            if ((readyOps & SelectionKey.OP_READ) != 0 || readyOps == 0) {
                if (!read(k)) {
                    // Connection already closed - no need to handle write.
                    continue;
                }
            }
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                writeFromSelectorLoop(k);
            }
        } catch (CancelledKeyException e) {
            close(k);
        }
        if (cleanUpCancelledKeys()) {
            break; // break the loop to avoid ConcurrentModificationException
        }
    }
}

private boolean read(SelectionKey k) {
    ......
    // Fire the event.
    fireMessageReceived(channel, buffer); // once the read completes, fire a message-received event
    ......
}

7. The IO thread hands the request to the Dubbo thread pool

Which Handler is used depends on the configuration. With dispatcher set to all, the handler below is used: the IO thread hands the request directly to an ExecutorService. Here we meet the familiar error "Threadpool is exhausted": when the business thread pool is full and there is no queue, this error is raised. (A sketch of how this thread pool is constructed follows the code.)

public class AllChannelHandler extends WrappedChannelHandler {
    ......
    public void received(Channel channel, Object message) throws RemotingException {
        ExecutorService cexecutor = getExecutorService();
        try {
            cexecutor.execute(new ChannelEventRunnable(channel, handler, ChannelState.RECEIVED, message));
        } catch (Throwable t) {
            //TODO A temporary solution to the problem that the exception information can not be sent to the opposite end after the thread pool is full. Need a refactoring
            //fix The thread pool is full, refuses to call, does not return, and causes the consumer to wait for time out
            if (message instanceof Request && t instanceof RejectedExecutionException) {
                Request request = (Request) message;
                if (request.isTwoWay()) {
                    String msg = "Server side(" + url.getIp() + "," + url.getPort() + ") threadpool is exhausted ,detail msg:" + t.getMessage();
                    Response response = new Response(request.getId(), request.getVersion());
                    response.setStatus(Response.SERVER_THREADPOOL_EXHAUSTED_ERROR);
                    response.setErrorMessage(msg);
                    channel.send(response);
                    return;
                }
            }
            throw new ExecutionException(message, channel, getClass() + " error when process received event .", t);
        }
    }
    ......
}
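For reference, here is a simplified sketch of how Dubbo 2.5.x builds this business thread pool (com.alibaba.dubbo.common.threadpool.support.fixed.FixedThreadPool). With queues=0 it uses a SynchronousQueue, so as soon as all threads are busy, execute() throws RejectedExecutionException, which surfaces as the "threadpool is exhausted" response above. Simplified: the real code also passes Dubbo's NamedThreadFactory and an AbortPolicyWithReport rejection handler.

import java.util.concurrent.*;

public class FixedThreadPoolSketch {
    public static ExecutorService create(int threads, int queues) {
        return new ThreadPoolExecutor(threads, threads, 0, TimeUnit.MILLISECONDS,
                queues == 0 ? new SynchronousQueue<Runnable>()                    // no queue: reject immediately when full
                        : (queues < 0 ? new LinkedBlockingQueue<Runnable>()       // unbounded queue
                                      : new LinkedBlockingQueue<Runnable>(queues))); // bounded queue
    }
}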

8. After processing the request, the server-side Dubbo thread pool puts the response into the queue

The thread pool calls the following functions:

public class HeaderExchangeHandler implements ChannelHandlerDelegate {
    ......
    Response handleRequest(ExchangeChannel channel, Request req) throws RemotingException {
        Response res = new Response(req.getId(), req.getVersion());
        ......
        // find handler by message class.
        Object msg = req.getData();
        try {
            // handle data.
            Object result = handler.reply(channel, msg); // the real business logic class
            res.setStatus(Response.OK);
            res.setResult(result);
        } catch (Throwable e) {
            res.setStatus(Response.SERVICE_ERROR);
            res.setErrorMessage(StringUtils.toString(e));
        }
        return res;
    }

    public void received(Channel channel, Object message) throws RemotingException {
        ......
        if (message instanceof Request) {
            // handle request.
            Request request = (Request) message;
            if (request.isTwoWay()) {
                Response response = handleRequest(exchangeChannel, request); // run the business logic and get a Response
                channel.send(response); // write back the response
            }
        }
        ......
    }
}

channel.send(response) eventually calls NioServerSocketPipelineSink to put the response message into the queue.

9. The server-side IO thread takes the response out of the queue

Same as step 3.

10. The server-side IO thread writes the response to the socket send buffer

When the IO thread writes data, it writes into the TCP buffer. If the buffer is full, the data cannot be written. Blocking and non-blocking IO differ here: blocking IO keeps waiting, while non-blocking IO fails immediately and lets the caller choose a strategy.

Netty's strategy is to try writing up to 16 times; if that is not enough, it suspends writes from the IO thread until the connection becomes writable again. writeSpinCount defaults to 16 and is adjustable. (A sketch of adjusting it in raw Netty follows the snippet below.)

for (int i = writeSpinCount; i > 0; i --) {
    localWrittenBytes = buf.transferTo(ch);
    if (localWrittenBytes != 0) {
        writtenBytes += localWrittenBytes;
        break;
    }
    if (buf.finished()) {
        break;
    }
}
if (buf.finished()) {
    // Successful write - proceed to the next message.
    buf.release();
    channel.currentWriteEvent = null;
    channel.currentWriteBuffer = null;
    evt = null;
    buf = null;
    future.setSuccess();
} else {
    // Not written fully - perhaps the kernel buffer is full.
    addOpWrite = true;
    channel.writeSuspended = true;
    ......
}
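If you drive Netty 3 directly, writeSpinCount can be tuned as a bootstrap option; the option name maps to NioSocketChannelConfig.setWriteSpinCount. As far as we know, Dubbo 2.5.x does not expose this knob in its XML configuration, so treat this as a raw-Netty sketch:

import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),   // boss threads
                Executors.newCachedThreadPool())); // worker (IO) threads
bootstrap.setOption("writeSpinCount", 32); // default is 16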

11. Data transfer over the network

Data transmission over the network mainly depends on the bandwidth and network environment .

12. The client IO thread reads the data from the buffer

Same as step 6.

13. The IO thread hands the data to the Dubbo client thread pool

Same as step 7; the name of this thread pool is DubboClientHandler.


14. The business thread pool notifies the calling thread by message ID

HeaderExchangeHandler's received function determines that the message is a Response and calls handleResponse:

public class HeaderExchangeHandler implements ChannelHandlerDelegate {
    static void handleResponse(Channel channel, Response response) throws RemotingException {
        if (response != null && !response.isHeartbeat()) {
            DefaultFuture.received(channel, response);
        }
    }

    public void received(Channel channel, Object message) throws RemotingException {
        ......
        if (message instanceof Response) {
            handleResponse(channel, (Response) message);
        }
        ......
    }
}

DefaultFuture looks up the Future by ID and notifies the calling thread:

public static void received(Channel channel, Response response) {
    ......
    DefaultFuture future = FUTURES.remove(response.getId());
    if (future != null) {
        future.doReceived(response);
    }
    ......
}

At this point the main thread has the response data and the call is complete.

Part 3: Key parameters affecting this process

Protocol parameters

When using Dubbo, a protocol must be configured on the server side, for example:

<dubbo:protocol name="dubbo" port="20880" dispatcher="all" threadpool="fixed" threads="2000" />

Below are the performance-related parameters of the protocol. In our scenario, the thread pool is fixed with size 500 and queue size 0; everything else is at its default.

All of the following protocol attributes are performance-tuning parameters (attribute, type, default, meaning):

- name (string, required, default: dubbo): name of the protocol.
- threadpool (string, optional, default: fixed): thread pool type, fixed or cached.
- threads (int, optional, default: 200): business thread pool size (fixed size).
- queues (int, optional, default: 0): thread pool queue size, i.e. how many requests may wait when the pool is full. Setting it is not recommended: when the pool is full the call should fail immediately and be retried on another provider machine rather than wait in a queue, unless there is a special need.
- iothreads (int, optional, default: CPU count + 1): IO thread pool size (fixed size).
- accepts (int, optional, default: 0): maximum number of connections the provider accepts. For example, when set to 2000, once 2000 connections are established new ones are rejected, protecting the provider.
- dispatcher (string, optional, default: all for the dubbo protocol): message dispatch policy, specifying the thread model, e.g. all, direct, message, execution, connection for the dubbo protocol. This mainly concerns the division of labor between the IO thread pool and the business thread pool; in general, letting the business thread pool also handle connection setup, heartbeats and so on does not have a big impact.
- payload (int, optional, default: 8388608 (= 8 MB)): request and response packet size limit, in bytes. This is the maximum allowed length of a single message; Dubbo is not suited to long messages, hence the limit.
- buffer (int, optional, default: 8192): network read/write buffer size. Note this is not the TCP buffer but the application-layer Buffer used when reading and writing network messages.
- codec (string, optional, default: dubbo): protocol encoding.
- serialization (string, optional, default: hessian2 for dubbo, java for rmi, json for http): serialization to use when the protocol supports several, e.g. dubbo, hessian2, java, compactedjava for the dubbo protocol, and json for the http protocol.
- transporter (string, optional, default: netty for dubbo): transport implementation of the protocol, e.g. mina, netty; can also be configured separately via server and client.
- server (string, optional, default: netty for dubbo, servlet for http): server-side implementation, e.g. mina, netty for dubbo; jetty, servlet for http.
- client (string, optional, default: netty for dubbo): client-side implementation, e.g. mina, netty.
- charset (string, optional, default: UTF-8): serialization charset.
- heartbeat (int, optional, default: 0): heartbeat interval. For long connections, when the physical layer is disconnected (for example the cable is unplugged), the TCP FIN may never be sent and the peer cannot see the disconnect; a heartbeat helps detect broken connections.

Service parameters

Each Dubbo service has its own configuration; the full parameter list is at http://dubbo.apache.org/zh-cn/docs/user/references/xml/dubbo-service.html.

We focus on the performance-related ones. In our scenario, the retry count is set to 0 and the cluster mode is failfast; the rest are defaults.

All of the following service attributes are performance-tuning parameters (attribute, type, default, meaning, minimum version):

- delay (int, optional, default: 0; 1.0.14+): delay before registering the service, in milliseconds; -1 delays exposure until the Spring container finishes initializing.
- timeout (int, optional, default: 1000; 2.0.0+): remote call timeout, in milliseconds.
- retries (int, optional, default: 2; 2.0.0+): number of retries for a remote call, not counting the first call; set to 0 if no retry is wanted.
- connections (int, optional, default: 1; 2.0.0+): maximum number of connections per provider. For short-connection protocols such as rmi, http and hessian this limits the connection count; for long-connection protocols such as dubbo it is the number of long connections established.
- loadbalance (string, optional, default: random; 2.0.0+): load-balancing strategy: random, roundrobin or leastactive (random, round-robin, least active calls).
- async (boolean, optional, default: false; 2.0.0+): whether calls are asynchronous by default. This is unreliable async: it merely ignores the return value and does not block the calling thread.
- weight (int, optional; 2.0.5+): service weight.
- executes (int, optional, default: 0; 2.0.5+): maximum number of requests each method of the service may execute in parallel on the provider.
- proxy (string, optional, default: javassist; 2.0.5+): dynamic proxy generation, jdk or javassist.
- cluster (string, optional, default: failover; 2.0.5+): cluster fault-tolerance mode: failover, failfast, failsafe, failback or forking.

The main cause of this congestion, then, is most likely that the per-service connections setting was too small. Dubbo provides no global connection-count configuration; the number of connections can only be configured individually per service.
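For example, on the consumer side the reference for one service could be configured like this; the interface name is a placeholder, and connections, cluster and retries match the scenario described above:

<!-- Open 4 long connections to each provider instead of the default single one;
     fail fast with no retries, as in our scenario. -->
<dubbo:reference id="demoService" interface="com.example.DemoService"
                 connections="4" cluster="failfast" retries="0" timeout="1000"/>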

Part 4: Experiments on how the connection count and socket buffers affect performance

Using a simple Dubbo service, we verify the effect of connection count and buffer size on transmission performance.

We can adjust the TCP buffer sizes by modifying system parameters.

In /etc/sysctl.conf, modify the following lines. tcp_rmem is the receive buffer and tcp_wmem is the send buffer; the three values are the minimum, the default, and the maximum. We can set all three the same.

net.ipv4.tcp_rmem = 4096 873800 16777216
net.ipv4.tcp_wmem = 4096 873800 16777216

Then run sysctl -p to apply the changes.
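To confirm that the new values took effect, read them back:

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# net.ipv4.tcp_rmem = 4096 873800 16777216
# net.ipv4.tcp_wmem = 4096 873800 16777216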

The server code is as follows. It accepts a message, returns a message containing the input twice, and sleeps a random 0-300 ms, so the mean service time should be about 150 ms. The server prints TPS and response time every 10 s; this TPS counts completed function calls only, with no transmission involved, and the response time is likewise just the time spent inside the function. (A sketch of the counter bookkeeping follows the code.)

// Server-side implementation
public String sayHello(String name) {
    counter.getAndIncrement();
    long start = System.currentTimeMillis();
    try {
        Thread.sleep(rand.nextInt(300));
    } catch (InterruptedException e) {
    }
    String result = "Hello " + name + name + ", response form provider: " + RpcContext.getContext().getLocalAddress();
    long end = System.currentTimeMillis();
    timer.getAndAdd(end - start);
    return result;
}
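The counter and timer above are AtomicLongs; the reporting thread is not shown in the original, so here is a minimal sketch of the bookkeeping described in the text: every 10 s, print the TPS and mean in-function response time, then reset the counters.

import java.util.concurrent.atomic.AtomicLong;

static final AtomicLong counter = new AtomicLong(); // completed calls
static final AtomicLong timer = new AtomicLong();   // accumulated in-function time, ms

static void startReporter() {
    Thread reporter = new Thread(new Runnable() {
        public void run() {
            while (true) {
                try { Thread.sleep(10000); } catch (InterruptedException e) { return; }
                long calls = counter.getAndSet(0);
                long totalMs = timer.getAndSet(0);
                System.out.println("tps=" + (calls / 10)
                        + ", avg rt=" + (calls == 0 ? 0 : totalMs / calls) + "ms");
            }
        }
    });
    reporter.setDaemon(true);
    reporter.start();
}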

The client starts N threads, each calling the Dubbo service in a loop, and prints QPS and response time every 10 s; these numbers include network transmission time.

for (int i = 0; i < N; i++) {
    threads[i] = new Thread(new Runnable() {
        @Override
        public void run() {
            while (true) {
                Long start = System.currentTimeMillis();
                String hello = service.sayHello(z);
                Long end = System.currentTimeMillis();
                totalTime.getAndAdd(end - start);
                counter.getAndIncrement();
            }
        }
    });
    threads[i].start();
}

The ss -it command shows details of the current TCP sockets, including Send-Q (data not yet acknowledged by the peer), the congestion window cwnd, rtt (round-trip time), and so on.

(base) niuxinli@ubuntu:~$ ss -it
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 36 192.168.1.7:ssh 192.168.1.4:58931
cubic wscale:8,2 rto:236 rtt:33.837/8.625 ato:40 mss:1460 pmtu:1500 rcvmss:1460 advmss:1460 cwnd:10 bytes_acked:559805 bytes_received:54694 segs_out:2754 segs_in:2971 data_segs_out:2299 data_segs_in:1398 send 3.5Mbps pacing_rate 6.9Mbps delivery_rate 44.8Mbps busy:36820ms unacked:1 rcv_rtt:513649 rcv_space:16130 rcv_ssthresh:14924 minrtt:0.112
ESTAB 0 0 192.168.1.7:36666 192.168.1.7:2181
cubic wscale:7,7 rto:204 rtt:0.273/0.04 ato:40 mss:33344 pmtu:65535 rcvmss:536 advmss:65483 cwnd:10 bytes_acked:2781 bytes_received:3941 segs_out:332 segs_in:170 data_segs_out:165 data_segs_in:165 send 9771.1Mbps lastsnd:4960 lastrcv:4960 lastack:4960 pacing_rate 19497.6Mbps delivery_rate 7621.5Mbps app_limited busy:60ms rcv_space:65535 rcv_ssthresh:66607 minrtt:0.035
ESTAB 0 27474 192.168.1.7:20880 192.168.1.5:60760
cubic wscale:7,7 rto:204 rtt:1.277/0.239 ato:40 mss:1448 pmtu:1500 rcvmss:1448 advmss:1448 cwnd:625 ssthresh:20 bytes_acked:96432644704 bytes_received:49286576300 segs_out:68505947 segs_in:36666870 data_segs_out:67058676 data_segs_in:35833689 send 5669.5Mbps pacing_rate 6801.4Mbps delivery_rate 627.4Mbps app_limited busy:1340536ms rwnd_limited:400372ms(29.9%) sndbuf_limited:433724ms(32.4%) unacked:70 retrans:0/5 rcv_rtt:1.308 rcv_space:336692 rcv_ssthresh:2095692 notsent:6638 minrtt:0.097

netstat -nat also shows some information about the current TCP sockets, such as Recv-Q and Send-Q.

(base) niuxinli@ubuntu:~$ netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:20880 0.0.0.0:* LISTEN
tcp 0 36 192.168.1.7:22 192.168.1.4:58931 ESTABLISHED
tcp 0 0 192.168.1.7:36666 192.168.1.7:2181 ESTABLISHED
tcp 0 65160 192.168.1.7:20880 192.168.1.5:60760 ESTABLISHED

The man page gives the specific meanings of Recv-Q and Send-Q:

Recv-Q  Established: The count of bytes not copied by the user program connected to this socket.
Send-Q  Established: The count of bytes not acknowledged by the remote host.

Recv-Q is data that has arrived in the receive buffer but has not yet been read by application code. Send-Q is data in the send buffer that the peer has not yet acknowledged. Neither normally accumulates; if either piles up, something is probably wrong.
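During a test, a quick way to see whether either queue is piling up is to poll the sockets for the Dubbo port (20880 here):

watch -n 1 'netstat -nat | grep 20880'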

Experiment 1: a single connection, varying the TCP buffer

Results:

Enlarging the buffer further

We use the netstat or ss command to look at the socket. The second column of the output is Send-Q, the data written to the send buffer but not yet acknowledged by the peer. With the send buffer capped at about 64 KB, the buffer is not large enough.

Raising the buffer further, to 4 MB, the response time drops again, but a lot of time is still spent in transmission, since the server's application layer is under no pressure.

The TCP state on the server and the client is as follows; none of the buffers is full.

[Figure: server-side socket state]

[Figure: client socket state]

At this point, enlarging the TCP buffer further is useless, because the bottleneck is no longer there: it is the number of connections. In Dubbo, a connection is bound to one NioWorker thread, and all reading and writing on it is done by that single thread. The transfer rate exceeds what one thread can read and write, so on the client side we see large amounts of data sitting unread in the receive buffer, which in turn slows the sender down.

Experiment 2: more connections, fixed buffer

The server-side pure business response time is very stable. With a small buffer, adding connections brings the time down, but not to the optimum, so the buffer must not be too small either. Linux generally defaults to 4 MB; at 4 MB, 4 connections are basically enough to minimize the response time.

Conclusion

To make full use of network bandwidth, the buffer must not be too small: if a single transfer exceeds the buffer, transmission efficiency suffers badly. But an over-large buffer does not help either; once a single connection's IO thread becomes the bottleneck, more connections are needed to make full use of CPU resources, and the connection count should at least exceed the number of CPU cores.

