[Annual-salary-of-600K skills] After working for five years, do you really understand Netty and why we use it? (in-depth)

Learn architecture from mic 2021-11-25 18:21:47

Let's look at the picture below. When a client initiates an HTTP request, what is the processing flow on the server side?

image-20211109084258499

In short, it can be divided into the following steps:

  1. Establish network communication based on the TCP protocol.
  2. The client starts transmitting data to the server.
  3. The server receives and parses the data, then processes the request logic.
  4. After processing, the server returns the result to the client.

This process involves network IO communication. In the traditional BIO model, after the client sends a read request to the server, it is stuck until the server returns data; only then is the interaction complete. This is called synchronous blocking IO. If you want any concurrency in the BIO model, you can only use the multithreading model, that is, one thread per request, so that a single client cannot occupy the server and prevent other connections from being served.

Synchronous blocking IO has two main blocking points:

  • the server blocks while waiting to accept a client connection;
  • during IO communication between client and server, the call blocks while the data is not ready.
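
A minimal sketch of this traditional BIO model (port and class name are illustrative): accept() and read() both block, and the usual workaround is one thread per connection.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServerSketch {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(8080);
        while (true) {
            Socket socket = serverSocket.accept();          // blocking point 1: wait for a connection
            new Thread(() -> {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                     PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) { // blocking point 2: wait for data
                        out.println("echo: " + line);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();                                      // one thread per connection
        }
    }
}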

image-20210811170350557

This traditional BIO model causes a serious problem. As shown in the figure below, if N clients initiate requests at the same time, the BIO model means the server can only process one request at a time, so client requests queue up. The impact is that users wait a very long time for a request to be processed and returned; in other words, the server has no concurrent processing capability, which is clearly unacceptable.

image-20211109084710182

So, how should the server be optimized?

Non-blocking IO

The previous analysis shows that while the server is handling one request, it is blocked and cannot process subsequent requests. Can the blocking points be made non-blocking? This is where non-blocking IO (NIO) comes in.

With non-blocking IO, when the client sends a request and the data on the server is not yet ready, the request does not block; the call simply returns immediately. But since the server-side data may not be ready, what the client receives may be empty. How does the client eventually get the data?

As shown in the figure, the client can only obtain the result by polling. Compared with BIO, removing the blocking noticeably improves performance and the number of connections that can be handled.

image-20210708165359843

NIO still has a drawback: the polling loop performs many empty polls, and each poll is a system call (a kernel instruction to load data from the network card buffer, which requires switching from user space to kernel space). As the number of connections grows, this causes performance problems.
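
A minimal sketch of this polling style (illustrative only): with configureBlocking(false), accept() and read() return immediately, so the loop mostly performs empty polls, each of which is a system call.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class NioPollingSketch {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                 // accept() no longer blocks
        List<SocketChannel> clients = new ArrayList<>();
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {                                   // busy polling: most iterations find nothing ready
            SocketChannel client = server.accept();      // returns null when no connection is pending
            if (client != null) {
                client.configureBlocking(false);
                clients.add(client);
            }
            Iterator<SocketChannel> it = clients.iterator();
            while (it.hasNext()) {
                SocketChannel c = it.next();
                buffer.clear();
                int n = c.read(buffer);                  // returns 0 when no data is ready; every call is a system call
                if (n > 0) {
                    buffer.flip();
                    System.out.println("read " + n + " bytes");
                } else if (n < 0) {                      // peer closed the connection
                    c.close();
                    it.remove();
                }
            }
        }
    }
}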

Multiplexing mechanism

The essence of I/O multiplexing is a mechanism (the system kernel buffers the I/O data) that allows a single process to monitor multiple file descriptors; once a descriptor is ready (generally read-ready or write-ready), the program is notified so it can perform the corresponding read or write.

What is an fd? In Linux, the kernel treats all external devices as files. Reading or writing a file goes through system calls provided by the kernel, which return an fd (file descriptor). A socket also has a corresponding file descriptor for reading and writing, called a socketfd.

Common IO multiplexing mechanisms include select, poll and epoll, all IO multiplexing interfaces provided by the Linux API. Let's focus on the select and epoll models.

  • select: a process can pass one or more fds to the select system call and then block in select, which detects whether any of the fds are ready. This model has two disadvantages:

    • Because it listens to many file descriptors at once (say 1000), when one fd becomes ready the process still has to linearly scan all of them; the more fds being monitored, the greater the overhead.
    • At the same time, select limits the number of fds a single process can open, 1024 by default, which is far too few for a single machine that needs to support tens of thousands of TCP connections.
  • epoll: Linux also provides the epoll system call. epoll is event-driven instead of based on sequential scanning, so its performance is higher. When one of the monitored fds becomes ready, epoll tells the current process exactly which fd it is, so the process only needs to read from that fd. In addition, the number of fds epoll can support is the operating system's maximum number of file handles, which is far larger than 1024.

(Because epoll can tell the application process through events which fd is readable, this kind of IO is also called asynchronous non-blocking IO. Strictly speaking it is pseudo-asynchronous, because the data still has to be copied synchronously from the kernel to user space. With truly asynchronous non-blocking IO, the data would already be fully ready and the application would only need to read it from user space.)

The advantage of I/O multiplexing is that multiple I/O blocking points are multiplexed onto a single select blocking point, so the system can handle multiple client requests at the same time with a single thread. Its biggest benefit is low system overhead: no new processes or threads need to be created, which reduces resource consumption. The overall idea is shown in the figure below.

After the client connects to the server, while the client is transmitting data, the server registers the request on the Selector (multiplexer) to avoid blocking in read. The server then does not need to wait; it only needs one thread that blocks in selector.select(), polling the channels that are ready on the multiplexer. In other words, once a client finishes transmitting data, select() returns the ready channel and the server performs the corresponding processing.

image-20210708203509498
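
A minimal sketch of this selector-based flow using the JDK NIO API (port and buffer size are illustrative): channels are registered on one Selector and a single thread blocks in select() until something is ready.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioSelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);   // register the listening channel on the multiplexer
        while (true) {
            selector.select();                               // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);                // data is already ready, the read does not block
                    if (n < 0) {
                        client.close();
                    }
                }
            }
        }
    }
}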

Asynchronous IO

The biggest difference between asynchronous IO and multiplexing is this: when the data is ready, the client does not need to issue a kernel instruction to read the data from kernel space; instead, the system asynchronously copies the data directly into user space, and the application simply uses it.

image-20210811172034569

<center>Figure 2-4 Asynchronous IO</center>

In Java, we can use the NIO API to implement the multiplexing mechanism and achieve pseudo-asynchronous IO. The article "Analysis of network communication evolution model" demonstrates the Java API code for multiplexing; the code turns out to be not only verbose but also cumbersome to use.

This is where Netty comes in. Netty's I/O model is based on non-blocking IO, and under the hood it relies on the JDK NIO framework's multiplexer, Selector.

A single Selector can poll multiple Channels. With the epoll model, one thread responsible for polling the Selector can serve thousands of client connections.

Reactor Model

http://gee.cs.oswego.edu/dl/c...

Now that we understand NIO multiplexing, it is worth talking about Reactor, a high-performance multiplexing I/O design pattern. Reactor is essentially a multiplexing IO design pattern built on NIO. Its core idea is to separate the handling of IO events from business processing: one or more threads handle the IO events, and the ready events are then dispatched to business handlers, which process them asynchronously and without blocking, as shown in figure 2-5.

The Reactor model has three important components:

  • Reactor: dispatches I/O events to the corresponding Handler
  • Acceptor: handles client connection requests
  • Handlers: perform non-blocking reads/writes

image-20210708212057895

<center>Figure 2-5 Reactor model</center>

This is the most basic single-Reactor single-thread model (all I/O operations are completed by the same thread).

The Reactor thread is responsible for demultiplexing the sockets: when a new connection arrives and triggers a connect event, it is handed to the Acceptor; IO read/write events are handed to the handler.

The Acceptor's main job is to build the handler. After obtaining the SocketChannel associated with the client, it binds it to the corresponding handler. When a read/write event occurs on that SocketChannel, the Reactor dispatches it and the handler processes it (all IO events are registered on the selector and dispatched by the Reactor).

In essence, the Reactor pattern refers to the combination of I/O multiplexing and non-blocking I/O.
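
The next section refers to "the example code" and a DispatchHandler; that listing is not reproduced in this article, so here is a minimal single-thread Reactor sketch in the same spirit (class names such as SingleReactorSketch and DispatchHandler are illustrative assumptions, not the author's original code).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Illustrative single-thread Reactor: the same thread demultiplexes events,
// the Acceptor handles OP_ACCEPT, and DispatchHandler handles OP_READ.
public class SingleReactorSketch implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel serverChannel;

    public SingleReactorSketch(int port) throws IOException {
        selector = Selector.open();
        serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        // attach the Acceptor so the dispatch loop does not need to know about connection handling
        serverChannel.register(selector, SelectionKey.OP_ACCEPT, new Acceptor());
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    dispatch(it.next());   // hand the ready key to whatever handler is attached
                    it.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey key) {
        Runnable handler = (Runnable) key.attachment();
        if (handler != null) {
            handler.run();   // runs in the Reactor thread: a slow handler blocks everything
        }
    }

    // Acceptor: accepts the connection and binds a DispatchHandler to the new channel
    private class Acceptor implements Runnable {
        @Override
        public void run() {
            try {
                SocketChannel channel = serverChannel.accept();
                if (channel != null) {
                    new DispatchHandler(selector, channel);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    // Handler: reads the request and echoes it back, still on the Reactor thread
    private static class DispatchHandler implements Runnable {
        private final SocketChannel channel;

        DispatchHandler(Selector selector, SocketChannel channel) throws IOException {
            this.channel = channel;
            channel.configureBlocking(false);
            channel.register(selector, SelectionKey.OP_READ, this);
        }

        @Override
        public void run() {
            try {
                ByteBuffer buffer = ByteBuffer.allocate(1024);
                int n = channel.read(buffer);
                if (n < 0) {
                    channel.close();
                    return;
                }
                buffer.flip();
                channel.write(buffer);   // echo back what was read
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Thread(new SingleReactorSketch(8080)).start();
    }
}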

Single-Reactor multithreaded model

The single-thread Reactor implementation has drawbacks. As the example code shows, the handlers execute serially: if one handler blocks its thread, all other business processing is blocked. Because the handler and the reactor run in the same thread, new requests cannot be accepted either. A small experiment shows it:

  • In the Reactor code above, add a Thread.sleep() in DispatchHandler's run method.
  • Open several client windows and connect to the Reactor server. After one window sends a message it is blocked; when another window then sends a message, the later request cannot be processed because the earlier one is still blocking.

To solve this problem, the idea of handling the business logic with multiple threads was proposed: add a thread pool at the business-processing step and process it asynchronously, so the reactor and the handlers execute on different threads, as shown in figure 2-6.

image-20210709154534593

<center>Figure 2-6</center>
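
A minimal sketch of that change, assuming a hypothetical handler class name: the Reactor thread only submits the business work to a pool and returns immediately.

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative: decode on the Reactor thread, run the business logic on a worker pool.
public class AsyncDispatchHandler {
    private static final ExecutorService BUSINESS_POOL = Executors.newFixedThreadPool(8);

    public void onRead(SocketChannel channel, ByteBuffer data) {
        // the Reactor thread only submits the task, so it is free to keep dispatching events
        BUSINESS_POOL.submit(() -> {
            process(data);   // potentially slow business logic runs off the Reactor thread
            // writing the response back (omitted) can be handed back to the Reactor
        });
    }

    private void process(ByteBuffer data) {
        // simulate slow business processing
    }
}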

Multi-Reactor multithreaded model

In the single-Reactor multithreaded model, all I/O operations are still completed by one Reactor, which runs in a single thread and has to handle accept(), read(), write() and connect operations. For small-capacity scenarios the impact is minor, but in high-load scenarios with heavy concurrency or large data volumes it easily becomes a bottleneck, mainly because:

  • one NIO thread handling hundreds of links at the same time cannot keep up; even if the NIO thread's CPU load reaches 100%, it cannot satisfy the reading and sending of massive messages;
  • once the NIO thread is overloaded, processing slows down, which leads to a large number of client connection timeouts; timeouts usually trigger retransmissions, which increase the NIO thread's load further, eventually causing a large backlog of messages and processing timeouts and making the thread the system's performance bottleneck.

Therefore we can optimize further and introduce the multi-Reactor multithreaded model. As shown in figure 2-7, the Main Reactor is responsible for receiving connection requests from clients and then passes the accepted connections to a SubReactor (there can be multiple SubReactors); the concrete business IO is handled by the SubReactor.

The multi-Reactor pattern can also be equated with the Master-Workers pattern; Nginx, Memcached and similar projects adopt this multithreaded model. The implementation details differ slightly between projects, but the overall pattern is consistent.

image-20210709162516832

<center>Figure 2-7</center>

  • Acceptor, the request receiver. In practice its role is similar to a server: it is not really responsible for establishing the connection, it only delegates the request to the Main Reactor thread pool and acts as a forwarder.
  • Main Reactor, the main Reactor thread group, mainly responsible for connection events; it forwards IO read/write requests to the SubReactor thread pool.
  • Sub Reactor. After the Main Reactor accepts a client connection, it usually hands the channel's reads and writes to one thread in the Sub Reactor thread pool (load balanced), which performs the actual data reads and writes. In NIO terms this usually means registering the channel's read (OP_READ) and write (OP_WRITE) events.

High-performance communication framework: Netty

In Java there are many network programming frameworks, such as Java NIO, Mina, Netty, Grizzly, etc. But most of the middleware you come across uses Netty.

That is because Netty is currently the most popular high-performance Java network programming framework, widely used in middleware, live streaming, social networking, gaming and other fields. Among well-known open-source middleware, Dubbo, RocketMQ, Elasticsearch, HBase and others are all implemented on top of Netty.

In actual development, 99% of the students in today's class will never use Netty directly for network programming, so why spend the effort explaining it? There are several reasons:

  • Interviews at many large companies involve related knowledge points, such as:

    • where Netty's high performance comes from
    • what Netty's important components are
    • the design of Netty's memory pool and object pool
  • A lot of middleware uses Netty for network communication, so understanding it makes reading the source code of that middleware much easier.
  • It rounds out your Java knowledge system and makes your technical foundation as comprehensive as possible.

Why choose Netty

Netty is in fact a high-performance NIO framework: it is a layer of packaging on top of NIO that essentially provides high-performance network IO communication. Since we analyzed network communication in detail in the previous sections, learning Netty should now be easier.

Netty provides support for all three Reactor models described above, and we can use Netty's encapsulated API to quickly build servers for the different Reactor models. That is one reason why everyone chooses Netty. Besides that, compared with the native NIO API, Netty has the following characteristics:

  • It provides an efficient I/O model, threading model and event processing mechanism.
  • It provides a very simple and easy-to-use API; compared with NIO, it offers a higher level of encapsulation over Channel, Selector, Sockets, Buffers and so on, shielding the complexity of NIO.
  • It provides good support for data protocols and serialization.
  • Stability: Netty fixes quite a few JDK NIO problems, such as the bug where select spins and drives CPU consumption to 100%, TCP disconnection and reconnection, and keep-alive detection.
  • Extensibility is very good within this class of framework, for example the customizable thread model (the user can choose the Reactor model in the startup parameters) and the extensible event-driven model, separating business concerns from framework concerns.
  • Performance-level optimization: as a network communication framework it has to handle a large number of network requests, which inevitably means many network objects being created and destroyed; this is unfriendly to the JVM's GC. To reduce the collection pressure on the JVM, two optimization mechanisms are introduced (a small sketch follows this list):

    • object pool reuse
    • zero-copy technology
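
As a hedged sketch of these two mechanisms using Netty's public ByteBuf API (the values and the flow are illustrative):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.Unpooled;

public class BufferOptimizationSketch {
    public static void main(String[] args) {
        // Buffer pooling: the pooled allocator reuses memory chunks instead of
        // allocating and garbage-collecting a fresh buffer for every request.
        ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(1024);
        pooled.writeBytes("hello".getBytes());
        pooled.release();   // returns the buffer to the pool

        // Zero copy (Netty's user-space flavour): wrapping and composing existing
        // memory regions avoids copying the bytes into a new buffer.
        ByteBuf header = Unpooled.wrappedBuffer("HEADER".getBytes());
        ByteBuf body = Unpooled.wrappedBuffer("BODY".getBytes());
        CompositeByteBuf message = Unpooled.compositeBuffer();
        message.addComponents(true, header, body);  // logical view over the two buffers, no copy
        System.out.println(message.readableBytes());
    }
}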

Introduction to the Netty ecosystem

First, we need to understand what functionality Netty provides. Figure 2-1 shows the functionality offered by the Netty ecosystem; these features will be analyzed step by step in the following content.

image-20210811151520387

<center>Figure 2-1 The Netty functional ecosystem</center>

Basic use of Netty

It should be noted that the Netty version we discuss is 4.x. Netty once released a 5.x version, but it was officially abandoned, because using ForkJoinPool added complexity without showing a clear performance advantage, and keeping all the branches in sync at the same time was a lot of unnecessary work.

Adding the jar dependency

We use version 4.1.66.

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.66.Final</version>
</dependency>

Creating the Netty server

In most scenarios we use the master-slave multithreaded Reactor model: the Boss thread group is the main Reactor and the Worker group is the sub Reactor, and they use different NioEventLoopGroups.

The main Reactor is responsible for handling Accept and then registers the Channel on the sub Reactor; the sub Reactor is responsible for all I/O events during the Channel's life cycle.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyBasicServerExample {

    public void bind(int port) {
        // We create two EventLoopGroups:
        // one is the boss, dedicated to receiving connections (it handles the accept event),
        // the other is the worker, which handles everything except accept, i.e. the subtasks.
        // The boss group generally needs only one thread; configuring more will still use only one
        // in most application scenarios. The worker thread count is usually tuned to the server;
        // the default is twice the number of CPU cores.
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // To start the server we need a ServerBootstrap,
            // in which Netty encapsulates the NIO boilerplate code.
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup) // configure the boss and worker threads
                    // configure the server channel, equivalent to ServerSocketChannel in NIO
                    .channel(NioServerSocketChannel.class)
                    // childHandler configures the processors for the worker threads:
                    // it initializes each accepted channel, i.e. assigns the handlers that
                    // process the requests received from the client
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            // add the handler that processes the IO events
                            socketChannel.pipeline().addLast(new NormalMessageHandler());
                        }
                    });
            // Netty is asynchronous and non-blocking by default, so bind() returns a future;
            // sync() blocks until the bind (the whole startup process) has completed.
            ChannelFuture channelFuture = bootstrap.bind(port).sync();
            System.out.println("Netty Server Started, Listening on: " + port);
            // wait until the server channel is closed
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // release the thread resources
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        new NettyBasicServerExample().bind(8080);
    }
}

The above code is explained as follows:

  • EventLoopGroup defines a thread group, equivalent to the threads we defined when writing NIO code before. Two groups are defined here, the boss threads and the worker threads: the boss threads receive connections and the worker threads handle IO events. The boss group generally needs only one thread (configuring more will still use only one in most scenarios), while the worker thread count is usually tuned to the server; the default is twice the number of CPU cores.
  • ServerBootstrap is needed to start the server; in it Netty encapsulates the NIO boilerplate code.
  • ChannelOption.SO_BACKLOG sets the maximum queue length for connection requests that have been received but not yet accepted; see the snippet after this list.
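
For example, options can be set on the bootstrap like this (the values here are illustrative):

bootstrap.option(ChannelOption.SO_BACKLOG, 1024)         // queue length for pending connections on the server channel
         .childOption(ChannelOption.TCP_NODELAY, true);  // childOption applies to the accepted client channels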

Setting the Channel type

The NIO model is the most mature and widely used model in Netty, so when using Netty we adopt NioServerSocketChannel as the Channel type.

bootstrap.channel(NioServerSocketChannel.class);

Besides NioServerSocketChannel, Netty also provides:

  • EpollServerSocketChannel. The epoll model is only supported on Linux kernel 2.6 and above; it is not supported on Windows or macOS, and configuring Epoll on Windows will throw an error at runtime (a small swap sketch follows this list).
  • OioServerSocketChannel, used for the server to accept TCP connections with the old blocking IO.
  • KQueueServerSocketChannel, the kqueue model. kqueue is a highly efficient IO multiplexing technique on Unix systems; common IO multiplexing techniques include select, poll, epoll and kqueue, where epoll is Linux-only and kqueue exists on many UNIX systems.
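
As a sketch, and assuming the netty-transport-native-epoll dependency is on the classpath, switching to the native epoll transport on Linux only means swapping the group and channel classes:

// Only on Linux, with netty-transport-native-epoll available.
// EpollEventLoopGroup and EpollServerSocketChannel come from io.netty.channel.epoll.
EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
EventLoopGroup workerGroup = new EpollEventLoopGroup();
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
         .channel(EpollServerSocketChannel.class);   // instead of NioServerSocketChannel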

Registering a ChannelHandler

In Netty, multiple ChannelHandlers can be registered through the ChannelPipeline. A handler is the processor executed by the worker thread: when an IO event is ready, the handlers configured here are invoked.

You can register multiple ChannelHandlers, each with its own responsibility, such as an encoding/decoding handler, a heartbeat handler, a message-processing handler and so on. This maximizes code reuse.

.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel socketChannel) throws Exception {
        socketChannel.pipeline().addLast(new NormalMessageHandler());
    }
});

The childHandler method of ServerBootstrap needs a ChannelHandler; here we pass an implementation of ChannelInitializer, which is instantiated to configure and initialize the Channel.

When an IO event is received, the data propagates through the configured handlers. The code above configures a single NormalMessageHandler, which receives client messages and prints them.
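
For instance, a pipeline that combines a heartbeat detector, string codecs and a business handler could look like the following sketch (the handler mix is illustrative, and BusinessStringHandler is a hypothetical handler that would receive the decoded String messages):

// IdleStateHandler is from io.netty.handler.timeout,
// StringDecoder/StringEncoder from io.netty.handler.codec.string,
// CharsetUtil from io.netty.util.
socketChannel.pipeline()
        .addLast(new IdleStateHandler(60, 0, 0))        // heartbeat: fire an idle event after 60s without reads
        .addLast(new StringDecoder(CharsetUtil.UTF_8))  // inbound: bytes -> String
        .addLast(new StringEncoder(CharsetUtil.UTF_8))  // outbound: String -> bytes
        .addLast(new BusinessStringHandler());          // hypothetical business handler working on Strings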

Binding the port

After completing Netty's basic configuration, the bind() method actually triggers the startup, and sync() blocks until the whole startup process has completed.

ChannelFuture channelFuture=bootstrap.bind(port).sync();

NormalMessageHandler

NormalMessageHandler extends ChannelInboundHandlerAdapter, one of Netty's event handlers. Netty's handlers are divided into Inbound (incoming) and Outbound (outgoing) handlers; more on that later.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class NormalMessageHandler extends ChannelInboundHandlerAdapter {

    // channelReadComplete is called when the current batch of messages has been fully read;
    // writeAndFlush writes and actually sends the buffered responses.
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // All messages have been read, so flush the responses back to the client in one go.
        // Unpooled.EMPTY_BUFFER is an empty message; addListener(ChannelFutureListener.CLOSE)
        // closes the connection once the write has completed.
        ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
    }

    // exceptionCaught handles exceptions.
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }

    // channelRead defines what to do with a message once it has been read; here we print it.
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in = (ByteBuf) msg;
        byte[] req = new byte[in.readableBytes()];
        in.readBytes(req); // read the data into a byte array
        String body = new String(req, "UTF-8");
        System.out.println("The server received a message: " + body);
        // write the response back
        ByteBuf resp = Unpooled.copiedBuffer(("receive message:" + body).getBytes());
        ctx.write(resp);
        // ctx.write only puts the message into the outbound buffer, it is not sent yet;
        // the flush (done above in channelReadComplete) actually writes it to the network.
    }
}

From the code above you can see that only a little code is needed to build an NIO server. Compared with a server written against the traditional native NIO class library, both the amount of code and the development difficulty are greatly reduced.

Correspondence between Netty and the NIO API

TransportChannel: corresponds to the Channel in NIO.

EventLoop: corresponds to the while loop in NIO.

EventLoopGroup: a group of EventLoops, i.e. the event loops.

ChannelHandler and ChannelPipeline: correspond to the custom handleRead/handleWrite logic in NIO (interceptor pattern).

ByteBuf: corresponds to the ByteBuffer in NIO.

Bootstrap and ServerBootstrap: correspond to NIO's Selector, ServerSocketChannel and other creation, configuration and startup code.

Netty's overall working mechanism

Netty's overall working mechanism is as follows. The overall design is the multithreaded Reactor model discussed earlier: request listening and request processing are separated, and the concrete handler tasks are executed by multiple threads.

image-20210812181454154

<center>Figure 2-2</center>

Network communication layer

The main responsibility of the network communication layer is to perform the network IO operations; it supports the link operations of a variety of network protocols and I/O models. When network data is read into the kernel buffer, a read/write event is triggered, and these events are distributed to the event scheduler for processing.

In Netty, the core components of the network communication layer are the following three:

  • Bootstrap, the client-side startup API, used to connect to a remote Netty server; it binds only one EventLoopGroup (see the client sketch after this list).
  • ServerBootstrap, the server-side listening API, used to listen on a specified port; it binds two EventLoopGroups. These bootstrap components make starting a Netty application very easy and fast.
  • Channel, the carrier of network communication. Netty's own Channel implementations are built on the JDK NIO Channel; they provide a higher level of abstraction, shield the complexity of the underlying Socket, and give the Channel more powerful functionality.
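
As a minimal client-side sketch (host, port and ClientMessageHandler are illustrative assumptions), the Bootstrap binds a single group and calls connect instead of bind:

// Bootstrap is from io.netty.bootstrap, NioSocketChannel from io.netty.channel.socket.nio.
EventLoopGroup group = new NioEventLoopGroup();
try {
    Bootstrap bootstrap = new Bootstrap();                 // client-side starter: only one EventLoopGroup
    bootstrap.group(group)
             .channel(NioSocketChannel.class)
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ClientMessageHandler());  // hypothetical client handler
                 }
             });
    ChannelFuture future = bootstrap.connect("127.0.0.1", 8080).sync();  // connect instead of bind
    future.channel().closeFuture().sync();
} catch (InterruptedException e) {
    e.printStackTrace();
} finally {
    group.shutdownGracefully();
}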

Figure 2-3 shows the class diagram of the common Channel implementations. AbstractChannel is the base class of the whole Channel hierarchy; from it derive AbstractNioChannel (non-blocking IO) and AbstractOioChannel (blocking IO), and each subclass represents a different I/O model and protocol type.

image-20210812213408836

<center>Figure 2-3 Channel class diagram</center>

As connections and data change, a Channel moves through several states, such as connection established, connection registered, connection read/write and connection destroyed. As the state changes, the Channel also moves through different life-cycle phases, and each state is bound to a corresponding event callback. The common event callback methods are listed below; a small logging sketch follows the list.

  • channelRegistered: the channel has been created and registered on an EventLoop
  • channelUnregistered: the channel was created but not registered, or has been deregistered from its EventLoop
  • channelActive: the channel is ready and can be read and written
  • channelInactive: the channel is no longer active
  • channelRead: the channel can read data from the peer
  • channelReadComplete: the channel has finished reading data
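
A small sketch, assuming we only want to log these callbacks: a ChannelInboundHandlerAdapter can override the life-cycle methods listed above.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative: log the Channel life-cycle callbacks listed above.
public class LifecycleLoggingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelRegistered: bound to an EventLoop");
        super.channelRegistered(ctx);
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelActive: ready to read and write");
        super.channelActive(ctx);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelInactive: connection closed");
        super.channelInactive(ctx);
    }

    @Override
    public void channelUnregistered(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelUnregistered: removed from the EventLoop");
        super.channelUnregistered(ctx);
    }
}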

To summarize: Bootstrap and ServerBootstrap start the client and the server respectively, and Channel is the carrier of network communication, providing the ability to interact with the underlying Socket.

When events change during the Channel's life cycle, further processing needs to be triggered; this is done by Netty's event scheduler.

Event scheduler

The event scheduler aggregates all kinds of events through the Reactor thread model. A main loop thread built around a Selector integrates multiple event types (I/O events, signal events); when these events are triggered, the concrete processing is handed to the relevant Handler in the service orchestration layer.

The core components of the event scheduler are:

  • EventLoopGroup, which is roughly a thread pool
  • EventLoop, which is roughly a thread in that pool

EventLoopGroup is essentially a thread pool, mainly responsible for receiving I/O requests and assigning threads to process them. To better understand the relationship between EventLoopGroup, EventLoop and Channel, look at the flow shown in figure 2-4.

image-20210812220244801

<center>Figure 2-4 How EventLoop works</center>

From the figure we can see that:

  • one EventLoopGroup can contain multiple EventLoops; an EventLoop handles all the I/O events in a Channel's life cycle, such as accept, connect, read and write;
  • an EventLoop is bound to exactly one thread, and each EventLoop is responsible for multiple Channels;
  • every time a new Channel is created, the EventLoopGroup selects an EventLoop and binds it to the Channel; during its life cycle the Channel can bind to and unbind from an EventLoop multiple times.

Figure 2-5 shows the class diagram of EventLoopGroup. You can see that Netty provides multiple implementations of EventLoop and EventLoopGroup, such as NioEventLoop, EpollEventLoop, NioEventLoopGroup, etc.

As the diagram shows, EventLoop is a sub-interface of EventLoopGroup; we can treat an EventLoop as an EventLoopGroup, provided that the EventLoopGroup contains only one EventLoop.

<img src="https://mic-blob-bucket.oss-cn-beijing.aliyuncs.com/202111090024225.png" alt="image-20210812221329760" style="zoom:80%;" />

<center>Figure 2-5 EventLoopGroup class diagram</center>

EventLoopGroup is Netty's core processing engine. How does it relate to the Reactor thread model explained earlier? Simply put, we can regard EventLoopGroup as the concrete implementation of the Reactor thread model in Netty, and by configuring different EventLoopGroups we make Netty support the different Reactor models (a sketch of the three configurations follows the list below).

  • Single-thread model: the EventLoopGroup contains only one EventLoop, and Boss and Worker use the same EventLoopGroup.
  • Multi-thread model: the EventLoopGroup contains multiple EventLoops, and Boss and Worker still use the same EventLoopGroup.
  • Master-slave multithreaded model: the EventLoopGroups contain multiple EventLoops; Boss is the main Reactor and Worker is the sub Reactor, and they use different EventLoopGroups. The main Reactor is responsible for creating the Channel for a new network connection (the connect event); after accepting a client connection, it hands the Channel to the sub Reactor for processing.
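
A hedged sketch of the three configurations (thread counts are illustrative):

// Single-thread model: one EventLoop shared by Boss and Worker.
EventLoopGroup single = new NioEventLoopGroup(1);
ServerBootstrap singleThreaded = new ServerBootstrap().group(single);

// Multi-thread model: one group with several EventLoops, still shared by Boss and Worker.
EventLoopGroup shared = new NioEventLoopGroup(8);
ServerBootstrap multiThreaded = new ServerBootstrap().group(shared);

// Master-slave multithreaded model: separate groups, the Boss accepts connections,
// the Workers handle the IO of the accepted channels.
EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup worker = new NioEventLoopGroup();
ServerBootstrap masterSlave = new ServerBootstrap().group(boss, worker);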

Service orchestration layer

The service orchestration layer is responsible for assembling the various services. Simply put, after an I/O event is triggered there has to be a Handler to process it, and the service orchestration layer uses a chain of Handlers to dynamically orchestrate network events and propagate them in order.

It contains three components:

  • ChannelPipeline, which uses a doubly linked list to chain multiple ChannelHandlers together. When an I/O event is triggered, the ChannelPipeline calls the assembled ChannelHandlers one by one to process the Channel's data. ChannelPipeline is thread-safe, because each new Channel is bound to a new ChannelPipeline; a ChannelPipeline is associated with one EventLoop, and an EventLoop is bound to only one thread, as shown in figure 2-6, which depicts the ChannelPipeline structure.

    <img src="https://mic-blob-bucket.oss-cn-beijing.aliyuncs.com/202111090024172.png" alt="image-20210812223234507" style="zoom: 50%;" />

    <center>Figure 2-6 ChannelPipeline</center>

    As the diagram shows, a ChannelPipeline contains inbound ChannelInboundHandlers and outbound ChannelOutboundHandlers; the former receive data and the latter write data, roughly like InputStream and OutputStream. For a better understanding, look at figure 2-7.

image-20210812224219710

<center>Figure 2-7 The relationship between InBound and OutBound</center>

  • ChannelHandler, the processor for IO data: after data is received, it is processed by the assigned Handler.
  • ChannelHandlerContext. ChannelHandlerContext holds the context of a ChannelHandler; in other words, when an event is triggered, the data passed between multiple handlers is delivered through the ChannelHandlerContext. The relationship between ChannelHandler and ChannelHandlerContext is shown in figure 2-8.

    Each ChannelHandler corresponds to its own ChannelHandlerContext, which retains the context information the ChannelHandler needs; data transfer between multiple ChannelHandlers is realized through the ChannelHandlerContext.

image-20210812230122911

<center>Figure 2-8 Relationship between ChannelHandler and ChannelHandlerContext</center>

That concludes the introduction to the characteristics and working mechanisms of Netty's core components; they will be analyzed in detail in later content. You can see that the layered design of Netty's architecture is very reasonable: it shields the underlying NIO and the framework-layer implementation details, so business developers only need to care about orchestrating and implementing the business logic.

Summary of component relationships and working principles

Figure 2-9 shows how Netty's key components cooperate; the working mechanism is described below.

  • When the server starts, it initializes the Boss and Worker thread groups. The Boss group listens for network connection events; when a new connection is established, the Boss thread registers and binds the connection's Channel to a Worker thread.
  • The Worker thread group assigns an EventLoop to handle the read/write events of that Channel; each EventLoop is equivalent to one thread and performs the event-loop listening through a Selector.
  • When a client initiates an I/O event, the server-side EventLoop dispatches the ready Channel to the Pipeline for data processing.
  • Once the data reaches the ChannelPipeline, it is processed starting from the first ChannelInboundHandler and passed along the pipeline chain one handler at a time.
  • After the server finishes processing, it writes the data back to the client; the outgoing data propagates through the chain of ChannelOutboundHandlers and finally reaches the client.

image-20210814151504091

<center>Figure 2-9 How Netty's components work together</center>

Details of Netty's core components

Section 2.5 gave an overall understanding of Netty; next we describe these components in much more detail to deepen that understanding.

The starters Bootstrap and ServerBootstrap are the entry points for building a Netty client and server; they are the first step in writing a Netty network program and allow us to assemble Netty's core components like building blocks. In building a Netty server, we need to focus on three important steps:

  • configuring the thread pool
  • Channel initialization
  • building the Handler processors
Copyright notice: unless otherwise stated, all articles in this blog are licensed under CC BY-NC-SA 4.0. Please attribute quotes to "Mic takes you to learn architecture".
If this article is helpful to you, please follow and like; your support is the driving force of my continued writing. Follow the WeChat official account for more technical content!
