Netty source code analysis -- the implementation principle of PoolChunk

Waste 2020-11-07 20:55:43

This article mainly shares how PoolChunk manages memory in Netty.
The source code analysis is based on Netty 4.1.52.

Memory management algorithm

First, let's look at how PoolChunk organizes its memory.
The default memory size of a PoolChunk is 16M, which Netty divides into 2048 pages of 8K each.
PoolChunk can allocate both Small and Normal memory blocks; the size of a Normal memory block must be a multiple of the page size.

PoolChunk manages its memory blocks through the runsAvail field.
runsAvail is an array of PriorityQueue<Long>, where each PriorityQueue stores handles.
A handle can be understood as a descriptor that maintains the information of one memory block (a run). It is a 64-bit long composed of the following parts:

  • o: runOffset, the page offset index of the run within the chunk, starting from 0, 15 bits
  • s: size, the number of pages allocatable at this position, 15 bits
  • u: isUsed, whether the run is in use, 1 bit
  • e: isSubpage, whether the run is used as a subpage, 1 bit
  • b: bitmapIdx, the index of the memory block within the subpage, 0 if not in a subpage, 32 bits
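The bit layout above can be packed and unpacked as in the following minimal sketch. This is an illustration, not Netty's actual class; the shift constants mirror the field widths listed above (bitmapIdx occupies the low 32 bits, then isSubpage, isUsed, size, and runOffset).

```java
// A minimal sketch of the 64-bit handle layout, not Netty's actual class.
// From low to high bits: bitmapIdx (32), isSubpage (1), isUsed (1),
// size (15), runOffset (15).
public final class HandleSketch {
    static final int IS_SUBPAGE_SHIFT = 32;
    static final int IS_USED_SHIFT = 33;
    static final int SIZE_SHIFT = 34;
    static final int RUN_OFFSET_SHIFT = 49;

    // pack a run handle (bitmapIdx and isSubpage left at 0 for brevity)
    static long toHandle(int runOffset, int pages, boolean used) {
        return (long) runOffset << RUN_OFFSET_SHIFT
                | (long) pages << SIZE_SHIFT
                | (used ? 1L : 0L) << IS_USED_SHIFT;
    }

    static int runOffset(long handle) {
        return (int) (handle >>> RUN_OFFSET_SHIFT);
    }

    static int runPages(long handle) {
        return (int) (handle >>> SIZE_SHIFT & 0x7fff); // 15-bit mask
    }

    static boolean isUsed(long handle) {
        return (handle >>> IS_USED_SHIFT & 1) == 1;
    }
}
```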

The earlier article 《Memory alignment class SizeClasses》 mentioned that taking the rows of the sizeClasses table whose isMultiPageSize is 1 forms a new table, called the Page table here.

runsAvail has a default length of 40. A handle stored at a given index represents an available memory block whose allocatable page count is greater than or equal to the pageSize at (pageIdx = index) and less than the pageSize at (pageIdx = index + 1).
For example, a handle in runsAvail[11] may have a size of 16 to 19 pages.
If a handle at runsAvail[11] has a size of 18 pages and 7 pages are allocated from it, 11 pages remain, so the handle is moved to runsAvail[8] (with its information adjusted accordingly, of course).
Now, to allocate 6 pages, we scan the runsAvail array starting from runsAvail[5]; if runsAvail[5]~runsAvail[7] hold no handle, we find the one at runsAvail[8].
After allocating the 6 pages, 5 pages remain and the handle moves to runsAvail[4].
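The scan described above (start at the pageIdx of the request and walk upward until a non-empty queue is found) can be sketched like this. RunsAvailSketch and its method names are illustrative, not Netty's code:

```java
import java.util.PriorityQueue;

// A toy model of the runsAvail lookup, not Netty's code: starting from the
// pageIdx of the requested page count, walk up the array until some queue
// holds an available handle.
public final class RunsAvailSketch {

    static int runFirstBestFit(PriorityQueue<Long>[] runsAvail, int pageIdx) {
        for (int i = pageIdx; i < runsAvail.length; i++) {
            PriorityQueue<Long> queue = runsAvail[i];
            if (queue != null && !queue.isEmpty()) {
                return i; // first queue with a big-enough run
            }
        }
        return -1; // no run can satisfy the request
    }

    @SuppressWarnings("unchecked")
    static PriorityQueue<Long>[] newRunsAvail(int length) {
        PriorityQueue<Long>[] arr = new PriorityQueue[length];
        for (int i = 0; i < length; i++) {
            arr[i] = new PriorityQueue<Long>();
        }
        return arr;
    }
}
```

With the example above: a handle sitting in queue 8 is found when searching from index 5, because queues 5 through 7 are empty.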

Let's first look at the PoolChunk constructor:

PoolChunk(PoolArena<T> arena, T memory, int pageSize, int pageShifts, int chunkSize, int maxPageIdx, int offset) {
    // #1
    unpooled = false;
    this.arena = arena;
    this.memory = memory;
    this.pageSize = pageSize;
    this.pageShifts = pageShifts;
    this.chunkSize = chunkSize;
    this.offset = offset;
    freeBytes = chunkSize;

    runsAvail = newRunsAvailqueueArray(maxPageIdx);
    runsAvailMap = new IntObjectHashMap<Long>();
    subpages = new PoolSubpage[chunkSize >> pageShifts];

    // #2
    int pages = chunkSize >> pageShifts;
    long initHandle = (long) pages << SIZE_SHIFT;
    insertAvailRun(0, pages, initHandle);

    cachedNioBuffers = new ArrayDeque<ByteBuffer>(8);
}

unpooled: whether the chunk is unpooled
arena: the PoolArena this PoolChunk belongs to
memory: the underlying memory block. For heap memory it is a byte array; for direct memory it is a (JVM) ByteBuffer. Either way, the default size is 16M.
pageSize: the page size, 8K by default.
chunkSize: the memory size of the whole PoolChunk, 16777216 bytes (16M) by default.
offset: the alignment offset of the underlying memory, 0 by default.
runsAvail: the initialized runsAvail array
runsAvailMap: records the mapping from the start and end runOffset of each memory block to its handle.

#2 The insertAvailRun method inserts a handle into runsAvail. This handle represents a run at page offset 0 from which the entire 16M memory block can be allocated.
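As a quick sanity check of step #2, the sketch below (assuming SIZE_SHIFT = 34 and RUN_OFFSET_SHIFT = 49, per the handle layout above) shows that the initial handle encodes a free run of 2048 pages at offset 0, i.e. the whole 16M chunk:

```java
// A sketch verifying the initial handle: runOffset = 0, size = 2048 pages,
// isUsed = 0, isSubpage = 0. Shift constants follow the handle layout above.
public final class InitHandleSketch {
    static final int SIZE_SHIFT = 34;
    static final int RUN_OFFSET_SHIFT = 49;

    static long initHandle(int chunkSize, int pageShifts) {
        int pages = chunkSize >> pageShifts; // 16M >> 13 = 2048 pages
        return (long) pages << SIZE_SHIFT;   // all other fields stay 0
    }

    static int runPages(long handle) {
        return (int) (handle >>> SIZE_SHIFT & 0x7fff);
    }

    static int runOffset(long handle) {
        return (int) (handle >>> RUN_OFFSET_SHIFT);
    }
}
```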

Memory allocation


boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int sizeIdx, PoolThreadCache cache) {
    final long handle;
    // #1
    if (sizeIdx <= arena.smallMaxSizeIdx) {
        // small
        handle = allocateSubpage(sizeIdx);
        if (handle < 0) {
            return false;
        }
        assert isSubpage(handle);
    } else {
        // #2
        int runSize = arena.sizeIdx2size(sizeIdx);
        handle = allocateRun(runSize);
        if (handle < 0) {
            return false;
        }
    }
    // #3
    ByteBuffer nioBuffer = cachedNioBuffers != null ? cachedNioBuffers.pollLast() : null;
    initBuf(buf, nioBuffer, handle, reqCapacity, cache);
    return true;
}

#1 Handles requests for Small memory blocks by calling the allocateSubpage method, which is analyzed in a later article.
#2 Handles requests for Normal memory blocks.
The sizeIdx2size method looks up the memory block size for the given size index. sizeIdx2size is provided by SizeClasses, the parent class of PoolArena; see the article 《Memory alignment class SizeClasses》 in this series.
The allocateRun method allocates the Normal memory block; the returned handle stores the size and offset of the allocated block.

#3 Initializes the ByteBuf with the handle and the underlying memory class (ByteBuffer).

private long allocateRun(int runSize) {
    // #1
    int pages = runSize >> pageShifts;
    // #2
    int pageIdx = arena.pages2pageIdx(pages);

    synchronized (runsAvail) {
        //find first queue which has at least one big enough run
        // #3
        int queueIdx = runFirstBestFit(pageIdx);
        if (queueIdx == -1) {
            return -1;
        }

        //get run with min offset in this queue
        PriorityQueue<Long> queue = runsAvail[queueIdx];
        long handle = queue.poll();

        assert !isUsed(handle);
        // #4
        removeAvailRun(queue, handle);

        // #5
        if (handle != -1) {
            handle = splitLargeRun(handle, pages);
        }

        // #6
        freeBytes -= runSize(pageShifts, handle);
        return handle;
    }
}

#1 Calculates the number of pages needed.
#2 Calculates the corresponding pageIdx.
Note that pages2pageIdx aligns the requested memory size to one of the sizes in the Page table above. For example, for a request of 172032 bytes (21 pages), pages2pageIdx computes 13, and a 196608-byte (24-page) memory block is actually allocated.
#3 Traverses runsAvail starting from pageIdx to find the first handle, from which the required memory block can be allocated.
#4 Removes the handle's information from runsAvail and runsAvailMap.
#5 Splits the required memory block off the handle found in step #3.
#6 Decreases the number of available bytes.

private long splitLargeRun(long handle, int needPages) {
    assert needPages > 0;

    // #1
    int totalPages = runPages(handle);
    assert needPages <= totalPages;

    int remPages = totalPages - needPages;

    // #2
    if (remPages > 0) {
        int runOffset = runOffset(handle);

        // keep track of trailing unused pages for later use
        int availOffset = runOffset + needPages;
        long availRun = toRunHandle(availOffset, remPages, 0);
        insertAvailRun(availOffset, remPages, availRun);

        // not avail
        return toRunHandle(runOffset, needPages, 1);
    }

    //mark it as used
    handle |= 1L << IS_USED_SHIFT;
    return handle;
}

#1 totalPages: the number of pages available at the current position, read from the handle.
remPages: the number of pages remaining after the allocation.
#2 The number of remaining pages is greater than 0:
availOffset: the start offset of the remaining pages.
A new handle, availRun, is built for them, and insertAvailRun inserts availRun into runsAvail and runsAvailMap.
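The split can be reproduced in a self-contained sketch (SplitRunSketch is illustrative, not Netty's code; the shift constants follow the handle layout described earlier): the first needPages pages become a used run, and the trailing remPages become a new available run.

```java
// A sketch of the split step: carve needPages off the front of a free run,
// return the used handle, and compute the remainder run that would be
// re-inserted into runsAvail. Not Netty's actual code.
public final class SplitRunSketch {
    static final int IS_USED_SHIFT = 33;
    static final int SIZE_SHIFT = 34;
    static final int RUN_OFFSET_SHIFT = 49;

    static long toRunHandle(int runOffset, int runPages, int inUsed) {
        return (long) runOffset << RUN_OFFSET_SHIFT
                | (long) runPages << SIZE_SHIFT
                | (long) inUsed << IS_USED_SHIFT;
    }

    // returns {usedHandle, availHandle}; availHandle is -1 if nothing remains
    static long[] split(long handle, int needPages) {
        int totalPages = (int) (handle >>> SIZE_SHIFT & 0x7fff);
        int runOffset = (int) (handle >>> RUN_OFFSET_SHIFT);
        int remPages = totalPages - needPages;
        if (remPages > 0) {
            // trailing pages form a new available run
            int availOffset = runOffset + needPages;
            long availRun = toRunHandle(availOffset, remPages, 0);
            return new long[] { toRunHandle(runOffset, needPages, 1), availRun };
        }
        // exact fit: just mark the whole run as used
        return new long[] { handle | 1L << IS_USED_SHIFT, -1 };
    }
}
```

Splitting a 16-page free run at offset 0 for a 7-page request yields a used run (offset 0, 7 pages) and an available run (offset 7, 9 pages).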

Memory release

void free(long handle, int normCapacity, ByteBuffer nioBuffer) {
    // #1
    int pages = runPages(handle);

    synchronized (runsAvail) {
        // collapse continuous runs, successfully collapsed runs
        // will be removed from runsAvail and runsAvailMap
        // #2
        long finalRun = collapseRuns(handle);

        // #3
        //set run as not in use
        finalRun &= ~(1L << IS_USED_SHIFT);
        //if it is a subpage, set it to run
        finalRun &= ~(1L << IS_SUBPAGE_SHIFT);

        insertAvailRun(runOffset(finalRun), runPages(finalRun), finalRun);
        freeBytes += pages << pageShifts;
    }

    if (nioBuffer != null && cachedNioBuffers != null &&
        cachedNioBuffers.size() < PooledByteBufAllocator.DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK) {
        cachedNioBuffers.offer(nioBuffer);
    }
}

#1 Calculates the number of pages released.
#2 Merges adjacent available memory blocks before and after this one, if possible.
#3 Clears the used and subpage flags and inserts the new handle.


private long collapseRuns(long handle) {
    return collapseNext(collapsePast(handle));
}

The collapsePast method merges the available memory blocks preceding this one.
The collapseNext method merges the available memory blocks following it.

private long collapseNext(long handle) {
    for (;;) {
        // #1
        int runOffset = runOffset(handle);
        int runPages = runPages(handle);

        Long nextRun = getAvailRunByOffset(runOffset + runPages);
        if (nextRun == null) {
            return handle;
        }

        int nextOffset = runOffset(nextRun);
        int nextPages = runPages(nextRun);

        //is continuous
        // #2
        if (nextRun != handle && runOffset + runPages == nextOffset) {
            //remove next run
            removeAvailRun(nextRun);
            handle = toRunHandle(runOffset, runPages + nextPages, 0);
        } else {
            return handle;
        }
    }
}

#1 The getAvailRunByOffset method finds the handle of the next memory block in runsAvailMap.
#2 If that memory block is contiguous with the current one, removes its handle and merges its pages into a new handle.
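The forward merge can be modeled with a plain map from runOffset to handle, a toy stand-in for runsAvailMap (CollapseSketch is not Netty's code):

```java
import java.util.Map;

// A toy model of the forward merge: while the run starting right after this
// one is available, absorb it into a single larger run.
public final class CollapseSketch {
    static final int SIZE_SHIFT = 34;
    static final int RUN_OFFSET_SHIFT = 49;

    static long toRunHandle(int runOffset, int runPages) {
        return (long) runOffset << RUN_OFFSET_SHIFT | (long) runPages << SIZE_SHIFT;
    }

    static long collapseNext(Map<Integer, Long> availRunsByOffset, long handle) {
        for (;;) {
            int runOffset = (int) (handle >>> RUN_OFFSET_SHIFT);
            int runPages = (int) (handle >>> SIZE_SHIFT & 0x7fff);
            // is the run starting at runOffset + runPages available?
            Long nextRun = availRunsByOffset.remove(runOffset + runPages);
            if (nextRun == null) {
                return handle; // nothing contiguous left to merge
            }
            int nextPages = (int) (nextRun >>> SIZE_SHIFT & 0x7fff);
            handle = toRunHandle(runOffset, runPages + nextPages);
        }
    }
}
```

For example, freeing a 7-page run at offset 0 while a 9-page run at offset 7 is available collapses them into one 16-page run at offset 0.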

Here's an example.
You can follow how runsAvail and the memory usage change in the example to understand the code above.
In reality, a 2-page memory block would be allocated through a Subpage, and on release it would be put back into the thread cache rather than freed directly; but to demonstrate PoolChunk's memory management process, the figure does not consider these scenarios.

PoolChunk changed its algorithm in Netty 4.1.52, introducing the algorithm of jemalloc 4.
Before Netty 4.1.52, PoolChunk used the algorithm of jemalloc 3, which manages memory blocks with a binary tree. Interested readers can refer to my follow-up article 《PoolChunk implementation (the jemalloc 3 algorithm)》.

If you found this article helpful, welcome to follow my WeChat official account; this series is still being updated. Your attention is what keeps me going!

