Redis Practice 03: Simple Practice - Web Application

Man Fu Zhu Ji 2021-01-23 01:30:05
Tags: redis practice, redis simple practice


Requirements

Features: P23
  • Login cookies
  • Shopping cart cookies
  • Caching generated pages
  • Caching database rows
  • Analyzing web page visits

Web applications at a high level P23

From a high-level point of view, a web application is a server or service that responds to HTTP requests sent by web browsers. Web requests are generally stateless: the server itself does not record any information about past requests, so a failed server can easily be replaced.

Typical steps a web server takes to respond to a request (a minimal sketch follows the list):

  1. The server parses the request sent by the client
  2. The request is forwarded to a predefined handler
  3. The handler may fetch data from a database
  4. The handler renders a template with the data
  5. The handler returns the rendered content to the client as the response to the request
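To make the flow concrete, here is a minimal sketch using Go's standard net/http package. The handler name, the /item path, and the id parameter are illustrative assumptions of mine, not from the book; imports are listed together with the first Redis snippet below.

// itemHandler covers the five steps: net/http has already parsed the request (step 1)
// and routed it here (step 2); the handler fetches data (step 3), renders it (step 4),
// and writes the response (step 5). Register it with: http.HandleFunc("/item", itemHandler)
func itemHandler(w http.ResponseWriter, r *http.Request) {
    id := r.URL.Query().Get("id")        // the already-parsed request
    data := fmt.Sprintf(`{"id":%q}`, id) // stand-in for a database lookup plus template rendering
    w.Header().Set("Content-Type", "application/json")
    fmt.Fprint(w, data)
}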
Key numbers P24

Everything in this practice revolves around discovering and solving the problems of a fictitious large online store, with the following basic numbers:

  • 5 million unique users every day
  • 100 million hits every day (an average of roughly 1,200 requests per second)
  • More than 100,000 items bought from the website every day

Implementation

Login and cookie caching P24

There are two common ways to store login information in cookies:

  • Signed (signed) cookies: usually store the user name, possibly along with other information the site finds useful, for example the last successful login time, the user id, and so on. They also carry a signature, so the server can verify that the information in the cookie has not been modified.
  • Token (token) cookies: store random bytes as a token; the server looks up the token's owner in the database according to the token. As time goes on, old tokens are replaced by new ones.
Advantages and disadvantages of signed cookies and token cookies P24

Cookie type | Advantages | Disadvantages
Signed cookie | Everything needed to verify the cookie is stored in the cookie itself; the cookie can carry extra information, and signing that information is easy | Handling signatures correctly is hard. It is easy to forget to sign the data, or to forget to verify the signature, which opens security vulnerabilities
Token cookie | Adding information is very easy; the cookie is very small, so mobile and slower clients can send requests faster | More information must be stored on the server; with a relational database, loading and storing the cookie can be expensive

This practice uses token cookies to refer to the entries holding stored user login information. Besides login information, we also need to record things like how long users stay and how many products they browse; analyzing this information later can teach us how to sell products to users better.

Use a hash to store the mapping between login cookie tokens and logged-in users, so that the corresponding user id can be found for a given token (a usage sketch follows the code). P24

// The snippets in this article share one package and use the redigo client.
import (
    "fmt"
    "net/http"
    "strconv"
    "time"

    "github.com/gomodule/redigo/redis"
)

// Redis keys
type RedisKey string

const (
    // Logged-in users, a hash (field: token; value: userId)
    LOGIN_USER RedisKey = "loginUser"
    // Latest action time per token, a sorted set
    USER_LATEST_ACTION RedisKey = "userLatestAction"
    // Recently viewed items per user, a sorted set prefix (stores itemId and view timestamp)
    VIEWED_ITEM_PREFIX RedisKey = "viewedItem:"
    // Shopping cart per user, a hash prefix (stores itemId and quantity in the cart)
    CART_PREFIX RedisKey = "cart:"
    // Cached request responses, a string prefix (stores the serialized response for a request)
    REQUEST_PREFIX RedisKey = "request:"
    // Cache refresh interval per item (unit: ms), a hash
    ITEM_INTERVAL RedisKey = "itemInterval"
    // Next cache time per item (in milliseconds), a sorted set
    ITEM_CACHED_TIME RedisKey = "itemCachedTime"
    // Item data as a JSON string, a string prefix (stores the itemId's information)
    ITEM_PREFIX RedisKey = "item:"
    // Item view counts, a sorted set (stores itemId and number of views)
    ITEM_VIEWED_NUM RedisKey = "itemViewedNum"
)

// Get the userId for a token (when err is nil, the user is logged in and userId is valid)
func GetUserId(conn redis.Conn, token string) (userId int, err error) {
    return redis.Int(conn.Do("HGET", LOGIN_USER, token))
}
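A quick usage sketch, assuming a Redis instance on localhost:6379; the token literal is made up for illustration. redigo returns redis.ErrNil when HGET finds nothing:

func exampleGetUserId() {
    conn, err := redis.Dial("tcp", "localhost:6379")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    userId, err := GetUserId(conn, "demo-token")
    switch {
    case err == redis.ErrNil:
        fmt.Println("token not found: not logged in")
    case err != nil:
        panic(err)
    default:
        fmt.Println("logged-in user:", userId)
    }
}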

Now we can get the user id from a token. We also need the corresponding setter: every time the user performs an action, it records the relevant information and updates the token's latest action timestamp. If the user is browsing a product, the product is also added to the sorted set of browsing history, and each user's history is capped at the latest 25 product views. P25

// Each user keeps at most the latest 25 viewed items
const MAX_VIEWED_ITEM_COUNT = 25

// Update token-related information (called on every user action; when the action is
// viewing a product detail page, pass the itemId, otherwise pass itemId <= 0)
func UpdateToken(conn redis.Conn, token string, userId int, itemId int) {
    currentTime := time.Now().Unix()
    // Update the token -> userId mapping
    _, _ = conn.Do("HSET", LOGIN_USER, token, userId)
    // Latest action time: record the timestamp per token
    // (We cannot record the timestamp per userId: the userId never changes, so even after
    // the token is replaced, the userId's timestamp would keep being updated, and there
    // would be no way to tell whether the current token has expired.)
    _, _ = conn.Do("ZADD", USER_LATEST_ACTION, currentTime, token)
    // When the user is viewing a product detail page, itemId is passed in; otherwise itemId <= 0
    if itemId > 0 {
        // Use userId as the suffix: the token may change, while the userId is unique and stable
        viewedItemKey := VIEWED_ITEM_PREFIX + RedisKey(strconv.Itoa(userId))
        // Add (or update) the recently viewed item
        _, _ = conn.Do("ZADD", viewedItemKey, currentTime, itemId)
        // Remove all elements ranked [0, -(MAX_VIEWED_ITEM_COUNT + 1)] in ascending timestamp
        // order, keeping only the latest MAX_VIEWED_ITEM_COUNT items
        _, _ = conn.Do("ZREMRANGEBYRANK", viewedItemKey, 0, -(MAX_VIEWED_ITEM_COUNT + 1))
    }
}

More and more data will accumulate, and logged-in users do not keep acting forever, so we can cap the number of login records we keep and periodically delete the login information of the users who have been inactive the longest. P26

The book also deletes the user's browsing history at this point. I do not delete it here: the sorted set of browsing history is used as a database in its own right, independent of login state. Even after a user's login information is deleted, we can still analyze the corresponding data, which is closer to real-world use.

An infinite loop inside a method for cleanup is not very elegant, but launched with the go keyword as a goroutine it works as a simple timed task, and like most timed tasks it exits when the main program exits (a startup sketch follows the code).

// Keep at most 10,000,000 login records (in reality this is already a big key,
// but the current scenario does not need to worry about that)
const MAX_LOGIN_USER_COUNT = 10000000

// Run the session cleaner every 10s
const CLEAN_SESSIONS_INTERVAL = 10 * time.Second

// Number of sessions cleaned per run
const CLEAN_COUNT = 1000

// min returns the smaller of two ints
func min(a, b int) int {
    if a < b {
        return a
    }
    return b
}

// merge combines a RedisKey and a []string into one []interface{}
func merge(redisKey RedisKey, strs ...string) []interface{} {
    result := make([]interface{}, 1+len(strs))
    result[0] = redisKey
    for i, item := range strs {
        result[i+1] = item
    }
    return result
}

// Clean up sessions (most users do not keep acting forever, so the oldest
// login information needs to be cleaned up periodically).
// Infinite loop inside; call it with the go keyword to use it as a timed task.
func CleanSessions(conn redis.Conn) {
    for {
        loginUserCount, _ := redis.Int(conn.Do("ZCARD", USER_LATEST_ACTION))
        // Too many records: clean up
        if loginUserCount > MAX_LOGIN_USER_COUNT {
            // Fetch the tokens with the oldest action times, at most CLEAN_COUNT of them
            // (multi-threaded / distributed setups would have race conditions here;
            // that is not the point of this practice, so it is ignored for now)
            cleanCount := min(loginUserCount-MAX_LOGIN_USER_COUNT, CLEAN_COUNT)
            tokens, _ := redis.Strings(conn.Do("ZRANGE", USER_LATEST_ACTION, 0, cleanCount-1))
            // Go cannot pass a []string directly as []interface{} (expanding a string
            // slice with ... does not match a ...interface{} parameter); only a slice
            // whose element type is exactly Type can be expanded into ...Type
            _, _ = conn.Do("HDEL", merge(LOGIN_USER, tokens...)...)
            _, _ = conn.Do("ZREM", merge(USER_LATEST_ACTION, tokens...)...)
            // The users' browsing history is not deleted; it is kept as a database
        }
        // After each run, wait CLEAN_SESSIONS_INTERVAL
        time.Sleep(CLEAN_SESSIONS_INTERVAL)
    }
}
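A startup sketch, assuming a Redis instance on localhost:6379. One caveat worth noting: a single redigo connection is not safe for concurrent use, so each long-running task should own a dedicated connection (or draw one from a redis.Pool):

// startSessionCleaner gives the cleaner its own connection, because a redigo
// connection must not be used by multiple goroutines concurrently.
func startSessionCleaner() error {
    conn, err := redis.Dial("tcp", "localhost:6379")
    if err != nil {
        return err
    }
    go CleanSessions(conn) // runs until the program exits
    return nil
}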
The shopping cart P28

Each user's shopping cart is a hash, storing the mapping between itemId and the quantity of that item in the cart. The cart here only provides the most basic quantity setter; increment and decrement logic is left to the caller.

// Update the quantity of an item in the cart (concurrency is not considered)
func UpdateCartItem(conn redis.Conn, userId int, itemId int, count int) {
    cartKey := CART_PREFIX + RedisKey(strconv.Itoa(userId))
    if count <= 0 {
        // Remove the item from the cart
        _, _ = conn.Do("HDEL", cartKey, itemId)
    } else {
        // Update the item's quantity
        _, _ = conn.Do("HSET", cartKey, itemId, count)
    }
}

Like the browsing history, the shopping cart is used as a database: it is independent of login state and is not deleted when the user logs out. So the periodic login-data cleanup code needs no modification, and no new function is needed. A caller-side sketch follows.
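As an illustration of the caller-side increment logic mentioned above, a hedged sketch; AddCartItem is a hypothetical helper of mine, not from the book:

// AddCartItem adds delta (which may be negative) to the item's current quantity.
// This read-modify-write is racy without WATCH/MULTI, matching the "no concurrency"
// assumption of UpdateCartItem above.
func AddCartItem(conn redis.Conn, userId int, itemId int, delta int) {
    cartKey := CART_PREFIX + RedisKey(strconv.Itoa(userId))
    current, err := redis.Int(conn.Do("HGET", cartKey, itemId))
    if err != nil && err != redis.ErrNil {
        return // leave the cart untouched on unexpected errors
    }
    UpdateCartItem(conn, userId, itemId, current+delta)
}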

Web page caching P29

Assume that 95% of the site's pages change at most once a day. Then there is no need to redo all the work for every request: the cached content can be returned before the request is actually processed, which reduces server load and improves the user experience.

In Java you can use annotations to intercept and cache specific services, implementing pluggable caching without touching the core code.

In Go, I do not know a direct equivalent of that pluggable mode; it feels like Java's interceptor approach could work, judging and caching requests before they are dispatched to concrete methods. Here I only do a simple piece of business logic to show the general flow, without worrying about pluggability or making it invisible to callers (see the middleware sketch after the code).

// Decide whether the current request can be cached (depends on the actual business
// scenario; ignored here, cacheable by default)
func canCache(conn redis.Conn, request http.Request) bool {
    return true
}

// Hash the request into an identifying string (depends on the actual business
// scenario; ignored here, defaults to the URL path)
func hashRequest(request http.Request) string {
    return request.URL.Path
}

// Serialize the response into a string (depends on the actual business scenario;
// ignored here, defaults to serializing the status code)
func serializeResponse(response http.Response) string {
    return strconv.Itoa(response.StatusCode)
}

// Deserialize a cached string back into a response (depends on the actual business
// scenario; ignored here, defaults to status code 200)
func deserializeResponse(str string) http.Response {
    return http.Response{StatusCode: 200}
}

// Responses are cached for 5 minutes
const CACHE_EXPIRE = 5 * time.Minute

// Cache the return value of a request
func CacheRequest(conn redis.Conn, request http.Request, handle func(http.Request) http.Response) http.Response {
    // If the current request cannot be cached, call the handler directly
    if !canCache(conn, request) {
        return handle(request)
    }
    // Try to fetch the cached response
    cacheKey := REQUEST_PREFIX + RedisKey(hashRequest(request))
    responseStr, _ := redis.String(conn.Do("GET", cacheKey))
    // Every valid cached response has a status code, so a hit is never ""
    if responseStr != "" {
        return deserializeResponse(responseStr)
    }
    // Not in the cache: actually handle the request
    response := handle(request)
    // Cache first, then return the result (the EX option expects seconds)
    responseStr = serializeResponse(response)
    _, _ = conn.Do("SET", cacheKey, responseStr, "EX", int64(CACHE_EXPIRE/time.Second))
    return response
}
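Returning to the pluggability question above: the idiomatic Go counterpart of a Java interceptor is a middleware function that wraps one handler in another. A minimal sketch on top of the CacheRequest defined here (WithCache is a hypothetical name of mine, not from the book):

// WithCache turns any handler into a cached handler; the caller is unaware of the cache.
func WithCache(conn redis.Conn, handle func(http.Request) http.Response) func(http.Request) http.Response {
    return func(request http.Request) http.Response {
        return CacheRequest(conn, request, handle)
    }
}

Registering WithCache(conn, realHandler) instead of realHandler then lets the caching layer be added or removed in a single place.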
Data caching P30

We cannot cache constantly changing pages, but we can cache the data on those pages, for example promotional items, hot items, and so on. P30

Now the site wants to run a promotion: the number of promotional items is fixed, and the promotion stops when they sell out. To let users see the promoted items and their remaining quantity in near real time without putting heavy pressure on the database, the promotional item data needs to be cached.

A timed task can periodically write the data that needs caching into Redis. (In practice, stock deduction for promotions is often done directly in the cache, which keeps the quantity real-time and reduces database pressure, at the cost of hot-key problems.) Since different items may have different real-time requirements, we record each item's refresh interval and refresh time, stored in a hash and a sorted set respectively. P31

For the timed task to refresh the cache on schedule, we need a function that sets an item's refresh interval and refresh time (a usage sketch follows the code).

// Factor to convert milliseconds to nanoseconds
const MILLI_2_NANO int64 = 1e6

// Set an item's cache refresh interval (unit: ms) and refresh time (in milliseconds)
func UpdateItemCachedInfo(conn redis.Conn, itemId int, interval int) {
    _, _ = conn.Do("HSET", ITEM_INTERVAL, itemId, interval)
    _, _ = conn.Do("ZADD", ITEM_CACHED_TIME, time.Now().UnixNano()/MILLI_2_NANO, itemId)
}
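A usage sketch with illustrative values, assuming the scheduler below is running:

// scheduleDemo caches item 123 with a one-second refresh, then later stops caching it
// by setting a non-positive interval (the CacheItem loop below removes the related entries).
func scheduleDemo(conn redis.Conn) {
    UpdateItemCachedInfo(conn, 123, 1000) // refresh item 123's cache every 1000 ms
    UpdateItemCachedInfo(conn, 123, 0)    // stop caching item 123
}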

The timed task fetches the first item due for a refresh. If the refresh time has not arrived yet, it waits until the next run. When the refresh interval does not exist or is less than or equal to 0, the item no longer needs caching and the related cache entries are deleted; when the refresh interval is greater than 0, the item data is fetched and the cache is updated. P32

// The timed task runs every 50ms
const CACHE_ITEM_INTERVAL = 50

// Fetch an item's data as a JSON string (depends on the actual business scenario;
// ignored here, defaults to JSON containing only the itemId)
func getItemJson(itemId int) string {
    return fmt.Sprintf("{\"id\":%d}", itemId)
}

// Cache item data.
// Infinite loop inside; call it with the go keyword to use it as a timed task.
func CacheItem(conn redis.Conn) {
    for {
        // Fetch the first item due for a refresh (the empty-set case is ignored here)
        result, _ := redis.Ints(conn.Do("ZRANGE", ITEM_CACHED_TIME, 0, 0, "WITHSCORES"))
        itemId, itemCachedTime := result[0], result[1]
        // If the refresh time has not arrived yet, wait until the next run
        if int64(itemCachedTime)*MILLI_2_NANO > time.Now().UnixNano() {
            time.Sleep(CACHE_ITEM_INTERVAL * time.Millisecond)
            continue
        }
        // Fetch the refresh interval
        interval, _ := redis.Int(conn.Do("HGET", ITEM_INTERVAL, itemId))
        itemKey := ITEM_PREFIX + RedisKey(strconv.Itoa(itemId))
        // If the refresh interval is less than or equal to 0, remove the related cache entries
        if interval <= 0 {
            _, _ = conn.Do("HDEL", ITEM_INTERVAL, itemId)
            _, _ = conn.Do("ZREM", ITEM_CACHED_TIME, itemId)
            _, _ = conn.Do("DEL", itemKey)
            continue
        }
        // If the refresh interval is greater than 0, cache the data and schedule the next refresh
        itemJson := getItemJson(itemId)
        _, _ = conn.Do("SET", itemKey, itemJson)
        _, _ = conn.Do("ZADD", ITEM_CACHED_TIME, time.Now().UnixNano()/MILLI_2_NANO+int64(interval), itemId)
    }
}
Web analytics P33

Now the site wants to cache only the 10,000 most-viewed of its 100,000 items, so we need to record the total view count of every item and be able to fetch the 10,000 most-viewed ones, which means storing the counts in a sorted set. We also need to add a statement to UpdateToken that increments the view count; the modified UpdateToken is as follows: P33

// Update token-related information (called on every user action; when the action is
// viewing a product detail page, pass the itemId, otherwise pass itemId <= 0)
func UpdateToken(conn redis.Conn, token string, userId int, itemId int) {
    currentTime := time.Now().Unix()
    // Update the token -> userId mapping
    _, _ = conn.Do("HSET", LOGIN_USER, token, userId)
    // Latest action time: record the timestamp per token
    // (We cannot record the timestamp per userId: the userId never changes, so even after
    // the token is replaced, the userId's timestamp would keep being updated, and there
    // would be no way to tell whether the current token has expired.)
    _, _ = conn.Do("ZADD", USER_LATEST_ACTION, currentTime, token)
    // When the user is viewing a product detail page, itemId is passed in; otherwise itemId <= 0
    if itemId > 0 {
        // Use userId as the suffix: the token may change, while the userId is unique and stable
        viewedItemKey := VIEWED_ITEM_PREFIX + RedisKey(strconv.Itoa(userId))
        // Add (or update) the recently viewed item
        _, _ = conn.Do("ZADD", viewedItemKey, currentTime, itemId)
        // Remove all elements ranked [0, -(MAX_VIEWED_ITEM_COUNT + 1)] in ascending timestamp
        // order, keeping only the latest MAX_VIEWED_ITEM_COUNT items
        _, _ = conn.Do("ZREMRANGEBYRANK", viewedItemKey, 0, -(MAX_VIEWED_ITEM_COUNT + 1))
        // Every view of a product detail page increments the item's view count
        _, _ = conn.Do("ZINCRBY", ITEM_VIEWED_NUM, 1, itemId) // [changed line]
    }
}

At this point, the caches of items whose view counts fall outside the top 10,000 can be deleted periodically. To keep new hot items from being crowded out by established ones, after trimming, the view counts of the remaining items are halved. ZINTERSTORE with the WEIGHTS option multiplies all item scores by the same weight (with only one sorted set, ZUNIONSTORE has the same effect). P33

// Trim non-hot items and rescale hot item view counts every 5 minutes
const RESCALE_ITEM_VIEWED_NUM_INTERVAL = 300

// Weight applied to hot item view counts
const ITEM_VIEWED_NUM_WEIGHT = 0.5

// Maximum number of cached items
const MAX_ITEM_CACHED_NUM = 10000

// Trim non-hot items from the view ranking and halve the remaining view counts.
// Infinite loop inside; call it with the go keyword to use it as a timed task.
func RescaleItemViewedNum(conn redis.Conn) {
    for {
        // Delete the least-viewed items, ranks [0, -(20000 + 1)], keeping the 20,000 most viewed.
        // Keeping twice the maximum number of cached items leaves room for new hot items.
        _, _ = conn.Do("ZREMRANGEBYRANK", ITEM_VIEWED_NUM, 0, -((MAX_ITEM_CACHED_NUM << 1) + 1))
        // Halve the view counts so new hot items are not affected too much
        _, _ = conn.Do("ZINTERSTORE", ITEM_VIEWED_NUM, 1, ITEM_VIEWED_NUM, "WEIGHTS", ITEM_VIEWED_NUM_WEIGHT)
        // Wait 5 minutes before the next run
        time.Sleep(RESCALE_ITEM_VIEWED_NUM_INTERVAL * time.Second)
    }
}
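Tying the three timed tasks together, a sketch of a main that starts each loop on its own goroutine. Each task gets its own connection, since a redigo connection must not be shared across goroutines; the address is an assumption for the sketch:

func main() {
    tasks := []func(redis.Conn){CleanSessions, CacheItem, RescaleItemViewedNum}
    for _, task := range tasks {
        conn, err := redis.Dial("tcp", "localhost:6379")
        if err != nil {
            panic(err)
        }
        go task(conn)
    }
    select {} // block forever; the timed tasks stop when the program exits
}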

Doubts about the book

The second-to-last paragraph of P33 says:

The newly added code records the view count of every item and sorts the items by view count; the most-viewed item is placed at index 0 of the sorted set, with the lowest score in the whole set.

The corresponding Python snippet:

conn.zincrby('viewed:', item, -1)

And the operation that deletes the items ranked beyond the top 20,000 is:

conn.zremrangebyrank('viewed:', 0, -20001)

The Redis command ZREMRANGEBYRANK removes all elements of a sorted set whose rank lies in the given interval (ranked in ascending score order). Understood that way, the code above would keep the 20,000 least-viewed items, which does not match the actual requirement.

Later I noticed other commands being used in the opposite sense and wondered whether the Python client behaves differently; a web search turned up both claims, so it had to be settled by experiment.

Experimenting in Python confirmed that the deletion above does rank in ascending order: it deletes the part with the lowest scores (view counts) and keeps the part with the highest. (The experiment was fairly simple; other configuration might affect the result, but I did not dig further.)

Next, the book obtains an item's view-count rank with the following code and checks the rank: P34

rank = conn.zrank('viewed:', item_id)
return rank is not None and rank < 10000

You can see that the author's check here is consistent with negative view counts, while with positive view counts a descending rank could be obtained via ZREVRANK. My guess is that the author did not think it through after deleting the items ranked beyond 20,000.

With negative view counts, the following code would correctly delete the items ranked beyond the top 20,000:

conn.zremrangebyrank('viewed:', 20000, -1)

With this, we can implement the canCache function mentioned earlier: only product pages can be cached, and only those ranked within the top 10,000. P34

// Extract the itemId from the request, returning an error if absent (depends on the
// actual business scenario; ignored here, defaults to 1)
func getItemId(request http.Request) (int, error) {
    return 1, nil
}

// Decide whether the current request is dynamic (depends on the actual business
// scenario; ignored here, defaults to not dynamic)
func isDynamic(request http.Request) bool {
    return false
}

// Decide whether the current request can be cached (depends on the actual business
// scenario; cacheable by default)
func canCache(conn redis.Conn, request http.Request) bool {
    itemId, err := getItemId(request)
    // Without an itemId (not a product page), or if the result is dynamic, it cannot be cached
    if err != nil || isDynamic(request) {
        return false
    }
    // Fetch the view-count rank of the requested item
    itemRank, err := redis.Int(conn.Do("ZREVRANK", ITEM_VIEWED_NUM, itemId))
    // If absent, or ranked outside the top 10,000, do not cache
    if err != nil || itemRank >= MAX_ITEM_CACHED_NUM {
        return false
    }
    return true
}

Summary

  • Refactor existing code as requirements change

Reflections

  • In the last practice a lot of time went into the business logic of exceptional flows, producing plenty of low-value code, which strayed from the original goal of focusing on Redis and getting familiar with Go. So this time the focus was on implementing the features, with little attention to parameter checking and exceptional flows.
  • Practice is the sole criterion for testing truth. I remember hitting questionable passages when reading Head First Design Patterns as well. Do not believe everything in a book; dare to question it, and test the possibilities in practice.
This article was first published on the official account: Man Fu Zhu Ji (click to view the original). Open source on GitHub: reading-notes/redis-in-action
Copyright notice: this article was created by [Man Fu Zhu Ji]; please include the original link when reposting.
https://javamana.com/2021/01/20210122225440624d.html
