Message Queues: Kafka

itread01 2021-01-24 02:12:33


Related posts: [Message queue: ActiveMQ](https://www.cnblogs.com/pluto-charon/p/14225896.html) · [Message queue: RabbitMQ](https://www.cnblogs.com/pluto-charon/p/14288765.html)

## 1. Introduction to Kafka

Kafka is a distributed publish-subscribe messaging system written in Scala. It is multi-partition and multi-replica, and relies on ZooKeeper for coordination. It offers high throughput, persistence, horizontal scalability, and stream-processing support; it can carry massive volumes of data, persists messages to disk, and replicates them to keep the data safe. While sustaining a high processing speed, Kafka also keeps processing latency low and guarantees zero data loss.

Characteristics of Kafka:

1. High throughput, low latency: Kafka can process hundreds of thousands of messages per second with latency as low as a few milliseconds. Each topic can be divided into multiple partitions, and a consumer group consumes the partitions in parallel.
2. Scalability: the cluster supports hot expansion.
3. Durability and reliability: messages are persisted to local disk, and data replication is supported.
4. Fault tolerance: nodes in the cluster are allowed to fail; with a replication factor of n, up to n-1 nodes may fail.
5. High concurrency: thousands of clients can read and write at the same time.
6. Elasticity: a running Kafka cluster can easily be expanded or shrunk, and a topic can be expanded to include more partitions.

Main application scenarios of Kafka:

- Message processing
- Website activity tracking
- Metrics storage
- Log aggregation
- Stream processing
- Event sourcing

Basic flow:

![](https://img2020.cnblogs.com/blog/1459011/202101/1459011-20210123233305608-1701696775.png)

Key roles in Kafka:

- **Producer:** the publisher of messages; it publishes messages to a Kafka topic
- **Consumer:** reads data from brokers
- **Consumer Group:** every consumer belongs to a specific consumer group (you can assign each consumer a group name; a consumer without one belongs to the default group)
- **Topic:** a category property by which data is classified
- **Partition:** the data of a topic is split into one or more partitions; every topic contains at least one partition
- **Partition offset:** every message carries a 64-bit offset that is unique within its partition and marks the position of the message
- **Replicas of partition:** replicas, i.e. backups of a partition
- **Broker:** a Kafka cluster contains one or more servers; each server node is called a broker
- **Leader:** among the replicas of a partition, exactly one is the leader; the leader handles all reads and writes for that partition
- **Follower:** a follower tracks the leader; all write requests are routed through the leader, data changes are propagated to all followers, and the followers keep their data in sync with the leader
- **AR:** all replicas of a partition, collectively
- **ISR:** the set of replicas that stay in sync with the leader within a bounded lag
- **OSR:** the replicas whose synchronization lags too far behind the leader
- **HW:** high watermark; it marks a specific offset, and consumers can only pull messages before this offset
- **LEO:** log end offset; it records the offset of the next message to be written in the replica's underlying log
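To make partitions, offsets, and the high watermark concrete, here is a minimal sketch (my addition, not part of the original article) that prints the first and last readable offset of every partition of the `charon` topic created later in this article. It assumes the broker address configured below; `endOffsets` returns the next offset a consumer could read up to, which is bounded by the partition's HW:

```java
package kafka;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class OffsetInspector {
    public static void main(String[] args) {
        Properties props = new Properties();
        // broker address used later in this article
        props.put("bootstrap.servers", "192.168.189.150:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Collect a TopicPartition handle for every partition of the topic
        List<TopicPartition> partitions = new ArrayList<>();
        for (PartitionInfo info : consumer.partitionsFor("charon")) {
            partitions.add(new TopicPartition(info.topic(), info.partition()));
        }
        // beginningOffsets: the earliest offset still stored in each partition;
        // endOffsets: the offset after the last readable message, i.e. bounded by the HW
        Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
        Map<TopicPartition, Long> end = consumer.endOffsets(partitions);
        for (TopicPartition tp : partitions) {
            System.out.println(tp + ": begin=" + begin.get(tp) + ", end=" + end.get(tp));
        }
        consumer.close();
    }
}
```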
## 2. Installing Kafka

Kafka requires ZooKeeper and a JDK to be installed first. The versions used here are jdk1.8.0_20, kafka_2.11-1.0.0 and zookeeper-3.4.14. The Kafka and JDK versions must be compatible: I originally tried kafka_2.12-2.3.0 and it did not work.

1. Upload the Kafka archive to the home directory and extract it to /usr/local:

```shell
[root@localhost home]# tar -xvzf kafka_2.11-1.0.0.tgz -C /usr/local
```

2. Enter Kafka's config directory:

```shell
[root@localhost local]# cd /usr/local/kafka_2.11-1.0.0/config
```

3. Edit the server.properties file:

```properties
# In a cluster, set a different broker.id on each broker
broker.id=0
# Uncomment the following line; this is the address Kafka exposes to external clients
listeners=PLAINTEXT://192.168.189.150:9092
# Log storage location: change log.dirs=/tmp/kafka_logs to
log.dirs=/usr/local/kafka_2.11-1.0.0/logs
# ZooKeeper address
zookeeper.connect=192.168.189.150:2181
# Increase the ZooKeeper connection timeout; the default of 6000 ms may time out
zookeeper.connection.timeout.ms=10000
```

4. Start ZooKeeper. Because I run a ZooKeeper cluster, all three nodes must be started; starting only a single server will fail at leader election (a ZooKeeper cluster considers itself unavailable once more than half of its nodes are down).

```shell
[root@localhost ~]# zkServer.sh start
# Check the status
[root@localhost ~]# zkServer.sh status
```

5. Start Kafka:

```shell
[root@localhost kafka_2.11-1.0.0]# bin/kafka-server-start.sh config/server.properties
# You can also start it in the background; without background mode you have to open
# a new window for the following steps
[root@localhost kafka_2.11-1.0.0]# bin/kafka-server-start.sh -daemon config/server.properties
```

6. Create a topic:

```shell
# --zookeeper: the ZooKeeper service address Kafka is connected to
# --partitions: the number of partitions
# --replication-factor: the replication factor
[root@localhost kafka_2.11-1.0.0]# bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic charon --partitions 2 --replication-factor 1
Created topic "charon".
```

7. List all topics (to verify the topic was created correctly):

```shell
[root@localhost kafka_2.11-1.0.0]# bin/kafka-topics.sh --zookeeper localhost:2181 --list
charon
```

8. Describe a topic:

```shell
[root@localhost kafka_2.11-1.0.0]# bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic charon
Topic:charon    PartitionCount:2    ReplicationFactor:1    Configs:
    Topic: charon    Partition: 0    Leader: 0    Replicas: 0    Isr: 0
    Topic: charon    Partition: 1    Leader: 0    Replicas: 0    Isr: 0
```

9. Open a new window and start a console consumer. --bootstrap-server specifies the Kafka cluster address; 9092 is the Kafka service port. Because a concrete listener address is set in my configuration file, the same concrete address must be used here; otherwise you get the error **[Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available.**

```shell
[root@localhost kafka_2.11-1.0.0]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.189.150:9092 --topic charon
```

10. Open a new window and start a console producer, using the concrete address for the same reason:

```shell
[root@localhost kafka_2.11-1.0.0]# bin/kafka-console-producer.sh --broker-list 192.168.189.150:9092 --topic charon
```

11. Produce and consume messages:

```shell
# The producer sends
>hello charon good evening
# The consumer receives
hello charon good evening
```

Of course, set up this way, this only works within the same network segment.
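The topic commands in steps 6 to 8 can also be issued programmatically. Below is a minimal sketch (my addition, not part of the original walkthrough) using the `AdminClient` API that ships with kafka-clients 1.0.0; the address, topic name, partition count and replication factor mirror the ones above:

```java
package kafka;

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdmin {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // broker address taken from this article's server.properties
        props.put("bootstrap.servers", "192.168.189.150:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // equivalent of: kafka-topics.sh --create --topic charon --partitions 2 --replication-factor 1
            NewTopic topic = new NewTopic("charon", 2, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
            // equivalent of: kafka-topics.sh --list
            System.out.println(admin.listTopics().names().get());
        }
    }
}
```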
## 3. Producers and consumers

The Kafka production flow:

![](https://img2020.cnblogs.com/blog/1459011/202101/1459011-20210123233438959-1615734513.png)

1) The producer first looks up the partition's leader from the "/brokers/.../state" node in ZooKeeper
2) The producer sends the message to the leader
3) The leader writes the message to its local log
4) The followers pull the message from the leader, write it to their local logs, and send the leader an ACK
5) Once the leader has received ACKs from all replicas in the ISR, it advances the HW (high watermark, the offset of the last committed message) and sends an ACK to the producer (a client-side view of this ACK is sketched at the end of this section)

Consumer groups:

![](https://img2020.cnblogs.com/blog/1459011/202101/1459011-20210123233518103-587435607.png)

A Kafka consumer belongs to a consumer group. When several consumers form a group to consume a topic, each consumer receives messages from a different subset of partitions. If all consumers are in the same group, this behaves as a work-queue model; if the consumers are in different groups, it behaves as a publish-subscribe model. When a single consumer cannot keep up with the rate at which data is produced, more consumers can be added to share the load, each handling only part of the partitions; this is how a single application scales horizontally. But never run more consumers than there are partitions, because the extra consumers will simply sit idle. When several applications need to read the same data from Kafka, give each application its own consumer group, so that each application receives the complete stream of one or more topics. Each consumer runs on one thread; to run several consumers of the same group inside one process, each consumer needs its own thread.
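The ACK in step 5 of the production flow is visible on the client side: `send()` accepts a callback that fires once the broker has acknowledged the write at the configured `acks` level, and the metadata it receives carries the partition and offset the record was committed at. A minimal sketch (my addition; the topic and address are the ones used throughout this article):

```java
package kafka;

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AckDemo {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "192.168.189.150:9092");
        // acks=all: the leader answers only after every in-sync replica has the record
        properties.put("acks", "all");
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        // The callback runs when the broker ACKs the write (or the send fails)
        producer.send(new ProducerRecord<>("charon", "hello charon"), (metadata, exception) -> {
            if (exception != null) {
                exception.printStackTrace();
            } else {
                System.out.println("acked: partition=" + metadata.partition()
                        + " offset=" + metadata.offset());
            }
        });
        producer.close();
    }
}
```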
## 4. Code practice

1. Add the client dependency. (The dependency block in the original post is empty; presumably it is the kafka-clients artifact matching the broker version, as assumed below.)

```xml
<!-- assumed dependency; the original article's block was empty -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>
```

2. Producer code:

```java
package kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

/**
 * @className: Producer
 * @description: kafka producer
 * @author: charon
 * @create: 2021-01-18 08:52
 */
public class Producer {

    /** topic */
    private static final String topic = "charon";

    public static void main(String[] args) {
        // Configure the Kafka client properties
        Properties properties = new Properties();
        // Broker address
        properties.put("bootstrap.servers", "192.168.189.150:9092");
        // Acknowledgement level (the default is 1).
        // 0: the producer does not wait for any response from Kafka;
        // 1: the leader writes the message to its local log and responds without waiting
        //    for the other machines in the cluster;
        // -1 (all): the leader waits until all in-sync replicas have the message, so the
        //    message is not lost unless every machine in the cluster goes down
        properties.put("acks", "all");
        // Number of retries; if greater than 0, the client resends a message when sending
        // fails (the original code misspelled this key as "reties", which Kafka ignores)
        properties.put("retries", 0);
        // Batch size: when several messages go to the same partition, the producer
        // batches them to cut down network requests and improve efficiency
        properties.put("batch.size", 10000);
        // Serializers for message keys and values
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        for (int i = 0; i < 5; i++) {
            String message = "hello,charon message " + i;
            producer.send(new ProducerRecord<>(topic, message));
            System.out.println("Producer sent: " + message);
        }
        producer.close();
    }
}
```

3. Consumer code (the original listing breaks off mid-field; the remainder below is a minimal completion consistent with the producer above):

```java
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

/**
 * @className: Consumer
 * @description: kafka consumer
 * @author: charon
 * @create: 2021-01-18 08:53
 */
public class Consumer implements Runnable {

    /** topic */
    private static final String topic = "charon";

    /** kafka consumer; one instance per thread, since KafkaConsumer is not thread-safe */
    private final KafkaConsumer<String, String> kafkaConsumer;

    // NOTE: the original article is truncated at this point; the constructor and
    // run() below are a minimal completion, not the author's original code.
    public Consumer() {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "192.168.189.150:9092");
        // all consumers sharing this group name split the topic's partitions
        properties.put("group.id", "charon-group");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList(topic));
    }

    @Override
    public void run() {
        while (true) {
            // poll(long) is the kafka-clients 1.0.0 API; newer clients use poll(Duration)
            ConsumerRecords<String, String> records = kafkaConsumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Consumer received: " + record.value());
            }
        }
    }
}
```
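As section 3 noted, each consumer of a group needs its own thread. A minimal driver for the `Runnable` consumer above might look like this (my sketch; the pool size of 2 matches the two partitions of `charon`):

```java
package kafka;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConsumerMain {
    public static void main(String[] args) {
        // One thread per consumer: with two partitions on "charon", two consumers
        // in the same group are each assigned one partition
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(new Consumer());
        pool.submit(new Consumer());
    }
}
```

Running the Producer class afterwards should show the five messages split between the two consumer threads.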
Copyright notice
This article was created by [itread01]. Please keep the original link when reposting. Thanks.
https://javamana.com/2021/01/20210124021047833H.html
