Original article. Please credit the source when reposting: https://www.cnblogs.com/boycelee/p/14728638.html

Contents
1. Kafka
2. Problems Solved: asynchronous processing; application decoupling; traffic peak shaving
3. Features: read/write efficiency; network transmission; concurrency; persistence; reliability; horizontal scaling
4. Basic Concepts: messages & batches; topics & partitions; logs (basics, retention and compaction); brokers; replicas; producers; consumers; consumer groups; messaging model; architecture overview
5. Core Features in Detail: consumers (single and multiple consumer groups, heartbeats, rebalancing and how to avoid it, conditions for being judged "dead", offset management, automatic and manual commits); partitions; the replica mechanism (replica roles, why followers do not serve reads, ISR and OSR, HW and LEO, leader election); physical storage (log layout, indexes, message compression, offset index)
6. References
7. Summary

1. Kafka

Kafka is a distributed messaging system.

2. Problems Solved

Message systems are commonly used for asynchronous processing, application decoupling, traffic peak shaving, messaging between services, and so on.

Asynchronous processing


[Figure: asynchronous processing via a message queue]

Producers write messages to the message queue, and consumers pull them asynchronously, which improves overall message-processing capacity.

Application decoupling


[Figure: application decoupling through Kafka]

With Kafka as the communication medium, each subsystem only has to do the work within its own responsibility. This is the producer-consumer model, with Kafka acting as the message queue.

Traffic peak shaving


[Figure: traffic peak shaving with a message queue]

Normally, upstream services (such as quoting or marketing) handle heavy traffic all year round and can cope calmly with spikes, while downstream applications (such as transactions or orders) usually see much less traffic; when a large spike hits them unprepared, they can be overwhelmed, and the failure can cascade into an avalanche.

To deal with this, a message queue can serve as a temporary buffer for the data: consumers pull messages at a pace matching their own capacity, which smooths out the peak.

3. Features

Read/write efficiency

Even with massive amounts of data, Kafka stores and looks up messages efficiently; its software design sidesteps the performance bottleneck of random disk access.

Network transmission

Messages are read and compressed in batches, which improves network utilization.

Concurrency capability

Kafka supports partitioning of messages. Ordering is guaranteed within each partition, and separate partitions can be read and written concurrently, which raises Kafka's overall concurrency.

Persistence capability

Kafka persists messages to disk. Because network transmission is unreliable, the data needs to be persisted. Zero copy, sequential reads, sequential writes, page caching and other techniques give Kafka its high throughput.

Reliability

Kafka supports multiple replicas per partition. The leader replica handles reads and writes, while follower replicas only synchronize data from the leader. This provides redundant backups of messages and improves Kafka's fault tolerance.

Horizontal scaling

Producers, brokers and consumers are all distributed and can be scaled out. Multiple consumers can join the same consumer group, and each partition is assigned to only one consumer in the group. When the Kafka cluster scales out by adding partitions, consumers can be added to the consumer group to increase consumption capacity. When a consumer in the group fails and goes offline, partitions are reassigned through rebalancing (Rebalance).

4. Basic Concepts

Messages & Batches

Messages

(1) A message is the basic unit of data in Kafka;

(2) A message consists of a key and a value, both of which are byte arrays;

(3) The key can be used to route a message to a particular partition according to a policy.

Batches

(1) For efficiency, messages are written to Kafka in batches; all messages in a batch must belong to the same partition of the same topic;

(2) Sending in batches reduces network overhead and speeds up transmission (see the producer sketch below).
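The following is a minimal producer sketch illustrating keyed messages and batching with the standard Java client. The broker address, topic name ("demo-topic"), key scheme and batching values are illustrative assumptions, not values from this article.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", 16384);       // bytes accumulated per partition before a batch is sent
        props.put("linger.ms", 10);           // wait up to 10 ms so more records can join the batch
        props.put("compression.type", "lz4"); // compress whole batches to save network bandwidth

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // Records with the same key are hashed to the same partition,
                // so per-key ordering is preserved.
                producer.send(new ProducerRecord<>("demo-topic", "user-" + (i % 10), "event-" + i));
            }
        } // close() flushes any batches still sitting in the accumulator
    }
}

Records sharing a key land in the same partition, and linger.ms together with batch.size controls how many records are grouped into one network request.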

Topics & Partitions

A topic (Topic) is the logical unit used to classify and store messages; think of it as a named collection of messages. A partition (Partition) is the basic unit of data storage in Kafka; think of it as a subset of that collection. Kafka classifies messages by topic, and different partitions of the same topic are distributed across different brokers. The partition mechanism is the basis for horizontal scaling: adding and redistributing partitions increases Kafka's capacity for parallel message processing.


[Figure: topics and partitions distributed across brokers]

Logs

Log basics

(1) Each partition logically corresponds to a Log; when a producer writes messages to a partition, it is actually writing to that partition's Log;

(2) A Log corresponds to a folder on disk and consists of many Segments; each Segment corresponds to one log file and its index file;

(3) When a Segment grows beyond the size limit, a new Segment is created;

(4) Kafka uses sequential I/O, so data is only appended to the latest Segment;

(5) The index is sparse and is mapped into memory at runtime to speed up lookups.


[Figure: log, segments, and index files]

Log retention and compaction

Log retention

(1) Time limit

Based on the retention time: when a message has been stored in Kafka for longer than the configured period, it is deleted.

(2) Size limit

Based on the topic's storage size: when the log of a topic grows beyond a threshold, Kafka can start deleting the oldest messages. Kafka runs a background thread that periodically checks for messages that can be deleted.

Log compaction

In many scenarios, the value associated with a message key keeps changing, much like a row in a database being repeatedly updated, and consumers only care about the latest value for each key. If log compaction is enabled, Kafka starts a thread that periodically merges messages with the same key, keeping only the latest value.
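As a hedged sketch of how retention and compaction can be set per topic, the snippet below creates a topic with the Java AdminClient. The topic name, partition and replica counts, and the concrete retention values are assumptions for illustration.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            configs.put("cleanup.policy", "compact,delete"); // compact per key, and also delete by retention
            configs.put("retention.ms", "604800000");        // 7 days time-based retention (illustrative)
            configs.put("retention.bytes", "1073741824");    // 1 GiB size-based retention (illustrative)

            NewTopic topic = new NewTopic("latest-price", 3, (short) 2).configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}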

Broker

An independent Kafka server is a broker. A broker's main job is to receive messages from producers, assign offsets to them, and save them to disk. Besides handling producers' messages, a broker also serves requests from consumers and from other brokers, dispatching each request type to the corresponding handler and returning a response. Normally one machine runs one broker.

Replicas

A replica is a redundant backup of partition data; in a distributed system, copies of the same data are kept on different machines. In Kafka, every partition can have multiple replicas, and each replica holds the same messages (although at any given moment the copies on different machines are not necessarily identical, since followers lag slightly behind).

Producers

A producer's (Producer) main job is to create messages and publish them to the appropriate partition of a topic. Partition selection can be done, for example, by: (1) hashing the key; (2) round-robin; (3) a custom partitioner.

Consumers

A consumer's (Consumer) main job is to consume messages: it pulls messages from the partitions of the topics it subscribes to. A consumer uses an offset to record how far it has consumed.

Consumer groups

Multiple consumers (Consumer) make up a consumer group (Consumer Group). Each partition of a topic subscribed by a consumer group is assigned to, and processed by, exactly one consumer in that group, but a single consumer can consume multiple partitions of the same topic.


[Figure: consumer group partition assignment]

Messaging model


[Figure: pull-based messaging model]

Kafka does not push messages; consumers pull them. A consumer can still get push-like behaviour by pulling in a polling loop, as sketched below.
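A minimal pull-loop sketch of this model, assuming a topic named "demo-topic" and a group id "demo-group"; the push-like effect is simply the consumer calling poll() in an endless loop.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PullLoopDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "demo-group");              // consumers sharing this id form one group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // The broker never pushes; the consumer repeatedly pulls.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}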

Kafka Architecture Overview


[Figure: Kafka architecture overview]

5. Core Features in Detail

Consumers

(1) Consumers save the offsets of the messages they have consumed for each subscribed topic in a topic named "__consumer_offsets";

(2) Storing consumer offsets in Kafka is recommended; ZooKeeper is not suited to frequent, highly concurrent writes.

Single consumer group

Multiple consumers that consume the same topic only need to set the same group.id to form a consumer group.

Situation 1: there is only one consumer in the consumer group.


[Figure: situation 1 - one consumer in the group]

Situation 2: there are multiple consumers in the consumer group.


[Figure: situation 2 - multiple consumers in the group]

Situation 3: the number of consumers in the group equals the number of partitions.


[Figure: situation 3 - as many consumers as partitions]

Situation 4: there are more consumers in the group than partitions; the extra consumers sit idle and receive no messages.


[Figure: situation 4 - more consumers than partitions]

Multiple consumer groups

A topic can be consumed by multiple consumer groups, and each group independently consumes all messages of the topic.


[Figure: multiple consumer groups consuming one topic]

Heartbeats

Kafka's heartbeat mechanism tracks the health of the connection between consumers and the broker: as long as the group coordinator on the broker side is healthy, consumers keep sending it heartbeats.

Rebalancing

Rebalancing is a protocol that specifies how partitions are reassigned among the consumers in a group when the group membership or the subscribed topics change.

Rebalance triggers

(1) The consumers in the group change (for example, the number of members changes because a consumer crashes and leaves the group).

(2) The number of partitions of a subscribed topic changes (Kafka only supports adding partitions).

(3) The subscribed topics change (for example, the group subscribes with a regular expression and a new matching topic is created).

Situation 1: the normal case; each partition is assigned to exactly one consumer.


[Figure: situation 1 - normal assignment]

Situation 2: a consumer's machine goes down and the consumer leaves the group; a rebalance is triggered and its partitions are reassigned to the remaining consumers.


[Figure: situation 2 - consumer failure triggers rebalancing]

Situation 3: a broker goes down, so partition 3 becomes unavailable. If the partition has replicas, a rebalance is triggered; if it has no replicas, consumer 3 sits idle.


[Figure: situation 3 - broker failure]

Situation 4: the group subscribes to topics with a regular expression; when a new matching topic is created, its partitions are assigned to the current consumers, which triggers a rebalance (see the sketch after the figure).


[Figure: situation 4 - new topic matching a subscribed pattern]
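A small sketch of the regular-expression subscription from situation 4, assuming topics follow an "order-*" naming convention; creating a new matching topic later causes the group to rebalance and pick it up. The broker address and group id are assumptions.

import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PatternSubscribeDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "order-processors");        // assumed group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Any topic matching the pattern is consumed; creating a new matching
            // topic (e.g. "order-refund") later triggers a rebalance for this group.
            consumer.subscribe(Pattern.compile("order-.*"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)).forEach(r ->
                        System.out.println(r.topic() + " / " + r.value()));
            }
        }
    }
}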

Avoiding rebalancing

Changes to the subscribed topics or to the number of partitions are usually triggered deliberately by operations staff, so there is normally no need to avoid the rebalances they cause. What we can focus on is the rebalancing caused by changes in the number of consumers in the group.

After a rebalance, each consumer instance periodically sends heartbeat requests to the Coordinator.

Conditions under which a consumer is judged "dead"

Condition 1: the consumer stops sending heartbeat requests to the Coordinator

(1) The session.timeout.ms parameter determines the time threshold for judging a consumer dead. Its default value is 10 seconds: if the Coordinator does not receive a heartbeat from a consumer instance within 10 seconds, it judges that instance "dead" and removes it from the group.

(2) The heartbeat.interval.ms parameter controls how frequently heartbeat requests are sent: the smaller the value, the more often the consumer instance sends heartbeats.

Condition 2: the poll method is not called within the configured interval

(1) The max.poll.interval.ms parameter sets the maximum allowed interval between two calls to the consumer instance's poll method. The default is 5 minutes: if the consumer cannot finish processing and call poll again within 5 minutes, it is removed from the group.

Preventing a consumer from being judged "dead"

Avoiding being judged dead under condition 1

Set session.timeout.ms >= 3 * heartbeat.interval.ms, so that a consumer gets at least 3 chances to send a heartbeat before being judged dead.

For example: set session.timeout.ms = 6s and heartbeat.interval.ms = 2s.

Avoiding being judged dead under condition 2

Set max.poll.interval.ms generously: take the longest processing time observed for the consumer instance and add 1-1.5x headroom on top of it. This leaves enough time for the business logic and avoids rebalances caused by long message-processing times (a combined sketch of these parameters follows).
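A hedged sketch combining the parameters above, following the session.timeout.ms >= 3 * heartbeat.interval.ms rule of thumb; the concrete numbers are placeholders and should be derived from your own measured processing times.

import java.util.Properties;

public class LivenessConfigDemo {
    static Properties consumerProps() {
        Properties props = new Properties();
        // Condition 1: heartbeats. Keep session.timeout.ms >= 3 * heartbeat.interval.ms.
        props.put("session.timeout.ms", "6000");     // judged dead after 6 s without a heartbeat
        props.put("heartbeat.interval.ms", "2000");  // send a heartbeat every 2 s
        // Condition 2: poll interval. Longest observed processing time plus 1-1.5x headroom.
        props.put("max.poll.interval.ms", "600000"); // 10 minutes (illustrative)
        props.put("max.poll.records", "500");        // smaller batches also help each poll finish in time
        return props;
    }
}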

Offset Management

The offsets topic

Kafka consumers consume messages in offset order. A consumer's offsets are managed by the consumer itself; they can be stored in ZooKeeper or in the Kafka topic __consumer_offsets, which is known as the offsets topic.

Why it was introduced

(1) Older versions managed offsets with ZooKeeper: offset data was committed to ZooKeeper automatically or manually. After a consumer restarted, it read its offsets back from ZooKeeper and continued consuming from where it had left off. With this design, the Kafka brokers did not need to store offset data.

(2) However, ZooKeeper is not suited to frequent writes, so versions after 0.8.2.x introduced a new offset-management mechanism for the new Consumer: offsets are written as ordinary Kafka messages to the __consumer_offsets topic.

(3) The format of these messages is defined by Kafka and must not be modified; do not write to this topic arbitrarily, or Kafka will be unable to parse it.

The message format

(1) The key contains the GroupID, the topic name and the partition number;

(2) The value contains the offset.

Offset commits

(1) A consumer needs to report its own offsets to Kafka; this reporting process is called committing offsets (Committing Offsets).

(2) A consumer commits offsets separately for each partition assigned to it.

(3) Committing offsets is the consumer's responsibility; Kafka only stores them in __consumer_offsets.

(4) Offset commits can be automatic or manual.

(5) Offset commits can be synchronous or asynchronous.

Automatic commit

(1) Set enable.auto.commit to true;

(2) auto.commit.interval.ms sets the interval between automatic commits; the default is 5 seconds;

(3) Kafka guarantees that when poll is called, it commits the offsets of all messages returned by the previous poll. The poll logic commits the previous batch's offsets before fetching the next batch, so no messages are skipped;

(4) Automatic commit can lead to messages being consumed more than once. Example: the consumer commits offsets every 5 s, and a rebalance happens 3 s after the last commit; all consumers resume from the last committed offsets, so the data consumed in the 3 s before the rebalance is consumed again. Shrinking the commit interval only narrows this window; it cannot eliminate duplicate consumption (a minimal configuration sketch follows).
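A minimal auto-commit sketch for points (1) and (2) above; the broker address, group id and topic name are assumptions. Lowering auto.commit.interval.ms only narrows the duplicate-consumption window described in (4).

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AutoCommitDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "auto-commit-group");       // assumed group id
        props.put("enable.auto.commit", "true");          // (1) turn automatic commit on
        props.put("auto.commit.interval.ms", "5000");     // (2) commit every 5 s (the default)
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            while (true) {
                // Offsets of the previous batch are committed inside poll() once the interval has elapsed.
                consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
            }
        }
    }
}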

Manual commit

1. Synchronous commit

(1) Use KafkaConsumer#commitSync(): it commits the latest offsets returned by KafkaConsumer#poll();

(2) This method is a synchronous operation: it blocks until the offsets have been committed successfully;

while (true) {
    ConsumerRecords<String, String> records =
            consumer.poll(Duration.ofSeconds(1));
    process(records); // process the messages
    try {
        consumer.commitSync(); // blocks until the offsets of this batch are committed
    } catch (CommitFailedException e) {
        handle(e); // handle the commit failure
    }
}

(3) Synchronous commits keep the consumer blocked until the broker responds;

(4) Synchronous commits are retried automatically when a transient exception occurs.

2. Asynchronous commit

(1) Use asynchronous commits to avoid blocking the consumer;

(2) When an exception occurs (e.g. after GC pauses or network jitter), use a synchronous commit to retry.

try {
    while (true) {
        ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofSeconds(1));
        process(records); // process the messages
        consumer.commitAsync(); // asynchronous commit avoids blocking
    }
} catch (Exception e) {
    handle(e); // handle the exception
} finally {
    try {
        consumer.commitSync(); // final commit: synchronous and blocking
    } finally {
        consumer.close();
    }
}
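As a variant of the loop above, commitAsync can also be given a callback so failed commits are at least visible, since asynchronous commits are not retried automatically. This fragment assumes the same consumer and the hypothetical process() helper used above, and would sit inside the try block in place of the plain commitAsync() call.

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    process(records); // process the messages (hypothetical helper, as above)
    consumer.commitAsync((offsets, exception) -> {
        if (exception != null) {
            // Not retried automatically; log it and rely on a later commit or
            // on the final synchronous commit in the finally block.
            System.err.println("Async commit failed for " + offsets + ": " + exception);
        }
    });
}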

Partitions

A partition (Partition) is Kafka's basic unit of data. The data of one topic is stored in multiple partitions, which can live on the same machine or on different machines. This makes horizontal scaling possible, avoids the disk-space and performance limits of a single machine, and, combined with replication, adds data redundancy and improves disaster recovery. To achieve an even distribution, the number of partitions is usually an integral multiple of the number of brokers.

The replica mechanism

Advantages of the replica mechanism

Each partition has multiple replicas, which provide redundant copies of the data and help keep Kafka highly available.

Replica definition

(1) Each topic can be divided into several partitions, and each partition can be configured with multiple replicas;

(2) A replica is essentially a commit log to which messages can only be appended;

(3) All replicas of the same partition hold the same sequence of messages;

(4) The replicas of a partition are written to different brokers, so the partition's data remains available when a broker goes down.


[Figure: partition replicas spread across brokers]

Replica roles

With multiple replicas of the same partition, how is replica consistency guaranteed?

The most common solution is the leader-based replication mechanism.


[Figure: leader-based replication]

1. Replicas fall into two categories:

(1) the leader replica;

(2) follower replicas.

2. Kafka's replication mechanism differs from that of many other distributed systems:

(1) In Kafka, follower replicas do not serve client requests; all read and write requests are handled by the leader replica;

(2) The only task of a follower replica is to pull messages asynchronously from the leader and write them to its own log, thereby staying in sync with the leader.

3. When the broker hosting the leader goes down:

(1) Kafka relies on the watch mechanism provided by ZooKeeper to detect the broker failure and starts a new leader election;

(2) After the old leader's broker restarts, its replica can only rejoin the cluster as a follower.

Why follower replicas do not serve reads

1. It makes "read-your-writes" easy to implement.

(1) After a producer successfully writes a message with the producer API, a consumer should be able to read that message immediately with the consumer API.

(2) If followers were allowed to serve reads, then, because follower replication is asynchronous, a follower might not yet have fetched the latest messages from the leader, and a just-written message could appear unreadable.

2. It makes monotonic reads (Monotonic Reads) easy to implement.

(1) What are monotonic reads? From a message consumer's point of view, a message never appears and then disappears again.

(2) If followers were allowed to serve reads, different followers might have pulled different amounts of data from the leader; successive requests hitting different followers could then see a message on one read but not on the next. With all reads handled by the leader, Kafka easily provides monotonic read consistency.

In-sync replicas (ISR) and out-of-sync replicas (OSR)

Because follower replicas pull from the leader asynchronously, we need a criterion for deciding whether a follower is in sync with the leader.

Kafka introduces the In-Sync Replicas set, or ISR, which is maintained in ZooKeeper. A replica that is in the ISR is considered in sync with the leader; the remaining replicas are out-of-sync replicas (OSR).

Criteria for in-sync replicas

[Figure: ISR criteria (replica.lag.time.max.ms)]

(1) The replica.lag.time.max.ms parameter sets the maximum time a follower replica may lag behind the leader; the default is 10 seconds.

(2) If a follower has not continuously lagged behind the leader for longer than replica.lag.time.max.ms, it is judged to be in sync with the leader; otherwise it is considered out of sync and is removed from the ISR (this happens when the follower pulls more slowly than the leader writes for longer than the configured threshold).

(3) The ISR is a dynamically adjusted set, not a static one: once a follower catches up again, it is added back to the ISR (a small inspection sketch follows).
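A hedged sketch of inspecting each partition's leader and ISR with the Java AdminClient (the same information that kafka-topics.sh --describe prints); the broker address and topic name are assumptions.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class IsrDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singletonList("demo-topic"))
                    .all().get().get("demo-topic");
            desc.partitions().forEach(p ->
                    // Followers missing from isr() have fallen behind for longer than replica.lag.time.max.ms.
                    System.out.printf("partition=%d leader=%s replicas=%s isr=%s%n",
                            p.partition(), p.leader(), p.replicas(), p.isr()));
        }
    }
}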

HW

The high watermark (HW, short for High Watermark) is the offset of a particular message: consumers can only consume messages before this offset.

LEO

LEO, short for Log End Offset, is the offset of the next message to be written to the current log file.


[Figure: HW and LEO in a partition log]

Leader election

Some replicas are down

(1) When the broker hosting the leader replica goes down, a new leader is elected from among the follower replicas;

(2) When the failed broker recovers, it resumes pulling data from the new leader.

All replicas are down

unclean.leader.election.enable controls whether unclean leader election is allowed.

(1) Unclean election disabled: wait for a replica in the ISR to recover and elect it leader (this may take a long time and reduces availability);

(2) Unclean election enabled: elect the first replica that recovers as the new leader, whether or not it was in the ISR (this can lose data);

(3) Normally it is not recommended to enable it: sacrificing some availability preserves data consistency and avoids message loss.

Why not use majority voting?

If electing a leader required the approval of more than half of the replicas, the algorithm would need more redundant in-sync replicas: for example, tolerating a single machine failure would already require 3 in-sync replicas.

Physical storage

Storage overview

Basic concepts

(1) Kafka uses log files to store the messages sent by producers;

(2) Every message has an offset value representing its position in the partition;

(3) The offset is a logical value, not a physical address; like a primary key in a database table, which uniquely identifies a row, an offset uniquely identifies a message within a Kafka partition;

(4) A Log corresponds one-to-one with a partition, and a Log is a folder, not a single file;

(5) The folder is named topicName-partitionId, and all messages of that partition are stored in the log files under this folder;

(6) Kafka splits a Log into several LogSegments; a LogSegment is a logical concept that corresponds to one log file and one index file in the Log directory on disk;

(7) Log files are named [baseOffset].log, where baseOffset is the offset of the first message in the file;

(8) Kafka appends to logs sequentially;

(9) Each log file has a corresponding index file; the index file uses a sparse index over some of the messages in the log file.

(10) Log file structure diagram


[Figure: log file structure]

(11) Log example


[Figure: partition log directories for tp_demo_01]

A topic named tp_demo_01 was created with 6 partitions; each partition has its own message log directory named after the topic and partition.

(12) LogSegment example


[Figure: log segment files]

File types


[Figure: file types in a partition's log directory]

Log storage

Indexes

To speed up message lookups, Kafka adds an index file for each log file starting from version 0.8. The index file (IndexFile) and the message set file (MessageSet file) together form a LogSegment.

The offset index file records the mapping between message offsets and physical positions in the log file. The timestamp index file is used to find the offset corresponding to a given timestamp.

Format of the entries in the offset index file: each index entry is 8 bytes and has two parts. The first part is the relative offset (4 bytes), i.e. the offset relative to baseOffset (the base offset; the log file is named after it). The second part is the physical position (4 bytes), i.e. the position of the indexed message in the log file. Together these two parts map an offset to a physical address.

Message compression

[Figure: message compression]
Offset index

[Figure: offset index lookup]

Example

Suppose we need to find the message with startOffset 1067. The offset 1067 has to be translated into a physical address. What does the process look like?

(1) Convert the absolute offset into a relative offset: the absolute offset minus baseOffset gives relative offset = 67;

(2) Use the relative offset to look up the index file and obtain the index entry (58, 1632) (a skip-list lookup locates the right index file, and then a binary search finds the largest index entry whose relative offset is not greater than 67);

(3) Scan the log file sequentially starting from position 1632 until the message with absolute offset 1067 is found;

(4) What is actually read is the message at offset 1070: because of message compression, the message with offset 1067 is wrapped inside the compressed message whose offset is 1070 (a runnable sketch of steps (1)-(3) follows).
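A small illustrative sketch of steps (1)-(3), with the sparse index held as in-memory arrays; real Kafka binary-searches the memory-mapped .index file, so this only demonstrates the algorithm, reusing the numbers from the example above (the other index entries and the base offset of 1000 are assumptions).

public class SparseIndexLookupDemo {
    // Sparse index entries for one log segment: RELATIVE_OFFSET[i] -> POSITION[i].
    static final int[] RELATIVE_OFFSET = {0, 14, 30, 58, 75};
    static final int[] POSITION        = {0, 460, 890, 1632, 2100};
    static final long BASE_OFFSET = 1000L; // the segment would be named 00000000000000001000.log

    /** Returns the byte position from which to scan for the target absolute offset. */
    static int startPosition(long targetOffset) {
        int relative = (int) (targetOffset - BASE_OFFSET);        // step 1: absolute -> relative offset
        int lo = 0, hi = RELATIVE_OFFSET.length - 1, best = 0;
        while (lo <= hi) {                                        // step 2: largest entry <= relative offset
            int mid = (lo + hi) / 2;
            if (RELATIVE_OFFSET[mid] <= relative) { best = mid; lo = mid + 1; }
            else { hi = mid - 1; }
        }
        return POSITION[best];                                    // step 3: scan forward from this position
    }

    public static void main(String[] args) {
        // For offset 1067: relative offset 67, nearest index entry (58, 1632) -> scan from byte 1632.
        System.out.println(startPosition(1067L)); // prints 1632
    }
}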

6. References

Apache Kafka Source Code Analysis

Geek Time: Kafka Core Technology and Practice

Lagou Java: Kafka

7. Summary

This article covered Kafka in detail from several angles: use cases, features, basic concepts and core mechanisms. Kafka's stability guarantees and the concrete source-code implementation will be covered in a follow-up advanced article.

There is much I don't know and much I don't know deeply enough; criticism and corrections are welcome!

Original article. Please credit the source when reposting: https://www.cnblogs.com/boycelee/p/14728638.html
