Notes on MySQL 45 lectures (1-7)

wbo112 2021-08-08 21:13:46

Notes on 《MySQL实战45讲》 (MySQL in Action: 45 Lectures)

Section 1: Basic architecture: How does a SQL query statement execute?

  1. MySQL Basic architecture diagram

Broadly speaking, MySQL can be divided into the Server layer and the storage engine layer.

The Server layer includes the connector, query cache, analyzer (parser), optimizer, executor, and so on. It covers most of MySQL's core service functionality, including all built-in functions (such as date, time, math, and encryption functions). All cross-storage-engine features are implemented in this layer, such as stored procedures, triggers, and views.

The storage engine layer is responsible for storing and retrieving data. Its architecture is plugin-based, supporting multiple storage engines such as InnoDB, MyISAM, and Memory.

The most commonly used storage engine today is InnoDB, which became the default storage engine starting with MySQL 5.5.5.

  • The connector

    The connection command is written as follows; after entering the command, type the password at the interactive prompt. You can also put the password directly after -p on the command line.

    mysql -h$ip -P$port -u$user -p

    Use the show processlist command to check the current connection status.

    If a client stays idle for too long, the connector automatically disconnects it. This duration is controlled by the wait_timeout parameter, whose default value is 8 hours.

    Establishing a connection is relatively expensive, so try to minimize connection setup and use long-lived connections. However, the memory MySQL uses during execution is managed inside the connection object and is released only when the connection is closed, so long connections can easily cause memory to accumulate.

    How to solve the memory problem caused by long connections :

    • Periodically disconnect long connections.
    • For MySQL 5.7 or newer, after a large operation, execute mysql_reset_connection to reinitialize the connection resources. This does not require reconnecting or re-authenticating, but it restores the connection to the state it was in when it was created.
  • The query cache

    After MySQL receives a query request, it first looks in the query cache; on a miss it proceeds to the later execution stages. After execution, the result is stored in the query cache.

    The cached results for a table are invalidated by any update to that table, so for databases under heavy update pressure the query cache hit rate is very low. The query cache only pays off for static tables: tables that are rarely updated and queried often. MySQL also provides an “on demand” mode: set the query_cache_type parameter to DEMAND so that SQL statements do not use the query cache by default; for statements where you do want the cache, specify SQL_CACHE explicitly: select SQL_CACHE * from T where ID=10;

    MySQL 8.0 removed the query cache feature entirely; later versions have no query cache.

  • Analyzer

    It first performs “lexical analysis”, then “syntax analysis”, to determine whether the input SQL satisfies MySQL's grammar.

  • Optimizer

    When a table has multiple indexes, the optimizer decides which index to use; when a statement joins multiple tables, it decides the join order of the tables.

  • Executor

    First, the executor checks whether you have permission to query the table being operated on. If so, it opens the table and continues. When opening the table, the executor uses the interfaces provided by the table's storage engine, as determined by the engine defined for the table.

    select * from T where ID=10;

    For example, in our table T the ID field has no index, so the executor proceeds as follows:

    1. Call the InnoDB engine interface to fetch the first row of the table and check whether its ID value is 10. If not, skip it; if so, save the row in the result set.
    2. Call the engine interface for “the next row” and repeat the same check, until the last row of the table is reached.
    3. The executor returns the set of all matching rows collected during the traversal to the client as the result set.
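As a rough sketch (not MySQL's actual code; the `rows` list and function name are invented for illustration), the executor's full-scan loop above can be modeled in Python:

```python
# Toy model of the executor's full table scan over an unindexed column.
# Each row is a dict; iterating stands in for the engine's
# "first row" / "next row" interface.
def execute_full_scan(rows, target_id):
    result_set = []
    rows_scanned = 0
    for row in rows:                 # engine hands rows back one by one
        rows_scanned += 1
        if row["ID"] == target_id:   # executor evaluates the WHERE condition
            result_set.append(row)   # matching row goes into the result set
    return result_set, rows_scanned

table_T = [{"ID": i} for i in (3, 7, 10, 15)]
matches, scanned = execute_full_scan(table_T, 10)
# Every row is scanned because ID has no index here: scanned == 4.
```

This is why the number of scanned rows equals the table size when no usable index exists.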

Section 2: The log system: How does a SQL update statement execute?

  1. Important log module: redo log

    The redo log is specific to the InnoDB engine.

    MySQL uses the WAL technique. WAL stands for Write-Ahead Logging: write the log first, write the disk later.

    When a record needs to be updated, the InnoDB engine first writes the record to the redo log and updates memory; at that point the update is considered done. Later, at an appropriate time, InnoDB applies the update to the data file on disk, usually when the system is relatively idle.

    InnoDB's redo log is of fixed size. For example, it can be configured as a group of 4 files of 1GB each, so this “chalkboard” can record 4GB of operations in total. It is written from the beginning; when the end is reached, writing wraps around to the beginning, as shown in the figure below.

    write pos is the position of the current record; it moves forward as writes occur, and after reaching the end of file 3 it wraps back to the beginning of file 0.
    checkpoint is the current position up to which records have been erased; it also moves forward and wraps around. Before a record is erased, the corresponding update must be applied to the data file.
    The space between write pos and checkpoint is the empty part of the “chalkboard”, available for recording new operations. If write pos catches up with checkpoint, no new updates can be performed; the engine must stop and erase some records to push the checkpoint forward.
    With the redo log, InnoDB guarantees that even if the database restarts abnormally, no previously committed records are lost. This capability is called crash-safe.
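The write pos / checkpoint interplay above is ring-buffer arithmetic; here is a small model in Python (sizes and names are made up for illustration; real redo files are e.g. 1GB each):

```python
# Toy model of the redo log ring. write_pos advances on writes;
# checkpoint advances when records are flushed to the data files.
# If free space runs out, the write must wait for a flush.
class RedoRing:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.write_pos = 0     # absolute counters; physical position
        self.checkpoint = 0    # would be counter % capacity

    def free_space(self):
        return self.capacity - (self.write_pos - self.checkpoint)

    def write(self, n=1):
        if self.free_space() < n:
            return False       # write pos caught checkpoint: must flush first
        self.write_pos += n
        return True

    def flush(self, n=1):
        """Erase records: apply them to data files, push checkpoint on."""
        n = min(n, self.write_pos - self.checkpoint)
        self.checkpoint += n

ring = RedoRing()
assert ring.write(16)        # fill the ring completely
assert not ring.write(1)     # blocked: no new updates can be performed
ring.flush(4)                # erase some records, advance checkpoint
assert ring.write(1)         # writing continues, wrapping around
```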

  • Important log module: binlog

    The Server-layer log is called the binlog (archive log).

    There are three differences between the two logs:

    1. The redo log is specific to the InnoDB engine; the binlog belongs to MySQL's Server layer and is available to all engines.
    2. The redo log is a physical log; it records “what change was made on which data page”. The binlog is a logical log; it records the original logic of the statement, such as “add 1 to the c field of the row with ID=2”.
    3. The redo log is written cyclically, so its space can be used up; the binlog is append-only. “Append-only” means that when a binlog file reaches a certain size, writing switches to the next file without overwriting earlier logs.
  • The update execution flow

    Create the table: create table T(ID int primary key, c int);

    The update: update T set c=c+1 where ID=2;

    The internal flow of the executor and the InnoDB engine when executing this simple update statement:

    1. The executor asks the engine to find the row with ID=2. ID is the primary key, so the engine uses a tree search to locate the row. If the data page containing the ID=2 row is already in memory, it is returned to the executor directly; otherwise it must first be read from disk into memory and then returned.
    2. The executor takes the row data returned by the engine and adds 1 to the value (say it was N, now it is N+1) to obtain the new row, then calls the engine interface to write the new data.
    3. The engine updates the new row in memory and records the update in the redo log; at this point the redo log entry is in the prepare state. It then tells the executor that it is done and that the transaction can be committed at any time.
    4. The executor generates the binlog for this operation and writes the binlog to disk.
    5. The executor calls the engine's commit-transaction interface; the engine changes the just-written redo log entry to the commit state, and the update is complete.

    In the update statement's execution flowchart, the light boxes indicate steps executed inside InnoDB, and the dark boxes indicate steps executed in the executor.

The redo log write is split into two steps, prepare and commit; this is the “two-phase commit”.

  • Two-phase commit
    • Why do the logs need “two-phase commit”?

      Because the redo log and binlog are two independent pieces of logic, without two-phase commit you would have to either write the redo log first and then the binlog, or the reverse. Let's see what goes wrong with each ordering.
      Still using the earlier update statement as an example. Suppose the row with ID=2 currently has a c value of 0, and suppose that while executing the update statement a crash occurs after the first log is written but before the second log is written. What happens?

      1. Write the redo log first, then the binlog. Suppose MySQL crashes and restarts after the redo log is written but before the binlog is finished. As discussed earlier, once the redo log is written the data can be recovered even after a crash, so after recovery the row's c value is 1.
        But because the crash happened before the binlog was finished, the binlog contains no record of this statement. So when the logs are later backed up, the saved binlog is missing this statement. If you then use this binlog to restore a temporary instance, the loss of the binlog entry means the update is absent: the restored row has c=0, which differs from the value in the original database.
      2. Write the binlog first, then the redo log. If a crash occurs after the binlog is written, the redo log has not been written yet, so crash recovery invalidates this transaction and the row's c value is 0. But the binlog has already recorded “change c from 0 to 1”. Restoring from this binlog later produces one extra transaction: the restored row has c=1, which differs from the value in the original database.
        You can see that without “two-phase commit”, the state of the database can become inconsistent with the state of a database recovered from its logs.

      In short, the redo log and binlog together represent the commit state of a transaction, and two-phase commit keeps these two states logically consistent.
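The crash-recovery decision implied by two-phase commit can be sketched as a small rule table (a simplified model, not MySQL's actual recovery code):

```python
# Toy model of crash recovery under two-phase commit.
# A transaction's redo log entry is absent, in "prepare", or in "commit";
# the binlog record for it is either present (complete) or absent.
def recover(redo_state, binlog_present):
    """Decide whether to commit or roll back a transaction after a crash."""
    if redo_state == "commit":
        return "commit"        # engine already marked it committed
    if redo_state == "prepare":
        # prepare + complete binlog -> commit (binlog already shipped);
        # prepare without binlog   -> rollback (backup never saw it)
        return "commit" if binlog_present else "rollback"
    return "rollback"          # no redo record: transaction never happened

# Crash between the two log writes: the change is rolled back, so the
# database stays consistent with a restore from the (binlog-less) backup.
```

Either way the live database and a binlog-based restore agree, which is exactly the consistency the two-phase protocol buys.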

  • Summary

    The physical log (redo log) and the logical log (binlog).

    The redo log guarantees the crash-safe capability. When the innodb_flush_log_at_trx_commit parameter is set to 1, the redo log of every transaction is persisted directly to disk. It is recommended to set this parameter to 1 to ensure that no data is lost after an abnormal MySQL restart.

    When the sync_binlog parameter is set to 1, the binlog of every transaction is persisted to disk. It is also recommended to set this parameter to 1 to ensure that the binlog is not lost after an abnormal MySQL restart.

Section 3: Transaction isolation: Why can't I see the change you made?

  1. Isolation and isolation level

    The four properties of transactions: ACID (Atomicity, Consistency, Isolation, Durability).

    When multiple transactions execute in the database at the same time, problems such as dirty reads, non-repeatable reads, and phantom reads may arise. To solve these problems, the concept of “isolation levels” was introduced.

    The stricter the isolation, the lower the efficiency, so in many cases we need to find a balance between the two. The SQL standard transaction isolation levels are: read uncommitted, read committed, repeatable read, and serializable.

    Read uncommitted means that changes made by a transaction can be seen by other transactions before the transaction commits.

    Read committed means that changes made by a transaction are visible to other transactions only after it commits.

    Repeatable read means that the data a transaction sees during its execution is always consistent with what it saw when the transaction started. Of course, under the repeatable read isolation level, uncommitted changes are also invisible to other transactions.

    Serializable means, as the name suggests, that for the same row a “write” acquires a write lock and a “read” acquires a read lock. When a read-write lock conflict occurs, the later transaction must wait for the earlier one to finish before it can proceed.

    • Examples of these isolation levels

      Suppose the table T has only one column, and the value of one of its rows is 1. Below is the behavior of two transactions executed in chronological order.

    create table T(c int) engine=InnoDB;
    insert into T(c) values(1);

    If the isolation level is “read uncommitted”, then V1 is 2. Although transaction B has not yet committed, its result is already visible to A. Therefore V2 and V3 are also 2.

    If the isolation level is “read committed”, then V1 is 1 and V2 is 2. Transaction B's update can be seen by A only after B commits. Therefore V3 is also 2.

    If the isolation level is “repeatable read”, then V1 and V2 are 1 and V3 is 2. The reason V2 is still 1 follows the rule: the data a transaction sees during execution must be consistent throughout.

    If the isolation level is “serializable”, then when transaction B executes “change 1 to 2” it is blocked by a lock. Only after transaction A commits can transaction B continue. So from A's perspective, V1 and V2 are 1 and V3 is 2.

    In implementation terms, the database creates a view, and accesses take the logical result of that view as authoritative. Under the “repeatable read” isolation level, the view is created when the transaction starts and is used for the transaction's entire lifetime. Under “read committed”, the view is created at the start of each SQL statement's execution. Note that “read uncommitted” directly returns the latest value on the record, with no concept of a view, while “serializable” uses locking directly to prevent parallel access.

    Query the current MySQL isolation level:

    show variables like 'transaction_isolation';
  2. Implementation of transaction isolation

    In MySQL, every time a record is updated, a rollback operation is also recorded. The value of an earlier state can be obtained from the latest value on the record by applying the rollback operations.

    Suppose a value was changed from 1 to 2, 3, and 4 in order; the rollback log will contain records like the following.

    The current value is 4, but when querying this record, transactions started at different times have different read-views. As shown in the figure, in views A, B, and C the value of this record is 1, 2, and 4 respectively. The same record can have multiple versions in the system; this is the database's multi-version concurrency control (MVCC). For read-view A, to obtain the value 1, the current value must be run through all the rollback operations in the figure in turn.

    At the same time, you will find that even if another transaction is currently changing 4 to 5, that transaction does not conflict with the transactions corresponding to read-views A, B, and C.
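The figure's 1 → 2 → 3 → 4 example can be sketched as a tiny rollback chain in Python (a toy model of the undo mechanism, not InnoDB's actual undo format):

```python
# Toy model of MVCC version reconstruction via the rollback (undo) chain.
# The record stores only the latest value; each undo entry, newest first,
# restores the value one change earlier.
current_value = 4
undo_chain = [3, 2, 1]   # undo "3->4", then "2->3", then "1->2"

def value_for_readview(rollback_steps):
    """Apply the first `rollback_steps` undo records to the current value."""
    if rollback_steps == 0:
        return current_value
    return undo_chain[rollback_steps - 1]

# read-view C sees the latest value; read-view A rolls all the way back:
assert value_for_readview(0) == 4   # view C
assert value_for_readview(2) == 2   # view B
assert value_for_readview(3) == 1   # view A
```

This also shows why a writer changing 4 to 5 does not conflict with old readers: each read-view just walks further down its own copy of the chain.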

  • When are rollback logs deleted?

    The system decides: when no transaction needs these rollback logs any more, they are deleted. When is that? When there is no read-view in the system earlier than the rollback log.

  • Why is it recommended not to use long transactions?

    A long transaction means the system holds a very old transaction view. Because such a transaction may access any data in the database at any time, all rollback records it might need must be kept until it commits, which can consume a great deal of storage space.

    Besides the impact on rollback segments, long transactions also occupy lock resources and can drag down the entire instance.

  3. How to start a transaction

    There are several ways to start a transaction in MySQL:

    • Explicitly start a transaction with begin or start transaction. The matching commit statement is commit, and the rollback statement is rollback.
    • set autocommit=0 turns off auto-commit for this thread. This means that if you execute even a single select statement, a transaction is started and is not committed automatically. The transaction persists until you explicitly execute commit or rollback, or disconnect.

    Some client connection frameworks execute a set autocommit=0 command by default after a successful connection. This causes all subsequent queries to run inside transactions; with a long connection, this leads to unexpected long transactions.

    It is recommended to use set autocommit=1 and start transactions with explicit statements.

    With autocommit=1, a transaction explicitly started with begin is committed when you execute commit. If you execute commit work and chain instead, the transaction is committed and the next transaction is started automatically, which also saves the cost of executing another begin statement.

    Query long transactions in the innodb_trx table of the information_schema library:

    # Find transactions that have run for more than 60s
    select * from information_schema.innodb_trx where TIME_TO_SEC(timediff(now(),trx_started))>60;
  4. How to avoid long transactions
    • From the application development side :

      • Confirm whether set autocommit=0 is being used, by observing the general_log

        The general log records every SQL statement that arrives at the MySQL server.

        It is generally left off, because the log grows very large. But in some cases it may be temporarily turned on for a while for troubleshooting.
        There are 3 related parameters: general_log, log_output, general_log_file.

        show variables like 'general_log'; -- Check whether the log is on

        set global general_log=on; -- Turn on the log function

        show variables like 'general_log_file'; -- Look at where the log files are saved

        set global general_log_file='/tmp/general.log'; -- Set the storage location of the log file

        show variables like 'log_output'; -- Check the log output type (table or file)

        set global log_output='table'; -- Set the output type to table

        set global log_output='file'; -- Set the output type to file

      • Confirm whether there are unnecessary read-only transactions; read-only transactions can be removed

      • Use the SET MAX_EXECUTION_TIME command to control the maximum execution time of each statement, avoiding a single statement that unexpectedly runs too long.

    • From the database side :

      • Monitor the information_schema.innodb_trx table, set a long-transaction threshold, and alert and/or kill when it is exceeded;
      • Use Percona's pt-kill tool;
      • In the test phase, require the full general_log to be output, and analyze the logged behavior to discover problems ahead of time;
      • For MySQL 5.6 or newer, set innodb_undo_tablespaces (the number of undo tablespaces to create; it cannot be changed after mysql_install_db initialization, and modifying it afterwards will prevent MySQL from starting) to 2 (or a larger value). If a large transaction leaves an oversized rollback segment, this setting makes cleanup more convenient.

Section 4: Indexes in plain language (part 1)

  1. Three common index data structures: hash tables, ordered arrays, and search trees.

  2. Differences among the three models:

    • The hash table structure is suitable only for equality-query scenarios, such as Memcached and other NoSQL engines.
    • Ordered arrays: when updating data, inserting a record in the middle requires moving all subsequent records, which is too costly; but performance in both equality and range queries is excellent. Suitable only for static storage engines.
    • Search trees are more balanced overall.
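The equality-vs-range trade-off can be seen in a few lines of Python (a toy contrast, with a dict standing in for a hash index and `bisect` over a sorted list standing in for an ordered array):

```python
# A dict (hash) answers equality lookups in O(1) but has no key order;
# a sorted list plus binary search answers both equality and range
# queries efficiently, but mid-array inserts cost O(n).
import bisect

keys = [1, 2, 3, 5, 6]                      # ordered array of index keys
hash_index = {k: f"row{k}" for k in keys}   # hash index over the same keys

def range_query(sorted_keys, lo, hi):
    """Return all keys in [lo, hi]: two binary searches plus a slice."""
    left = bisect.bisect_left(sorted_keys, lo)
    right = bisect.bisect_right(sorted_keys, hi)
    return sorted_keys[left:right]

assert hash_index[5] == "row5"              # equality: hash is ideal
assert range_query(keys, 3, 5) == [3, 5]    # range: needs key ordering
```

A hash index has no cheap way to answer the range query without scanning every key, which is why it fits only equality workloads.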
  3. InnoDB The index model of

    In MySQL, indexes are implemented at the storage engine level, so there is no uniform index standard.

    InnoDB uses the B+ tree index model; every index in InnoDB corresponds to a B+ tree.

    An example:

    # A table whose primary key column is id, with a field k and an index on k.
    create table T(
    id int primary key,
    k int not null,
    name varchar(16),
    index (k))engine=InnoDB;

    In the table, rows R1~R5 have (ID,k) values of (100,1), (200,2), (300,3), (500,5), and (600,6) respectively.

    It is easy to see from the figure that, based on the content of the leaf nodes, index types fall into primary key indexes and non-primary-key indexes.

    The leaf nodes of a primary key index store the entire row of data. In InnoDB, the primary key index is also called the clustered index.

    The leaf nodes of a non-primary-key index store the value of the primary key. In InnoDB, non-primary-key indexes are also called secondary indexes.

    From the example above:

    • If the statement is select * from T where ID=500, i.e., a primary-key query, only the ID B+ tree needs to be searched;
    • If the statement is select * from T where k=5, i.e., an ordinary-index query, the k index tree is searched first to obtain the ID value 500, and then the ID index tree is searched once more. This process is called going back to the table.

    A query via a non-primary-key index needs to scan one extra index tree.

  4. Index maintenance

    To maintain index order, a B+ tree must perform the necessary maintenance when new values are inserted. Using the figure above as an example, if a new row with ID value 700 is inserted, only a new record needs to be appended after R5. If the newly inserted ID value is 400, it is more troublesome: the subsequent data must logically be moved to make room.

    Even worse, if the data page containing R5 is already full, then according to the B+ tree algorithm a new data page must be allocated and part of the data moved over. This process is called page splitting, and in this case performance naturally suffers.

    Besides performance, page splitting also affects data page utilization: data originally on one page is now split across two, reducing overall space utilization by about 50%.

    When the utilization of two adjacent pages becomes very low because of deleted data, the data pages are merged. The merge process can be seen as the reverse of the split process.

    An auto-increment primary key is a primary key defined on an auto-increment column; in a create-table statement this is typically: NOT NULL PRIMARY KEY AUTO_INCREMENT.

    When inserting a new record you can omit the ID value; the system takes the current maximum ID plus 1 as the new record's ID value. This insertion pattern of the auto-increment primary key matches the incremental-insert scenario mentioned earlier: every new record is an append operation, never moving other records and never triggering leaf-node splits.

    The smaller the primary key length, the smaller the leaf nodes of ordinary indexes, and the less space ordinary indexes take. So from both performance and storage-space perspectives, an auto-increment primary key is often the more reasonable choice.

    Deleted index entries, page splits, and other causes can leave data pages with holes. Rebuilding an index creates a new index and inserts the data in order, maximizing page utilization, so the index is more compact and saves space.
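The split behavior described above can be sketched with a toy leaf page in Python (the page capacity of 4 keys is made up for illustration; real InnoDB pages are 16KB by default):

```python
# Toy model of a B+ tree leaf-page split: inserting into a full page
# allocates a new page and moves half the keys over, leaving each page
# about 50% full.
import bisect

PAGE_CAPACITY = 4

def insert_with_split(page, key):
    """Insert `key` into a sorted page; split if over capacity.
    Returns the resulting list of pages."""
    bisect.insort(page, key)
    if len(page) <= PAGE_CAPACITY:
        return [page]                     # plain insert, no split
    mid = len(page) // 2
    return [page[:mid], page[mid:]]       # page split: half moves out

# A mid-page insert into a full page forces a split:
pages = insert_with_split([100, 200, 300, 500], 400)
# -> [[100, 200], [300, 400, 500]]: two pages, each roughly half full
```

With an auto-increment key, new keys always land at the right edge of the rightmost page, so pages fill completely before a new one is needed.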

Section 5: Indexes in plain language (part 2)

  1. Case study

    Executing select * from T where k between 3 and 5: how many tree search operations are performed, and how many rows are scanned?

    create table T (
    ID int primary key,
    k int NOT NULL DEFAULT 0,
    s varchar(16) NOT NULL DEFAULT '',
    index k(k));
    insert into T values(100,1, 'aa'),(200,2,'bb'),(300,3,'cc'),(500,5,'ee'),(600,6,'ff'),(700,7,'gg');

    The execution process of this SQL query statement:

    1. On the k index tree, find the record with k=3, obtaining ID=300;
    2. Go to the ID index tree and find the row R3 corresponding to ID=300;
    3. Take the next value on the k index tree, k=5, obtaining ID=500;
    4. Go back to the ID index tree and find the row R4 corresponding to ID=500;
    5. Take the next value on the k index tree, k=6, which does not satisfy the condition; the loop ends.

    The process of going back to search the primary key index tree is called going back to the table. This query reads 3 records on the k index tree (steps 1, 3 and 5) and goes back to the table twice (steps 2 and 4).
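The five steps above can be walked through in Python (a toy model of the two trees; dicts stand in for the B+ trees, and the counters are the point of the exercise):

```python
# Toy walk-through of `select * from T where k between 3 and 5`:
# count records read on the k index and trips back to the primary index.
k_index = {1: 100, 2: 200, 3: 300, 5: 500, 6: 600, 7: 700}   # k -> ID
primary = {100: 'aa', 200: 'bb', 300: 'cc', 500: 'ee',
           600: 'ff', 700: 'gg'}                             # ID -> row data

def range_scan(lo, hi):
    index_reads, back_to_table, rows = 0, 0, []
    for k in sorted(k_index):        # traverse the k index in key order
        if k < lo:
            continue
        index_reads += 1             # one record read on the k index tree
        if k > hi:
            break                    # e.g. k=6: read, out of range, stop
        back_to_table += 1           # fetch the full row from the primary tree
        rows.append(primary[k_index[k]])
    return index_reads, back_to_table, rows

reads, trips, rows = range_scan(3, 5)
# reads == 3 (k=3, 5, 6), trips == 2 (ID=300 and ID=500)
```

Note the asymmetry: the out-of-range key k=6 still costs an index read, but never a trip back to the table.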

In this example, the data needed by the query result is available only on the primary key index, so the query has to go back to the table. So, is it possible to optimize the index to avoid the back-to-table process?

  2. Covering index

    If the executed statement is select ID from T where k between 3 and 5, only the ID value is needed, and the ID value is already present on the k index tree, so the query result can be provided directly without going back to the table. In other words, in this query the index k already “covers” our query needs; we call this a covering index.

    Because a covering index can reduce the number of tree searches and significantly improve query performance, using covering indexes is a common performance optimization technique.

    Note that inside the engine, using the covering index on k actually reads three records, R3~R5 (the corresponding entries on index k). But for MySQL's Server layer, it only asked the engine for, and received, two records, so MySQL considers the number of scanned rows to be 2.

  3. The leftmost-prefix principle

    The B+ tree index structure can use the “leftmost prefix” of an index to locate records.

    CREATE TABLE `tuser` (
    `id` int(11) NOT NULL,
    `id_card` varchar(32) DEFAULT NULL,
    `name` varchar(32) DEFAULT NULL,
    `age` int(11) DEFAULT NULL,
    `ismale` tinyint(1) DEFAULT NULL,
    PRIMARY KEY (`id`),
    KEY `id_card` (`id_card`),
    KEY `name_age` (`name`,`age`)
    ) ENGINE=InnoDB;

    Index entries are sorted according to the order of the fields in the index definition.

    Not only the full definition of the index: as long as the leftmost prefix is satisfied, the index can be used to speed up retrieval. The leftmost prefix can be the leftmost N fields of a composite index, or the leftmost M characters of a string index.

    When building a composite index, how should the order of the fields in the index be arranged?

    The first principle is: if adjusting the order allows you to maintain one fewer index, that order is often the one to prioritize.

    Since the name field is larger than the age field, create a (name,age) composite index plus a single-field (age) index.
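Why a leftmost prefix works follows from the sorted order of index entries; here is a small Python model of the (name,age) index (the names and ages are invented sample data):

```python
# Toy model of the leftmost-prefix rule on a (name, age) composite index.
# Entries are sorted as tuples, so a prefix on `name` alone (even a
# string prefix of name) locates one contiguous run of entries.
import bisect

index = sorted([("Zhang San", 10), ("Zhang San", 12), ("Li Si", 30),
                ("Zhang Fei", 25), ("Wang Wu", 8)])

def prefix_scan(name_prefix):
    """Return entries whose name starts with name_prefix."""
    lo = bisect.bisect_left(index, (name_prefix,))   # locate the run's start
    out = []
    for entry in index[lo:]:
        if not entry[0].startswith(name_prefix):
            break                # sorted order: matches are contiguous
        out.append(entry)
    return out

# All names beginning with "Zhang" are adjacent in the index:
zhangs = prefix_scan("Zhang")
```

Because the run of matches is contiguous, one binary search plus a short sequential scan serves the query; a condition on `age` alone, by contrast, has no such contiguous run in this index.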

  4. Index condition pushdown

    # Find all boys whose name begins with 张 (Zhang) and whose age is 10
    select * from tuser where name like '张%' and age=10 and ismale=1;

    Execution steps :

    • This statement searches the index tree using only “张”, finding the first record that satisfies the condition, ID3.

    • The next step is to check whether the other conditions are satisfied.

      • Before MySQL 5.6, the only option was to go back to the table one record at a time starting from ID3: find the data row on the primary key index, then compare the field values.
      • MySQL 5.6 introduced index condition pushdown: during the index traversal, conditions on fields contained in the index are evaluated first, directly filtering out records that do not qualify and reducing the number of trips back to the table.

      Execution flowchart (each dashed arrow represents one trip back to the table)

      Without pushdown, the age value stored in the (name,age) index is ignored: InnoDB does not look at age, and simply takes the records satisfying “name begins with 张” one by one, in order, and goes back to the table for each. So the table must be visited 4 times.

      With pushdown, InnoDB checks inside the (name,age) index whether age equals 10, and directly skips records where it does not. In our example, only the two records ID4 and ID5 need to fetch data from the table, so the table is visited only 2 times.
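The 4-trips-vs-2-trips contrast can be reproduced with a toy model in Python (the (name, age, id) entries are invented sample data matching the example's counts):

```python
# Toy comparison of back-to-table counts with and without index condition
# pushdown (ICP) for: name like '张%' and age=10.
# Index entries are (name, age, primary_key_id) tuples.
index_entries = [("张三", 10, 4), ("张四", 19, 1),
                 ("张五", 10, 5), ("张六", 30, 2)]

def scan(use_icp):
    back_to_table = 0
    result_ids = []
    for name, age, rid in index_entries:
        if not name.startswith("张"):
            continue
        if use_icp and age != 10:
            continue               # filtered inside the index: no table visit
        back_to_table += 1         # fetch the full row from the primary index
        if age == 10:              # without ICP, age is checked after the visit
            result_ids.append(rid)
    return back_to_table, result_ids

assert scan(use_icp=False) == (4, [4, 5])   # 4 trips back to the table
assert scan(use_icp=True) == (2, [4, 5])    # only ID4 and ID5 are visited
```

Both versions return the same rows; pushdown only changes where the age filter runs, and therefore how many primary-index lookups are paid.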

Section 6: Global locks and table locks: Why is adding a field to a table so troublesome?

By the scope of locking, the locks inside MySQL can be roughly divided into global locks, table-level locks, and row locks.

  1. Global lock

    A global lock locks the entire database instance. MySQL provides a way to take a global read lock with the command Flush tables with read lock (FTWRL). When you need to make the whole instance read-only, you can use this command; afterwards, the following statements from other threads are blocked: data update statements (insert, delete, update), data definition statements (including creating tables and modifying table structures), and commit statements of update transactions.

    A typical use case for the global lock is taking a logical backup of the whole database.

    The logical backup tool is mysqldump. When mysqldump uses the parameter –single-transaction, it starts a transaction before dumping the data, ensuring a consistent view. And thanks to MVCC support, the data can still be updated normally during the process.

    The single-transaction method applies only to databases where all tables use a transactional engine. If some tables use a non-transactional engine, the backup can only be taken via FTWRL. This is one reason DBAs often ask business developers to use InnoDB instead of MyISAM.

    To make the whole instance read-only, why not use set global readonly=true instead?

    • First, in some systems the readonly value is used for other logic, for example to determine whether an instance is the primary or the standby. So modifying this global variable has a wider impact, and is not recommended.
    • Second, the exception-handling mechanisms differ. If the client disconnects abnormally after executing FTWRL, MySQL automatically releases the global lock and the whole instance can be updated normally again. In contrast, after setting the whole instance to readonly, if the client hits an exception the database stays in the readonly state, leaving the whole instance unwritable for a long time, which is high risk.
  2. Table lock

    Table locks are generally used when the storage engine does not support row locks.

    There are two kinds of table-level locks in MySQL: one is the table lock, the other is the metadata lock (MDL).

    • The syntax of table locks is lock tables … read/write. Like FTWRL, the lock can be released actively with unlock tables, and is also released automatically when the client disconnects. Note that the lock tables syntax not only restricts the reads and writes of other threads, it also limits what this thread itself may operate on next.

      For instance, if thread A executes the statement lock tables t1 read, t2 write; then statements from any other thread that write t1 or read/write t2 will be blocked. Meanwhile, until thread A executes unlock tables, it may only read t1 and read/write t2. Writing t1 is not allowed, and naturally it cannot access any other table.
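      As a minimal sketch of these restrictions (t1, t2, and t3 are hypothetical tables, not from the original text):

    ```sql
    -- Thread A takes a read lock on t1 and a write lock on t2:
    lock tables t1 read, t2 write;

    select * from t1;  -- OK: thread A may read t1
    select * from t2;  -- OK: thread A may read and write t2
    -- insert into t1 values (1);  -- fails: A holds only a READ lock on t1
    -- select * from t3;           -- fails: t3 was not named in lock tables

    -- In other threads: writes to t1 and any access to t2 block here
    -- until thread A releases the locks:
    unlock tables;
    ```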

    • The metadata lock (MDL) does not need to be used explicitly; it is added automatically when a table is accessed. The role of MDL is to ensure the correctness of concurrent reads and writes.

    MDL was introduced in MySQL 5.5. When you add, delete, update, or query the rows of a table, an MDL read lock is taken; when you change the structure of the table, an MDL write lock is taken.

    • Read locks are not mutually exclusive, so multiple threads can add, delete, update, and query a table at the same time.
    • Read-write locks and write-write locks are mutually exclusive, to ensure the safety of operations that change the table structure. Therefore, if two threads try to add a field to the same table at the same time, one of them must wait until the other finishes.
  3. Example

    Session A starts first and takes an MDL read lock on table t. Since session B also only needs an MDL read lock, it can execute normally.

    Session C will then be blocked: session A's MDL read lock has not been released, and session C needs an MDL write lock, so it can only wait.

    If only session C were blocked, it wouldn't matter much. But afterwards, every new request for an MDL read lock on table t is also blocked behind session C. As stated above, all add/delete/update/query operations on a table must first acquire the MDL read lock, so they are all blocked, which means the table is now completely unreadable and unwritable.

    If queries on this table are frequent and the client has a retry mechanism, that is, a new session retries after each timeout, the threads of this database will soon be exhausted.

    The MDL lock in a transaction is acquired when statement execution begins, but it is not released as soon as the statement finishes; it is held until the whole transaction commits.
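    The session A/B/C scenario above can be sketched as follows (assuming a small InnoDB table t; session D is an extra plain query added here for illustration):

    ```sql
    -- session A: takes an MDL read lock on t and keeps it (transaction open)
    begin;
    select * from t limit 1;

    -- session B: also needs only an MDL read lock, so it runs normally
    select * from t limit 1;

    -- session C: the DDL needs an MDL write lock, so it blocks behind A
    alter table t add column f int;

    -- session D: queues behind session C's pending write-lock request, so
    -- even plain reads on t are now blocked until A commits and C finishes
    select * from t limit 1;
    ```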

  4. After-class question:

    Backups are usually performed on the standby database. While you are doing a logical backup with the --single-transaction method, suppose a small table on the primary database receives a DDL, for example adding a column. What will you see on the standby?

    Q1:SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    Q2:START TRANSACTION WITH CONSISTENT SNAPSHOT;
    /* other tables */
    Q3:SAVEPOINT sp;
    /* moment 1 */
    Q4:show create table `t1`;
    /* moment 2 */
    Q5:SELECT * FROM `t1`;
    /* moment 3 */
    Q6:ROLLBACK TO SAVEPOINT sp;
    /* moment 4 */
    /* other tables */

    At the start of the backup, to guarantee the RR (repeatable read) isolation level, the isolation level is explicitly set to RR (Q1);

    A transaction is started; WITH CONSISTENT SNAPSHOT ensures that a consistent view is obtained the moment this statement executes (Q2);

    A savepoint is set; this is important (Q3);

    show create table retrieves the table structure (Q4), then the data is officially dumped (Q5), and finally the transaction rolls back to SAVEPOINT sp, whose purpose here is to release the MDL lock on t1 (Q6). This last step is of course "beyond the syllabus"; it is not mentioned in the main text above.

    Depending on its effect, the DDL arriving from the primary can land at four different moments. The question says it is a small table, so we assume that once the DDL starts executing, it finishes quickly.

    The answers are as follows:

    1. If the DDL arrives before Q4 executes: no impact; the backup gets the table structure after the DDL.
    2. If it arrives at "moment 2", the table structure has already changed. When Q5 executes, it reports Table definition has changed, please retry transaction. Result: mysqldump terminates.
    3. If it arrives between "moment 2" and "moment 3": mysqldump holds the MDL read lock on t1, so the binlog (i.e., the DDL) is blocked. Result: primary-standby delay, until Q6 completes.
    4. From "moment 4" on, mysqldump has released the MDL read lock: no impact; the backup gets the table structure before the DDL.

Section 7 : The merits and faults of row locks : How to reduce the impact of row locks on performance ?

​ MySQL row locks are implemented by each storage engine at the engine layer. But not all engines support row locks.

  1. Start with two-phase locking

    In the following operation sequence, what happens to transaction B's update statement? Assume the field id is the primary key of table t.

    The answer to this question depends on which locks transaction A holds after executing its two update statements, and when they are released.

    In fact, transaction B's update statement will be blocked until transaction A executes commit; only then can transaction B continue. Transaction A holds the row locks on both records, and both are released at commit.

    In an InnoDB transaction, a row lock is acquired at the moment it is needed, but it is not released as soon as it is no longer needed; it is held until the transaction ends. This is the two-phase locking protocol.

    If your transaction needs to lock multiple rows, place the locks most likely to cause conflicts and most likely to hurt concurrency as late in the transaction as possible.
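    For example (a hypothetical ticket-sale schema, not from the original text): every purchase updates the theater's account row, so that statement is the most contended and should come last:

    ```sql
    begin;
    -- 1. Low-contention work first: a log row nobody else touches
    insert into ticket_log (customer_id, theater_id, amount) values (1, 10, 100);
    -- 2. The customer's own account row: little contention
    update customer_account set balance = balance - 100 where id = 1;
    -- 3. The theater's account row is the hot spot: lock it as late as possible
    update theater_account set balance = balance + 100 where id = 10;
    commit;  -- all three row locks are released only here (two-phase locking)
    ```

    Ordering the statements this way minimizes the time the hottest row lock is held, and so reduces lock waits for concurrent purchases.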

  2. Deadlock and deadlock detection

    Now transaction A is waiting for transaction B to release the row lock on id=2, while transaction B is waiting for transaction A to release the row lock on id=1. Transaction A and transaction B are each waiting for a resource held by the other, so they have entered a deadlock state. When a deadlock occurs, there are two strategies:

    • One strategy is to simply wait until a timeout, which can be set with the parameter innodb_lock_wait_timeout.
    • The other strategy is to initiate deadlock detection: when a deadlock is found, actively roll back one transaction in the deadlock chain so the other transactions can proceed. Setting the parameter innodb_deadlock_detect to on enables this logic.

    In InnoDB, the default value of innodb_lock_wait_timeout is 50s.
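    A minimal deadlock between two sessions, sketching the scenario above (assuming table t has a primary key id, a hypothetical column k, and rows id=1 and id=2):

    ```sql
    -- session A:
    begin;
    update t set k = k + 1 where id = 1;  -- A locks the row id = 1

    -- session B:
    begin;
    update t set k = k + 1 where id = 2;  -- B locks the row id = 2

    -- session A:
    update t set k = k + 1 where id = 2;  -- blocks, waiting for B

    -- session B:
    update t set k = k + 1 where id = 1;  -- would wait for A: a cycle
    -- With innodb_deadlock_detect = on, InnoDB spots the cycle at once and
    -- rolls back one of the transactions, reporting ERROR 1213 (Deadlock found).
    ```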

