
Question

In Oracle, what is LogMiner? What are its uses? Please briefly describe the process of using LogMiner.


     
Answer



As is well known, all changes to user data and to the data dictionary are recorded in Oracle's redo logs, so the redo logs contain all the information needed for recovery operations. However, the raw redo log files cannot be read directly. Starting with Oracle 8i, Oracle therefore provides a very useful analysis tool called LogMiner, which makes it easy to obtain the specific contents of redo log files (including archived log files). In particular, the tool can analyze all DML statements (INSERT, UPDATE, DELETE, etc.) executed against the database. From Oracle 9i onward, it can also analyze DDL statements, and it can produce the undo SQL statements needed to roll changes back. One of the most important uses of LogMiner is to recover a specific change to the database without restoring the entire database. The tool is especially suitable for debugging, auditing, or rolling back a particular transaction.

LogMiner can analyze both online redo logs and archived (offline) log files, and it can analyze the redo log files of its own database as well as those of another database. When analyzing redo log files from another database, note that LogMiner must use the dictionary of the source instance to analyze the files, not a dictionary file generated by the database on which LogMiner is installed. In addition, the character set of the database running LogMiner must be the same as that of the analyzed database, and the platform of the source database (Source Database) must be the same as that of the mining database (Mining Database).

Oracle uses the LogMiner tool to mine the redo logs and present a series of readable information; this process is called log mining. LogMiner displays the information in the redo logs through the V$LOGMNR_CONTENTS view.

Overall, the main uses of the LogMiner tool are:

1. Tracking database changes: changes can be tracked offline, without affecting the performance of the online system.

2. Rolling back database changes: specific changed data can be rolled back, reducing the number of point-in-time recoveries that must be performed.

3. Optimization and capacity planning: data growth patterns can be analyzed from the data in the log files.

4. Determining when logical corruption of the database occurred: the exact time and SCN can be located.

5. Determining the fine-grained logical recovery operations to perform at the transaction level, by obtaining the corresponding undo operations.

6. Performing after-the-fact auditing.

(1) Installing the LogMiner tool

By default, Oracle installs the LogMiner tool. If it is not installed, you can run the following two scripts:

$ORACLE_HOME/rdbms/admin/dbmslm.sql
$ORACLE_HOME/rdbms/admin/dbmslmd.sql


Both scripts must be run as the SYS user. The first script creates the DBMS_LOGMNR package, which is used to analyze log files. The second script creates the DBMS_LOGMNR_D package, which is used to create data dictionary files. To give an ordinary user log mining privileges, you can grant them with the following SQL:

GRANT EXECUTE ON DBMS_LOGMNR TO LHR_TEST;

After the scripts are executed, LogMiner consists of two PL/SQL packages and several views:

1. The DBMS_LOGMNR_D package, which includes a procedure for extracting data dictionary information, namely DBMS_LOGMNR_D.BUILD(), as well as a procedure for rebuilding the LogMiner tables, DBMS_LOGMNR_D.SET_TABLESPACE. By default, the LogMiner tables are created in the SYSTEM tablespace.

2. The DBMS_LOGMNR package, which has three stored procedures:

- ADD_LOGFILE(NAME VARCHAR2, OPTIONS NUMBER): adds or removes log files for analysis.

- START_LOGMNR(START_SCN NUMBER, END_SCN NUMBER, START_TIME DATE, END_TIME DATE, DICTFILENAME VARCHAR2, OPTIONS NUMBER): starts the log analysis, specifying the time or SCN window to analyze and whether to use extracted data dictionary information.

- END_LOGMNR(): terminates the analysis session and releases the memory used by LogMiner.
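A minimal sketch of how the three procedures fit together in one session (the archived log path is hypothetical):

```sql
BEGIN
  -- Register the first log file; DBMS_LOGMNR.NEW resets the file list.
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/app/oracle/arch/arch_100.log',
                          OPTIONS     => DBMS_LOGMNR.NEW);
  -- Start the analysis, here using the online catalog as the dictionary.
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
  -- ... query V$LOGMNR_CONTENTS here, in this same session ...
  -- Terminate the session and release the memory LogMiner used.
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```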

The data dictionary views related to LogMiner are:

- V$LOGHIST: displays information about historical log files.

- V$LOGMNR_DICTIONARY: since LogMiner can have multiple dictionary files, this view displays the dictionary file information.

- V$LOGMNR_PARAMETERS: displays the LogMiner parameters.

- V$LOGMNR_LOGS: displays the list of log files being analyzed.

- V$LOGMNR_CONTENTS: the LogMiner results.
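As an illustrative sketch, the results view is typically queried for a few key columns while an analysis session is active (the owner and table names here are hypothetical):

```sql
-- Must be run in the same session that called DBMS_LOGMNR.START_LOGMNR.
SELECT scn, timestamp, operation, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner  = 'LHR'
   AND table_name = 'T';
```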

(2) The LogMiner data dictionary

To fully translate the contents of the redo logs, LogMiner needs access to a database dictionary. LogMiner uses the dictionary to convert Oracle's internal object identifiers and data types into object names and external data formats. Without a dictionary, LogMiner displays internal object IDs as hexadecimal characters.

For example, for the following SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

without a data dictionary, LogMiner displays it as:

INSERT INTO Object#2581(col#1, col#2) VALUES (hextoraw('4a6f686e20446f65'),hextoraw('c306'));

LogMiner provides three ways to supply the dictionary:

① Extracting the dictionary to a flat file (Flat File, an intermediate interface file)

② Extracting the dictionary to the redo logs

③ Using the online catalog (Online Catalog)

The three methods are described below:

① Extracting the dictionary to a flat file (Flat File, an intermediate interface file)

To extract the database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE parameter. DBMS_LOGMNR_D.BUILD needs access to a directory in which to place the dictionary file. Because PL/SQL procedures usually cannot access user directories directly, you must manually specify a directory for DBMS_LOGMNR_D.BUILD to use. To specify the directory, set the UTL_FILE_DIR parameter in the initialization file:

ALTER SYSTEM SET UTL_FILE_DIR ='/home/oracle' SCOPE=SPFILE;

Then restart the database. Make sure no DDL operations are performed while the flat file is being created. When creating the flat file, the database must be in the OPEN state; then execute the DBMS_LOGMNR_D.BUILD procedure:

EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/home/oracle');

After this is executed, a text file named dictionary.ora is created under /home/oracle. The file contains a series of CREATE TABLE and INSERT statements.

② Extracting the dictionary to the redo logs

To extract the dictionary to the redo logs, the database must be in the OPEN state and in archive log mode. While the dictionary is being extracted to the redo logs, no DDL statements may be executed against the database. To extract the dictionary to the redo logs, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS parameter:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Note that supplemental logging must be enabled before extracting the dictionary to the redo logs, as shown below:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

After these redo logs have been archived, they can be found by querying the V$ARCHIVED_LOG view:

SELECT *
  FROM V$ARCHIVED_LOG A
 WHERE A.NAME IS NOT NULL
   AND (A.DICTIONARY_BEGIN = 'YES' OR A.DICTIONARY_END = 'YES');


If the dictionary information has been extracted to the redo logs, then when specifying the log files to analyze with DBMS_LOGMNR.ADD_LOGFILE, you must add those redo files as well, and when starting the analysis with START_LOGMNR, you must specify the DICT_FROM_REDO_LOGS option.
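A minimal sketch of this flow, assuming hypothetical archived-log paths (the first file is assumed to contain the extracted dictionary):

```sql
BEGIN
  -- The archived logs that contain the extracted dictionary must be in the list.
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_dict_101.log',
                          OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_102.log',
                          OPTIONS     => DBMS_LOGMNR.ADDFILE);
  -- Tell LogMiner to read the dictionary out of the added redo logs.
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
END;
/
```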

③ Using the online catalog (Online Catalog)

To have LogMiner use the dictionary the database is currently using, specify the online catalog as the dictionary source when starting LogMiner:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Using the online catalog means no dictionary file needs to be extracted, so it is the fastest way to start analyzing the logs. In addition to analyzing the online redo logs, it can also be used to analyze archived log files on the same system that generated them. Keep in mind, however, that the online catalog can only reconstruct SQL statements that apply to the latest version of a table. Once a table has been modified, the online catalog no longer reflects the earlier version of the table, which means LogMiner cannot reconstruct SQL statements executed against older versions of a table.

(3) Tracking DDL statements

When LogMiner is activated, it automatically creates its own internal dictionary. If the source dictionary is a flat file dictionary or a dictionary in the redo logs, you can use the DDL_DICT_TRACKING option to track DDL statements. DDL tracking is off by default. To turn it on, specify the DDL_DICT_TRACKING option when starting LogMiner:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DDL_DICT_TRACKING);

When using DDL_DICT_TRACKING, note the following points:

- When using the online catalog, i.e., when the DICT_FROM_ONLINE_CATALOG option is used, the DDL_DICT_TRACKING option cannot be used.

- When using DDL_DICT_TRACKING, the database must be in the OPEN state.

- Try to give LogMiner a dedicated tablespace. By default, the LogMiner tables are created in the SYSTEM tablespace. DBMS_LOGMNR_D.SET_TABLESPACE can be used to rebuild the LogMiner tables in a separate tablespace: SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');

(4) Filtering the data returned by LogMiner

You can use the COMMITTED_DATA_ONLY option to display only committed transactions:

SQL> EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS=>DBMS_LOGMNR.COMMITTED_DATA_ONLY);

With this option, LogMiner groups all DML operations by transaction, and the transactions are displayed in the order in which they were committed.
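As a sketch, the transaction each row belongs to can be seen in the XIDUSN, XIDSLT, and XIDSQN columns of V$LOGMNR_CONTENTS (the owner filter 'LHR' here is hypothetical):

```sql
-- Rows sharing the same (XIDUSN, XIDSLT, XIDSQN) triple belong to one transaction.
SELECT xidusn, xidslt, xidsqn, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'LHR';
```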

You can use the SKIP_CORRUPTION option to skip any corruptions encountered in the redo logs:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION);

You can use the STARTTIME and ENDTIME parameters to filter the data by time, or the STARTSCN and ENDSCN parameters to filter it by SCN (System Change Number):

BEGIN
    DBMS_LOGMNR.START_LOGMNR(STARTSCN => 23573690,
                             ENDSCN   => 23632671,
                             OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
BEGIN
    DBMS_LOGMNR.START_LOGMNR(STARTTIME => '2015-06-03 11:10:12',
                             ENDTIME   => '2015-06-03 11:13:06',
                             OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/


(5) Typical LogMiner steps

A typical LogMiner operation involves the following steps:

1. Perform the initial setup: enable supplemental logging, set up the LogMiner tablespace, and set the UTL_FILE_DIR parameter;

2. Extract a dictionary: extract the dictionary to a flat file or to the redo logs, or use the online catalog directly;

3. Specify the redo log files to analyze, using DBMS_LOGMNR.ADD_LOGFILE to add the logs;

4. Start LogMiner by executing DBMS_LOGMNR.START_LOGMNR;

5. Query the V$LOGMNR_CONTENTS view;

6. End LogMiner by executing DBMS_LOGMNR.END_LOGMNR.

The following example uses the online catalog as the data dictionary for log mining:

SQL> alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:mi:ss';
Session altered.
SQL> select member from v$logfile;
MEMBER
---------------------------------------------------
+DATA/orclasm/onlinelog/group_3.263.850260263
+FRA/orclasm/onlinelog/group_3.259.850260267
+DATA/orclasm/onlinelog/group_2.262.850260259
+FRA/orclasm/onlinelog/group_2.258.850260263
+DATA/orclasm/onlinelog/group_1.261.850260255
+FRA/orclasm/onlinelog/group_1.257.850260259
6 rows selected.
SQL> execute dbms_logmnr.add_logfile('+DATA/orclasm/onlinelog/group_1.261.850260255',dbms_logmnr.new);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('+DATA/orclasm/onlinelog/group_2.262.850260259',dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> execute dbms_logmnr.add_logfile('+DATA/orclasm/onlinelog/group_3.263.850260263',dbms_logmnr.addfile);
PL/SQL procedure successfully completed.
SQL> exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog+dbms_logmnr.committed_data_only);
PL/SQL procedure successfully completed.
SQL> select sql_redo,sql_undo from v$logmnr_contents where table_name='T' and OPERATION='INSERT';
SQL_REDO                                         SQL_UNDO
-------------------------------------------      ------------------------------------------------------------------------
insert into "LHR"."T"("ID") values ('1');        delete from "LHR"."T" where "ID" = '1' and ROWID = 'AAAaN7AAEAAAnhjAAA';
insert into "LHR"."T"("ID") values ('2');        delete from "LHR"."T" where "ID" = '2' and ROWID = 'AAAaN7AAEAAAnhjAAB';