
Q           Question

In Oracle 12c, what new features and enhancements have been added to Data Pump (expdp/impdp)?


     
A           Answer



Oracle 12c adds many new features to Data Pump. The main ones are as follows:

(1) Data Pump has a new TRANSFORM option, DISABLE_ARCHIVE_LOGGING, which provides the flexibility to turn off redo log generation for tables and indexes during an import. When TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y is specified, logging for tables and indexes is disabled for the duration of the import, so only a small amount of redo is generated. This eases the load when importing large tables, reduces excessive redo generation, and therefore speeds up the import. The attribute applies to both tables and indexes, and it reduces import time and the number of archived logs whether the database is in ARCHIVELOG or NOARCHIVELOG mode. Note, however, that if the database is in FORCE LOGGING mode, the DISABLE_ARCHIVE_LOGGING parameter has no effect (a quick check for this is shown after the examples below).

The following commands demonstrate this feature:

impdp directory=dpump dumpfile=abcd.dmp TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y logfile=abcd.log

impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y  -- disable logging for both tables and indexes

impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE  -- disable logging for tables only

impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TRANSFORM=DISABLE_ARCHIVE_LOGGING:N:INDEX  -- disable logging for tables but keep redo logging for indexes

By itself, "TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y" disables logging for both tables and indexes.
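
Because DISABLE_ARCHIVE_LOGGING is silently ignored when the database runs in FORCE LOGGING mode, it is worth confirming that mode before relying on this option. A minimal check, run as a privileged user, might look like this:

SQL> SELECT force_logging FROM v$database;   -- must return NO for DISABLE_ARCHIVE_LOGGING to take effect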

The following points should be noted about this new feature:

① After the import completes, the status of the tables and indexes (including their LOGGING attribute and whether the indexes are valid) returns to what it was before the import.

② After data has been imported with this parameter, if the corresponding datafile is later restored and recovered, subsequent queries against the target table will raise ORA-01578 and ORA-26040 corrupt-block errors. For example:

SQL> select * from test_nologging;

 

ORA-01578: ORACLE data block corrupted (file #11, block # 84)

ORA-01110: data file 4: '/oradata/users.dbf'

ORA-26040: Data block was loaded using the NOLOGGING option

Therefore, after importing data with this parameter, the affected datafiles should be backed up with RMAN immediately.
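
A minimal RMAN backup for this situation, assuming the datafile number and USERS tablespace from the error example above, might look like this:

RMAN> BACKUP DATAFILE 4;
RMAN> BACKUP TABLESPACE users;   -- or back up the whole tablespace that received the NOLOGGING load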

 



 

(2) The VIEWS_AS_TABLES option allows Data Pump to export a view as if it were a table. Note that once the export to the dmp file is complete, the view definition has already been converted into a table definition, so after the import the object exists as a table.

The following commands demonstrate this feature:

expdp system/lhr DIRECTORY=data_pump_dir DUMPFILE=expdp_vw.dmp LOGFILE=expdp_vw.log VIEWS_AS_TABLES=lhr.my_view

impdp system/lhr DIRECTORY=data_pump_dir DUMPFILE=expdp_vw.dmp LOGFILE=impdp_vw.log remap_schema=lhr:lhr01

impdp lhr/lhr DIRECTORY=data_pump_dir DUMPFILE=expdp_vw.dmp LOGFILE=impdp_vw.log remap_schema=lhr:lhr1 VIEWS_AS_TABLES=lhr.my_view

impdp lhr/lhr DIRECTORY=data_pump_dir DUMPFILE=expdp_vw.dmp LOGFILE=impdp_vw.log remap_schema=lhr:lhr1 TABLES=lhr.my_view

Test data preparation:

create table lhr.my_tab1 (nr number, txt varchar2(10));

insert into lhr.my_tab1 values (1,'Line 1');

insert into lhr.my_tab1 values (2,'Line 2');

 

create table lhr.my_tab2 (nr number, col2 number, col3 varchar2(10));

insert into lhr.my_tab2 values (1,1,'c3_1');

insert into lhr.my_tab2 values (2,2,'c3_2');

commit;

create view lhr.my_view (nr, txt, col3) as

   select t1.nr, t1.txt, t2.col3

     from lhr.my_tab1 t1, lhr.my_tab2 t2

where t1.nr=t2.nr;
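
After one of the imports above completes, the imported object should appear as a table rather than a view in the target schema. A simple check, assuming the remap_schema=lhr:lhr01 import shown above, might be:

SQL> SELECT object_name, object_type FROM dba_objects WHERE owner = 'LHR01' AND object_name = 'MY_VIEW';

The OBJECT_TYPE returned should be TABLE, not VIEW.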

(3) The LOGTIME parameter determines whether timestamps are included in the output of expdp and impdp. The available values of LOGTIME are as follows (an example follows the list):

① NONE: the default; timestamps are not included in the output, which looks the same as in previous releases.

② STATUS: timestamps are included in the console output but not in the log file.

③ LOGFILE: timestamps appear in the log file but not in the console output.

④ ALL: timestamps appear in both the log file and the console output.
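
As an illustration (the dump file and log file names here are only examples), a schema export that prints a timestamp on every line could be run as:

expdp system/lhr DIRECTORY=data_pump_dir DUMPFILE=expdp_lhr.dmp LOGFILE=expdp_lhr.log SCHEMAS=lhr LOGTIME=ALL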

Oracle 12c Data Pump has many other new features, such as compression, encryption, and auditing. They are not covered in detail here; readers can refer to the author's blog or the relevant official documentation to learn more.