
Oracle expdp/impdp utility: theory, examples, practical usage, errors, solutions, and real-time scenarios, as well as a comparison with imp/exp.

Data Pump is a feature introduced in Oracle 10g that provides fast, parallel data loading and unloading. With direct path and parallel execution, Data Pump is several times faster than the traditional exp/imp utilities. Traditional exp/imp runs on the client side, whereas expdp/impdp runs on the server side, so we have much more control over expdp/impdp than over traditional exp/imp. Compared to exp/imp, Data Pump startup time is longer, because it has to set up the job, queues, and master table. Also, at the end of an export operation the master table data is written to the dump file set, and at the beginning of an import job the master table is located and loaded into the schema of the user.



The following processes are involved in a Data Pump operation:


Client Process : This process is initiated by the client utility. It makes a call to the Data Pump API. Once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow Process : When a client logs into the database, a foreground process is created. It services the client's Data Pump API requests. This process creates the master table and the Advanced Queuing queues used for communication. Once the client process ends, the shadow process also goes away.

Master Control Process : The MCP controls the execution of the Data Pump job. There is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes.

Worker Process : The MCP creates worker processes based on the value of the PARALLEL parameter.
The worker processes perform the tasks requested by the MCP.

Advantages of Data Pump
1. We can perform exports in parallel. It can also write to multiple files on different disks.
(Specify PARALLEL=2 and two directory names with file specifications, e.g.
DUMPFILE=ddir1:file1.dmp,ddir2:file2.dmp)
2. Ability to attach to and detach from a job, and to monitor the job progress remotely.
3. More options to filter metadata objects, e.g. EXCLUDE, INCLUDE.
4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job (see the example after this list).
5. Data can be exported from a remote database by using a database link.
6. An explicit DB version can be specified, so only supported object types are exported.
7. During impdp, we can change the target file names, schema, and tablespace,
e.g. REMAP_SCHEMA, REMAP_DATAFILE, REMAP_TABLESPACE.
8. Option to filter data rows during impdp. In traditional exp/imp,
the filter option is available only in exp; here we have the filter option in both expdp and impdp.
9. Data can be imported from one DB to another without writing to a dump file, using the NETWORK_LINK parameter.
10. Data access methods are decided automatically. In traditional exp/imp, we specify a value for the DIRECT parameter;
here, when the direct path cannot be used, the conventional path is used.
11. Job status can be queried directly from the data dictionary (for example, DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, etc.)
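
For example, disk space can be estimated without writing a dump file, and the running job can then be watched from the dictionary (a minimal sketch; the dumplocation directory and the scott schema are only illustrative):

expdp system/password estimate_only=y schemas=scott directory=dumplocation logfile=estimate_scott.log

SQL> select owner_name, job_name, state from dba_datapump_jobs;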

Exp & expdp common parameters: the parameters below exist in both the traditional exp and the expdp utility.
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (the exp value is Y/N; the expdp value is the name of the tablespace)

Comparing exp & expdp parameters: the parameters below are equivalent between exp and expdp.
Exp and corresponding Expdp parameters...
FEEDBACK => STATUS
FILE => DUMPFILE
LOG => LOGFILE
OWNER => SCHEMAS
TTS_FULL_CHECK => TRANSPORT_FULL_CHECK
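
For instance, a schema export written with exp's OWNER/FILE/LOG maps directly onto expdp's SCHEMAS/DUMPFILE/LOGFILE (a hedged sketch; the file names and the dumplocation directory object are illustrative):

exp system/password owner=scott file=scott.dmp log=scott.log
expdp system/password schemas=scott dumpfile=scott.dmp logfile=scott.log directory=dumplocation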

New parameters in expdp Utility

ATTACH Attach the client session to existing data pump jobs

CONTENT Specify what to export(ALL, DATA_ONLY, METADATA_ONLY)

DIRECTORY Location to write the dump file and log file.

ESTIMATE Show how much disk space each table in the export job consumes.

ESTIMATE_ONLY Estimates the space, but does not perform the export

EXCLUDE List of objects to be excluded

INCLUDE List of objects to be included

JOB_NAME Name of the export job

KEEP_MASTER Specify Y not to drop the master table after export

NETWORK_LINK Specify dblink to export from remote database

NOLOGFILE Specify Y if you do not want to create log file

PARALLEL Specify the maximum number of threads for the export job

VERSION DB objects that are incompatible with the specified version will not be exported.

ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, the column is written as clear
text in the dump file set. We can define any string as the password for this parameter.

COMPRESSION Specifies whether to compress metadata before writing to the dump file set.
The default is METADATA_ONLY. There are two values (METADATA_ONLY, NONE). We can use NONE
if we want to disable compression during the expdp.

SAMPLE - Allows you to specify a percentage of data to be sampled and unloaded from the source database.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
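
A short sketch combining several of these parameters (the schema, file names, job name and dumplocation directory are only illustrative):

expdp system/password directory=dumplocation dumpfile=scott_meta.dmp logfile=scott_meta.log schemas=scott content=metadata_only exclude=statistics job_name=scott_meta_exp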

Imp & impdp common parameters: the parameters below exist in both the traditional imp and the impdp utility.
FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES

Comparing imp & impdp parameters: the parameters below are equivalent between imp and impdp.


imp and corresponding impdp parameters...
DATAFILES => TRANSPORT_DATAFILES
DESTROY => REUSE_DATAFILES
FEEDBACK => STATUS
FILE => DUMPFILE
FROMUSER => SCHEMAS, REMAP_SCHEMA
IGNORE => TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
INDEXFILE, SHOW => SQLFILE
LOG => LOGFILE
TOUSER => REMAP_SCHEMA
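
For instance, where imp offered INDEXFILE/SHOW=Y to preview DDL, impdp writes the DDL to a script with SQLFILE instead of performing the import (a sketch; the file names and directory are illustrative):

impdp system/password directory=dumplocation dumpfile=schemaexpdb.dmp sqlfile=schemaddl.sql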

New parameters in impdp Utility
FLASHBACK_SCN Performs an import operation that is consistent with the SCN specified from the source database. Valid only when the NETWORK_LINK parameter is used.

FLASHBACK_TIME Similar to FLASHBACK_SCN, but oracle finds the SCN close to the time specified.

NETWORK_LINK Performs the import directly from a source database using the database link name specified in the parameter. The dump file is not created on the server when we use this parameter. To get a consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters. These two parameters are only valid when we use the NETWORK_LINK parameter.

REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.

REMAP_SCHEMA Loads objects to a different target schema name.

REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.

TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the storage characteristics of the source and target databases are different. The valid values are SEGMENT_ATTRIBUTES and STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and storage attributes.

TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.

For instance, TRANSFORM=storage:N:table

ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export operation.

CONTENT, INCLUDE and EXCLUDE are the same as in the expdp utility.

Prerequisite for expdp/impdp:
Set up the dump location in the database.
system@orcl> create directory dumplocation
2 as 'c:/dumplocation';
Directory created.

system@orcl> grant read,write on directory dumplocation to scott;
Grant succeeded.
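
The directory object can be verified from the dictionary (a quick illustrative check):

system@orcl> select directory_name, directory_path from dba_directories where directory_name = 'DUMPLOCATION';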

Let us experiment with the expdp & impdp utilities in different scenarios. We have two databases, orcl and ordb. All the scenarios below were tested on Oracle 10g R2.

Scenario 1 :
Export the whole orcl database.

Export Parfile content:
userid=system/password@orcl
dumpfile=expfulldp.dmp
logfile=expfulldp.log
full=y
directory=dumplocation
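
Assuming the parfile above is saved as expfulldp.par (the file name is illustrative), the export is started as:

C:\impexpdp>expdp parfile=expfulldp.par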

Scenario 2
Export the scott schema from orcl and import it into the ordb database. While importing, exclude some objects (sequence, view, package, cluster, table). Load the objects that came from the RES tablespace into the USERS tablespace in the target database.

Export Parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
schemas=scott


Import parfile content:

userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"
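
After the import, the remaining objects in the target schema can be confirmed (an illustrative check run as a DBA user in ordb):

SQL> select object_type, count(*) from dba_objects where owner = 'SCOTT' group by object_type;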

Scenario 3

Export the part_emp table from the scott schema in the orcl instance and import it into the ordb instance.

Expdp parfile content:
userid=system/password@orcl
logfile=tableexpdb.log
directory=dumplocation
tables=scott.part_emp
dumpfile=tableexpdb.dmp

Impdp parfile content:
userid=system/password@ordb
dumpfile=tableexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=REPLACE


Scenario 4

Export only specific partitions of the part_emp table from the scott schema in orcl and import them into the ordb database.

Expdp parfile content:
userid=system/password@orcl
dumpfile=partexpdb.dmp
logfile=partexpdb.log
directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20

If we want to overwrite the exported data in the target database, we first need to delete the part_emp rows for deptno in (10,20):

scott@ordb> delete part_emp where deptno=10;
786432 rows deleted.

scott@ordb> delete part_emp where deptno=20;
1310720 rows deleted.

scott@ordb> commit;
Commit complete.

Impdp parfile content:
userid=system/password@ordb
dumpfile=partexpdb.dmp
logfile=tabimpdb.log
directory=dumplocation
table_exists_action=append
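
Assuming partitions part10 and part20 hold deptno 10 and 20 respectively (as the export parfile suggests), the appended rows can be verified with an illustrative check:

scott@ordb> select count(*) from part_emp partition (part10);
scott@ordb> select count(*) from part_emp partition (part20);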


Scenario 5
Export only tables in scott schema at orcl and import into ordb database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
include=table
schemas=scott

Impdp parfile content:

userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

Scenario 6

Export only rows belonging to department 10 and 20 in emp and dept table from orcl database. Import the dump file in ordb database. While importing, load only deptno 10 in target database.
Expdp parfile content:
userid=system/password@orcl
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
content=data_only
schemas=scott
include=table:"in('EMP','DEPT')"
query="where deptno in(10,20)"

Impdp parfile content:

userid=system/password@ordb
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=scott
query="where deptno = 10"
table_exists_action=APPEND


Scenario 7

Export the scott schema from orcl database and split the dump file into 50M sizes.
Import the dump file into the ordb database.

Expdp parfile content:

userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile, initially the schemaexp_split_01.dmp file will be created. Once that file reaches 50MB, the next file, schemaexp_split_02.dmp, will be created. Let us say the dump file size is 500MB; then it creates 10 dump files, as each file size is 50MB.

Impdp parfile content:

userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 8

Export the scott schema from orcl database and split the dump file into four files. Import the dump file
into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
parallel=4
schemas=scott
include=table
As per the above parfile content, initially four files will be created: schemaexp_split_01.dmp,
schemaexp_split_02.dmp, schemaexp_split_03.dmp, schemaexp_split_04.dmp. Notice that every occurrence of the substitution variable is incremented each time. Since there is no FILESIZE parameter, no more files will be created.

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 9
Export the scott schema from orcl database and split the dump file into three files. The dump files will be stored in three different locations. This method is especially useful if you do not have enough space in one file system to perform the complete expdp job. After the export is successful, import the dump file into the ordb database.

Expdp parfile content:
userid=system/password@orcl
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M
schemas=scott
include=table
As per the above expdp parfile content, the dump files are placed in three different locations. Let us say the entire expdp dump file size is 1500MB. Then it creates 30 dump files (each dump file size is 50MB) and places 10 files in each file system.
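
Note that the directory objects dump1, dump2 and dump3 are assumed to exist already in both databases; a minimal sketch of creating them (the paths are only illustrative):

system@orcl> create directory dump1 as '/u01/dumpdir1';
system@orcl> create directory dump2 as '/u02/dumpdir2';
system@orcl> create directory dump3 as '/u03/dumpdir3';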

Impdp parfile content:
userid=system/password@ordb
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace

Scenario 10
We are on the orcl database server. Now export the ordb data and place the dump file on the orcl database server. After expdp is successful, import the dump file into the orcl database. When we use network_link, the expdp user and the source database schema user should have identical privileges. If they do not have identical privileges, then we get the below error.

C:\impexpdp>expdp parfile=networkexp1.par

Export: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 12:06:40
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

Expdp parfile content:
userid=scott/tiger@orcl
logfile=networkexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table
network_link=ordb
As per the above parfile, the expdp utility exports the ordb database data and places the dump file on the orcl server, since we are running expdp on the orcl server. This is basically exporting data from a remote database.


Impdp parfile content:

userid=system/password@orcl
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace

Scenario 11
Export the scott schema in orcl and import it into ordb, but do not write a dump file on the server. The expdp and impdp should be completed without writing a dump file on the server. Here we do not need to export the data; we can import the data without creating the dump file.
Here we run impdp on the ordb server; it contacts the orcl DB, extracts the data, and imports it into the ordb database.
If we do not have much space in the file system to place the dump file, then we can use this option to load the data.

Impdp parfile content:
userid=scott/tiger@ordb
network_link=orcl
logfile=networkimp2.log
directory=dumplocation
table_exists_action=replace
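
This assumes a database link named orcl exists in the ordb database and points back to the source; a minimal sketch (the TNS alias and credentials are only illustrative):

scott@ordb> create database link orcl connect to scott identified by tiger using 'orcl';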

Scenario 12
Export the scott schema from orcl with expdp and import the dump file into the training schema in the ordb database with impdp.

Expdp parfile content:
userid=scott/tiger@orcl
logfile=networkexp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
schemas=scott
include=table

Impdp parfile content:
userid=system/password@ordb
logfile=networkimp1.log
directory=dumplocation
dumpfile=networkexp1.dmp
table_exists_action=replace
remap_schema=scott:training

Scenario 13
Export a table from the orcl database with expdp and import it into ordb with impdp. When we export the data, export only 20 percent of the table data. We use the SAMPLE parameter to accomplish this task.
SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported.
The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
It does not mean that the database will retrieve exactly that amount of rows from the table.
The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.
If no table is specified, then the sample_percent value applies to the entire export job. The SAMPLE parameter
is not valid for network exports.

Expdp parfile content:

userid=system/password@orcl
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dumplocation
tables=scott.part_emp
SAMPLE=20
As per the above expdp parfile, only 20 percent of the data in the part_emp table is exported.

Impdp parfile content:
userid=system/password@ordb
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace

------------------------------------------------------------------

We are creating two schemas:

1. User1 – this schema has the partitioned table SALES, which needs to be exported.
2. User2 – the schema into which we are going to import the SALES table with the new name SALES_T.


1. Create users:
[oracle@NVMBD01PSR183 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 19 13:15:44 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user user1 identified by user1;
User created.
SQL> create user user2 identified by user2;
User created.
SQL> grant connect, resource to user1, user2;
Grant succeeded.

2. Create the Data Pump directory.
SQL> CREATE OR REPLACE DIRECTORY EXPDP_DIR AS '/oracle/data1/expdp';
SQL> GRANT READ, WRITE ON DIRECTORY EXPDP_DIR to user1,user2;
3. Create the partitioned table SALES.
SQL> conn user1/user1
Connected.


SQL> CREATE TABLE sales
( name varchar2(5),
time_id DATE)
PARTITION BY RANGE (time_id)
(PARTITION sales_AUG VALUES LESS THAN (TO_DATE('01-AUG-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_SEP VALUES LESS THAN (TO_DATE('01-SEP-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_OCT VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_NOV VALUES LESS THAN (TO_DATE('01-NOV-2014','dd-MON-yyyy'))
TABLESPACE users,
PARTITION sales_DEV VALUES LESS THAN (TO_DATE('01-DEC-2014','dd-MON-yyyy'))
TABLESPACE users );


SQL> SELECT partition_name FROM user_tab_partitions WHERE table_name = 'SALES';
PARTITION_NAME
------------------------------
SALES_AUG
SALES_DEV
SALES_NOV
SALES_OCT
SALES_SEP


EXPDP:
1. Export entire table including all partitions
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table.dmp tables=sales
logfile=sales_table.log

Export: Release 11.2.0.3.0 - Production on Fri Dec 19 11:13:37 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
tables=sales logfile=sales_table.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_OCT"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_SEP"                5.421 KB       1 rows
. . exported "USER1"."SALES":"SALES_AUG"                    0 KB       0 rows
. . exported "USER1"."SALES":"SALES_DEV"                    0 KB       0 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 11:13:46

2. Export specific partition of table:
[oracle@NVMBD01PSR183 ~]$ expdp user1/user1 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log

Export: Release 11.2.0.3.0 - Production on Fri Dec 19 12:31:06 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USER1"."SYS_EXPORT_TABLE_01":  user1/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
tables=sales:SALES_NOV logfile=sales_table_partition.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "USER1"."SALES":"SALES_NOV"                5.421 KB       1 rows
Master table "USER1"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USER1.SYS_EXPORT_TABLE_01 is:
  /oracle/data1/expdp/sales_table_partition.dmp
Job "USER1"."SYS_EXPORT_TABLE_01" successfully completed at 12:31:13

IMPDP
Move the dmp file to the target host (ftp, scp, etc.), or load the data into another schema using remap_schema.
1. Import the entire partitioned table into a new schema with a new name.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_table=sales:sales_t
remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 - Production on Fri Dec 19 11:19:25 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata will
be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_SEP"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_AUG"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_DEV"                   0 KB       0 rows
. . imported "USER2"."SALES_T":"SALES_OCT"               5.421 KB       1 rows
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 11:19:30

Let's check:

SQL> conn user2/user2
Connected.

SQL> select * from sales_t;
NAME  TIME_ID
----- ---------
prod1 01-AUG-14
prod2 01-SEP-14
prod3 01-OCT-14

2. Now import only a single partition.
[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append

Import: Release 11.2.0.3.0 - Production on Fri Dec 19 13:05:26 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER2"."SYS_IMPORT_FULL_01":  user2/******** directory=EXPDP_DIR dumpfile=sales_table_partition.dmp
remap_table=sales:sales_t remap_schema=user1:user2 remap_tablespace=user1:user2 table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
Table "USER2"."SALES_T" exists. Data will be appended to existing table but all dependent metadata
will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USER2"."SALES_T":"SALES_NOV"               5.421 KB       1 rows
Job "USER2"."SYS_IMPORT_FULL_01" successfully completed at 13:05:28

Here we have successfully imported a single partition.
If a partition name is specified, it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.
When you use the partition option (PARTITION_OPTIONS) of Data Pump, you can choose from the options below:
NONE - tables are imported such that they look like those on the system on which the export was created.
DEPARTITION - partitions are created as individual tables rather than partitions of a partitioned table.
MERGE - combines all partitions into a single table.
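
A hedged sketch of the DEPARTITION option, reusing the dump file from this example (with departition, each partition is created as its own table, so the resulting table names follow the partition names):

[oracle@NVMBD01PSR183 ~]$ impdp user2/user2 directory=EXPDP_DIR dumpfile=sales_table.dmp remap_schema=user1:user2 partition_options=departition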


Managing Data Pump jobs
The Data Pump clients expdp and impdp provide an interactive command interface. Since each expdp and impdp operation has a job name, you can attach to that job from any computer and monitor the job or make adjustments to it.

Here are the data pump interactive commands.
ADD_FILE Adds another file or a file set to the DUMPFILE set.
CONTINUE_CLIENT Changes mode from interactive client to logging mode.
EXIT_CLIENT Leaves the client session and discontinues logging, but leaves the current job running.
KILL_JOB Detaches all currently attached client sessions and terminates the job.
PARALLEL Increases or decreases the number of threads.
START_JOB Starts (or resumes) a job that is not currently running. The SKIP_CURRENT option can skip the recent failed DDL statement that caused the job to stop.
STOP_JOB Stops the current job; the job can be restarted later.
STATUS Displays detailed status of the job; the refresh interval can be specified in seconds.
The detailed status is displayed on the output screen but not written to the log file.



Scenario 14: Let us start a job, stop it in the middle, and resume it. After some time, let us kill the job and check the job status after each activity.
We can find what jobs are running currently in the database by using the below query.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01

SQL>

C:\impexpdp>impdp parfile=schemaimp1.par


Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:06:51
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par
Processing object type SCHEMA_EXPORT/TABLE/TABLE

Import> stop_job
Are you sure you wish to stop this job ([yes]/no): yes


C:\impexpdp>

When we want to stop the job, we need to press Control-C to return to the Import> prompt. Once we are back at the prompt (Import>),
we can stop the job as above by using the stop_job command.
After the job is stopped, here is the job status.

SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
NOT RUNNING SYS_IMPORT_FULL_01

Now we are attaching to the job again. Attaching to the job does not restart it.
C:\impexpdp>impdp system/password@ordb attach=SYS_IMPORT_FULL_01


Import: Release 10.2.0.1.0 - Production on Sunday, 17 May, 2009 14:17:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Job: SYS_IMPORT_FULL_01
Owner: SYSTEM
Operation: IMPORT
Creator Privs: FALSE
GUID: 54AD9D6CF9B54FC4823B1AF09C2DC723
Start Time: Sunday, 17 May, 2009 14:17:12
Mode: FULL
Instance: ordb
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND parfile=schemaexp1.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND parfile=schemaimp1.par
TABLE_EXISTS_ACTION REPLACE
State: IDLING
Bytes Processed: 1,086,333,016
Percent Done: 44
Current Parallelism: 1
Job Error Count: 0
Dump File: c:/impexpdp\networkexp1.dmp
Worker 1 Status:
State: UNDEFINED
Import>

After attaching the job, here is the job status.
SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
IDLING SYS_IMPORT_FULL_01
Attaching to the job does not resume it. Now we are resuming the job again:
Import> continue_client
Job SYS_IMPORT_FULL_01 has been reopened at Sunday, 17 May, 2009 14:17
Restarting "SYSTEM"."SYS_IMPORT_FULL_01": parfile=schemaimp1.par


SQL> select state,job_name from dba_datapump_jobs;
STATE JOB_NAME
------------------------------ ------------------------------
EXECUTING SYS_IMPORT_FULL_01



Now we are killing the same job. Before we kill it, we need to press Control-C to return to the Import> prompt.
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

C:\impexpdp>
Now the job has disappeared from the database.

SQL> select state,job_name from dba_datapump_jobs;
no rows selected

------------------------------------------------------------------

Parallelism
Data Pump is much faster than the old exp and imp client commands. One new feature that really helps make it faster is the "parallel" option. With this option, Data Pump pumps data in multiple threads (four in the examples below).

Data Pump performance can be significantly improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read.
$expdp scott/tiger directory=dumpdir dumpfile=scott2.dmp parallel=4 job_name=scott2

$ impdp dpuser/dpuser@TDB10G schemas=dbuser directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4

DATA PUMP VIEWS – Data Pump maintains a number of user- and DBA-accessible views to monitor the progress of jobs:
DBA_DATAPUMP_JOBS: This shows a summary of all active Data Pump jobs on the system.
USER_DATAPUMP_JOBS: This shows a summary of the current user’s active Data Pump jobs.
DBA_DATAPUMP_SESSIONS: This shows all sessions currently attached to Data Pump jobs.
V$SESSION_LONGOPS: A row is maintained in the view showing progress on each active Data Pump job. The OPNAME column displays the Data Pump job name.
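
A hedged example of watching a running export job's progress from V$SESSION_LONGOPS (the job-name filter is only illustrative):

SQL> select opname, target_desc, sofar, totalwork, round(sofar/totalwork*100,2) pct_done
     from v$session_longops
     where opname like 'SYS_EXPORT%' and totalwork > 0 and sofar <> totalwork;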

 Data Pump Import:
> impdp username/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim

In the Data Pump utility, the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option. Example: export of an entire database to a dump file with all GRANTS, INDEXES, and data:
 > exp username/password FULL=y FILE=dba.dmp GRANTS=y INDEXES=y ROWS=y LOG=exportdba.log

> expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Following are a few EXPDP and IMPDP commands with additional attributes.

$ expdp dpuser/dpuser@TDB10G schemas=dpuser include=TABLE:\"IN (\'EMP\', \'DEPT\')\" directory=dpump_dir1 dumpfile=dpuser.dmp logfile=dpuser.log
$ expdp dpuser/dpuser@TDB10G schemas=dpuser exclude=TABLE:\"= \'EMP_DETAILS\'\" directory=dpump_dir1 dumpfile=dpuser2.dmp logfile=dpuser.log
$ impdp dpuser/dpuser@TDB10G schemas=dbuser directory=dumpdir dumpfile=expdp_TDB10G_dbuser.200470215_07.%U.dmp parallel=4
$ impdp dpuser/dpuser@TDB10G schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp logfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out
$ expdp dpuser/dpuser@TDB10G schemas=dpuser directory=dpump_dir1 parallel=4 dumpfile=dpuser_%U.dmp logfile=dpuser.log

CREATING AN ORACLE DATABASE USING IMPDP AND EXPDP

Steps for creating the Oracle database:
Before creating the database, we have to take a backup/export of the entire existing schema.
Syntax: > expdp username/password FULL=y INCLUDE=GRANT INCLUDE=INDEX DIRECTORY=dpump_dir1 DUMPFILE=dba.dmp CONTENT=ALL

Delete the database – drop all objects and public synonyms for objects owned by the schema.
Create the database again using the Oracle Database Configuration Assistant as a clone of the deleted database. Connect to the Oracle database as the system user (or as sysdba) and run scripts for tablespace creation, creation of users and roles, and granting of privileges.

Syntax:
create tablespace Tablespace_name
datafile '/&oradata_dir/oradata/&orasid/tablespace_name_datafile01.dbf' size 24M
 autoextend on next 24M maxsize 15000M
extent management local
segment space management auto
/

Create the dump directory using CREATE OR REPLACE DIRECTORY (dpump_dir1) and grant read/write privileges on the directory object to the user.
The following SQL statements create a user, a directory object named dpump_dir1, and grant the permissions to the user.

 Syntax:
sqlplus system/manager@TDB10G as sysdba

SQL> create user dpuser identified by dpuser;
SQL> grant connect, resource to dpuser;
SQL> CREATE DIRECTORY dpump_dir1 AS '/opt/app/oracle';
SQL> grant read, write on directory dpump_dir1 to dpuser;

Connect to the newly created schema to test the connection.
SQL> connect dpuser/dpuser@TDB10G

Run the IMPDP command from the command prompt
 Syntax:
$ impdp dpuser/dpuser@TDB10G schemas=dbuser dumpfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.%U.dmp logfile=DATA_PUMP_DIR:expdp_TDB10G_dbuser.200507_165936.out

Verify the log file for errors or discrepancies.
We need to copy the .DMP files into the default database directory (DATA_PUMP_DIR) path, e.g.
I:\oracle\product\10.2.0\admin\TDB10G\dpdump\
We can also place these .DMP files in the path of a database directory object that we have created.

Table Level Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.

Schema Level Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log


Database Level Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
