Export/Import - Data Pump interview questions and answers -- 1 in Oracle

1) What is the use of the CONSISTENT option in exp?
Ensures cross-table consistency by implementing SET TRANSACTION READ ONLY for the duration of the export. Default value is N.
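For example, a consistent export of a set of related tables might look like this (user, table names and file names are illustrative):

 exp scott/tiger tables=emp,dept consistent=y file=emp_dept.dmp log=emp_dept.log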

2) What is the use of the DIRECT=Y option in exp?
Setting DIRECT=Y extracts data by reading it directly, bypassing the SQL command-processing layer (the evaluation buffer), so the export is usually faster than a conventional-path export. Default value is N.
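A sketch of a direct-path export (names are illustrative); note that BUFFER applies only to conventional-path exports, while RECORDLENGTH governs the I/O buffer in direct mode:

 exp system/password tables=scott.emp direct=y recordlength=65535 file=emp.dmp log=emp.log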

3) What is the use of the COMPRESS option in exp?
Consolidates the table data into one initial extent on import. Specifies how export manages the initial extent for the table data.
This parameter is helpful during database reorganization.
Export the objects (especially tables and indexes) with COMPRESS=Y.
If a table spans 20 extents of 1 MB each (which is undesirable from a performance point of view) and you export it with COMPRESS=Y, the generated DDL will have an INITIAL extent of 20 MB, so the extents are coalesced into one on import.
Sometimes it is desirable to export with COMPRESS=N, for example when you do not have enough contiguous space in the tablespace and do not want the import to fail.
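For instance, to keep the original extent definitions when contiguous space is scarce (names are illustrative):

 exp scott/tiger tables=big_table compress=n file=big_table.dmp log=big_table.log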

4) How to improve exp performance?
a). Set the BUFFER parameter to a high value. Default is 256KB.
b). Stop unnecessary applications to free the resources.
c). If you are running multiple sessions, make sure they write to different disks.
d). Do not export to an NFS (Network File System) mount. Exporting to a local disk is faster.
e). Set the RECORDLENGTH parameter to a high value.
f). Use DIRECT=yes (direct mode export).
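Putting several of these options together, a full direct-path export might look like this (credentials and file names are illustrative; BUFFER is omitted because it applies only to conventional-path exports):

 exp system/password full=y direct=y recordlength=65535 file=full.dmp log=full_exp.log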

5) How to improve imp performance?
a). Place the file to be imported on a separate disk from the datafiles.
b). Increase the DB_CACHE_SIZE.
c). Set LOG_BUFFER to a big size.
d). Stop redo log archiving, if possible.
e). Use COMMIT=N, if possible.
f). Set the BUFFER parameter to a high value. Default is 256KB.
g). It is advisable to drop indexes before importing to speed up the import process, or set INDEXES=N and build the indexes after the import.
Indexes can easily be recreated after the data has been successfully imported.
h). Disable INSERT triggers, as they fire during import.
i). Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.
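A sketch combining some of these options (file and user names are illustrative):

 imp system/password file=exp.dmp log=imp.log fromuser=scott touser=scott buffer=10485760 commit=n indexes=n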

6) What is the use of the INDEXFILE option in imp?
Writes the DDL of the objects in the dump file into the specified file instead of importing them; it is a handy way to extract object DDL from an export dump.
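For example (file names are illustrative):

 imp system/password file=exp.dmp indexfile=objects_ddl.sql full=y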

7) What is the use of the IGNORE option in imp?
Ignores object-creation errors (for example, "table already exists") during the import and continues loading the data.
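For example, to load data into tables that already exist (names are illustrative):

 imp system/password file=exp.dmp fromuser=scott touser=scott ignore=y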

8) What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server-centric (dump files are written on the server).
Data Pump has APIs; Data Pump jobs can be run from PL/SQL procedures via DBMS_DATAPUMP (see the sketch after this list).
In Data Pump, jobs can be stopped and restarted.
Data Pump can do parallel execution.
Tapes and pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user if the user doesn't exist.
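A minimal PL/SQL sketch of driving a schema-mode export through the DBMS_DATAPUMP API (the schema name, job name and directory object are assumptions for illustration):

 DECLARE
   h  NUMBER;
   js VARCHAR2(30);
 BEGIN
   -- open a schema-mode export job
   h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'DEMO_EXP_JOB');
   -- dump file is written on the server, into the DATA_PUMP_DIR directory object
   DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_schema.dmp', directory => 'DATA_PUMP_DIR');
   -- export only the SCOTT schema
   DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
   DBMS_DATAPUMP.START_JOB(h);
   -- block until the job finishes and report its final state
   DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => js);
   DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || js);
 END;
 /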

9) Why is expdp faster than exp (or) why is Data Pump faster than conventional export/import?
Data Pump works in block mode, whereas exp works in byte mode.
Data Pump can do parallel execution.
Data Pump uses the direct path API.

10) How to improve expdp performance?
Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.
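A sketch (schema, directory object and file names are illustrative); note the %U substitution variable so each worker writes to its own dump file:

 expdp system/password schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott_exp.log parallel=4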

11) How to improve impdp performance?
Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.
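The matching import (again, names are illustrative):

 impdp system/password schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott_imp.log parallel=4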

12) In Data Pump, where is the job info stored (or) if you restart a Data Pump job, how does it know from where to resume?

Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job; it is dropped once the job completes. From this table, Oracle finds out how much of the job has completed and from where to continue.

Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
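Running jobs can be listed and re-attached to, for example (the job name shown is the default for a schema export):

 select owner_name, job_name, operation, state from dba_datapump_jobs;
 $ expdp system/password attach=SYS_EXPORT_SCHEMA_01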

13) What is the order of importing objects in impdp?

Objects are imported in dependency order (an object is created only after the objects it depends on); the list includes, among others:
 Database links
 Materialized views

14) How to import only metadata?
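One standard way is the Data Pump CONTENT parameter: CONTENT=METADATA_ONLY transfers only object definitions (DDL) and no row data. A sketch (directory object and file name are illustrative):

 impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp content=metadata_only

The same parameter works on expdp to produce a metadata-only dump file in the first place.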

15) How to import into different user/tablespace/datafile/table?
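The impdp REMAP_* parameters cover these cases: REMAP_SCHEMA, REMAP_TABLESPACE, REMAP_DATAFILE and REMAP_TABLE (the last one is available from 11g). Sketches, with illustrative names and paths:

 impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_schema=scott:hr
 impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp remap_tablespace=users:example
 impdp system/password directory=DATA_PUMP_DIR dumpfile=full.dmp full=y remap_datafile='/u01/oradata/src/users01.dbf':'/u02/oradata/tgt/users01.dbf'
 impdp system/password directory=DATA_PUMP_DIR dumpfile=scott.dmp tables=scott.emp remap_table=scott.emp:emp_copy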



16) Using Data Pump, how do you export from a higher version (11g) and import into a lower version (10g)? Can we import into 9i?

Import Data Pump (impdp) can always read dump file sets created by expdp on older database versions, so exporting on 10g and importing into 11g just works.
The VERSION parameter is for the other way around: to take data from an 11g database into a 10g database, specify VERSION (e.g. VERSION=10.2) when taking the export, so that expdp writes a dump file the older release can read.
Data Pump does not exist in 9i, so a Data Pump dump file cannot be imported into 9i; for 9i you would have to use the original exp/imp utilities.
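A sketch of taking a downgrade-compatible export (names are illustrative):

 expdp system/password schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_v102.dmp logfile=scott_v102.log version=10.2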

17) How to transport tablespaces (including across platforms) using exp/imp or expdp/impdp?

ANS: [http://satya-dba.blogspot.in/2010/01/oracle-transportable-tablespaces-tts.html ]

We can use the transportable tablespaces feature to copy or move a subset of data (a set of user tablespaces) from one Oracle database and plug it into another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.

With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology, which moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on the same OS platform and use the same database block size.

With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.

With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX) that have the same endian format. Oracle Database 10g Release 1 introduced cross-platform transportable tablespaces (XTTS), which allows datafiles to be moved between platforms of different endian formats; XTTS is an enhancement to transportable tablespaces (TTS). If the endian formats are different, the datafiles have to be converted with RMAN (e.g. Windows to Solaris, Tru64 to AIX).

 select * from v$transportable_platform order by platform_id;
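A condensed sketch of the expdp/impdp transport workflow for a single tablespace (tablespace name, paths and directory object are illustrative):

 -- on the source database: make the tablespace read only and export its metadata
 ALTER TABLESPACE users READ ONLY;
 $ expdp system/password directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_tablespaces=USERS
 -- copy the dump file and the tablespace's datafiles to the target server
 -- (convert the datafiles with RMAN CONVERT if the endian formats differ)
 -- on the target database: plug the tablespace in
 $ impdp system/password directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_datafiles='/u02/oradata/TGT/users01.dbf'
 ALTER TABLESPACE users READ WRITE;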

18) How to determine the schemas inside an Oracle Data Pump export file?

 $ strings dumpfile.dmp | grep SCHEMA_LIST
 $ strings dumpfile.dmp | more

