expdp – impdp
1- We create a directory object on the source database for the export.
SQL> CREATE DIRECTORY ORACLE_BACKUP AS '/orabkp/export_muslera/';
Directory created.
2- We grant read and write privileges on the ORACLE_BACKUP directory we created to the ELMANDER user, along with the privileges required for export and import.
SQL> GRANT READ, WRITE ON DIRECTORY ORACLE_BACKUP TO ELMANDER WITH GRANT OPTION;
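The export itself can then be taken as the ELMANDER user through the ORACLE_BACKUP directory. A minimal sketch of a schema-level export; the dump and log file names (elmander.dmp, elmander_exp.log) are only illustrative assumptions:

expdp ELMANDER/password directory=ORACLE_BACKUP schemas=ELMANDER dumpfile=elmander.dmp logfile=elmander_exp.log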
4- We zip the export we have taken.
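A minimal sketch of this step, assuming the dump file from the sketch above and gzip as the archiver (zip would work the same way):

cd /orabkp/export_muslera/
gzip elmander.dmp          # produces elmander.dmp.gz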
5- We create a directory object on the target database for the import.
SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/app/admin/DROGBA/dpdump/';
Directory created.
6- We grant read and write privileges on the DATA_PUMP_DIR directory we created to the ELMANDER user, along with the privileges required for export and import.
SQL> GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO ELMANDER;
Grant succeeded.
SQL> GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO EXP_FULL_DATABASE;
Grant succeeded.
SQL> GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO IMP_FULL_DATABASE;
Grant succeeded.
SQL> exit
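The compressed dump also has to reach the target host's DATA_PUMP_DIR path before the import. A sketch of the transfer; the target hostname (drogba-db) and the file name are assumptions:

scp /orabkp/export_muslera/elmander.dmp.gz oracle@drogba-db:/u01/app/admin/DROGBA/dpdump/
ssh oracle@drogba-db 'gunzip /u01/app/admin/DROGBA/dpdump/elmander.dmp.gz'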
7- We create a tablespace for the ELMANDER schema on the target database.
SQL> CREATE TABLESPACE ELMANDER
  DATAFILE '+DATA/DROGBA/DATAFILE/dbf_elmander'
  SIZE 100M AUTOEXTEND ON NEXT 100M
  MAXSIZE UNLIMITED
  LOGGING
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
Tablespace created.
SQL> exit
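With the directory, the grants and the tablespace in place, the import can be run on the target. A minimal sketch; the dump file name and the remap from an assumed source tablespace (USERS) to the new ELMANDER tablespace are illustrative assumptions:

impdp system/password directory=DATA_PUMP_DIR dumpfile=elmander.dmp logfile=elmander_imp.log schemas=ELMANDER remap_tablespace=USERS:ELMANDER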
Some useful Data Pump parameters are also listed below.
Exporting
database : expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log
  Note: increase the parallel option based on the number of CPUs you have
schema : expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log
table : expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only
tablespace : expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log
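For longer command lines like the full export above, the options can also be collected in a parameter file and passed with parfile. A sketch reusing the same values; the file name full_exp.par is an assumption:

$ cat full_exp.par
directory=datapump
full=y
dumpfile=data.dmp
filesize=2G
parallel=2
logfile=full.log

$ expdp vallep/password parfile=full_exp.par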
Importing
database : impdp system/password full=y dumpfile=data.dmp nologfile=y
schema change : impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only
  impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…
Other Options
directory : specifies an Oracle directory object
filesize : split the dump file into specific sizes (could be used if the filesystem has a 2GB limit)
parfile : specify the parameter file
content : contents option can be ALL, METADATA_ONLY or DATA_ONLY
compression : compression is used by default but you can stop it
exclude/include : metadata filtering
query : selectively export table data using a SQL statement
estimate : calculate job estimates where the valid keywords are blocks and statistics
estimate_only : calculate job estimates without performing the export
network_link : you can perform an export across a network
encryption : you can encrypt data within the data pump file
parallel : increase worker processes to increase throughput, base it on the number of CPUs
remap_schema : move objects from one schema to another
remap_datafile : change the name of the datafile when moving across different systems
remap_tablespace : move from one tablespace to another
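exclude/include and query are easiest to combine in a parameter file, which also avoids shell quoting problems. A sketch; the schema, table, column and file names here are assumptions:

$ cat filtered_exp.par
schemas=testuser
directory=datapump
dumpfile=filtered.dmp
logfile=filtered.log
exclude=INDEX
exclude=STATISTICS
query=testuser.accounts:"WHERE created >= SYSDATE - 30"

$ expdp vallep/password parfile=filtered_exp.par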
Useful Views
DBA_DATAPUMP_JOBS : summary information of all currently running data pump jobs
DBA_DATAPUMP_SESSIONS : displays the user sessions currently attached to data pump jobs
V$SESSION_LONGOPS : displays information like totalwork, sofar, units and opname
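A sketch of how these views could be queried while a job is running; the filter on sofar <> totalwork is only there to limit the output to operations still in progress:

SQL> SELECT owner_name, job_name, operation, job_mode, state
     FROM dba_datapump_jobs;

SQL> SELECT opname, sofar, totalwork, units, time_remaining
     FROM v$session_longops
     WHERE sofar <> totalwork;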
Privileges
IMP_FULL_DATABASE : required if using advanced features
EXP_FULL_DATABASE : required if using advanced features
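If the ELMANDER user has to export or import objects outside its own schema (full or cross-schema jobs), these roles could be granted to it directly; a minimal sketch:

SQL> GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO ELMANDER;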
Sources:
http://www.oracle.com/technetwork/issue-archive/2009/09-jul/datapump11g2009-quickstart-128718.pdf
http://www.datadisk.co.uk/html_docs/oracle/data_pump.htm