Oracle Day 49

[Remote exp – Local imp]

From the copy server, perform a remote exp of db1's data, then a local imp into copydb.

0. Delete existing data & create the itwill4 user – on copydb

-- delete the data on the copydb side
select * from scott.exp_test5;
drop table scott.exp_test5;

-- create the itwill4 user
create user itwill4 identified by oracle
quota unlimited on users;
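-- to double-check the user was created, a quick look at the standard dba_users view works:
select username, default_tablespace from dba_users where username='ITWILL4';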

1. Remote exp

exp system/oracle@db1 file=test5.dmp log=test5.log owner=itwill4 statistics=none
-- this only works if DB1 is defined in tnsnames.ora on the copy side
-- checking tnsnames.ora, the host for DB1 was set to copy instead of oel7
-- before fixing it, the imp loaded nothing -- the exp was pulling from a database that didn't have the data, so of course it failed
-- after fixing it, it worked fine
---- looking at it again now, the alias in tnsnames.ora had actually been created as oel7, not db1
---- so this would have worked as-is: exp system/oracle@oel7 file=test5.dmp log=test5.log owner=itwill4 statistics=none
---- the instructor's example threw me off there
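
For reference, the DB1 alias on the copy side would need to point at oel7; a rough tnsnames.ora entry (the port and service name here are assumptions based on typical defaults):

DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oel7)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = db1))
  )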

-- when I ran this, I got an error saying there was no default temporary tablespace
-- the temp.dbf file really was missing, too
-- so I created a temp tablespace:
CREATE TEMPORARY TABLESPACE temp01
TEMPFILE '/oracle12/recover/temp01.dbf' SIZE 50M
AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
-- set it as the default temporary tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp01;
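-- to confirm the new default, database_properties can be checked (expect TEMP01):
select property_value from database_properties where property_name='DEFAULT_TEMP_TABLESPACE';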

-- after rm test5* and re-running the exp, it worked

2. Local imp

imp system/oracle file=test5.dmp log=test5.log full=y
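-- as an alternative to full=y, legacy imp can restrict the load to the exported owner; a sketch with fromuser/touser (same dump file as above):
-- imp system/oracle file=test5.dmp log=test5.log fromuser=itwill4 touser=itwill4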

3. Verify the data

select owner from dba_tables where table_name='EXP_TEST5';
-- returns itwill4
select * from itwill4.exp_test5;

[Transfer mode comparison – on oel7 db1]

conventional mode vs direct mode

conventional mode

cd ~/datapump
time exp system/oracle file=full.dmp log=full.log full=y
-- time output
real    0m21.357s
user    0m2.091s
sys     0m2.577s

direct mode

time exp system/oracle file=full.dmp log=full.log full=y direct=y
-- time output
real    0m18.310s
user    0m1.990s
sys     0m2.248s

** The difference is small here, but as the data volume grows, direct mode becomes much faster: direct=y bypasses the SQL command-processing (evaluation buffer) layer and reads data blocks directly.

[Importing only a specific table from a full mode exp dump file]

A full mode exp dump file is rarely imported in its entirety; when a specific table is lost, just that table can be imported from it.

0. Pull the delivery table from orcl – on oel7 db1

The scott.delivery data was not present on oel7 db1.

insert into scott.delivery
select * from scott.delivery@dblink_orcl;
commit;
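
If the table structure itself were missing on db1 (not just the rows), a CTAS over the same dblink would create and load it in one step; a sketch:

create table scott.delivery as select * from scott.delivery@dblink_orcl;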

1. exp full

cd ~/datapump
exp system/oracle file=full.dmp log=full.log full=y direct=y

2. Simulate a failure: truncate scott.delivery

truncate table scott.delivery;

3. Import only that table

imp scott/oracle file=full.dmp log=full.log tables=delivery ignore=y
-- ignore=y: the table already exists, so ignore the creation error and just load the rows
-- optionally add constraints=n to skip re-creating constraints

4. Verify the data

select * from scott.delivery;
-- some Korean text is garbled: it displays fine when selected in Orange, but appears broken when queried in sqlplus

[Fixing broken Korean characters]

1. Modify NLS-related parameters – I couldn't note them all down…

update sys.props$ set value$ = 'KOREAN' where name = 'NLS_LANGUAGE';
update sys.pro
-- there were more updates, but I didn't get them written down...

2. Set the Linux shell locale

vi .bash_profile
export LANG=ko_KR.UTF-8
export NLS_LANG=KOREAN_KOREA.AL32UTF8

. .bash_profile
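
It also helps to check that NLS_LANG matches the database character set; a standard dictionary query:

select parameter, value from nls_database_parameters
where parameter in ('NLS_CHARACTERSET','NLS_LANGUAGE');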

[datapump]

A migration tool provided since Oracle 10g.
Improvements over the legacy exp/imp:
1) Better security: it does not have to go through the OS (files are written server-side via directory objects)
2) More features: the parallel option can be used
3) Broader version support: expdp from a higher version -> impdp into a lower version is possible
4) Restartable jobs: an interrupted job can be resumed and continued (see the attach sketch below)
5) Job time estimates: the approximate time of a running export/import can be estimated
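
For item 4, resuming works by re-attaching to the stopped job; a sketch (the job name below is hypothetical -- the real one shows up in the expdp output or in dba_datapump_jobs):

expdp system/oracle attach=SYS_EXPORT_FULL_01
-- at the Export> prompt, START_JOB (or CONTINUE_CLIENT) resumes the stopped job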

expdp -help
   Format:  expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

The available keywords and their descriptions follow. Default values are listed within square brackets.

ABORT_STEP
Stop the job after it is initialized or at the indicated object.
Valid values are -1 or N where N is zero or greater.
N corresponds to the object's process order number in the master table.
ACCESS_METHOD
Instructs Export to use a particular method to unload data.
Valid keyword values are: [AUTOMATIC], DIRECT_PATH and EXTERNAL_TABLE.
ATTACH
Attach to an existing job.
For example, ATTACH=job_name.
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].
COMPRESSION
Reduce the size of a dump file.
Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.
COMPRESSION_ALGORITHM
Specify the compression algorithm that should be used.
Valid keyword values are: [BASIC], LOW, MEDIUM and HIGH.
CONTENT
Specifies data to unload.
Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keyword values are: XML_CLOBS.
DIRECTORY
Directory object to be used for dump and log files.
DUMPFILE
Specify list of destination dump file names [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION
Encrypt part or all of a dump file.
Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
ENCRYPTION_ALGORITHM
Specify how encryption should be done.
Valid keyword values are: [AES128], AES192 and AES256.
ENCRYPTION_MODE
Method of generating encryption key.
Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].
ENCRYPTION_PASSWORD
Password key for creating encrypted data within a dump file.
ENCRYPTION_PWD_PROMPT
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.
ESTIMATE
Calculate job estimates.
Valid keyword values are: [BLOCKS] and STATISTICS.
ESTIMATE_ONLY
Calculate job estimates without performing the export [NO].
EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".
FILESIZE
Specify the size of each dump file in units of bytes.
FLASHBACK_SCN
SCN used to reset session snapshot.
FLASHBACK_TIME
Time used to find the closest corresponding SCN value.
FULL
Export entire database [NO].
HELP
Display Help messages [NO].
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
JOB_NAME
Name of export job to create.
KEEP_MASTER
Retain the master table after an export job that completes successfully [NO].
LOGFILE
Specify log file name [export.log].
LOGTIME
Specifies that messages displayed during export operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.
METRICS
Report additional job information to the export log file [NO].
NETWORK_LINK
Name of remote database link to the source system.
NOLOGFILE
Do not write log file [NO].
PARALLEL
Change the number of active workers for current job.
PARFILE
Specify parameter file name.
QUERY
Predicate clause used to export a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].
SAMPLE
Percentage of data to be exported.
SCHEMAS
List of schemas to export [login schema].
SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle RAC resources.
SOURCE_EDITION
Edition to be used for extracting metadata.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
TABLES
Identifies a list of tables to export.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
TABLESPACES
Identifies a list of tablespaces to export.
TRANSPORTABLE
Specify whether transportable method can be used.
Valid keyword values are: ALWAYS and [NEVER].
TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].
TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be unloaded.
VERSION
Version of objects to export.
Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.
VIEWS_AS_TABLES
Identifies one or more views to be exported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.

impdp -help
     Format:  impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

------------------------------------------------------------------------------

The available keywords and their descriptions follow. Default values are listed within square brackets.

ABORT_STEP
Stop the job after it is initialized or at the indicated object.
Valid values are -1 or N where N is zero or greater.
N corresponds to the object's process order number in the master table.
ACCESS_METHOD
Instructs Import to use a particular method to load data.
Valid keyword values are: [AUTOMATIC], CONVENTIONAL, DIRECT_PATH,
EXTERNAL_TABLE, and INSERT_AS_SELECT.
ATTACH
Attach to an existing job.
For example, ATTACH=job_name.
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC [YES].
CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keywords are: DISABLE_APPEND_HINT, SKIP_CONSTRAINT_ERRORS, REJECT_ROWS_WITH_REPL_CHAR and VALIDATE_TABLE_DATA.
DIRECTORY
Directory object to be used for dump, log and SQL files.
DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD
Password key for accessing encrypted data within a dump file.
Not valid for network import jobs.
ENCRYPTION_PWD_PROMPT
Specifies whether to prompt for the encryption password [NO].
Terminal echo will be suppressed while standard input is read.
ESTIMATE
Calculate network job estimates.
Valid keywords are: [BLOCKS] and STATISTICS.
EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".
FLASHBACK_SCN
SCN used to reset session snapshot.
FLASHBACK_TIME
Time used to find the closest corresponding SCN value.
FULL
Import everything from source [YES].
HELP
Display help messages [NO].
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
JOB_NAME
Name of import job to create.
KEEP_MASTER
Retain the master table after an import job that completes successfully [NO].
LOGFILE
Log file name [import.log].
LOGTIME
Specifies that messages displayed during import operations be timestamped.
Valid keyword values are: ALL, [NONE], LOGFILE and STATUS.
MASTER_ONLY
Import just the master table and then stop the job [NO].
METRICS
Report additional job information to the import log file [NO].
NETWORK_LINK
Name of remote database link to the source system.
NOLOGFILE
Do not write log file [NO].
PARALLEL
Change the number of active workers for current job.
PARFILE
Specify parameter file.
PARTITION_OPTIONS
Specify how partitions should be transformed.
Valid keywords are: DEPARTITION, MERGE and [NONE].
QUERY
Predicate clause used to import a subset of a table.
For example, QUERY=employees:"WHERE department_id > 10".
REMAP_DATA
Specify a data conversion function.
For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REMAP_DATAFILE
Redefine data file references in all DDL statements.
REMAP_SCHEMA
Objects from one schema are loaded into another schema.
REMAP_TABLE
Table names are remapped to another table.
For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.
REMAP_TABLESPACE
Tablespace objects are remapped to another tablespace.
REUSE_DATAFILES
Tablespace will be initialized if it already exists [NO].
SCHEMAS
List of schemas to import.
SERVICE_NAME
Name of an active Service and associated resource group to constrain Oracle RAC resources.
SKIP_UNUSABLE_INDEXES
Skip indexes that were set to the Index Unusable state.
SOURCE_EDITION
Edition to be used for extracting metadata.
SQLFILE
Write all the SQL DDL to a specified file.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STREAMS_CONFIGURATION
Enable the loading of Streams metadata [YES].
TABLE_EXISTS_ACTION
Action to take if imported object already exists.
Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
TABLES
Identifies a list of tables to import.
For example, TABLES=HR.EMPLOYEES,SH.SALES:SALES_1995.
TABLESPACES
Identifies a list of tablespaces to import.
TARGET_EDITION
Edition to be used for loading metadata.
TRANSFORM
Metadata transform to apply to applicable objects.
Valid keywords are: DISABLE_ARCHIVE_LOGGING, INMEMORY, INMEMORY_CLAUSE,
LOB_STORAGE, OID, PCTSPACE, SEGMENT_ATTRIBUTES, SEGMENT_CREATION,
STORAGE, and TABLE_COMPRESSION_CLAUSE.
TRANSPORTABLE
Options for choosing transportable data movement.
Valid keywords are: ALWAYS and [NEVER].
Only valid in NETWORK_LINK mode import operations.
TRANSPORT_DATAFILES
List of data files to be imported by transportable mode.
TRANSPORT_FULL_CHECK
Verify storage segments of all tables [NO].
Only valid in NETWORK_LINK mode import operations.
TRANSPORT_TABLESPACES
List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION
Version of objects to import.
Valid keywords are: [COMPATIBLE], LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.
VIEWS_AS_TABLES
Identifies one or more views to be imported as tables.
For example, VIEWS_AS_TABLES=HR.EMP_DETAILS_VIEW.
Note that in network import mode, a table name may be appended
to the view name.

[datapump modes]

1. table mode

expdp userid=system/oracle tables=scott.emp

2. user (schema) mode

expdp system/oracle schemas=(scott,hr)

3. tablespace mode (supported when the block size and character set are identical)

expdp system/oracle tablespaces=(users,test01)

4. full mode : can only be run by a user with DBA privileges or the DATAPUMP_EXP_FULL_DATABASE role

expdp system/oracle full=y
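
A full mode run would typically pair the parallel option with the %U substitution variable so each worker writes its own file; a sketch using the datapump directory object set up in the next section:

expdp system/oracle full=y parallel=4 directory=datapump dumpfile=full_%U.dmp logfile=full.log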

[datapump prerequisites (both servers)]

1. Create the OS directory

mkdir -p /home/oracle/datapump

2. Create the directory object in the DB and grant privileges

> create or replace directory datapump as '/home/oracle/datapump';
> grant read, write on directory datapump to scott;
> exec dbms_metadata_util.load_stylesheets;  -- needed for metadata XML conversion
-- PL/SQL procedure successfully completed.

-- if an error occurs while running the procedure, a temp file needs to be added
alter tablespace temp add tempfile '/home/oracle/oradata/db1/temp01.dbf' size 100m autoextend on;
-- on copydb:
alter tablespace temp add tempfile '/oracle12/recover/temp01.dbf' size 100m autoextend on;
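
To confirm the directory object and its path, dba_directories can be queried:

select directory_name, directory_path from dba_directories where directory_name='DATAPUMP';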

[case1 : table mode expdp/impdp]

1. Create the table – on oel7 db1

create table scott.expdp_test1(no number, name varchar2(10));
insert into scott.expdp_test1 values(1,'aaa');
insert into scott.expdp_test1 values(2,'bbb');
insert into scott.expdp_test1 values(3,'ccc');
commit;

2. Run expdp

expdp system/oracle dumpfile=expdp_test1.dmp logfile=expdp_test1.log directory=datapump tables=scott.expdp_test1
-- the files are created under the pre-configured datapump directory object, which is why the prerequisite setup is required

3. Copy the files with scp

scp /home/oracle/datapump/expdp_test1* oracle@copy:/home/oracle/datapump

4. Run impdp

impdp system/oracle dumpfile=expdp_test1.dmp logfile=expdp_test1.log tables=scott.expdp_test1 directory=datapump

-- if the table already exists (sketch below):
with table_exists_action=replace, the existing data is removed and only the dump file data is imported
with table_exists_action=append, the dump file data is appended to the existing data
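
A sketch of the same import with the append behavior (same dump file and directory object as above):

impdp system/oracle dumpfile=expdp_test1.dmp logfile=expdp_test1.log directory=datapump tables=scott.expdp_test1 table_exists_action=append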

5. Verify the data

select * from scott.expdp_test1;

[What table mode transfers]

** Constraints, indexes, and object grants are imported by impdp;
** synonyms and views are not.

1. Create the table – on oel7 db1

create table scott.expdp_test2(no number, name varchar2(10));
insert into scott.expdp_test2 values(10,'AAAA');
insert into scott.expdp_test2 values(20,'BBBB');
insert into scott.expdp_test2 values(30,'CCCC');
commit;

2. Grant privileges

grant select on scott.expdp_test2 to hr;

3. Create synonyms

-- public synonym
create public synonym public_expdp_test2 for scott.expdp_test2;
-- private synonym : must be created from the scott account
create synonym private_expdp_test2 for scott.expdp_test2;

4. Create other objects (constraint, index, view)

-- 1. create a PK on (no)
alter table scott.expdp_test2 add constraint expdp_test2_pk_no primary key (no);

-- 2. make name NOT NULL
ALTER TABLE scott.expdp_test2 MODIFY name NOT NULL;

-- 3. create an index on (name)
create index scott.index_expdp_test2 on scott.expdp_test2 (name);
select * from dba_indexes where table_name='EXPDP_TEST2';

-- 4. create a view
create view scott.view_expdp_test2
as
select * from scott.expdp_test2 ;

select * from dba_views where view_name='VIEW_EXPDP_TEST2';

5. expdp

expdp system/oracle dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=datapump tables=scott.expdp_test2

6. scp

scp /home/oracle/datapump/expdp_test2* oracle@copy:/home/oracle/datapump

7. impdp – on copydb (copy server)

impdp system/oracle dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=datapump tables=scott.expdp_test2

8. Verify the data

select * from scott.expdp_test2;  -- imported successfully
select * from dba_constraints where table_name='EXPDP_TEST2';  -- present
select * from dba_indexes where table_name='EXPDP_TEST2';  -- present
select * from dba_synonyms where table_name='EXPDP_TEST2';  -- missing
select * from dba_views where view_name='VIEW_EXPDP_TEST2';  -- missing
select * from dba_tab_privs where table_name='EXPDP_TEST2';  -- present

** Constraints, indexes, and object grants are imported by impdp;
** synonyms and views are not -- table mode exports only the table and its directly dependent objects, while views and synonyms are separate objects that merely reference it.

[What schema mode transfers]

** A user that did not exist is created, and constraints, indexes, private synonyms, views, and grants are imported by impdp;
** public synonyms are not.

0. Create a user – on oel7 db1

create user datapump identified by oracle
quota unlimited on users;
grant create session, create synonym to datapump;

1. Create the table

create table datapump.expdp_test3(no number, name varchar2(10));
insert into datapump.expdp_test3 values(10,'AAAA');
insert into datapump.expdp_test3 values(20,'BBBB');
insert into datapump.expdp_test3 values(30,'CCCC');
commit;

2. Grant privileges

grant select on datapump.expdp_test3 to hr;

3. Create synonyms

-- public synonym
create public synonym public_expdp_test3 for datapump.expdp_test3;
-- private synonym : must be created from the datapump account
create synonym private_expdp_test3 for datapump.expdp_test3;

4. Create other objects (constraint, index, view)

-- 1. create a PK on (no)
alter table datapump.expdp_test3 add constraint expdp_test3_pk_no primary key (no);

-- 2. make name NOT NULL
ALTER TABLE datapump.expdp_test3 MODIFY name NOT NULL;

-- 3. create an index on (name); the index needs its own name, distinct from the table
create index datapump.index_expdp_test3 on datapump.expdp_test3 (name);

-- 4. create a view
create view datapump.view_expdp_test3
as
select * from datapump.expdp_test3 ;

5. expdp

expdp system/oracle dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=datapump schemas=datapump
-- output --
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT

6. scp

scp /home/oracle/datapump/expdp_test3* oracle@copy:/home/oracle/datapump

7. impdp – on copydb (copy server)

impdp system/oracle dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=datapump
-- output --
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DATAPUMP"."EXPDP_TEST3"                    5.515 KB       3 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER

8. Verify the data

select * from datapump.expdp_test3;  -- imported successfully
select * from dba_constraints where table_name='EXPDP_TEST3';  -- present
select * from dba_indexes where table_name='EXPDP_TEST3';  -- present
select * from dba_synonyms where table_name='EXPDP_TEST3';  -- only the private synonym is there
select * from dba_views where view_name='VIEW_EXPDP_TEST3';  -- present
select * from dba_tab_privs where table_name='EXPDP_TEST3';  -- present

** A user that did not exist is created, and constraints, indexes, private synonyms, views, and grants are imported by impdp;
** public synonyms are not.

Reference – remapping the schema on import

datapump -> scott

impdp system/oracle directory=datapump dumpfile=expdp_test3.dmp remap_schema=datapump:scott
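
To confirm the remap, the same ownership check used earlier applies (expecting SCOTT this time):

select owner from dba_tables where table_name='EXPDP_TEST3';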

datapump -> schema : it even creates the target user (here literally named schema) if it does not exist.

impdp system/oracle directory=datapump dumpfile=expdp_test3.dmp remap_schema=datapump:schema
