Importing a traditionally exported dump file into a pluggable database:
Step 1:
Creating a new pluggable database:
Create a new pluggable database using DBCA (Database Configuration Assistant).
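If you prefer to create the PDB from SQL*Plus instead of DBCA, a minimal sketch would look like the following (the PDB name ORA123, the admin credentials, and the file_name_convert paths are assumptions; adjust them to your environment):
create pluggable database ORA123
admin user pdbadmin identified by welcome1
file_name_convert = ('D:\ORADATA\ORC\PDBSEED\', 'D:\ORADATA\ORC\ORA123\');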
Step 2:
Checking the status of the created pluggable database:
select con_id, name, open_mode from v$pdbs;
Step 3:
Command to open the PDB:
alter pluggable database ORA123 open;
Use the command below to open all PDBs:
alter pluggable database all open;
Step 4:
Connecting to the PDB:
alter session set container=ORA123;
Step 5:
Checking the current PDB name:
show con_name
Step 6:
As per the customer's requirement, create a new tablespace or use a previously created one.
Creating Tablespace:
create tablespace tbs1
datafile 'D:\ORADATA\ORC\DATAFILE\tbs101.dbf' size 6g
autoextend on
next 512m
maxsize unlimited;
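To confirm the tablespace was created, a quick check can be run (a sketch; the tablespace name matches the one created above):
select tablespace_name, status, contents from dba_tablespaces where tablespace_name = 'TBS1';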
Step 7:
Checking whether a default temporary tablespace is available:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';
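If no default temporary tablespace is reported, one can be created and set as the default. A minimal sketch, assuming the tempfile path and sizes shown here:
create temporary tablespace temp
tempfile 'D:\ORADATA\ORC\DATAFILE\temp01.dbf' size 2g
autoextend on next 256m maxsize unlimited;
alter database default temporary tablespace temp;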
Step 8:
Creating a new user and assigning the created tablespace and the temp tablespace:
create user pdbuser identified by welcome1
PROFILE DEFAULT
DEFAULT TABLESPACE tbs1
TEMPORARY TABLESPACE TEMP;
Step 9:
Checking whether the user exists:
col username for a10
col account_status for a10
col created for a20
select username, account_status, default_tablespace, created from dba_users where username='PDBUSER';
Step 10:
Granting privileges to the user:
grant connect,resource to pdbuser;
or
grant dba to pdbuser;
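Since the import will create segments in the new tablespace, the user may also need a quota on it (shown here as unlimited; adjust as per your policy):
alter user pdbuser quota unlimited on tbs1;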
Step 11:
Connecting as the user through SQL*Plus or SQL Developer:
conn pdbuser/welcome1@//localhost:1521/ORA123
Step 12:
Importing the dump file:
Note: As per the requirement, move the dump file to the chosen location (here D:\bkpdump) and verify the path before running the import.
Step 12.1: D:\>imp file=D:\bkpdump\Saturday.dmp log=D:\bkpdump\Saturday.log full=y
Step 12.2: When prompted for the username, enter: pdbuser/welcome1@//localhost:1521/ORA123
Step 13:
While the import runs, we can check the tablespace usage and import progress:
set lines 132
set pages 105
set pause OFF
set echo OFF
column "TABLESPACE" format a30
column "TOTAL ALLOC (MB)" format 9,999,990
column "TOTAL PHYS ALLOC (MB)" format 9,999,990
column "USED (MB)" format 9,999,990
column "FREE (MB)" format 9,999,990
column "% USED" format 990
SELECT a.tablespace_name "TABLESPACE",
       a.bytes_alloc / ( 1024 * 1024 ) "TOTAL ALLOC (MB)",
       a.physical_bytes / ( 1024 * 1024 ) "TOTAL PHYS ALLOC (MB)",
       Nvl(b.tot_used, 0) / ( 1024 * 1024 ) "USED (MB)",
       ( Nvl(b.tot_used, 0) / a.bytes_alloc ) * 100 "% USED"
FROM   (SELECT tablespace_name,
               SUM(bytes) physical_bytes,
               SUM(Decode(autoextensible, 'NO', bytes, 'YES', maxbytes)) bytes_alloc
        FROM   dba_data_files
        GROUP  BY tablespace_name) a,
       (SELECT tablespace_name,
               SUM(bytes) tot_used
        FROM   dba_segments
        GROUP  BY tablespace_name) b
WHERE  a.tablespace_name = b.tablespace_name (+)
AND    a.tablespace_name NOT IN (SELECT DISTINCT tablespace_name
                                 FROM   dba_temp_files);
(OR)
SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
       ROUND(SOFAR/TOTALWORK*100, 2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'IMP%'
AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0
AND SOFAR != TOTALWORK;
Step 14:
In case of a tablespace size alert, we can resize an existing datafile or add a new one:
Command 1:
alter database datafile 'D:\ORADATA\ORC\DBBC46EA5974429996E583E0A75A9C11\DATAFILE\O1_MF_SYSTEM_J0BWO2C5_.DBF' resize 50g;
(or)
Command 2:
alter tablespace system add datafile 'D:\ORADATA\ORC\DBBC46EA5974429996E583E0A75A9C11\DATAFILE\O1_MF_SYSTEM_J0BWO2C6_.DBF'
size 19g autoextend on next 512m maxsize unlimited;
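To confirm the change, the datafile sizes and autoextend settings can be checked with a query along these lines (a sketch; change the tablespace name as needed):
select file_name, bytes/1024/1024/1024 "SIZE_GB", autoextensible from dba_data_files where tablespace_name = 'SYSTEM';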
Step 15:
After importing, we can check the schemas available in the database:
select username from dba_users where oracle_maintained='N';
Or
SQL> select OWNER, sum(bytes)/1024/1024/1000 "SIZE_IN_GB" from dba_segments group by owner order by owner;
OWNER                 SIZE_IN_GB
-------------------- -----------
APEX_030200              .073125
AUDSYS                   .088625
BRANCH                 7.4678125
BRANCHTMP                   .003
CTXSYS                  .0026875
DBSNMP                  .0001875
DVSYS                   .0045625
GSMADMIN_INTERNAL           .001
HR                      .0015625
IX                          .001
LBACSYS                 .0003125
MDSYS                    .089875
OE                       .008625
OJVMSYS                  .000375
OLAPSYS                     .006
ORDDATA                 .0013125
ORDSYS                   .000375
SCOTT                   .0003125
SH                      .0204375
SYS                     .4998125
SYSMAN                    .04375
SYSTEM                  .0124375
WMSYS                   .0065625
XDB                      .060625
                     -----------
sum                     8.394375

24 rows selected.