Archive for the category TOI
Sharing READ ONLY Tablespaces between Databases
I came across an OTN Thread today, where the question was raised whether it is possible to use the same READ ONLY Tablespace in multiple Databases. At first glance, I thought that this should of course be possible, though the answers there were somewhat discouraging. So I have done a quick test to prove it:
SQL> host mkdir /home/oracle/shared
SQL> create tablespace shared datafile '/home/oracle/shared/shared01.dbf' size 10m;
Tablespace created.
SQL> grant create session,create table to shared identified by shared;
Grant succeeded.
SQL> alter user shared quota unlimited on shared;
User altered.
SQL> create table shared.t tablespace shared as select 'Test' as word from dual;
Table created.
SQL> alter tablespace shared read only;
Tablespace altered.
The tablespace is now ready to be "transported". We pretend that the datafile is placed on a SAN, accessible from the other Database as well. I have already created the directory object DPDIR in the original database, pointing to /home/oracle/dpdir:
$ expdp transport_tablespaces=shared directory=DPDIR dumpfile=shared.dmp
Export: Release 10.2.0.1.0 - Production on Wednesday, 03 February, 2010 18:23:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":
/******** AS SYSDBA transport_tablespaces=shared directory=DPDIR dumpfile=shared.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
/home/oracle/dpdir/shared.dmp
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 18:23:43
Now I can plug the tablespace into another database – I have already set up one called tst for that purpose:
$ export ORACLE_SID=tst
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Feb 3 18:24:38 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> create directory DPDIR as '/home/oracle/dpdir';
Directory created.
SQL> grant create session,create table to shared identified by shared;
Grant succeeded.
All that is left to do now is to import the metadata with Data Pump into the tst Database:
$ impdp transport_datafiles=/home/oracle/shared/shared01.dbf directory=DPDIR dumpfile=shared.dmp
Import: Release 10.2.0.1.0 - Production on Wednesday, 03 February, 2010 18:33:13
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":
/******** AS SYSDBA transport_datafiles=/home/oracle/shared/shared01.dbf directory=DPDIR dumpfile=shared.dmp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 18:33:19
Now I am going to test whether the table is also accessible in the second Database:
$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Feb 3 18:34:21 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select * from shared.t;

WORD
----
Test
It is. So I am now sharing one tablespace between two completely different Databases 🙂
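If you want to double-check on the tst side, the data dictionary should show the plugged-in tablespace as READ ONLY there as well – a quick sketch (DBA_TABLESPACES in 10g also has a PLUGGED_IN column for this):

```sql
SQL> select tablespace_name, status, plugged_in
     from dba_tablespaces
     where tablespace_name = 'SHARED';
```

Since the tablespace is READ ONLY, neither database will ever write to the shared datafile, which is the whole reason this works at all.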
How to audit sys into an OS file owned by root
Sometimes, the requirement may arise to audit all actions of the Oracle Database superuser sys. This feature was already introduced in version 9i, together with the parameter AUDIT_SYS_OPERATIONS. If set to true, it leads to the generation of files in the directory determined by the parameter AUDIT_FILE_DEST, and these files will then contain a record of all the actions of sys. By default, the files in that directory – all with the suffix aud – record only the connections as sys, but not the actions done after the connect. They look like this:
Audit file /u01/app/oracle/admin/orcl/adump/ora_1057.aud
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
System name: Linux
Node name: uhesse
Release: 2.6.9-55.ELsmp
Version: #1 SMP Fri Apr 20 17:03:35 EDT 2007
Machine: i686
Instance name: orcl
Redo thread mounted by this instance: 0 <none>
Oracle process number: 0
Unix process pid: 1057, image: oracle@uhesse
Tue Feb 2 17:43:12 2010
ACTION : 'CONNECT'
DATABASE USER: '/'
PRIVILEGE : SYSDBA
CLIENT USER: oracle
CLIENT TERMINAL: pts/6
STATUS: 0
But even if we set AUDIT_SYS_OPERATIONS=true, so that the audit file then contains all the actions of sys, it still belongs to the operating system user used to install the database – usually the user oracle. And likely, the person who has the privilege to connect as sys also has the privilege to connect as oracle on the OS level. That makes the approach with AUDIT_SYS_OPERATIONS somewhat useless from a security viewpoint. That is why version 10g (already) introduced the possibility to have the audit file owned by root – which makes it harder for the DBA to manipulate or delete it. This is how that works:
First, we have to set two parameters – the cooperation of someone able to log on as sysdba is required here…
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/orcl/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
audit_trail                          string      NONE

SQL> alter system set audit_sys_operations=true scope=spfile;
System altered.
SQL> alter system set audit_syslog_level='LOCAL1.WARNING' scope=spfile;
System altered.
Then, we log on as root and modify the file /etc/syslog.conf:
-bash-3.00$ su -
Password:
[root@uhesse ~]# echo local1.warning /var/log/audit.log >> /etc/syslog.conf
[root@uhesse ~]# cat /etc/syslog.conf
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                      /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none     /var/log/messages
# The authpriv file has restricted access.
authpriv.*                                   /var/log/secure
# Log all the mail messages in one place.
mail.*                                       -/var/log/maillog
# Log cron stuff
cron.*                                       /var/log/cron
# Everybody gets emergency messages
*.emerg                                      *
# Save news errors of level crit and higher in a special file.
uucp,news.crit                               /var/log/spooler
# Save boot messages also to boot.log
local7.*                                     /var/log/boot.log
local1.warning                               /var/log/audit.log
After having done that, we need to restart the syslog logger, also as root user:
[root@uhesse ~]# /etc/rc.d/init.d/syslog restart
Shutting down kernel logger:     [  OK  ]
Shutting down system logger:     [  OK  ]
Starting system logger:          [  OK  ]
Starting kernel logger:          [  OK  ]
The last step is to restart the instance, in order to make the previous changes to the non-dynamic parameters active. If someone connects to the instance as sysdba afterwards, all of their actions are recorded:
SQL> connect / as sysdba
Connected.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 orcl
uhesse
10.2.0.1.0 02-FEB-10 OPEN NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
On the OS level, the oracle user cannot even see the content of the file, let alone modify or delete it:
-bash-3.00$ ls -rtl /var/log/
total 1260
[...]
-rw-------  1 root root  34457 Feb  2 17:40 cron
-rw-------  1 root root  66395 Feb  2 17:40 boot.log
-rw-------  1 root root 377235 Feb  2 17:44 messages
-rw-------  1 root root   1410 Feb  2 17:44 audit.log
-bash-3.00$ cat /var/log/audit.log
cat: /var/log/audit.log: Permission denied
-bash-3.00$ whoami
oracle
Only root can:
-bash-3.00$ su -
Password:
[root@uhesse ~]# cat /var/log/audit.log | grep instance
Feb 2 17:44:46 uhesse Oracle Audit[1184]:
ACTION : 'select * from v$instance'
DATABASE USER: '/'
PRIVILEGE : SYSDBA
CLIENT USER: oracle
CLIENT TERMINAL: pts/6 STATUS: 0
Of course, the usual aspects of auditing apply in this case also: It is an effort to do (for the system and for the people who maintain it), it is space consuming, and it may impose performance problems – especially if large actions are performed by sys, like running catalog.sql or catproc.sql, for example. Such actions can generate a lot of audit data…
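Regarding the space consumption: since the audit records now end up in an ordinary syslog-managed file, they can be rotated like any other logfile. A minimal sketch of a logrotate policy, assuming the distribution uses logrotate and the file name /var/log/audit.log from above (the interval and retention count are just example values):

```
/var/log/audit.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    create 0600 root root
}
```

This keeps eight compressed weekly archives and recreates the file with root-only permissions, so the protection against the oracle OS user is preserved across rotations.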
How to change the size of Online Redologs?

Having just reviewed this article, I can confirm that it is still valid for 12c: The exact same commands can be used to change logfile sizes with that version.
If you deal with Oracle Databases for years on a daily basis like I do, you sometimes forget that things that seem elementary to you ain’t necessarily so for others, especially newbies in the Oracle realm. I noticed that when I saw some discussions in forums about redo logfiles and their sizes.
When I go back in time to 1998, that was when I was responsible for an Oracle Database for the first time (version 7, but some things never change). I inherited that Database from the previous DBA who did the setup, creating quite small log groups of a couple of hundred KB in size – which was not uncommon in those days, because the Oracle templates mentioned that size as "medium", if I recall it right.
I got log switches almost every 10 seconds or faster, but didn't think about it, since nobody complained and everything seemed to run fine. Then I attended an Oracle University course and learned that this was way too many log switches in such a short time and that I should increase the size of the log files. Believe me, that was an exciting thing for me to do in those days! I sat there, fingers crossed (mentally, because I had to type in the commands), and did it veeeery carefully on our production database.
Some things to consider with changing the size of online redo logs:
- You do it for performance reasons – mostly because your original files are too small and you get log switches too often
- It is an online operation – no end user will even notice it, apart from maybe better performance afterwards
- You will not put your Database at risk with it – if you do it right 🙂
Let me give you an example:
SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

    GROUP#    MEMBERS STATUS                   MB
---------- ---------- ---------------- ----------
         1          1 INACTIVE                 50
         2          1 INACTIVE                 50
         3          1 CURRENT                  50

SQL> select group#,member from v$logfile;

    GROUP# MEMBER
---------- ------------------------------------------------------------
         3 /u01/app/oracle/oradata/orcl/redo03.log
         2 /u01/app/oracle/oradata/orcl/redo02.log
         1 /u01/app/oracle/oradata/orcl/redo01.log
This is my configuration as created by the DBCA. Now I want 100 MB sized logfiles, and I want them mirrored for safety reasons. First, I create new log groups of the desired size:
SQL> alter database add logfile
('/u01/app/oracle/oradata/orcl/redo_g4m1.rdo',
'/u01/app/oracle/oradata/orcl/redo_g4m2.rdo')
size 100m;
Database altered.
SQL> alter database add logfile
('/u01/app/oracle/oradata/orcl/redo_g5m1.rdo',
'/u01/app/oracle/oradata/orcl/redo_g5m2.rdo')
size 100m;
Database altered.
SQL> alter database add logfile
('/u01/app/oracle/oradata/orcl/redo_g6m1.rdo',
'/u01/app/oracle/oradata/orcl/redo_g6m2.rdo')
size 100m;
Database altered.
SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;
GROUP# MEMBERS STATUS MB
---------- ---------- ---------------- ----------
1 1 INACTIVE 50
2 1 INACTIVE 50
3 1 CURRENT 50
4 2 UNUSED 100
5 2 UNUSED 100
6 2 UNUSED 100
6 rows selected.
I like the above naming convention, because I can easily determine by the name which member of which group I see, and because the suffix rdo is not so easily misinterpreted as a mere ASCII text file that can be deleted if space is needed – believe me, that has happened many times and made the DBA of that site curse and suffer 🙂
Now I am going to drop the small files – with SQL commands first!
SQL> alter database drop logfile group 1;
Database altered.
That was easy, wasn’t it? Notice that the OS file is not gone yet – we have to delete it manually (if it is not OMF). All groups that have the status INACTIVE can be dropped that way.
SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

    GROUP#    MEMBERS STATUS                   MB
---------- ---------- ---------------- ----------
         2          1 INACTIVE                 50
         3          1 CURRENT                  50
         4          2 UNUSED                  100
         5          2 UNUSED                  100
         6          2 UNUSED                  100

SQL> alter database drop logfile group 2;
Database altered.
If the status is CURRENT or ACTIVE, it won’t be possible to drop the group – you can do nothing wrong here!
SQL> alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-01623: log 3 is current log for instance orcl (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/u01/app/oracle/oradata/orcl/redo03.log'
CURRENT means that LGWR is actually writing the redo entries into this group. After a log switch, the status can become ACTIVE or INACTIVE:
SQL> alter system switch logfile;
System altered.

SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

    GROUP#    MEMBERS STATUS                   MB
---------- ---------- ---------------- ----------
         3          1 ACTIVE                   50
         4          2 CURRENT                 100
         5          2 UNUSED                  100
         6          2 UNUSED                  100
ACTIVE means that the checkpoint associated with the redo entries in that group is not yet finished. In other words, the group is still needed for a possible instance recovery and therefore can't be dropped. But we can force the checkpoint:
SQL> alter system checkpoint;
System altered.
SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;
GROUP# MEMBERS STATUS MB
---------- ---------- ---------------- ----------
3 1 INACTIVE 50
4 2 CURRENT 100
5 2 UNUSED 100
6 2 UNUSED 100
SQL> alter database drop logfile group 3;
Database altered.
Now I am finished with my SQL commands, and I can continue with deleting the files of the small groups from the OS. Never delete the OS files before the SQL drop commands! The instance was OPEN the whole time, and no user was in any way affected by this whole action.
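That OS cleanup could look like this – a sketch using the member paths shown in v$logfile above; rm -f keeps it safe to re-run, but of course only execute it after the drop commands have succeeded:

```shell
# Remove the datafiles of the dropped log groups 1-3 from the OS.
# Safe only AFTER 'alter database drop logfile group n' succeeded for each group!
rm -f /u01/app/oracle/oradata/orcl/redo01.log \
      /u01/app/oracle/oradata/orcl/redo02.log \
      /u01/app/oracle/oradata/orcl/redo03.log
```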
By the way, I mirrored the log groups to protect my database against the really serious problems that would otherwise arise if a whole log group gets lost – for example because someone deletes a file on the OS level. You would want to have the mirrored members on different devices, of course 🙂
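And if you ever want to mirror existing groups without recreating them, a second member can also be added to a group in place – a sketch, assuming a hypothetical second mount point /u02 for the new member:

```sql
SQL> alter database add logfile member
     '/u02/app/oracle/oradata/orcl/redo_g4m3.rdo' to group 4;
```

The new member starts with the status INVALID until LGWR writes to the group for the first time – that is expected and nothing to worry about.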
