At our company we had to replace the storage cabinets of a two-node RAC cluster. The RAC cluster uses ASM diskgroups with NORMAL redundancy. Each diskgroup consists of two failgroups, with each failgroup located on one of the storage cabinets for redundancy purposes, as seen here:
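A query along the following lines shows that layout; this is a sketch against the standard `v$asm_disk` and `v$asm_diskgroup` views, and the failgroup names you see will of course be site-specific:

```sql
-- Show how each diskgroup's disks are spread over the two failgroups
-- (one failgroup per storage cabinet).
SELECT dg.name              AS diskgroup,
       d.failgroup,
       COUNT(*)             AS disks,
       SUM(d.total_mb)/1024 AS total_gb
FROM   v$asm_diskgroup dg
JOIN   v$asm_disk      d  ON d.group_number = dg.group_number
GROUP  BY dg.name, d.failgroup
ORDER  BY dg.name, d.failgroup;
```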
When setting up an Oracle RAC cluster (in our case patchlevel 170117), creating a database using dbca can fail with an ORA-12154, leaving dbca unable to complete the database creation.
An Oracle RAC cluster is built, first of all, on the Oracle Grid Infrastructure, the so-called clusterware. This piece of software is the cluster itself, and it is responsible for everything except the actual RAC database.
When installing one of the quarterly patch updates (CPU) on an Oracle 12.1 Grid Infrastructure, it is possible that 'opatchauto' complains about an actions.xml file missing from one of the patch subdirectories.
DocID 2086760.1 on Oracle Support explains that this problem is caused by a too low version of opatchauto, and that with a sufficiently recent OPatch version the problem should be gone.
However, today I ran into the same problem with an OPatch version above the one mentioned in the note!
Because of all the comments and remarks I received about my blog post discussing a Manual Standby solution, I decided to update the recovery process and make it more stable. My new findings are described below. A downloadable PDF version of the documentation can be found here:
Interval partitioning is a partitioning method introduced in Oracle 11g. It is a helpful addition to range partitioning: Oracle automatically creates a new partition when an inserted value exceeds all existing partition ranges.
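A minimal sketch of what that looks like (table and column names are illustrative):

```sql
-- Interval partitioning: only one initial range partition is defined;
-- Oracle creates further monthly partitions automatically.
CREATE TABLE sales_history
( sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(10,2)
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')) );

-- Inserting a value beyond the last partition silently creates a new one:
INSERT INTO sales_history VALUES (1, DATE '2011-03-15', 99.95);
```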
When you've installed Oracle Enterprise Manager Grid Control and you have configured Management Agents for database monitoring, all your databases will show up in the Grid Control application.
We have found a 'glitch' in this behaviour that in my opinion is very misleading.
When a database target is added/discovered in Grid Control, all possible Packs are enabled by default. This means that if you're running Standard Edition databases, you're violating the SE license by using e.g. the Diagnostic Pack as soon as your database is discovered in Grid Control.
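On the database side (11g and later) there is also a parameter that blocks use of the pack features outright; a sketch, assuming you want to forbid them entirely:

```sql
-- Forbid use of Diagnostic and Tuning Pack features on this database.
-- Valid values: NONE, DIAGNOSTIC, DIAGNOSTIC+TUNING (the default on EE).
ALTER SYSTEM SET control_management_pack_access = 'NONE' SCOPE = BOTH;

-- Verify the setting:
SHOW PARAMETER control_management_pack_access
```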
Now I must disappoint you all if you really have some kind of printing problem related to an Oracle database installation. This blog is about something totally different. It is about a printing problem, and it involves Oracle, but it is not technically related to an Oracle database.
In our data warehouse we often run into the "problem" that tablespaces contain a lot of internal free space. Most of the time this is caused by the fact that the warehouse programs first build up new objects and afterwards destroy old objects. In the future we might need a more definitive solution for this, like converting the tablespaces to ones with a uniform extent size.
Until then, I perform a tablespace reorganization. This means I pick a block_id in the tablespace datafile, somewhere around the point of actual usage.
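A sketch of the kind of query this starts from: given a datafile and a chosen cut-off block, list the segments allocated above it, since those are the ones to move before shrinking the file (`&file_id` and `&block_id` are SQL*Plus substitution variables you supply):

```sql
-- Segments with extents above the chosen block_id in a given datafile;
-- relocating these allows the datafile to be resized downwards.
SELECT owner, segment_name, segment_type, block_id, blocks
FROM   dba_extents
WHERE  file_id  = &file_id
AND    block_id >= &block_id
ORDER  BY block_id;
```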
During a schema copy action from an old database to a new one, we ran into problems because certain packages were not compiled successfully. Some dependencies failed, because the schema user in the new database lacked execute privileges on certain SYS packages, like DBMS_LOCK.
The settings of both schemas were identical, so the rights must have been granted in the past via PUBLIC. For those who don't know, PUBLIC is a role used for granting database-wide rights, which every user automatically has.
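A quick sketch of how to check for such a grant and fix it (the grantee `SCOTT` is a placeholder for your schema user):

```sql
-- Who was EXECUTE on DBMS_LOCK granted to? A row with grantee PUBLIC
-- means every user inherits it.
SELECT grantee, privilege
FROM   dba_tab_privs
WHERE  table_name = 'DBMS_LOCK';

-- If the grant is missing in the new database, grant it explicitly:
GRANT EXECUTE ON DBMS_LOCK TO scott;
```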
What DBA has never moved databases to new hardware? And what DBA has never forgotten to first unregister the database from the RMAN catalog?
If you didn't unregister the database after such a move, then at a crucial moment, months or years later, you might face a problem when trying to e.g. restore that database from backup.
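The cleanup itself is simple; a sketch, where the catalog connect string `rcatowner@rcatdb` is of course a placeholder:

```sql
-- Connect RMAN to the recovery catalog (and, if still reachable, the target):
--   $ rman target / catalog rcatowner@rcatdb
-- Then remove the stale registration of the connected target:
UNREGISTER DATABASE;

-- Or, connected to the catalog only, unregister by name:
UNREGISTER DATABASE prod NOPROMPT;
```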
Imagine you're in the middle of a disaster: the production database server has crashed, management is breathing down your neck, and you need to restore the database as soon as possible.
I would like to share some PL/SQL code that enables us DBAs to do index maintenance. In our data warehouse there were a couple of procedures that saved very basic index information. The indexes were then dropped and, after the warehouse load, recreated. The pitfall of these simple procedures was that they could not cope with partitioned indexes and other complex structures. These more complex indexes were rebuilt afterwards as normal non-partitioned indexes.
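One way to avoid that pitfall, sketched here with the standard DBMS_METADATA API (the staging table `idx_ddl_store` and table name `MY_FACT_TABLE` are illustrative, not the actual warehouse code):

```sql
-- Capture the complete index DDL, including any partitioning clauses,
-- before the drop, so the rebuild recreates the exact structure.
CREATE TABLE idx_ddl_store (index_name VARCHAR2(30), ddl CLOB);

INSERT INTO idx_ddl_store (index_name, ddl)
SELECT index_name,
       DBMS_METADATA.GET_DDL('INDEX', index_name, USER)
FROM   user_indexes
WHERE  table_name = 'MY_FACT_TABLE';

COMMIT;
```

After the load, the stored CLOBs can simply be executed to recreate each index as it was.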
Last year we installed Oracle Enterprise Manager Grid Control 11g.
The main reason was the fact that we were (still) on version 10.2.0.3 and we experienced some trouble with that version. Also, Oracle announced the problem with this version regarding the end-of-life of the SSL certificate on Dec 31st, 2010.
Following OTN thread http://forums.oracle.com/forums/thread.jspa?threadID=2154801&tstart=0 about copying archived logfiles from ASM to a UNIX filesystem, I am posting a script we used in the past.
Nowadays we use RMAN to get the files from ASM directly into a TSM diskpool, but back then we used another method for putting the archived logfiles on backup, which resulted in the need for this script:
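For reference, a minimal sketch of one way to get archived logs out of ASM onto a plain filesystem with RMAN; the destination path is illustrative, and this is not the script from the thread:

```sql
-- Run in RMAN, connected to the target database:
-- copy yesterday's archived logs from ASM to a filesystem directory.
BACKUP AS COPY ARCHIVELOG FROM TIME 'SYSDATE-1'
  FORMAT '/u01/arch_copy/%d_%T_%s_%p.arc';
```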
With Oracle ASM, the Oracle DBA has a very powerful new storage medium for database files. Although ASM was already introduced in Oracle 10g, for a lot of DBAs it is still a black box with only a few very small holes to peek through.
In the beginning, I must confess, I was very suspicious about ASM, especially because of it being a black box. However, it occurred to me very soon that it only looks like a black box because there is not much to see.
In this section the installation/configuration of Oracle's Configuration Manager is explained.
What is this Configuration Manager?
Oracle's Configuration Manager is a tool that enables users to link their Oracle database configuration to an account on the Oracle Support site.
The advantages are huge!
By linking your system to the Oracle Support site, the system is analyzed, checked and monitored constantly. This means that Oracle Support is able to warn you up front of any problem, risk or abnormality in your configuration.
After upgrading a database from Oracle 10g to 11g, an XDB installation was performed to be able to send emails from the database. As of Oracle 11g, sending emails requires the creation of an Access Control List, to control which database users can access a remote host for relaying (or sending) the email.
On the Internet there are scripts available to create such an ACL.
We used the script below to first create a procedure that takes care of all the steps involved in creating an ACL:
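To give an idea of what such a procedure does, here is a minimal sketch using the 11g DBMS_NETWORK_ACL_ADMIN API; the ACL file name, principal and host are illustrative, and this is not the exact script we used:

```sql
-- Create an ACL, grant the 'connect' privilege to a database user,
-- and assign the ACL to the SMTP host and port.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_acl.xml',
    description => 'Allow SMTP relaying for email',
    principal   => 'APPUSER',
    is_grant    => TRUE,
    privilege   => 'connect');

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_acl.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);

  COMMIT;
END;
/
```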
A couple of months ago we set up our first RAC cluster, consisting of two LPARs on an IBM AIX Power6 server. Although it has been months, and the cluster has now been running a production database successfully for over a month, I do want to share the frustrations we had with setting up the user equivalence needed for the installation.
During the installation of a two-node RAC cluster we decided to install CRS and ASM at the latest stable Oracle level, and for reasons of application compatibility installed the database at release 10.2.0.4.
All installation actions went fine. CRS, ASM and the database installation had their own owners and installation directories, and all environments were physically separated. CRS, ASM and the listener were running OK. However, during the creation of the 10.2.0.4 database we ran into trouble.
With the introduction of AMM or ASMM as of Oracle 10g the option to lock the SGA into memory has become obsolete. Creating a database using dbca, especially with Oracle 11g, will turn on ASMM. By default lock_sga is set to FALSE.
If one favors the use of LOCK_SGA, ASMM needs to be disabled. To do this, both MEMORY_TARGET and MEMORY_MAX_TARGET need to be removed from the init file or spfile.
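A sketch of the spfile changes, assuming an spfile-managed instance; the SGA size is illustrative, and an instance restart is required for LOCK_SGA to take effect:

```sql
-- Disable AMM so LOCK_SGA can take effect:
ALTER SYSTEM RESET memory_target     SCOPE = SPFILE SID = '*';
ALTER SYSTEM RESET memory_max_target SCOPE = SPFILE SID = '*';

-- Size the SGA explicitly and pin it in physical memory:
ALTER SYSTEM SET sga_target = 4G   SCOPE = SPFILE;  -- illustrative size
ALTER SYSTEM SET lock_sga   = TRUE SCOPE = SPFILE;

-- Then restart the instance.
```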