Oracle VM Site Review - Oracle VM Health Check

Feed items

Getting sql statements out of a trace file

The focus of this post started off in one direction and ended up in another. Originally I had been running a drop-user script which had hung, and even when I killed the process I could not drop the users as it gave an “ORA-01940: cannot drop a user that is currently connected” – despite the users having left the company months ago and there being no chance of them actually having connected sessions.
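For anyone hitting the same error, the usual first step is to look for sessions registered against the user and kill them before retrying the drop. A minimal sketch, assuming a hypothetical user JSMITH (substitute the real SID and SERIAL# returned by the query):

```sql
-- Find any sessions the database still has registered for the user;
-- GV$SESSION covers all instances in a RAC cluster.
SELECT inst_id, sid, serial#, status, program
FROM   gv$session
WHERE  username = 'JSMITH';

-- Kill each session found, then retry the drop
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
DROP USER jsmith CASCADE;
```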

Running two oracle installations from the same terminal

Two posts from me on the same day. The other one, about Datapatch, covers a brand-new utility in 12c and is probably new to most people. This post caused mixed reactions when I mentioned it at work last week: some people laughed at my naivety in not knowing about it, others took the same view as me and were interested to hear about it, as it may prove useful one day.

Issue with Datapatch – AKA SQL Patching Tool after cloning a database

There have been a few changes in the way patches are managed and monitored in 12c and whilst looking at this I found a potential problem that might occur when you clone or copy databases around, or even build them from a template file.

Firstly, when you apply a PSU and run an opatch lsinventory command you now see a description of the patch rather than just a patch number – here showing that PSU 1 has been applied. This came in at 11.2.0.3 and in my opinion is really helpful.
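For the SQL side of patching, datapatch records its work inside the database itself. A sketch of how to check it, hedged in that the exact columns of the view vary slightly between 12c releases:

```sql
-- SQL-level patch history recorded by datapatch (12c onwards)
SELECT patch_id, action, status, description
FROM   dba_registry_sqlpatch
ORDER  BY action_time;
```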

 

opatch lsinventory gives “line 384: [: =: unary operator expected”

I noticed the error message when running lsinventory against a 12.1.0.2 Oracle_Home. As the command worked I didn’t think any more of it, until I got the same error message on the same server against an 11.2.0.1 home.

opatch lsinventory
 tr: extra operand `y'
 Try `tr --help' for more information.
 /app/oracle/product/11.2.0.1/dbhome_1/OPatch/opatch: line 384: [: =: unary operator expected

There is a MOS note which provides a solution – 551584.1
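For the curious, the “[: =: unary operator expected” part is a generic shell symptom rather than anything Oracle-specific: an unquoted variable inside a [ ... ] test expands to nothing, leaving the test one operand short. The opatch script most likely does something similar at line 384. A minimal sketch of the failure mode and the fix:

```shell
# An unquoted, empty variable inside [ ... ] leaves the test one operand short
unset val
[ $val = "y" ]                        # expands to: [ = y ]  -> unary operator error
[ "$val" = "y" ] || echo "no match"   # quoting keeps both operands in place
```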

Changing a database link password

I recently found out that it is possible to change a database link password without dropping and recreating the link in its entirety.

To be honest I thought this might have existed forever and I had just never come across it, but it actually came out in 11gR2.

The ALTER DATABASE LINK statement can be used, and you do not need to specify the target service either – all you need to do is run the following command as the user that owns the pre-existing database link:

ALTER DATABASE LINK JOHN CONNECT TO user IDENTIFIED BY password;
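Once changed, the quickest confirmation is simply to use the link. A minimal sketch, assuming a pre-existing link called JOHN:

```sql
-- If this returns a date, the link authenticated with the new password
SELECT sysdate FROM dual@john;

-- The stored connection details can also be checked (the password is not visible)
SELECT db_link, username, host FROM user_db_links;
```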

New ASM power levels in 11.2.0.2 and beyond

I recently saw the following command in a script that was due to be run and thought an error had been made – that the power level should have been 5, not 500.

ALTER DISKGROUP DATA REBALANCE POWER 500;

Upon doing some research I found it was not a mistype but a new method of disk rebalancing which came in from 11.2.0.2.

Previously, setting the power limit from 0 to 11 basically caused a matching number of additional ARBx processes to be created, and these were removed once the rebalance had finished.
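The extended scale can be used and watched like this; a sketch assuming a diskgroup called DATA, and noting that power values above 11 are only honoured when the diskgroup's compatible.asm attribute is 11.2.0.2 or higher:

```sql
-- Kick off a rebalance using the extended 0-1024 power scale
ALTER DISKGROUP data REBALANCE POWER 500;

-- Watch progress; EST_MINUTES falls as the rebalance catches up
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;
```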

Stopping one ASM listener in Flex ASM environment takes down ASM instance


My bizarre question of 2015 already

To be honest it was asked two years ago, in a blog post about ASM and rebalancing, where someone asked the following question

Using grant connect through to manage database links

Nobody can say that I am not current and topical with my posts. This post refers to functionality that was introduced in 9i, however I have just come across it and thought it useful enough to blog about it.

The command ALTER USER USERB GRANT CONNECT THROUGH USERA allows a proxy connection to be made using the username and password of USERA, but connecting in as USERB. The purpose is so that management of a user can be done without knowing that user’s password or changing it. This is most commonly going to be used by support teams.

I will give an example
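A minimal sketch of the proxy mechanics, assuming hypothetical users USERA and USERB (the bracket syntax in the connect string is SQL*Plus notation for a proxy connection):

```sql
-- Allow USERA to proxy in as USERB without knowing USERB's password
ALTER USER userb GRANT CONNECT THROUGH usera;

-- From SQL*Plus: authenticate with USERA's password, but the session runs as USERB
-- CONNECT usera[userb]/usera_password
-- SELECT user FROM dual;   -- shows USERB

-- Revoke the proxy grant when the support work is done
ALTER USER userb REVOKE CONNECT THROUGH usera;
```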

Controlling RMAN channels on RAC

This was sent to me for posting by my friend and ex-colleague Vitaly Kaminsky ….

I have recently worked with a customer where standard RMAN backups of a production 2-node cluster (11.2.0.3) were getting too big and taking longer than 24 hours to run.

The problem with this particular cluster was the fact that the allocation of RMAN connections to the instances of the cluster was controlled by SCAN and driven by SCAN’s load-balancing algorithm.
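One common way to take that control back from SCAN is to allocate channels with explicit per-instance connect strings; a hedged sketch, assuming hypothetical TNS aliases PROD1 and PROD2 that each point directly at one instance:

```
RUN {
  # Pin one channel to each instance instead of letting SCAN load-balance them
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK CONNECT 'sys/password@PROD1 AS SYSDBA';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK CONNECT 'sys/password@PROD2 AS SYSDBA';
  BACKUP DATABASE;
}
```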
