The title of this post is intentionally slightly unclear; hopefully it will intrigue people into viewing the post and, even better, adding their comments.
SQL Plan Management has been around since 11g, which came out back in 2007. It does not require a Tuning Pack, so the DBMS_SPM package can be used without additional licensing; however, if the SQL Tuning Advisor is used to generate new profiles via a tuning task, then that does require a Tuning Pack license.
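As a quick illustration of the no-extra-cost route, capturing an existing plan from the cursor cache as a baseline needs nothing beyond DBMS_SPM. This is just a sketch and the sql_id below is a made-up example:

```sql
-- Load the cached plan(s) for a given statement as SQL plan baselines.
-- The sql_id is hypothetical; the function returns the number of plans loaded.
DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd123xyz456');
  DBMS_OUTPUT.PUT_LINE(l_loaded || ' plan(s) loaded');
END;
/
```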
I am pleased to say that I have been offered a slot at the UKOUG Tech conference to present my paper on “The design, creation and maintenance of an AWR repository”. I have presented it once already, at a SIG meeting, and I felt it went down well; the feedback and comments I received afterwards supported that and allowed me to develop the ideas further.
I have had these images for a few weeks, but I was reminded to make a post after seeing Doug Burns present on using the monitoring tool in 12c at a UKOUG SIG meeting last week, where he mentioned how easily you can see bind variables. Of course you could always work your way through a 10046 trace, but I think you might find these options easier.
Firstly, here is how to get them from the command line:
select sql_id, sid from v$sql_monitor where sql_exec_id = 16779111;
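If you want the bind values themselves rather than just the session, the same view also exposes them: if memory serves, the BINDS_XML column of V$SQL_MONITOR holds the captured bind names and values as an XML document (sql_exec_id reused from the query above):

```sql
-- Bind names and values for this execution, captured as XML.
select sql_id, binds_xml
from   v$sql_monitor
where  sql_exec_id = 16779111;
```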
I am pleased to be presenting at the UKOUG AIM and Database SIG in London on Thursday 18th July. I am talking about building and using an AWR repository.
This is just a reminder of what I think is a strong, core-DBA-centric agenda being organised by the UKOUG.
The location is the Met Hotel in Leeds, right by the station. The last meeting in Leeds, several years ago, was well attended, so I am sure this one will be as well.
The agenda is below:
Date: Thursday 9th May 2013
Time: 09:00 – 17:00
Location: Leeds / West Yorkshire
Venue: The Met, Leeds
Changing a database to use a new Oracle home is easy, isn’t it? A simple interview question that you would never get wrong. Everyone knows that you shut down the database, edit the oratab file and restart the database again. So then the interviewer presses, obviously looking for more.
You confidently follow up with “of course, I would have copied the password file and the init.ora file to the new ORACLE_HOME/dbs directory during the outage”. The interviewer says “go on, what else will you do?”.
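For anyone following along, the steps mentioned so far might be sketched like this. The SID and home paths are placeholders for your own environment, and this is an outline rather than a complete procedure:

```
# Placeholder SID and homes - adjust for your environment.
export ORACLE_SID=ORCL
OLD_HOME=/u01/app/oracle/product/11.2.0.3/db_1
NEW_HOME=/u01/app/oracle/product/11.2.0.4/db_1

# Copy the password file and parameter file across during the outage.
cp "$OLD_HOME/dbs/orapw$ORACLE_SID"      "$NEW_HOME/dbs/"
cp "$OLD_HOME/dbs/init$ORACLE_SID.ora"   "$NEW_HOME/dbs/"

# Point the oratab entry at the new home before restarting.
sed -i "s|^$ORACLE_SID:$OLD_HOME|$ORACLE_SID:$NEW_HOME|" /etc/oratab
```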
I noticed that a small Data Pump export involving a few tables was taking more than 30 minutes. I tried enabling parallelism, which was a mistake, as for small tables parallelism is not utilised even if the parameter is set in the parameter file – that is probably worth a blog in itself. I then assumed it was a disk performance issue, until I tried it on other systems and realised it was quite a general thing.
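For reference, the sort of parameter file involved looked roughly like this (the directory, file and table names are illustrative, not the real ones); the point is that with tables this small the parallel setting buys you nothing:

```
# exp_small.par - illustrative names throughout
directory=DATA_PUMP_DIR
dumpfile=exp_small%U.dmp
logfile=exp_small.log
tables=SCOTT.EMP,SCOTT.DEPT
parallel=4
```

Invoked as `expdp system parfile=exp_small.par`, the `%U` in the dumpfile name only generates multiple files when the parallel workers actually have enough work to share.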
I then came across bug 10153617, which is quite old (September 2010), but then the 18.104.22.168 release is old now as well.
I have a long to-do list of things I want to test out, and one of them is rebuilding a standby using an incremental backup from the primary. Then along came a note from my ex-colleague Vitaly Kaminsky, who had recently faced this problem when a customer relocated two primary 2-node RACs and their single-node standby databases to a new location, and happened to start the standby databases in read-only mode. Vitaly tells the story:
When we put a new system into production, we get the whole set of infrastructure penetration tested. Reading a recent review, I saw the following recommendation as part of the database section.