This post was forwarded to me by Vitaly Kaminsky who did work with me but has now bettered himself elsewhere. He writes :-
I have recently been involved in performance tuning of the database layer for a major Dutch website that was preparing for its “crazy days” sale from the 2nd to the 5th of October.
The general setup was as follows:
A 2-node physical RAC cluster with a fast storage layer, running Oracle 11gR2 SE and ASM.
A short entry showing two problems that I have come across recently with datapump export and the workarounds I used.
Using ASM disk for exports
The title of this post is intentionally slightly unclear, and hopefully it has intrigued people into viewing the post and, even better, adding their comments.
SQL Plan Management has been around since 11g was released back in 2007. It does not require the Tuning Pack, so the DBMS_SPM package can be used without additional licensing; however, if the SQL Tuning Advisor is used to generate new profiles via a tuning task, then that does require a Tuning Pack licence.
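As a minimal sketch of the licence-free side of this, here is how an existing plan can be loaded from the cursor cache into a SQL plan baseline using DBMS_SPM (the sql_id value is illustrative, not from any real system):

```sql
-- Load the current plan for a given SQL_ID from the cursor cache
-- into a SQL plan baseline. No Tuning Pack is needed for this call.
-- 'abcd1234efgh5678' is a made-up sql_id for illustration.
DECLARE
  l_plans_loaded PLS_INTEGER;
BEGIN
  l_plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                      sql_id => 'abcd1234efgh5678');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans_loaded);
END;
/

-- The resulting baselines can then be reviewed with:
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```

Plans loaded this way are accepted automatically, which is what makes DBMS_SPM usable without venturing into Tuning Pack territory.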
I am pleased to say that I have been offered a slot at the UKOUG Tech conference to present my paper on “The design, creation and maintenance of an AWR repository”. I have presented this once already at a SIG meeting, and I felt it went down well; the feedback and comments I received afterwards supported that and allowed me to develop the ideas further.
I have had these images for a few weeks but I was reminded to make a post after seeing Doug Burns present on using the monitoring tool in 12c at a UKOUG SIG meeting last week and he mentioned how you can see bind variables quite easily. Of course you could always work your way through a 10046 trace but I think you might find these options easier.
Firstly, here is how to get them from the command line:
select sql_id, sid from v$sql_monitor where sql_exec_id = 16779111;
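Once you have the sql_id, the binds themselves can be pulled out of the BINDS_XML column of v$sql_monitor. A sketch of that query (assuming the usual name/value shape of the binds XML, and reusing the sql_exec_id above) might be:

```sql
-- Extract bind names and values from the BINDS_XML column of v$sql_monitor.
-- Assumes the binds document has the form <binds><bind name="...">value</bind></binds>.
SELECT x.bind_name, x.bind_value
FROM   v$sql_monitor m,
       XMLTABLE('/binds/bind' PASSING XMLTYPE(m.binds_xml)
                COLUMNS bind_name  VARCHAR2(30)   PATH '@name',
                        bind_value VARCHAR2(4000) PATH '.') x
WHERE  m.sql_exec_id = 16779111
AND    m.binds_xml IS NOT NULL;
```

This is a good deal quicker than digging the bind values out of a 10046 trace.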
I am pleased to be presenting at the UKOUG AIM and Database SIG in London on Thursday 18th July. I am talking about building and using an AWR repository.
This is just a reminder of what I think is a strong, Core-DBA-centric agenda being organised by the UKOUG.
The location is the Met Hotel in Leeds, right by the station. The last meeting in Leeds, several years ago, was well attended, so I am sure this one will be as well.
The agenda is below
Date: Thursday 9th May 2013
Time: 09:00 – 17:00
Location: Leeds / West Yorkshire
Venue: The Met, Leeds
Changing a database to use a new Oracle home is easy, isn’t it? A simple interview question that you would never get wrong. Everyone knows that you shut down the database, edit the oratab file and restart the database again. So then the interviewer presses, obviously looking for more.
You confidently follow up with: “Of course I would have copied the password file and the init.ora file to the new ORACLE_HOME/dbs folder during the outage.” The interviewer says, “Go on, what else will you do?”
I noticed that a small datapump export involving a few tables was taking more than 30 minutes. I tried enabling parallelism, which was a mistake, as parallelism is not used for small tables even if the parameter is set in the parameter file – that is probably worth a blog post in itself. I then assumed it was a disk performance issue, until I tried the export on other systems and realised it was quite a general problem.
I then came across bug 10153617, which is quite old (September 2010), but the release we were running is old now as well.