I have a long to-do list of things I want to test out, and one of them is rebuilding a standby by using an incremental backup from the primary. Then along comes a note from my ex-colleague Vitaly Kaminsky, who had recently faced this very problem when a customer relocated two primary 2-node RACs and their single-node standby databases to a new location, and happened to start the standby databases in read-only mode. Vitaly tells the story:
When we put a new system into production we have the whole infrastructure penetration tested. Reading a recent review, I saw the following recommendation in the database section.
The last time a database server SIG was held in Leeds we had very good attendance, and hopefully this will be repeated on Thursday 9th May 2013, when the Metropole Hotel will be the host. This 4-star hotel is very conveniently placed, no more than a couple of hundred yards from the station, and should be a very good venue.
As always we are looking for good presentations from UKOUG members so that we can have a really strong database focused agenda.
According to My Oracle Support note – “How To Add a New Disk(s) to An Existing Diskgroup on RAC (Best Practices). [ID 557348.1]” – you should create a test diskgroup using the new storage before adding it to an existing diskgroup. That seems eminently sensible, although it is not something I normally do. It proves you can access the disks, and if there is a conflict (i.e. a disk is already mapped and in use elsewhere) you are not risking your production DATA diskgroup.
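A minimal sketch of that check, run from the ASM instance as the grid owner. The diskgroup name and device paths below are made-up placeholders, not values from the note:

```shell
# Hypothetical sketch: prove the new LUNs are visible and unclaimed by
# building a throwaway diskgroup on them first. Names/paths are assumptions.
sqlplus -s / as sysasm <<'EOF'
-- If this succeeds, the disks are accessible and not in use elsewhere.
CREATE DISKGROUP TESTDG EXTERNAL REDUNDANCY
  DISK '/dev/rdisk/disk100', '/dev/rdisk/disk101';

-- Drop the scratch group, then add the proven storage to the real diskgroup.
DROP DISKGROUP TESTDG;

ALTER DISKGROUP DATA
  ADD DISK '/dev/rdisk/disk100', '/dev/rdisk/disk101'
  REBALANCE POWER 4;
EOF
```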
Within MoS there is a set of notes that list all the patches in each PSU and show in which PSU each fix arrived (PSUs are cumulative). The README.html file supplied with each PSU does not contain this information:
126.96.36.199 Patch Set Updates – List Of Fixes In Each PSU [ID 1337836.1]
188.8.131.52 Patch Set Updates – List Of Fixes In Each PSU [ID 1340010.1]
184.108.40.206 Patch Set Updates – List Of Fixes In Each PSU [ID 1340011.1]
220.127.116.11 Patch Set Updates – List Of Fixes In Each PSU [ID 1449750.1]
Actually it is a cardboard cutout – not me, the server. We have real ones downstairs in the data centre but my cardboard one doesn’t cost as much to support or keep air-conditioned.
Merry Xmas to all
Today was Unconference day at UKOUG which was a pot-pourri of talks all lasting about 20 minutes each. The talks were interesting although sparsely attended, which was actually a benefit as there was plenty of opportunity for interaction and indeed complete disregard of the main topic under discussion.
I didn’t actually get to see any scheduled presentations but I had seen enough in the previous two days to keep me going for a while.
So adios UKOUG at Birmingham and welcome to the Tech conference in Manchester next December.
A deep dive into Data Guard is a tough way to start the day off, but Emre Baransel handled it well. I jotted down about 6 or 7 takeaways for later investigation. Much of the talk was about tuning the redo log apply, and I must admit that I don’t consider we have many problems in that area across the estate, but it is still worth reviewing. One topic that struck me as worth looking at in more detail was the ability to recover a standby database from a primary backup. Chatting to a few people afterwards, they had all heard of the capability but had never tested it.
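The rough shape of that technique, as I understand it, is to roll the standby forward with an incremental backup taken on the primary from the standby's current SCN. The SCN value and backup paths below are placeholders, and this is only a sketch of the happy path, not the full procedure:

```shell
# Hedged sketch: refresh a lagging standby from a primary incremental backup.
# SCN and paths are placeholders; this omits controlfile refresh and cleanup.

# 1. On the standby, note how far it has got:
#    SQL> SELECT current_scn FROM v$database;

# 2. On the primary, back up everything changed since that SCN:
rman target / <<'EOF'
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/backup/stby_%U';
EOF

# 3. Copy the backup pieces to the standby host, then on the standby:
rman target / <<'EOF'
CATALOG START WITH '/backup/' NOPROMPT;
RECOVER DATABASE NOREDO;
EOF
```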
The UKOUG conference comes around again, and the first two presentations re-confirmed why I attend. Jonathan Lewis was talking about generating data for test cases, and I realised that I could generate the data for an interesting issue at work involving the non-use of an index and a histogram with a bucket size of 60. It wasn’t so much that the data was hard to create; it was that it had simply not occurred to me to do so.
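The sort of thing I mean might look like the sketch below. The table, column and skew pattern are invented for illustration; they are not the actual work data:

```shell
# Hypothetical sketch: build a small table with one heavily popular value,
# index it, and gather a 60-bucket histogram to reproduce the optimiser issue.
sqlplus -s / <<'EOF'
CREATE TABLE t1 AS
SELECT CASE WHEN ROWNUM <= 90000 THEN 0        -- one very popular value
            ELSE MOD(ROWNUM, 100) END AS skewed_col,
       RPAD('x', 100) AS padding
FROM   dual CONNECT BY LEVEL <= 100000;

CREATE INDEX t1_i1 ON t1(skewed_col);

-- Histogram with a bucket size of 60, as in the issue described above.
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T1', -
       method_opt => 'FOR COLUMNS skewed_col SIZE 60');
EOF
```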
This is basically a set of notes I wrote for myself about adding new voting disks and OCR disks to a sandpit RAC cluster, as part of testing a migration between an HP XP disk array and an HP 3PAR disk array. The O/S was HP-UX with an 18.104.22.168 database.
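The outline of such a move, assuming 11.2-style clusterware with the voting files and OCR in ASM (on older releases the commands differ, and the diskgroup names here are placeholders):

```shell
# Rough notes-style sketch of relocating voting files and OCR to new storage.
# Diskgroup names are assumptions; run as root from the Grid Infrastructure home.

crsctl query css votedisk        # see where the voting files currently live
crsctl replace votedisk +NEWDG   # move the voting files to the new diskgroup

ocrcheck                         # confirm the current OCR locations
ocrconfig -add +NEWDG            # add an OCR copy on the new storage
ocrconfig -delete +OLDDG         # then drop the copy on the old array
```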