The question is not HOW TO DO IT but WHETHER YOU CAN DO IT!
A typical backup script would contain something like
BACKUP DATABASE PLUS ARCHIVELOG:
backup database format '/u99/backup/DB01/20150518/full_0_%d_s%s_p%p' plus archivelog format '/u99/backup/DB01/20150518/arc_%d_s%s_p%p';
Let’s continue this series about inserting 1M rows and perform the same test with a new variation using SQL Server In-Memory features. For this blog post I will still use a minimal configuration consisting of only one Hyper-V virtual machine with 1 processor and 512MB of memory. In addition, my storage includes VHDX disks placed on 2 separate SSDs (one Intel SSDC2BW180A3L and one Samsung SSD 840 EVO). No special configuration has been performed on Hyper-V.
Let's begin with the creation script of my database DEMO:
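As a rough sketch only (the original script is not shown here), a DEMO database intended for In-Memory OLTP testing needs a filegroup with MEMORY_OPTIMIZED_DATA; all file paths, sizes, and the table/column names below are illustrative assumptions, not the author's actual script:

```sql
-- Hypothetical sketch: DEMO database with the filegroup
-- that In-Memory OLTP requires (paths and sizes illustrative)
CREATE DATABASE DEMO
ON PRIMARY
    (NAME = demo_data, FILENAME = 'E:\SQLSERVER\demo_data.mdf', SIZE = 512MB),
FILEGROUP demo_imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = demo_imoltp, FILENAME = 'E:\SQLSERVER\demo_imoltp')
LOG ON
    (NAME = demo_log, FILENAME = 'F:\SQLSERVER\demo_log.ldf', SIZE = 256MB);
GO

-- An illustrative memory-optimized table for a 1M-row insert test
CREATE TABLE dbo.TestTable
(
    id   INT IDENTITY(1,1) NOT NULL
         PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    col1 VARCHAR(50) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```

A memory-optimized table must have at least one index (here a hash index on the primary key); the bucket count is typically sized near the expected row count.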
"There is much worth noticing that often escapes the eye."
- Norton Juster, The Phantom Tollbooth
Using BULK COLLECT in PL/SQL blocks and procedures can dramatically speed array processing, but it can, if the DBA isn’t prepared, ‘hide’ any errors that occur in the bulk processing list. A ‘plain vanilla’ EXCEPTION handler may not report all errors that are thrown. Let’s look at an example intentionally set up to fail inserts based on data from the EMP table.
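A minimal sketch of the pattern involved (not the author's exact example; the EMP source table is from the post, while the EMP_COPY target is an illustrative assumption): FORALL with SAVE EXCEPTIONS raises ORA-24381 at the end instead of stopping at the first failure, and the handler must walk SQL%BULK_EXCEPTIONS to see every error:

```sql
DECLARE
  TYPE emp_tab IS TABLE OF emp%ROWTYPE;
  l_emps      emp_tab;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);  -- raised after SAVE EXCEPTIONS
BEGIN
  SELECT * BULK COLLECT INTO l_emps FROM emp;

  FORALL i IN 1 .. l_emps.COUNT SAVE EXCEPTIONS
    INSERT INTO emp_copy VALUES l_emps(i);
EXCEPTION
  WHEN bulk_errors THEN
    -- report every failed row, not just the first error hit
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(
        'Row '          || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
        ' failed: ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/
```

Without SAVE EXCEPTIONS, a plain WHEN OTHERS handler sees only the single exception that aborted the FORALL, which is exactly the 'hidden errors' problem described above.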
I posted before about how to enable Oracle Database Vault using the make command; in this post I will talk about how to enable Database Vault using the chopt command, which is easier and faster.

Common syntax:
chopt [enable | disable] db_option
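For example (a sketch, assuming a release where chopt lists Database Vault as the `dv` option — verify the option name with `chopt -h` on your version, and shut down any database instances and listener using this Oracle home first):

```
# Stop databases and listener running from this home, then:
$ORACLE_HOME/bin/chopt enable dv
# chopt relinks the oracle binary and logs the make output
# under $ORACLE_HOME/install/
```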
I’ve never really spent time on evolution because most of the time I use baselines for emergency SQL plan management, not for the added value of controlled evolution for which the feature was conceived.
But here are some observations on SQL plan baseline evolution, originating from these questions:
Starting point – one baselined plan
1. An FTS plan in memory, from SQL that should do a full table scan, captured in a baseline
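The capture step above can be sketched with DBMS_SPM (the SQL_ID and plan hash value are placeholders for whatever V$SQL shows for your statement):

```sql
-- Sketch: load the FTS plan from the cursor cache into a baseline.
-- Substitute sql_id / plan_hash_value from V$SQL for your statement.
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '&sql_id',
               plan_hash_value => &plan_hash_value);
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/

-- Verify the baselined plan is ENABLED and ACCEPTED
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```

Plans loaded this way are accepted immediately, which is what makes this the usual "emergency" route as opposed to evolution.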
ORA-29504: invalid or missing schema name
ORA-06512: at line 8
Online redefinition is a great way to make structural changes on "big" tables that have "lots of" DML. Using online redefinition, partitioning or de-partitioning, adding or dropping columns, changing column data types, moving to another tablespace and more can be done with very little unavailability of the table compared with direct operations.
Here are some online redefinition MOS notes which
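The basic flow can be sketched with the DBMS_REDEFINITION package (owner, table, and interim-table names below are illustrative; the interim table is pre-created with the desired target structure, e.g. the new partitioning):

```sql
BEGIN
  -- 1. Check the table is redefinable (by primary key here)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TABLE',
                                    DBMS_REDEFINITION.CONS_USE_PK);
  -- 2. Start redefinition into the pre-created interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  -- 3. Optionally resync DML that arrived during the initial copy
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  -- 4. Finish: the brief exclusive lock on the table happens here
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
END;
/
```

In a real run, dependent objects (indexes, constraints, triggers, grants) are usually carried over with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS between steps 2 and 4.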
The case was to roll forward a physical standby with an RMAN SCN-based incremental backup taken from the primary. The standby database had just been restored, and the necessary archived logs were somehow missing (that's another story). It was something I had already done in the past, so we set to work with my previous notes: took the backup, copied the files to the standby server and recovered the standby database. But the problem
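A sketch of the usual steps (the SCN, paths and tag are placeholders; the SCN normally comes from V$DATABASE.CURRENT_SCN on the standby):

```sql
-- On the primary: incremental backup from the standby's current SCN
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
  FORMAT '/u99/backup/forstandby/fwd_%U' TAG 'STANDBY_ROLL_FWD';

-- On the standby, after copying the backup pieces over:
CATALOG START WITH '/u99/backup/forstandby/';
RECOVER DATABASE NOREDO;   -- NOREDO because the archived logs are missing
```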
In an Oracle database we can mention the following auditing types:
Mandatory auditing causes database startup/shutdown and SYSDBA/SYSOPER login and logout information to be written into AUDIT_FILE_DEST. This auditing cannot be turned off, and it is always written into the operating system directory specified with
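The directory in question can be checked from SQL*Plus, for example:

```sql
-- Where the mandatory audit files are written
SHOW PARAMETER audit_file_dest
```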
Let us take a look at the process of configuring GoldenGate 12c to work in an Oracle 12c Grid Infrastructure RAC or Exadata environment using DBFS on Linux x86-64.
Simply put, the Oracle Database File System (DBFS) is a standard file system interface on top of files and directories that are stored in database tables as LOBs.
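Creating such a file system can be sketched as follows (tablespace, file system and mount-point names are illustrative; the creation script ships under $ORACLE_HOME/rdbms/admin):

```sql
-- As the database user that will own the DBFS store,
-- create a file system named 'gg_dbfs' in tablespace DBFS_TS
@$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql DBFS_TS gg_dbfs

-- From the OS on each RAC node, mount it with the dbfs_client utility,
-- e.g. (prompts for the DBFS user's password):
--   $ dbfs_client dbfs_user@DB01 /mnt/dbfs
```

Once mounted, the GoldenGate trail and checkpoint directories can live on the DBFS mount so they are visible from every node.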
In one of my earlier posts we saw how to configure GoldenGate in an Oracle 11gR2 RAC environment using ACFS as the shared location.