I have a test environment in which the physical file size on the ASM storage is twice the logical size.
1. How is this possible?
2. What can I do to solve this?
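The most likely explanation is disk group redundancy: with NORMAL redundancy, ASM mirrors every extent, so raw usage on the disks is roughly twice the logical data size. A quick check, assuming access to the ASM instance:

```sql
-- If TYPE is NORMAL, two-way mirroring explains the 2x physical usage;
-- USABLE_FILE_MB already accounts for the mirroring overhead.
SELECT name, type, total_mb, free_mb, usable_file_mb
FROM   v$asm_diskgroup;
```

If the doubling is unwanted, the usual options are an EXTERNAL redundancy disk group (letting the storage array handle protection) or accepting the overhead as the cost of ASM-level mirroring.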
Always check out the original article at http://www.oraclequirks.com for latest comments, fixes and updates.
Maybe user-defined aggregate functions are not among the most frequent hits in the life of a PL/SQL or APEX developer, but today I wanted to find an elegant solution, without reinventing the wheel, for a problem submitted by a customer:
given a report containing the labor expressed as
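A custom aggregate in Oracle is built on the ODCIAggregate interface: an object type with four member functions, plus a function declared AGGREGATE USING that type. The skeleton below is a minimal sketch summing NUMBER values; the names are illustrative, and the real iterate/terminate logic depends on the customer's labor format:

```sql
-- Illustrative ODCIAggregate skeleton (type and function names are
-- placeholders, not from the original post).
CREATE OR REPLACE TYPE agg_impl AS OBJECT (
  running_total NUMBER,
  STATIC FUNCTION ODCIAggregateInitialize (ctx IN OUT agg_impl)
    RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateIterate (self IN OUT agg_impl,
                                        val  IN NUMBER) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateMerge (self IN OUT agg_impl,
                                      ctx2 IN agg_impl) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateTerminate (self  IN agg_impl,
                                          ret   OUT NUMBER,
                                          flags IN NUMBER) RETURN NUMBER
);
/
-- The aggregate itself is then declared over the implementation type:
CREATE OR REPLACE FUNCTION my_agg (x NUMBER) RETURN NUMBER
  PARALLEL_ENABLE AGGREGATE USING agg_impl;
/
```

Once the type body is in place, `my_agg` can be used in SELECT lists and GROUP BY queries exactly like a built-in aggregate.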
So there I was, troubleshooting a GoldenGate issue, puzzled as to why GoldenGate transactions were not visible from the Oracle database.
I had the correct transaction XID; however, I was filtering for ACTIVE transactions on the Oracle side, which was causing the issue.
Please allow me to share a test case so that you don’t get stumped like I did.
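The lookup that finally worked was the one that did not restrict on STATUS. A sketch, with placeholder XID components:

```sql
-- Query v$transaction by the three XID components without a STATUS
-- predicate; the values 42, 13, 123456 are placeholders.
SELECT xidusn, xidslot, xidsqn, status, start_time
FROM   v$transaction
WHERE  xidusn  = 42
AND    xidslot = 13
AND    xidsqn  = 123456;
```

Adding `AND status = 'ACTIVE'` to a query like this is what hid the GoldenGate transaction in my case.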
Identify current log and update table
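A minimal version of this step, assuming a throwaway test table `t`:

```sql
-- Find the current online redo log group, then generate a change
-- that will be captured in it (table t is a placeholder).
SELECT group#, sequence#, status
FROM   v$log
WHERE  status = 'CURRENT';

UPDATE t SET c = c + 1 WHERE id = 1;
COMMIT;
```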
Want to advance your career?
We’ve seen DBAs become managers, managers become directors, directors become VPs, and CIOs go from lesser-known companies to some of the best known in the world. Why did they get promoted? Because they brought in Delphix.
Delphix increases the speed and agility of IT, often enabling development teams to go twice as fast, an unprecedented increase.
Companies that have this advantage will outperform their competitors.
There was a discrepancy in the failgroups of a couple of ASM disks in Exadata. In Exadata, the cell name corresponds to the failgroup name, but there were a couple of disks with different failgroup names. I used the following plan to rectify the issue online, without any downtime:
1) Check disks and their failgroup:
col name format a27
col path format a45
SQL> select path,failgroup,mount_status,mode_status,header_status,state from v$asm_disk order by failgroup, path;
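One way to move a mis-assigned disk into the correct failgroup online is to drop it from the disk group, let the rebalance complete, and add it back with an explicit FAILGROUP clause. A sketch with placeholder disk group, disk, and path names (not the actual values from this system):

```sql
-- Drop the mis-assigned disk; ASM rebalances its data onto the
-- remaining disks while the disk group stays online.
ALTER DISKGROUP data DROP DISK data_cd_05_cell03 REBALANCE POWER 8;

-- Wait until the rebalance has finished (no rows returned):
SELECT group_number, operation, state, power FROM v$asm_operation;

-- Re-add the disk with the failgroup name matching its cell:
ALTER DISKGROUP data
  ADD FAILGROUP cell03
  DISK 'o/192.168.10.5/DATA_CD_05_cell03'
  REBALANCE POWER 8;
```

Redundancy must be healthy before the drop, since the disk group temporarily loses one disk's worth of mirror copies during the rebalance.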
One of the well-known best practices for HDFS is to store data in a few large files rather than a large number of small ones. There are several problems with using many small files, but the ultimate HDFS killer is that memory consumption on the NameNode is proportional to the number of files stored in the cluster, and it doesn't scale well when that number increases rapidly: as a common rule of thumb, each file, directory, and block object consumes on the order of 150 bytes of NameNode heap, so hundreds of millions of small files translate into tens of gigabytes of metadata memory.
The idea of this blog post is to describe the delayed durability feature in SQL Server 2014 and a use case from an application development perspective.
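For context, delayed durability is controlled at the database level and, optionally, per commit. A minimal sketch in T-SQL (database, table, and column names are placeholders):

```sql
-- Allow transactions in this database to opt in to delayed durability.
ALTER DATABASE MyAppDb SET DELAYED_DURABILITY = ALLOWED;

-- With ALLOWED, an individual transaction opts in at commit time:
-- the commit returns before the log records are flushed to disk,
-- trading a small window of potential data loss for lower latency.
BEGIN TRANSACTION;
    INSERT INTO dbo.AuditLog (msg) VALUES ('example');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

Setting the database option to FORCED instead makes every commit delayed-durable regardless of what the transaction requests.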
I had an interesting discussion as part of my latest presentation at the UKOUG RAC CIA & Database Combined SIG. Part of my talk was about the implications of the new threaded execution model in Oracle.
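For readers unfamiliar with the feature: in Oracle 12c the multithreaded model is switched on via an initialization parameter and takes effect at the next restart. A sketch:

```sql
-- Enable the threaded execution model (requires an instance restart).
ALTER SYSTEM SET threaded_execution = TRUE SCOPE = SPFILE;

-- After the restart, v$process shows which background processes now
-- run as threads inside shared OS processes rather than as separate
-- OS processes.
SELECT pname, execution_type FROM v$process;
```

One frequently discussed implication is that operating-system authentication (e.g. `sqlplus / as sysdba`) no longer works once threaded execution is enabled, so password-file or other authentication must be in place first.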