Chris Saxon, who is the "domain admin" for Database Design and the author of many of the weekly quizzes, would love to get feedback from our players on these quizzes. He's been writing them for six months now and figures it's time to check in with all of you.
So he's used Google Forms to set up a quick, anonymous survey:
People invent really complicated ways to achieve simple tasks such as copying a table from one database to another. With a database link (arguably the RDBMS's most underappreciated feature) it is just one SQL command:
create table emp as select * from scott.emp@dblink1;
But who has time to create database links? In SQL Developer 4.0 you don't have to. Assuming that you have the right connection, the script […]
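For reference, creating the link manually is quick too. A minimal sketch, assuming a placeholder TNS alias remote_db and the classic scott/tiger account (both stand-ins, not from the post):

-- create a private database link to the remote database
create database link dblink1
  connect to scott identified by tiger
  using 'remote_db';

-- sanity-check the link before copying anything over it
select count(*) from scott.emp@dblink1;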
Analytic functions are powerful and efficient, but sometimes they just aren’t enough. When you try analytics and they alone don’t solve the problem, it’s time to think about the MODEL clause – or upgrade to 12c and use MATCH_RECOGNIZE. All three can use partitions and ordering for simpler, more efficient processing. To illustrate, here’s a […]
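To give a flavour of the 12c option, here is a minimal MATCH_RECOGNIZE sketch, assuming a hypothetical ticker table (symbol, trade_date, price), that finds V-shaped price movements:

-- find a strict fall followed by a strict rise, per symbol
select *
from ticker
match_recognize (
  partition by symbol           -- analyse each symbol independently
  order by trade_date           -- pattern matching requires an ordering
  measures
    first(trade_date) as start_date,
    last(trade_date)  as end_date
  pattern (strt down+ up+)      -- any row, then falls, then rises
  define
    down as price < prev(price),
    up   as price > prev(price)
);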
Being a huge fan of Logger, the PL/SQL logging utility, I really wanted this to be included in the project that I'm currently working on. So I downloaded it (link at the bottom of this blog) and included it in our deployment scripts. Done... at least I thought so, but of course this wasn't the case.
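For anyone who hasn't used it, Logger's basic call is a one-liner; a minimal sketch (the message and scope strings below are just illustrative):

begin
  -- write a log entry; p_scope is a free-text label you can filter on later
  logger.log(p_text => 'Starting nightly import', p_scope => 'import_pkg.run');
end;
/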
So far in the joins series we've looked at the effect removing joins (via denormalization) has on performance. We've seen that joins can cause primary key lookups to do more work. Lowering the normalization level to remove these can negatively impact "search" style queries though. More importantly, we've seen the real cost of denormalizing to remove joins comes when updating records, potentially leading to concurrency waits and application bugs.
So are joins always “good”?
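To make the trade-off concrete, here is a hedged sketch (the customers/orders tables are invented for illustration): copying a customer name onto each order removes a join from reads, but turns a rename into a multi-row update:

-- normalized: one row to change; readers pay a join
update customers set name = 'New Name' where customer_id = 42;

-- denormalized: the name is repeated on every order,
-- so the same rename touches many rows and becomes a concurrency hotspot
update orders set customer_name = 'New Name' where customer_id = 42;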
Warning: this post is not technical and it is not about Oracle. Brendan Eich recently resigned under pressure from his job as CEO of Mozilla, the makers of Firefox. The reason given was a campaign contribution that Mr. Eich made in 2008. The State of California has the “referendum”: a proposition is submitted to a […]
In the previous article in the joins series we compared query performance between a third normal form schema and the same schema denormalized to second normal form. We then extended the example so our denormalized schema was in just first normal form.
Funny that Oracle can easily cast 'nan', 'inf', 'infinity', '-inf' and '-infinity' to the corresponding binary_float_infinity and binary_double_nan, but there is no format model for to_char(binary_float_infinity, format) or to_binary_***(text_expr, format) that can output the same as to_char(binary_float_infinity)/to_binary_float('inf') without a format parameter:
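For instance, the conversions in the first half of that sentence work straight off; a minimal sketch of what the post describes:

-- these string literals all convert cleanly to IEEE special values
select to_binary_float('inf')        f_inf,
       to_binary_double('-infinity') d_neg_inf,
       to_binary_float('nan')        f_nan
from   dual;

-- the format-free to_char happily prints 'Inf',
-- but no numeric format model can reproduce that output
select to_char(binary_float_infinity) from dual;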
Continuing the series on joins, I’m going to look at denormalization. This process reduces the number of joins necessary to return results for a schema.
One of the big arguments against normalizing data is “for performance”. The process of normalization creates new tables as relations are decomposed according to their functional dependencies. This means (more) joins are necessary to return the same results.
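As a hedged illustration (the emp/dept schema below is invented for the purpose), decomposing a table on the functional dependency dept_id -> dept_name trades a wider row for a join:

-- denormalized: dept_name is repeated on every employee row
create table emp_flat (
  emp_id    number primary key,
  emp_name  varchar2(50),
  dept_id   number,
  dept_name varchar2(50)   -- depends only on dept_id, not on the key
);

-- normalized: the dependency gets its own table...
create table dept (
  dept_id   number primary key,
  dept_name varchar2(50)
);

create table emp (
  emp_id   number primary key,
  emp_name varchar2(50),
  dept_id  number references dept
);

-- ...and the same result now needs a join
select e.emp_name, d.dept_name
from   emp e
join   dept d on d.dept_id = e.dept_id;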