Today’s blog post is part two of seven in a series dedicated to Deploying a Private Cloud at Home, where I will demonstrate the basic configuration needed to get started with OpenStack. In my first blog post, I explained why I decided to use OpenStack.
I am using a two-node setup in my environment, but you can still follow these steps and configure everything on a single node. The configuration below reflects my setup; please adjust it to match your own subnet and settings.
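As an illustration only, a two-node layout might look something like the sketch below. The hostnames, roles, and the 192.168.1.0/24 subnet are assumptions for this example, not the author’s actual values:

```ini
# Hypothetical two-node layout -- substitute your own subnet and addresses
# Controller node: runs the API services, database, and message queue
controller_host   = 192.168.1.10
# Compute node: runs the hypervisor; for a single-node setup,
# point this at the same address as the controller
compute_host      = 192.168.1.11
# Management network shared by the OpenStack services
management_subnet = 192.168.1.0/24
```

In a single-node setup, both roles simply live on one machine, so the two host entries would share one address.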
This Saturday, October 11, I will be speaking at SQL Saturday Bulgaria 2014 in Sofia. It’s my first time in the country, and I’m really excited to be part of another SQL Saturday :)
I will be speaking about Buffer Pool Extension, a new feature in SQL Server 2014. If you want to learn a little more about the new SQL Server version, don’t hesitate to attend the event. Looking forward to seeing you there!
It seems it’s all about the cloud these days. Even hardware is being marketed with the cloud in mind. Databases like Oracle, SQL Server, and MySQL are ahead in the cloud game, and this Log Buffer Edition covers it all.
Most companies want to deploy features faster and fix bugs more quickly—at the same time, a stable product that delivers what users expect is crucial to winning and keeping their trust. At face value, stability and speed appear to be in conflict; developers can spend their time either on features or on stability.
Today’s blog post is part one of seven in a series dedicated to Deploying a Private Cloud at Home. In my day-to-day activities, I come across various scenarios where I’m required to do sandbox testing before proceeding further on the production environment—which is great because it allows me to sharpen and develop my skills.
Today’s blog post completes our three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In the first two posts, we discussed the impact of big data on today’s organizations, and its challenges.
Today, we’ll be sharing what organizations can accomplish by using the Microsoft Hadoop solution:
Oracle Open World is in full swing. Enthusiasts of Oracle and MySQL are flocking to extract as much knowledge, news, and fun as possible. SQL Server aficionados are not far behind, either.
Frank Nimphius announced REST support for the ADF BC feature at OOW today. This functionality will probably be available in the next JDeveloper 12c update release.
Today’s blog post is the second in a three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In our first blog post, we revealed just how much data is being generated globally every minute – and that it has doubled since 2011.
Today’s blog post is the first in a three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data.
As companies increasingly rely on big data to steer decisions, they also find themselves looking for ways to simplify its storage, management, and analysis. The need to quickly access large amounts of data and use them competitively poses a technological challenge to organizations of all sizes.