Virtualization Feed

Xen Hypervisor, KVM Hypervisor and Oracle virtualization resources, news, and support articles.

Migration of Oracle GI quorum disk to another diskgroup

When installing Oracle RAC (or, by its more modern name, GI – Grid Infrastructure), you can place the CRS+Voting files in an Oracle ASM DiskGroup.
Changing disk membership within an ASM DiskGroup is fairly simple; however, when an unknown bug prevents you from doing just that, or when you are required to move the CRS+Voting files to a different ASM DiskGroup altogether, the article below is for you. Remember that, in addition, the ASM spfile has to be moved as well.
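For orientation, the high-level flow for relocating the voting files and the ASM spfile between diskgroups usually looks like the sketch below. The diskgroup names (+DATA, +QUORUM) and the spfile path are hypothetical examples, not values from the article:

```shell
# Sketch only – run as the GI owner; +DATA and +QUORUM are example names.
# Check where the voting files currently live:
crsctl query css votedisk

# Relocate the voting files to the target ASM diskgroup in one step:
crsctl replace votedisk +QUORUM

# The ASM spfile must move too; spmove relocates it between diskgroups
# (the source path below is a made-up example):
asmcmd spmove +DATA/asm/asmparameterfile/spfileASM.ora +QUORUM/spfileASM.ora

# Verify the new locations:
crsctl query css votedisk
asmcmd spget
```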

Xen Project 4.5.1 Maintenance Release Available

I am pleased to announce the release of Xen 4.5.1. We recommend that all users of the 4.5 stable series update to this first point release.

Xen 4.5.1 is available immediately from the stable-4.5 branch of its git repository (tag RELEASE-4.5.1).
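A minimal way to fetch this release, assuming the project's public xenbits mirror of xen.git:

```shell
# Clone the Xen repository and check out the 4.5.1 release tag.
git clone git://xenbits.xen.org/xen.git
cd xen
git checkout RELEASE-4.5.1
```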

2015 Xen Project Developer Summit Line-up Announced

I am pleased to announce the schedule for the Xen Project Developer Summit. The event will take place in Seattle on August 17-18, 2015.

Future of Xen Project – Video Spotlight with ARM’s Thomas Molgaard

ARM joined Xen Project two years ago as part of its drive into servers, networking and the emerging “Internet of Things” markets. In our latest “Future of Xen” video, Thomas Molgaard, Manager of Software Marketing – Systems & Software at ARM, talks about changes unfolding in enterprise and cloud computing that are creating new opportunities for his company and virtualization.

Xen Project now in OpenStack Nova Hypervisor Driver Quality Group B

A few weeks ago, we introduced the Xen Project – OpenStack CI Loop, which tests Nova commits against the Xen Project Hypervisor and Libvirt. The Xen Project community is pleased to announce that we have moved from Quality Group C to Group B: we have made significant progress in the last few weeks, and the Xen Project CI loop is now voting on Nova commits.

Once Again About the Pros and Cons of systemd and Upstart

Upstart advantages:
1. Upstart is easier to port to systems other than Linux, whereas systemd is tightly bound to Linux kernel capabilities. Adapting Upstart to run on Debian GNU/kFreeBSD and Debian GNU/Hurd looks like a realistic task, which cannot be said of systemd.
2. Upstart is more familiar to the Debian developers, many of whom also participate in Ubuntu development. Two members of the Debian Technical Committee (Steve Langasek and Colin Watson) are part of the Upstart development team.
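As a concrete illustration of the difference being debated, here is roughly how the same trivial daemon would be described in each system; the service name and binary path are made up for the example:

```text
# /etc/init/mydaemon.conf  (Upstart job – hypothetical example)
description "example daemon"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/sbin/mydaemon --foreground

# /etc/systemd/system/mydaemon.service  (equivalent systemd unit)
[Unit]
Description=example daemon

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
Restart=always

[Install]
WantedBy=multi-user.target
```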

Hardening Hypervisors Against VENOM-Style Attacks

This is a guest blog post by Tamas K. Lengyel, a long-time open source enthusiast and Xen contributor. Tamas works as a Senior Security Researcher at Novetta, while finishing his PhD on the topic of malware analysis and virtualization security at the University of Connecticut.

RDO Kilo Three Node Setup for Controller+Network+Compute (ML2&OVS&VXLAN) on CentOS 7.1

Below is a brief set of instructions for a traditional three-node deployment test (Controller, Network and Compute) of the upcoming RDO Kilo, performed on a Fedora 21 host with the KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4771 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUs each) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP and external subnets), and the Compute Node VM with two VNICs (management and VTEP subnets).
SELinux stays in enforcing mode.
Three Libvirt networks were created.
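As a sketch of that last step, one of the three Libvirt networks could be defined like this; the network name and subnet are illustrative, not the exact values used in the deployment:

```shell
# Hypothetical isolated management network (name and subnet are examples).
cat > mgmt.xml <<'EOF'
<network>
  <name>mgmt</name>
  <ip address='192.169.142.1' netmask='255.255.255.0'/>
</network>
EOF

virsh net-define mgmt.xml   # register the network with libvirt
virsh net-start mgmt        # bring it up now
virsh net-autostart mgmt    # and start it on every host boot
```

Omitting a <forward> element makes the network isolated, which is what the management and VTEP subnets of a multi-node test setup typically want.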

Xen Project Hackathon 15 Event Report

After spending almost a week in Shanghai for the Xen Project Hackathon, it is time to write up some notes.

More than 48 delegates from Alibaba, Citrix, Desay SV Automotive, GlobalLogic, Fujitsu, Huawei, Intel, Oracle, Suse and Visteon Electronics attended the event, which covered a wide range of topics.

I wanted to thank Susie Li, Hongbo Wang and Mei Yu from Intel for funding and organizing the event.

Connecting EMC/NetApp shelves as JBOD to a Linux machine

Let’s say you have old shelves from either EMC or NetApp with SAS or SATA disks in them, and you want to connect them via FC to a Linux machine to build a nice ZFS machine/cluster, or whatever else. There are a few things to know and to attend to in order for it to work.
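Once the shelf is cabled up, the usual discovery steps on the Linux side look roughly like this; the /dev/sg5 device name is just an example, and sg_ses comes from the sg3_utils package:

```shell
# Rescan all SCSI/FC hosts so newly cabled disks are detected:
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# List what showed up; the disks appear as type "disk" and the shelf
# itself as an "enclosure" (SES) device:
lsscsi

# Query the enclosure services device for slot/fault/status info:
sg_ses /dev/sg5

# If the shelf is cabled through more than one path, let multipathd
# coalesce the duplicate devices before handing them to ZFS:
multipath -ll
```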
