June 23, 2013

Hello Again Arch: Installing Arch Linux

At this point I'm tired of using OS X and I have this feeling that I'm losing my grip on my system. I no longer know why some processes are running, what they are for, or whether I should allow them to run at all.

For quite some time I had been thinking of migrating back to some Linux distro; Ubuntu was out of the question, Debian's release process is too slow for me, and Fedora/CentOS are not much fun either. So, I decided to check out Arch Linux one more time: it has been gaining a lot of traction lately, and a few of my hardcore friends have already taken refuge under Arch and one of i3/xmonad/awesome.

From what I remember, I liked its minimalist approach and the fact that no process runs unless I want it to. Getting the installation media was easy. The installation process is esoteric and technical, but quite simple:

  • Disk partitioning and formatting
  • Mount partitions, generate the filesystem table
  • Install the Arch Linux base system
  • Chroot to configure locale, time, install GRUB etc., then reboot and done!

Before wiping my MBA and installing Arch, I'll be evaluating it for the next few weeks in a VirtualBox VM. To keep it simple, I'll have only two partitions: one for / and one for swap.
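
For reference, this is the layout I'm going for (the sizes here are illustrative):

# /dev/sda1 -> /      (ext4, most of the disk)
# /dev/sda2 -> swap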

# Partition disks
cfdisk

# Format partitions
mkfs -t ext4 /dev/sda1
mkswap /dev/sda2

# Mount partitions

mount /dev/sda1 /mnt
swapon /dev/sda2

# Fix mirror list
vi /etc/pacman.d/mirrorlist

# Install base system
pacstrap /mnt base base-devel

# Generate filesystem table
genfstab /mnt >> /mnt/etc/fstab

# Chroot, config system
arch-chroot /mnt

# Change root password
passwd

# Select the en_US locale
vi /etc/locale.gen
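# uncomment the `en_US.UTF-8 UTF-8` line in that file, then generate: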
locale-gen

# Configure timezone
ln -s /usr/share/zoneinfo/Asia/Kolkata /etc/localtime

# Set hostname
echo myawesomehostname > /etc/hostname

# Install bootloader
pacman -S grub-bios

grub-install /dev/sda
mkinitcpio -p linux
grub-mkconfig -o /boot/grub/grub.cfg

# Exit out of chrooted env
exit

# Cleanup reboot
umount /mnt
swapoff /dev/sda2
reboot

After rebooting into the installed system, I enabled a bunch of services like dhcpcd (so it autoconfigures the network/IP for me), edited pacman's conf file, updated/upgraded using pacman, Arch's package manager, configured sound, the X server and a bunch of kernel modules, and installed i3, because all the good desktop environments are so messed up and I like tiling window managers.

# Configure network, edit stuff...
dhcpcd
systemctl enable dhcpcd

# Add user
visudo  # Allow %wheel
useradd -m -g users -G storage,power,wheel -s /bin/bash bhaisaab

# Pacman
vi /etc/pacman.conf
# Update
pacman -Syy
# Upgrade
pacman -Su

# Sound
pacman -S alsa-utils
alsamixer  # unmute the Master channel interactively
speaker-test -c2

# X
pacman -S xorg-server xorg-server-utils xorg-xinit

# VirtualBox drivers
pacman -S virtualbox-guest-utils
modprobe -a vboxguest vboxsf vboxvideo
vi /etc/modules-load.d/virtualbox.conf
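# the file lists one module per line:
#   vboxguest
#   vboxsf
#   vboxvideo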

# X twm, clock, term
pacman -S xorg-twm xorg-xclock xterm

# i3
pacman -S i3
echo "exec i3" >> ~/.xinitrc

# Startx
startx

That's all I've got for now ;)

If you're a long-time Arch/i3 user, share your experiences and your i3 config file with me; it's always better to fork someone's dotfiles than to write them from scratch :)


February 19, 2013

Building CloudStack SystemVMs: Appliance build automation using Jenkins

CloudStack uses virtual appliances as part of its orchestration. For example, it uses virtual routers for SDN, and secondary storage VMs for snapshots, templates etc. All these service appliances are created off a template called the systemvm template in CloudStack terminology. This template appliance is patched to create the secondary storage VM, console proxy VM or virtual router VM. There was an old way of building systemvms in patches/systemvm/debian/buildsystemvm.sh, which is no longer maintained, and we wanted a way for hackers to build systemvms on their own box.

did a great job on automating DevCloud appliance building using veewee, a tool with which one can build appliances on VirtualBox. The tool itself is easy to use: you first define what kind of box you want to build, configure a preseed file, and add any post-installation scripts you want to run; once done, you can export the appliance in various formats using vhd-util, qemu-img and vboxmanage. I finally finished a solution to this problem today and the code lives in tools/appliance on the master branch, but this post is not about that solution; it is about the issues and challenges of setting up an automated Jenkins job and of replicating the build job.
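
For the curious, a veewee run looks roughly like this; the box name and template below are placeholders, and exact template names can be listed with veewee itself:

veewee vbox define 'devcloudbox' 'Debian-7.1.0-amd64-netboot'  # then edit the generated definition, preseed and postinstall scripts
veewee vbox build 'devcloudbox'
veewee vbox export 'devcloudbox'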

I used Ubuntu 12.04 on a large machine which runs a Jenkins slave and connects to jenkins.cloudstack.org. After a little housekeeping I installed VirtualBox from virtualbox.org. VirtualBox comes with a command line tool, vboxmanage, which can be used to clone, copy and export appliances; I used it to export the appliance to OVA, VHD and raw image formats. Next, I installed qemu, which gets you qemu-img for converting the raw disk image to the qcow2 format.
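
For reference, the export steps look roughly like this (the VM and file names are placeholders):

vboxmanage export systemvm -o systemvm.ova
vboxmanage clonehd systemvm.vdi systemvm.vhd --format VHD
vboxmanage clonehd systemvm.vdi systemvm.img --format RAW
qemu-img convert -f raw -O qcow2 systemvm.img systemvm.qcow2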

The VirtualBox VHD format is compatible with Hyper-V's virtual disk format, but to export a VHD for Xen, we need to export the appliance to a raw disk image and then use vhd-util to convert it to a Xen VHD image.

Unfortunately, the vhd-util I got did not work for me, so I compiled my own using an approach suggested on this blog:

sudo apt-get install bzip2 python-dev gcc g++ build-essential libssl-dev \
  uuid-dev zlib1g-dev libncurses5-dev libx11-dev iasl bin86 bcc \
  gettext libglib2.0-dev libyajl-dev
# On 64 bit system
sudo apt-get install libc6-dev-i386
# Build vhd-util from source
wget -q http://bits.xensource.com/oss-xen/release/4.2.0/xen-4.2.0.tar.gz
tar -xzf xen-4.2.0.tar.gz
cd xen-4.2.0/tools/
wget https://github.com/citrix-openstack/xenserver-utils/raw/master/blktap2.patch -qO - | patch -p0
./configure --disable-monitors --disable-ocamltools --disable-rombios --disable-seabios
cd blktap2/vhd
make -j 2
sudo make install

The last thing was to set up RVM for the jenkins user:

$ \curl -L https://get.rvm.io | bash -s stable --ruby
# In case of dependency or openssl error:
$ rvm requirements run
$ rvm reinstall 1.9.3

One issue with RVM is that it requires a login shell, which I fixed in build.sh using #!/bin/bash -xl. But the build job still failed for me due to missing environment variables: $HOME needs to be defined and rvm should be on the PATH. These are the shell commands used to run the Jenkins job:

whoami
export PATH=/home/jenkins/.rvm/bin:$PATH
export rvm_path=/home/jenkins/.rvm
export HOME=/home/jenkins/
cd tools/appliance
rm -fr iso/ dist/
chmod +x build.sh
./build.sh

November 27, 2012

DevCloud for CloudStack Development: Xen on VirtualBox

Apache CloudStack development is not an easy task; for the simplest of deployments, one requires a server where the management server, MySQL server and NFS server run, at least one host running a hypervisor (to run virtual machines) or set aside for bare-metal deployment, and some network infrastructure.

And when it comes to development, sometimes reproducing a bug can take hours or days (been there, done that :), and a developer may not have access to such infrastructure all the time.

The Solution

To solve the problem of infrastructure availability for development and testing, earlier this year Edison, one of the core committers and PPMC members of Apache CloudStack (incubating), created DevCloud.

DevCloud is a virtual appliance shipped as an OVA image which runs on VirtualBox (an open-source type-2 or desktop hypervisor) and can be used for CloudStack development and testing. The original DevCloud required 2G of RAM and ran Ubuntu Precise as dom0 over xen.org's Xen hypervisor, all inside a VM on VirtualBox.

A developer would build and deploy CloudStack artifacts (jars, wars) and files to DevCloud, deploy the database and start the management server inside DevCloud. The developer could then use the CloudStack instance running inside DevCloud to add DevCloud itself as a host, and so on. DevCloud is now used by a lot of people; during the first release of Apache CloudStack, 4.0.0-incubating, it was used for the release testing.

My Experiment

When I tried DevCloud for the first time, I thought it was neat: an awesome all-in-a-box solution for offline development. The limitations were that only one host could be used, and only in a basic zone, and that the management server etc. all ran inside DevCloud. I wanted to run the management server and MySQL server on my laptop and debug with IntelliJ, so I made my own DevCloud setup which ran two XenServers in separate VirtualBox VMs, NFS on a separate VM, and all the VMs on a host-only network.

The host-only network in VirtualBox is a special network shared by all the VMs and the host operating system. My setup allowed me to have two hosts so I could do things like VM migration within a cluster. But it would crash a lot and the network wouldn't work. I learnt how bridging in Xen works and, using tcpdump, found that packets were being dropped while ARP requests went through; the fix was simply to set the host-only adapter's promiscuous mode to "allow all". I also tried to run KVM on VirtualBox, which did not work: KVM does not support PV and requires HVM, so it cannot run on processors without Intel VT or AMD-V, neither of which is emulated by VirtualBox.
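
For the record, that promiscuous-mode fix can also be applied from the command line; the VM name and adapter number below are assumptions:

$ vboxmanage modifyvm xenserver-vm1 --nicpromisc1 allow-all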

Motivation

CloudStack's build system was changed from Ant to Maven, and this required some changes in DevCloud so that the original appliance could still be used with the new build system. The changes were not straightforward, so I decided to work on the next iteration of DevCloud with the following goals:

  • Two network interfaces: a host-only adapter so that the VM is reachable from the host OS, and a NAT adapter so VMs can access the Internet.
  • Can be used as an all-in-one-box solution like the original DevCloud, but the management server and other services can also run elsewhere (on the host OS).
  • Reduced resource requirements, so one could run it within a 1G limit.
  • Allow multiple DevCloud VMs to be used as hosts.
  • An i386 dom0 and 32-bit Xen so it runs on all host OSes.
  • Reduced exported appliance (OVA) file size.
  • It should be seamless; it should work out of the box.

DevCloud 2.0

I started by creating an appliance using Ubuntu 12.04.1 server, which failed for me: the network interfaces would stop working after a reboot and a few users reported a blank screen. I never caught the actual issue, so I tried to create the appliance using different distributions, including Fedora, Debian and Arch. Fedora did not work, and stripping it down to a bare minimum required a lot of work. The Arch VM was very small, but I dropped the idea of working on it as it can be unstable, and people may not be familiar with pacman and may fail to appreciate the simplicity of the distribution.

Finally, I hit the jackpot with Debian! Debian Wheezy just worked; it took me some time to create it from scratch (more than ten times) and to figure out the correct configuration. The new appliance is available for download: get DevCloud 2.0 (867MB, md5 checksum: 144b41193229ead4c9b3213c1c40f005).

Install VirtualBox, import the new DevCloud2 appliance and start it. With the default settings, it is reachable on IP 192.168.56.10 with username root and password password. Next, start hacking either inside the DevCloud appliance or on your laptop (host OS):

# ssh inside DevCloud if building inside it:
$ ssh -v root@192.168.56.10
$ cd /opt/cloudstack  # or any other directory, it does not matter
# Get the source code:
$ git clone https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git
$ cd incubator-cloudstack
# Build management server:
$ mvn clean install -P developer,systemvm
# Deploy database:
$ mvn -pl developer,tools/devcloud -Ddeploydb -P developer
# Export the following only if you want debugging on port 8787
$ export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=800m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
# Run the management server:
$ mvn -pl client jetty:run
# In Global Settings, set `host` to 192.168.56.1 (or .10 if running inside DevCloud)
# and `system.vm.use.local.storage` to true, then restart the mgmt server.
# Set the maximum number of console proxy VMs to 0 in CloudStack's global
# settings if you don't need one; this will save you some RAM.
# Now add a basic zone with local storage. Maybe start more DevCloud hosts by
# importing more appliances and changing the default IPs, and reboot!

Make sure your management server is running; you can then deploy a basic zone using the preconfigured settings in tools/devcloud/devcloud.cfg:

$ mvn -P developer -pl tools/devcloud -Ddeploysvr
# Or, in case mvn fails, try the following (it can fail if you run the mgmt server in debug mode on port 8787):
$ cd tools/devcloud
$ python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg

DIY DevCloud

Install VirtualBox and get Debian Wheezy 7.0; I used the netinst i386 ISO. Create a new VM in VirtualBox with Debian/Linux as the distro, 2G RAM, 20G or more of disk, and two NICs: a host-only adapter with promiscuous mode "allow all" and a NAT adapter. Next, install a base Debian system with the PAE kernel (linux-image-686-pae) and openssh-server. You may download my base system from here.
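
If you prefer the command line, the VM can be created with vboxmanage along these lines; the names, sizes and host-only network below are assumptions, and you would attach the disk and the netinst ISO as usual:

$ vboxmanage createvm --name devcloud2 --ostype Debian --register
$ vboxmanage modifyvm devcloud2 --memory 2048 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nicpromisc1 allow-all --nic2 nat
$ vboxmanage createhd --filename devcloud2.vdi --size 20480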

Install required tools and Xen-i386:

$ apt-get install git vim tcpdump ebtables --no-install-recommends
$ apt-get install openjdk-6-jdk genisoimage python-pip mysql-server nfs-kernel-server --no-install-recommends
$ apt-get install linux-headers-3.2.0-4-686-pae xen-hypervisor-4.1-i386 xcp-xapi xcp-xe xcp-guest-templates xcp-vncterm xen-tools blktap-utils blktap-dkms qemu-keymaps qemu-utils --no-install-recommends

You'll have to build and install mkisofs. Remove the MySQL root password:

$ mysql -u root -p
  > SET PASSWORD FOR root@localhost=PASSWORD('');
  > exit;

Install MySQL Python connector 1.0.7 or later:

$ pip install mysql-connector-python
# Or, if you have easy_install you can do: easy_install mysql-connector-python

Setup Xen and XCP/XAPI:

$ echo "bridge" > /etc/xcp/network.conf
$ update-rc.d xendomains disable
$ echo TOOLSTACK=xapi > /etc/default/xen
$ sed -i 's/GRUB_DEFAULT=.\+/GRUB_DEFAULT="Xen 4.1-i386"/' /etc/default/grub
$ sed -i 's/GRUB_CMDLINE_LINUX=.\+/GRUB_CMDLINE_LINUX="apparmor=0"\nGRUB_CMDLINE_XEN="dom0_mem=400M,max:500M dom0_max_vcpus=1"/' /etc/default/grub
$ update-grub
$ sed -i 's/VNCTERM_LISTEN=.\+/VNCTERM_LISTEN="-v 0.0.0.0:1"/' /usr/lib/xcp/lib/vncterm-wrapper
$ cat > /usr/lib/xcp/plugins/echo << EOF
#!/usr/bin/env python

# Simple XenAPI plugin
import XenAPIPlugin, time

def main(session, args):
    if args.has_key("sleep"):
        secs = int(args["sleep"])
        time.sleep(secs)
    return "args were: %s" % (repr(args))

if __name__ == "__main__":
    XenAPIPlugin.dispatch({"main": main})
EOF

$ chmod -R 777 /usr/lib/xcp
$ mkdir -p /root/.ssh
$ ssh-keygen -A -q

Network settings, /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

allow-hotplug eth1
iface eth1 inet manual

auto xenbr0
iface xenbr0 inet static
        bridge_ports eth0
        address 192.168.56.10
        netmask 255.255.255.0
        network 192.168.56.0
        broadcast 192.168.56.255
        gateway 192.168.56.1
        dns_nameservers 8.8.8.8 8.8.4.4
        post-up route del default gw 192.168.56.1; route add default gw 192.168.56.1 metric 100;

auto xenbr1
iface xenbr1 inet dhcp
        bridge_ports eth1
        dns_nameservers 8.8.8.8 8.8.4.4
        post-up route add default gw 10.0.3.2

Preseed the SystemVM templates in /opt/storage/secondary, following the directions from here. Configure the NFS server and local storage:

$ mkdir -p /opt/storage/secondary
$ mkdir -p /opt/storage/primary
$ hostuuid=`xe host-list |grep uuid|awk '{print $5}'`
$ xe sr-create host-uuid=$hostuuid name-label=local-storage shared=false type=file device-config:location=/opt/storage/primary
$ echo "/opt/storage/secondary *(rw,no_subtree_check,no_root_squash,fsid=0)" > /etc/exports
$ # preseed the systemvm template, e.g. copy the files from DevCloud's /opt/storage/secondary
$ /etc/init.d/nfs-kernel-server restart

Please email your queries to the ACS mailing list: `[email protected]`

November 21, 2012

CloudStack Cloudmonkey: a Python-powered command line interface

About 2-3 weeks ago I started writing a CLI (command line interface) for Apache CloudStack. I researched some options and finally chose Python and cmd. Python comes preinstalled on almost all Linux distros and on Macs (Windows I don't care :P it's not developer friendly), and cmd is a standard package in Python with which one can write a tool that works both as a command line tool and as an interactive shell interpreter. I named it cloudmonkey after the project's mascot. In this blog and elsewhere I write the name as Cloudmonkey or cloudmonkey, but not CloudMonkey :P


Cloudmonkey on OSX

Apache CloudStack has around 300 RESTful APIs, give or take, and writing handlers (autocompletion, help, request handlers etc.) for them seemed a mammoth task at first. Marvin (the ignored robot) came to the rescue. Marvin is a Python package within CloudStack, written by Edison and now maintained by Prasanna, which provides a bunch of classes with which one can implement a client for CloudStack, and which provides cloudstackAPI. It's interesting how cloudstackAPI is generated: a developer writes an API and fills in the boilerplate with API-specific details, such as required parameters, and a javadoc string. This information is picked up by an API writer class which generates an XML file describing each API, its docstring and its parameters. This XML is used by the apidocs artifact to generate the API help docs, and by Marvin's code generator to create a module for cloudstackAPI containing command and response classes. When I understood the whole process, I realised that if I could reuse this somehow, I wouldn't have to deal with the 300 APIs directly.


Cloudmonkey on Ubuntu

I've always been a fan of functional programming; iterative or object-oriented programming was not going to help here. So, I grouped the APIs based on their leading lowercase characters: for example, for the API listUsers, the verb is list. Based on this pattern, I wrote the code so that it groups APIs by verb, creates handlers on the fly and adds them to the shell class. The handlers are closures, so every handler is actually a dynamic function in memory enclosed by the closure generator for a verb. In the initial version, when a command was executed for the first time, the command class for its verb was loaded from the appropriate cloudstackAPI module and a cache dictionary was populated on a cache miss. In a later version, I wrote a cache generator which precaches all the APIs at build time, cutting the runtime lookup from O(n) to O(1). For each verb, this cache contains the API name, required parameters, all parameters and help strings. The dictionary is used for autocompletion of the verbs, the commands and their parameters, and for the help strings.

grammar = ['list', 'create', 'update', 'delete']  # ... and the other verbs
for rule in grammar:
    def add_grammar(rule):
        def grammar_closure(self, args):
            # populate the verb cache on first use
            if rule not in self.cache_verbs:
                self.cache_verb_miss(rule)
            try:
                args_partition = args.partition(" ")
                res = self.cache_verbs[rule][args_partition[0]]
            except KeyError, e:
                self.print_shell("Error: invalid %s api arg" % rule, e)
                return
            # `verb --help` or `verb -h` prints the cached help string
            if ' --help' in args or ' -h' in args:
                self.print_shell(res[2])
                return
            # hand the real API name plus the remaining args to the default handler
            self.default(res[0] + " " + args_partition[2])
        return grammar_closure

    # attach the generated handler to the shell class so cmd treats it as a
    # do_<verb> command (the class name below is illustrative)
    setattr(CloudMonkeyShell, 'do_' + rule, add_grammar(rule))

Right now cloudmonkey is available as a community distribution on the cheese shop, so pip install cloudmonkey already! It has a wiki with building, installation and usage instructions, or watch a screencast (transcript, alternate link) I made for users. As the userbase grows, it will only get better. Feel free to reach out to me and the Apache CloudStack team on IRC or on the mailing lists.
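
A minimal first run, for reference; the host, port and keys below are placeholders for your own management server and account:

pip install cloudmonkey
cloudmonkey
# then, inside the shell:
#   set host localhost
#   set port 8080
#   set apikey <your-api-key>
#   set secretkey <your-secret-key>
#   list users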


November 2, 2012

Apache CloudStack Hyderabad Meetup: the first official Hyderabad meetup


We had our first official Apache CloudStack meetup in Hyderabad yesterday, 1 Nov 2012, at Lemon Tree, Hyderabad. Earlier we had given a small bird's-eye-view presentation on Apache CloudStack during a local Hadoop User Group (HUG) meetup; this time, the meetup was totally focussed on Apache CloudStack.



The meetup started at 5 PM and the presentations ended at 7:20 PM, followed by about an hour of networking and discussions, and t-shirts for everyone. It was attended by 134 people and organised by 7 people: Kishan, Prasanna, Bidisha, Sadhu, Praveen, Hari P and myself, along with Nitin (who was out of station at the time).

We started by welcoming the attendees and showing them an introductory video on Apache CloudStack. Next we did a poll and found that all of the attendees had heard of cloud computing, and most of them use or are going to use the cloud, or are excited to learn about it. Most of the crowd rated themselves as users, some as managers and some as engineers. Most of them said they use open source technologies on a daily basis, and a few contribute to open source.

Kevin Kluge (Apache CloudStack committer and VP Cloud Platforms Group, Citrix) gave his keynote on building your own IaaS cloud with Apache CloudStack and gave a small demo of CloudStack.



Next, Prasanna Santhanam (Apache CloudStack committer, amateur violinist dude) gave a brief talk on Apache Software Foundation, how the opensource community works and how to participate.



The last presentation was by Chirag Jog (CTO, Clogeny) on migrating applications to IaaS clouds.



All the presentations and photos are downloadable from here, some select photos are here. The meetup presentations were followed by questions;



discussions;



and people picking up the CloudStack t-shirts on their way out. Big thanks to all the speakers, attendees and my co-organisers. See you at the next meetups, at FOSS.in 2012 or during CloudStack Collaboration 2012.



