Apache CloudStack development is not an easy task. Even the simplest deployment requires a server to run the management server, the MySQL server and an NFS server; at least one host running a hypervisor (to run virtual machines) or used for baremetal deployment; and some network infrastructure.
And as for development, reproducing a bug can sometimes take hours or days (been there, done that :) and a developer may not have access to such an infrastructure all the time.
To solve the problem of infrastructure availability for development and testing, earlier this year Edison, one of the core committers and PPMC members of Apache CloudStack (incubating), created DevCloud.
DevCloud is a virtual appliance shipped as an OVA image which runs on VirtualBox (an open source type-2, or desktop, hypervisor) and can be used for CloudStack development and testing. The original DevCloud required 2G of RAM and ran Ubuntu Precise as dom0 over xen.org's Xen hypervisor, all inside a VirtualBox VM.
A developer would build and deploy CloudStack artifacts (jars, wars) and files to DevCloud, deploy the database and start the management server inside DevCloud. The developer could then use the CloudStack instance running inside DevCloud to add DevCloud itself as a host, and so on. DevCloud is now used by a lot of people; it was used for release testing of Apache CloudStack's first release, 4.0.0-incubating.
When I tried DevCloud for the first time, I thought it was neat: an awesome all-in-a-box solution for offline development. The limitations were that only one host could be used, only in a basic zone, and the management server and everything else ran inside DevCloud. I wanted to run the management server and MySQL server on my laptop and debug with IntelliJ, so I made my own DevCloud setup which ran two XenServers on separate VirtualBox VMs, NFS on a separate VM, and all the VMs on a host-only network.
The host-only network in VirtualBox is a special network shared by all the VMs and the host operating system. My setup allowed me to have two hosts, so I could do things like VM migration within a cluster. But it would crash a lot and networking would not work. I learnt how bridging in Xen worked, and using tcpdump I found that packets were being dropped while ARP requests were allowed; the fix was simply to set the host-only adapter's promiscuous mode to allow all. I also tried to run KVM on VirtualBox, which did not work: KVM does not support PV and requires HVM, so it cannot run on processors without Intel VT-x or AMD-V, neither of which is emulated by VirtualBox.
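For reference, that promiscuous-mode fix can also be applied from the command line; a sketch using VBoxManage, where the VM name and adapter number are placeholders for your own setup:

```shell
# Hypothetical VM name; --nicpromisc1 targets the first (host-only) adapter.
# "allow-all" lets the adapter pass frames for MAC addresses other than its
# own, which Xen's bridged networking needs.
VBoxManage modifyvm "devcloud" --nicpromisc1 allow-all
```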
CloudStack's build system was changed from Ant to Maven, and this required some changes in DevCloud to make the original appliance usable with the new build system. The changes were not straightforward, so I decided to work on the next iteration of DevCloud with a few goals in mind.
I started by creating an appliance using Ubuntu 12.04.1 server, which failed for me: the network interfaces would stop working after a reboot, and a few users reported a blank screen. I never caught the actual issue, so I tried to create the appliance using different distributions, including Fedora, Debian and Arch. Fedora did not work, and stripping it down to a bare minimum required a lot of work. The Arch VM was very small in size, but I dropped the idea of working on it as it can be unstable, and people may not be familiar with pacman or may fail to appreciate the simplicity of the distribution.
Finally, I hit the jackpot with Debian! Debian Wheezy just worked, though it took me some time (more than ten attempts) to create it from scratch and figure out the correct configuration. The new appliance is available for download: get DevCloud 2.0 (867MB, md5 checksum: 144b41193229ead4c9b3213c1c40f005).
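To verify the download, the checksum can be compared against the one above; the file name `devcloud2.ova` here is an assumption, substitute whatever you saved the appliance as:

```shell
# Compare the downloaded appliance's md5 with the published checksum
md5sum devcloud2.ova
# expected: 144b41193229ead4c9b3213c1c40f005
```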
Install VirtualBox, import the new DevCloud2 appliance and start it. With default settings it is reachable at 192.168.56.10 with username root and password password. Next, start hacking either inside the DevCloud appliance or on your laptop (host OS):
# ssh inside DevCloud if building inside it:
$ ssh -v root@192.168.56.10
$ cd /opt/cloudstack # or any other directory, it does not matter
# Get the source code:
$ git clone https://git-wip-us.apache.org/repos/asf/incubator-cloudstack.git
$ cd incubator-cloudstack
# Build management server:
$ mvn clean install -P developer,systemvm
# Deploy database:
$ mvn -pl developer,tools/devcloud -Ddeploydb -P developer
# Export the following only if you want debugging on port 8787
$ export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=800m -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n"
# Run the management server:
$ mvn -pl client jetty:run
# In Global Settings set `host` to 192.168.56.1 (or .10 if inside DevCloud)
# and `system.vm.use.local.storage` to true, then restart the mgmt server.
# Set the maximum number of console proxy VMs to 0 in CloudStack's global
# settings if you don't need one, this will save you some RAM.
# Now add a basic zone with local storage. Maybe start more DevCloud hosts by
# importing more appliances, changing the default IPs and rebooting!
Make sure your mgmt server is running; you may then deploy a basic zone using the preconfigured settings in tools/devcloud/devcloud.cfg:
$ mvn -P developer -pl tools/devcloud -Ddeploysvr
# Or in case mvn fails, try the following
# (it can fail if you run the mgmt server in debug mode on port 8787):
$ cd tools/devcloud
$ python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
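Once the zone is deployed, a quick way to check that the management server is answering is CloudStack's unauthenticated integration API; this sketch assumes `integration.api.port` is set to 8096 in global settings and that the server runs on the host OS:

```shell
# List the deployed zones via the integration API (no credentials needed
# when integration.api.port is enabled); use .10 if running inside DevCloud.
curl "http://192.168.56.1:8096/client/api?command=listZones&response=json"
```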
Install VirtualBox and get Debian Wheezy 7.0; I used the netinst i386 ISO. Create a new VM in VirtualBox with Debian/Linux as the distro, 2G RAM, 20G or more disk and two NICs: a host-only adapter with promiscuous mode "allow-all" and a NAT adapter. Next, install a base Debian system with linux-kernel-pae (generic) and openssh-server. You may download my base system from here.
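The same VM can also be created headlessly with VBoxManage; a sketch under the assumption that the host-only interface is `vboxnet0` and the VM is named `devcloud2` (both are placeholders):

```shell
# Create and register the VM (names and paths are placeholders)
VBoxManage createvm --name devcloud2 --ostype Debian --register
# 2G RAM, host-only NIC with promiscuous mode allow-all, plus a NAT NIC
VBoxManage modifyvm devcloud2 --memory 2048 \
    --nic1 hostonly --hostonlyadapter1 vboxnet0 --nicpromisc1 allow-all \
    --nic2 nat
# 20G disk, attached via a SATA controller
VBoxManage createhd --filename devcloud2.vdi --size 20480
VBoxManage storagectl devcloud2 --name SATA --add sata
VBoxManage storageattach devcloud2 --storagectl SATA --port 0 --device 0 \
    --type hdd --medium devcloud2.vdi
```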
Install required tools and Xen-i386:
$ apt-get install git vim tcpdump ebtables --no-install-recommends
$ apt-get install openjdk-6-jdk genisoimage python-pip mysql-server nfs-kernel-server --no-install-recommends
$ apt-get install linux-headers-3.2.0-4-686-pae xen-hypervisor-4.1-i386 xcp-xapi xcp-xe xcp-guest-templates xcp-vncterm xen-tools blktap-utils blktap-dkms qemu-keymaps qemu-utils --no-install-recommends
You'll have to build and install mkisofs. Remove the MySQL root password:
$ mysql -u root -p
> SET PASSWORD FOR root@localhost=PASSWORD('');
> exit;
Install MySQL Python connector 1.0.7 or later:
$ pip install mysql-connector-python
# Or, if you have easy_install you can do:
$ easy_install mysql-connector-python
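Before running the marvin scripts, it can help to confirm the connector is actually importable. A small sketch, written for a modern Python; the module installed by the package is `mysql.connector`:

```python
# Sanity-check that a module can be found without fully importing it.
import importlib.util

def has_module(name):
    """Return True if the named module can be located on this system."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a dotted name's parent package is missing
        return False

# After the pip install above, this should report True on the DevCloud host
print("mysql.connector available:", has_module("mysql.connector"))
```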
Setup Xen and XCP/XAPI:
$ echo "bridge" > /etc/xcp/network.conf
$ update-rc.d xendomains disable
$ echo TOOLSTACK=xapi > /etc/default/xen
$ sed -i 's/GRUB_DEFAULT=.\+/GRUB_DEFAULT="Xen 4.1-i386"/' /etc/default/grub
$ sed -i 's/GRUB_CMDLINE_LINUX=.\+/GRUB_CMDLINE_LINUX="apparmor=0"\nGRUB_CMDLINE_XEN="dom0_mem=400M,max:500M dom0_max_vcpus=1"/' /etc/default/grub
$ update-grub
$ sed -i 's/VNCTERM_LISTEN=.\+/VNCTERM_LISTEN="-v 0.0.0.0:1"/' /usr/lib/xcp/lib/vncterm-wrapper
$ cat > /usr/lib/xcp/plugins/echo << EOF
#!/usr/bin/env python
# Simple XenAPI plugin
import XenAPIPlugin, time

def main(session, args):
    if args.has_key("sleep"):
        secs = int(args["sleep"])
        time.sleep(secs)
    return "args were: %s" % (repr(args))

if __name__ == "__main__":
    XenAPIPlugin.dispatch({"main": main})
EOF
$ chmod -R 777 /usr/lib/xcp
$ mkdir -p /root/.ssh
$ ssh-keygen -A -q
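To check that XAPI can dispatch the echo plugin created above, it can be invoked with `xe host-call-plugin`; a sketch (the host UUID lookup mirrors the one used for storage setup later in this post):

```shell
# Look up this host's UUID, then call the plugin's main() with a sleep arg
hostuuid=`xe host-list | grep uuid | awk '{print $5}'`
xe host-call-plugin host-uuid=$hostuuid plugin=echo fn=main args:sleep=1
# Should print the echoed args after sleeping for one second
```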
Network settings, /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

allow-hotplug eth1
iface eth1 inet manual

auto xenbr0
iface xenbr0 inet static
    bridge_ports eth0
    address 192.168.56.10
    netmask 255.255.255.0
    network 192.168.56.0
    broadcast 192.168.56.255
    gateway 192.168.56.1
    dns_nameservers 8.8.8.8 8.8.4.4
    post-up route del default gw 192.168.56.1; route add default gw 192.168.56.1 metric 100;

auto xenbr1
iface xenbr1 inet dhcp
    bridge_ports eth1
    dns_nameservers 8.8.8.8 8.8.4.4
    post-up route add default gw 10.0.3.2
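After writing the file, the bridges can be brought up and inspected; a sketch assuming bridge-utils is available (it is usually pulled in alongside the Xen packages):

```shell
# Bring the bridges up (or simply reboot)
ifup xenbr0 && ifup xenbr1
# xenbr0 should enslave eth0, xenbr1 should enslave eth1
brctl show
# xenbr0 should carry 192.168.56.10/24
ip addr show xenbr0
```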
Preseed the SystemVM templates in /opt/storage/secondary, following the directions from here. Configure the NFS server and local storage:
$ mkdir -p /opt/storage/secondary
$ mkdir -p /opt/storage/primary
$ hostuuid=`xe host-list | grep uuid | awk '{print $5}'`
$ xe sr-create host-uuid=$hostuuid name-label=local-storage shared=false type=file device-config:location=/opt/storage/primary
$ echo "/opt/storage/secondary *(rw,no_subtree_check,no_root_squash,fsid=0)" > /etc/exports
$ # preseed the systemvm template, maybe copy files from devcloud's /opt/storage/secondary
$ /etc/init.d/nfs-kernel-server restart
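A quick way to confirm the export is visible is showmount, which ships with the NFS utilities; a sketch:

```shell
# The export list should include /opt/storage/secondary
showmount -e 192.168.56.10
# Optionally mount it somewhere to double-check, then unmount
mount -t nfs 192.168.56.10:/opt/storage/secondary /mnt
ls /mnt && umount /mnt
```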
© Rohit Yadav 2009-2012 | Report bug or fork source | Last updated on 30 Nov 2012