Install Windows Server 2008 on KVM/VirtIO

An example of how to install a Windows Server 2008 guest on KVM with VirtIO (on an LVM volume in this example). Besides the installation medium, you need the VirtIO drivers for Windows in order to access the disk device. You can get signed binary drivers here. Then set up a Windows VM with a command like this:

$ virt-install --connect qemu:///system --arch=x86_64 -n win2k8 -r 1024 --vcpus=2 \
    --disk pool=vmstore,size=50,bus=virtio,cache=none -c /path/to/win2k8.iso --vnc \
    --noautoconsole --os-type windows --os-variant win2k8 --network network=subnet,model=e1000 \
    --disk path=/path/to/virtio-win-1.1.16.iso,device=cdrom,perms=ro

When the guest is running, shut it down and edit the <os> section of its XML definition (otherwise the Windows setup won’t let you install on the disk):

$ virsh destroy win2k8
$ virsh edit win2k8

Change the boot order inside the <os> section so the CD-ROM is tried first:

  <os>
    ...
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>

Start the guest again:

$ virsh start win2k8

Connect to the VNC console and start the installation process. When you reach the disk selection form, you won’t see any devices available. Click “Load drivers” at the bottom left and load the drivers from E:\viostor\wlh\{amd64|x86}. After the drivers are installed, you’ll see a disk device and can continue with the installation.


Fetchmail/Sieve on ISPMail setup – update

Quite some time ago I wrote a tutorial on how to integrate fetchmail and sieve into virtual mail with Postfix and Dovecot. As time passes and things change, here’s an update:

  1. I don’t use the sieve part anymore. Instead, I use the ManageSieve server provided by Dovecot. It integrates with Dovecot’s authentication system and you can use all kinds of clients supporting the managesieve protocol, e.g. Thunderbird or Roundcube.
  2. The ISPMail database structure has changed since I wrote that tutorial, which still relied on the Debian Etch DB structure. I updated the script to support both environments (take a look at the config file), and it no longer relies on any DB views.
  3. The script (only the fetchmail part) is now hosted on GitHub, as I rarely use SVN anymore and the SVN server may go offline in the near future. If anybody is interested in the Sieve part, just drop me a line.
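For reference, enabling ManageSieve in a Dovecot 1.2-era setup comes down to a few lines in dovecot.conf. This is a hedged sketch, not taken from the original post; the exact syntax depends on your Dovecot version, and the port is an assumption (older setups often listened on 2000, RFC 5804 later assigned 4190):

```
protocols = imap imaps managesieve

protocol managesieve {
  # ManageSieve listener; adjust the port to your setup
  listen = *:4190
}
```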

Restore MySQL databases from raw *.frm files

I recently needed to restore data from a MySQL server whose host machine had crashed, and I unfortunately didn’t have a proper dump backup – all I had was a copy of the MySQL data directory (/var/lib/mysql on Debian). After some googling I didn’t find a simple way to restore databases from such a backup. What worked in the end was the following: I installed a fresh MySQL server in a virtual machine and replaced its data directory with the one from my backup (both machines ran exactly the same MySQL version). This allowed me to access the databases and create proper dumps, which I could later import on my real server.

So, step one: in a virtual machine/spare server/local PC/whatever, install a MySQL server and replace its data directory:

$ aptitude install mysql-server
$ /etc/init.d/mysql stop
$ mv /var/lib/mysql /var/lib/mysql.orig
$ cp -pr /tmp/backup/mysql /var/lib
$ chown -R mysql.mysql /var/lib/mysql

I also checked that file permissions match the normal permissions on Debian MySQL installations. Should be like this:

root@host:/var/lib/mysql# ls -al
drwx------  5 mysql mysql     4096 Mar  1 18:20 .
drwxr-xr-x 33 root  root      4096 Mar  1 18:20 ..
-rw-r--r--  1 root  root         0 Mar  1 18:07 debian-5.1.flag
-rw-rw----  1 mysql mysql 27262976 Mar  1 18:21 ibdata1
-rw-rw----  1 mysql mysql  5242880 Mar  1 18:21 ib_logfile0
-rw-rw----  1 mysql mysql  5242880 Mar  1 18:21 ib_logfile1
drwx------  2 mysql mysql     4096 Mar  1 18:20 database1
drwx------  2 mysql mysql     4096 Mar  1 18:21 database2
drwx------  2 mysql root      4096 Mar  1 18:08 mysql
-rw-------  1 root  root         6 Mar  1 18:08 mysql_upgrade_info
root@host:/var/lib/mysql# ls -al database1
drwx------ 2 mysql mysql 4096 Mar  1 18:20 .
drwx------ 5 mysql mysql 4096 Mar  1 18:20 ..
-rw-rw---- 1 mysql mysql   65 Mar  1 18:20 db.opt
-rw-rw---- 1 mysql mysql 8668 Mar  1 18:20 table1.frm
-rw-rw---- 1 mysql mysql  879 Mar  1 18:20 table2.frm
-rw-rw---- 1 mysql mysql 1520 Mar  1 18:20 table3.frm

Now you can try to start the server and check whether your databases are readable:

$ /etc/init.d/mysql start
$ mysql -uroot -p -e "show databases;"
Enter password:
| Database           |
| information_schema |
| database1          |
| database2          |
| mysql              |

If this works, simply dump your needed databases with mysqldump, transfer them to your server and import them normally.

$ mysqldump -uroot -p database1 > /tmp/database1.sql
$ scp /tmp/database1.sql user@server:/tmp

On the server:

$ mysql -uroot -p -e "create database database1;"
$ mysql -uroot -p database1 < /tmp/database1.sql

And don’t forget to restore the temporary MySQL server to normal operation in case you need it later.

$ /etc/init.d/mysql stop
$ rm -rf /var/lib/mysql
$ mv /var/lib/mysql.orig /var/lib/mysql
$ /etc/init.d/mysql start

Use a LVM volume group with libvirt

A short howto on using an LVM volume group with libvirt on Debian Squeeze (used for KVM VMs in my case). I assume your VG already exists and is dedicated to libvirt usage. In my case it’s /dev/vg1.

First of all, create the XML definition for the storage pool in /etc/libvirt/storage/vg1.xml. This is the minimal configuration needed; libvirt will extend it with things like a UUID when you define it.

<pool type='logical'>
  <name>vg1</name>
  <target>
    <path>/dev/vg1</path>
  </target>
</pool>

Now you can tell libvirt about the new storage pool and let it start automatically.

$ virsh pool-define /etc/libvirt/storage/vg1.xml
$ virsh pool-start vg1
$ virsh pool-autostart vg1
$ virsh pool-info vg1
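Volumes in the pool can also be managed by hand with virsh; a small example (the volume name and size are made up):

```shell
# Create a 10 GiB volume (i.e. a logical volume named vm01-disk) in the vg1 pool
virsh vol-create-as vg1 vm01-disk 10G

# List the volumes in the pool to verify
virsh vol-list vg1
```

Volumes created this way show up as regular LVs in the VG and can be attached to guests like any other disk.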

Creating virtual machines inside that storage pool is easy as pie:

$ virt-install -d --hvm --vnc --name=vm01 \
    --ram 512 --disk pool=vg1,size=10,bus=virtio,cache=none \
    --network network=default,model=virtio \
    --os-type=linux --os-variant=debiansqueeze


LaTeX build server with Git and Hudson on Ubuntu 10.04


I’m currently working on a bigger paper for university using LaTeX. As it’s necessary to compile source files multiple times (especially when using BibTeX or TOCs), build runs can take quite some time. As an example, my current build script:

pdflatex -interaction=nonstopmode $BN.tex
bibtex $BN
pdflatex -interaction=nonstopmode $BN.tex
bibtex $BN
pdflatex -interaction=nonstopmode $BN.tex
makeindex -s $ -t $BN.glg -o $BN.gls $BN.glo
pdflatex -interaction=nonstopmode $BN.tex
pdflatex -interaction=nonstopmode $BN.tex
rm -rf $BN.aux
rm -rf $BN.lof
rm -rf $BN.lot
rm -rf $BN.out
rm -rf $BN.toc
rm -rf $BN.bbl
rm -rf $BN.blg
rm -rf $BN.brf
rm -rf $BN.idx
rm -rf $BN.glo
rm -rf $
rm -rf $BN.glg
rm -rf $BN.gls
rm -rf texput.log
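As an aside, the cleanup tail of the script can be collapsed into a loop over the extensions. A sketch (BN=paper is a made-up base name; the script above takes it from its environment):

```shell
BN=paper  # base name of the main .tex file (example value)

# Remove the auxiliary files produced by the build
for ext in aux lof lot out toc bbl blg brf idx glo glg gls; do
  rm -f "$BN.$ext"
done
rm -f texput.log
```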

This is OK on my workstation, but running a build on my notebook with its small 1.4 GHz single-core processor can take up to a minute, which is definitely too long. So I looked for ways to move the build process to a central server. As I was already using Git for source control on the project, I tried setting up a remote repository on the server that triggered a build via a post-receive hook. This basically worked fine, but I wanted to go a step further. I had a look at CI servers and gave Hudson a try, as it seems to have a lot of features while being quite easy to set up.

The result is the following: Hudson polls the Git repository (which can be remote or local; in my case it’s a self-hosted remote gitosis installation, but it could be GitHub too), starts a new build on changes and publishes the resulting PDF if the build succeeds. Hudson is accessible over https, using an Apache2 server as a frontend to a Tomcat installation.
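The Apache-in-front-of-Tomcat part boils down to a reverse proxy vhost. A minimal sketch (the hostname and certificate paths are made up; mod_proxy and mod_ssl need to be enabled):

```
<VirtualHost *:443>
    ServerName hudson.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/hudson.pem
    SSLCertificateKeyFile /etc/ssl/private/hudson.key

    # Pass everything through to the Tomcat instance on port 8080
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```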

Ready? Let’s go.


/dev/pts errors on newly created Xen DomUs (Debian Lenny)

Today was the first time I had to create a new DomU after upgrading my Xen setup to Debian Lenny. When I booted the VM and logged in via xm console I got some strange errors when installing packages:

Can not write log, openpty() failed (/dev/pts not mounted?)

Additionally, after setting up SSH, I got the following error when logging in with SSH:

Server refused to allocate pty

Solution: install udev, reboot the VM and you’re good to go.
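In command form (run inside the affected DomU):

```shell
# udev sets up /dev (including the devpts mount) at boot
apt-get install udev
reboot
```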

Integrate Spamassassin into Postfix/Dovecot

As I stated before, I really like Christoph Haas’ ISPMail setup for Debian-based mailservers. I was quite fine without any server-side spam filtering solution until now, but somehow the spam amount in my inboxes increased more and more and I was looking for a decent and simple solution to filter out all that bullshit which is distracting me day after day.

I clearly wanted to go with Spamassassin (SA), as I had good experiences with it in the past and it’s more or less the standard spam filter on Linux-based mailservers. The most common ways to integrate SA into a Postfix-based mailserver are the following:

  • Using amavisd-new
  • Using Postfix’s content_filter

I don’t really like either of them. Amavis is quite heavyweight for pure spam filtering, and the content filter checks both incoming and outgoing mail by default, which is obviously not in my interest. Amavis avoids checking outgoing mail only by checking whether the sender domain is managed by the same system, but spammers can bypass this quite easily by faking the sender’s address to be the same as the recipient’s (which is done quite often). There’s a discussion about this on the ISPMail page, so head there for more information. All this can be improved by using multiple Postfix instances and different ports (e.g. 587/submission for authenticated clients and 25/smtp for normal SMTP traffic), but I want my mailserver to be as interoperable as possible without the need for any special setup on the client side.

So I was looking for another solution. I read some tutorials where people used procmail in user scripts to pass incoming mail to spamc before delivering it to the mailbox. I like this approach, as the MTA isn’t involved in the spam filtering process, outgoing mail isn’t touched, and you don’t need any complicated setup on the MTA side. All alias and transport definitions keep working, and mail is checked right before being delivered to the user’s inbox.

First I thought about Sieve, which is already running through Dovecot’s Sieve implementation, until I noticed that Sieve is not able to call external programs (correct me if I’m wrong). Then I had a look at spamc: it can pipe its output to another program, and in the ISPMail setup Postfix passes mail directly to Dovecot’s deliver, so why not let Spamassassin check the mail right before it’s handed to Dovecot? I gave it a try and it seems to work fine. I still need some automation for training the SA databases (might follow in a later post), but the plain SA checking works reliably and mails can easily be filtered with Sieve afterwards.

So much for the backstory, let’s get our hands dirty. Note: I’m running Debian Lenny.
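To sketch the idea: in the ISPMail setup, Dovecot’s deliver is invoked from a pipe transport in /etc/postfix/master.cf, and spamc gets wedged in front of it. This is a hedged sketch; the transport name, user and flags may differ in your setup:

```
# /etc/postfix/master.cf (excerpt): the Dovecot delivery transport,
# with spamc inserted before Dovecot's deliver.
# spamc's -e option (which must come last) pipes the checked mail
# into the given command.
dovecot   unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/spamc
  -e /usr/lib/dovecot/deliver -d ${recipient}
```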


Continuous Integration with phpUnderControl and Git


I was looking for a decent continuous integration solution for my PHP projects for some time now, but always had the problem that most of the described setups used SVN instead of Git as VCS. Yesterday I found an article which describes the setup exactly as I needed it: phpUnderControl with Git on a Debian/Ubuntu system. Using the article, I managed to set up a working system quickly, which basically works as expected: CruiseControl checks the repository for modifications and starts the build process if there are any new commits. The build process includes generating API documentation (phpdocumentor), running static code analysis (php-codesniffer) and executing unit tests (phpunit). If the build succeeds, the results are published and can be accessed through a nice web interface powered by phpUnderControl (see the screenshot above, which I stole from the phpUnderControl site).

However, the described setup has a few issues which bugged me:

  1. CruiseControl runs from a shell script as root, writes all output to the console and is not automatically started at boot time.
  2. CruiseControl runs on port 8080, but I wanted to manage access to the web interface through the Apache server which is already running on the box.
  3. There’s no authentication – everybody can access my CI server, see the build details and start new builds through the web interface.

Backup Xen virtual machines with LVM snapshots and ftplicity/duplicity

Some time ago, I updated the backup system on a server running multiple Xen VM instances (DomUs). Before the change, each virtual machine ran its own backup scripts to back up data to an external FTP server. Now, VMs are centrally backed up to FTP from the Dom0 using LVM (Logical Volume Manager) snapshots. As a backup solution I chose duplicity and ftplicity in combination with a shell script that creates automated LVM snapshots. Duplicity is a tool that creates GPG-encrypted incremental backups on remote servers (so you can store your backups remotely without having to worry about who has access to your data); ftplicity is a wrapper script for duplicity which allows running it without interaction (e.g. without the need to type any passwords). Ftplicity was originally published by the German computer magazine c’t, but has undergone further development and is now hosted at SourceForge.

You can find tutorials on ftplicity/duplicity here (Note: they use the original c’t version of ftplicity):

Basically you can use this setup for any kind of LVM-snapshot-based system, but I’m focusing on backing up Xen VMs here. I assume you have your LVM and Xen system up and running. I did this on a Debian Lenny system, but it should be similar on other distros. I did all steps as root.


MySQL backup user

To create a backup user for MySQL you need at least the following privileges (to use mysqldump):

GRANT SELECT, SHOW VIEW, LOCK TABLES ON *.* TO 'backup'@'localhost';
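Putting it together, creating the user and taking a dump with it might look like this (the password and target path are made up):

```shell
# Create the backup user and grant the minimal privileges
mysql -uroot -p <<'EOF'
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'changeme';
GRANT SELECT, SHOW VIEW, LOCK TABLES ON *.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
EOF

# Dump all databases with the restricted user
mysqldump -ubackup -pchangeme --all-databases > /var/backups/all-databases.sql
```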