Thursday 31 May 2012

Creating a VM template

I've chosen to go with Windows Server 2008 R2 SP1 as the operating system wherever possible.  It offers significant improvements in stability and manageability over previous versions.  Creating a template to deploy each of the machines I'm building will save a substantial amount of time for each new VM.

The first step is to upload the 2008 R2 SP1 ISO to the datastore.  Being a 3GB file I was expecting this to take a fair amount of time, but was still shocked to see the 14 hours remaining timer.  Pretty typical for a consumer-grade Internet connection, I guess.  Unfortunately the upload died about 5 hours in with an I/O error.  I did a little research into this error and it turns out to be pretty common, with the most common workaround being to use SSH / SCP to upload the file.  I was about to call it a night anyway, and didn't know if there were any limitations with Redstation's deployment of my dedicated box that would prevent me from enabling SSH, so I decided to give the Datastore Browser upload one last go, leaving it overnight and stabilising my side of the setup as much as possible: I turned off wireless, plugged in an Ethernet cable and copied the ISO from a USB drive to the local system disk.  On checking the upload the following day I found it had completed successfully.
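For reference, if I do end up needing the SSH route in future, the transfer boils down to something like the following sketch (Python with the paramiko library; the host name, credentials and datastore paths are just placeholders, and SSH would need enabling on the ESXi host first):

```python
# Hypothetical sketch of the SSH/SCP workaround - not what I actually used.
# Assumes SSH has been enabled on the ESXi host; host, credentials and
# paths below are placeholders.
import paramiko

HOST = "esxi.example.com"
LOCAL_ISO = r"C:\ISOs\en_windows_server_2008_r2_sp1.iso"
REMOTE_ISO = "/vmfs/volumes/datastore1/ISOs/en_windows_server_2008_r2_sp1.iso"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="root", password="********")

sftp = client.open_sftp()
sftp.put(LOCAL_ISO, REMOTE_ISO)  # one large transfer; a dedicated SCP tool can resume if it fails
sftp.close()
client.close()
```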
For the base hardware I've chosen to go with:
  • 2x vCPU
  • 2GB RAM
  • 30GB OS drive
  • 8GB PageFile drive
I will be adjusting these specs for a number of the individual VMs. I want two vCPUs to prevent any single thread from maxing out the CPU.  The 2GB of RAM is a little low, but as I'm not expecting any real load on these servers it should suffice.  The 8GB PageFile disk allows me to double the memory to 4GB, have a static PageFile of 1.5 times memory (6GB), and still keep 25% of the disk free to avoid generating SCOM alerts and the like; all disks (unless explicitly changed) will be thin provisioned anyway.  The 30GB OS drive is a little on the small side (I'd usually recommend at least 40GB), but as these servers won't be under any real load it should be fine, plus I can always grow it later if required.
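As a quick sanity check on those PageFile numbers, the arithmetic works out as follows (a throwaway snippet, purely illustrative):

```python
# PageFile disk sizing check, using the figures above.
ram_gb = 4                          # memory after doubling from 2GB
pagefile_gb = 1.5 * ram_gb          # static PageFile of 1.5x RAM = 6GB
disk_gb = 8                         # dedicated PageFile disk
free_pct = (disk_gb - pagefile_gb) / disk_gb * 100
print(free_pct)                     # 25.0 - enough free space to keep monitoring happy
```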

So in the vSphere client I chose New Virtual Machine and went with a custom configuration.


For naming purposes, this won't be a production VM so I'll just go with a logical name to differentiate it from any other templates I may create.
For now I'll be placing the system drives on the SSD storage.  Once that starts running low I'll move some of the lower priority ones to the SAS storage; I/O on system disks is generally pretty low aside from during boot-up, as long as the PageFiles are moved off them and no non-standard additional services have been installed on them.

Next, I chose virtual machine version 8, and Windows 2008 R2 as the guest operating system.
Then, for CPUs I went with two sockets and one core per socket.

Dropped the memory down from the default of 4GB to 2GB.
Added a single NIC, ensuring it was on the BackNet.

I left the SCSI settings at their defaults, chose to create a new virtual disk, set the size to 30GB and set disk provisioning to thin provision.
I left Advanced options at their defaults and on the completion screen ticked the box to edit the VM's settings.

Under settings I selected the CD/DVD drive and pointed it to the Windows 2008 ISO I had uploaded earlier, setting it to connect at power on.

I then chose to add additional hardware, selecting Hard Disk.

This second disk will be for the PageFile, so I set it to 8GB and also to thin provision.

After completing that wizard I was ready to boot and configure the template's OS.  I'll cover this in my next post.

Wednesday 30 May 2012

Firewall Deployment

For the firewall appliance I've opted for a product named pfSense.  It's a free, open source, FreeBSD-based firewall with the key feature I was looking for: the ability to do routing and NAT.

The installation ISO is only around 125MB, so I downloaded and unpacked it, then used vSphere's Datastore Browser to create an OS images folder on the SAS storage and upload the ISO to it.

Next I started the new virtual machine wizard.  I've chosen to name this RSMSGFW1. Feel free to guess why, although it's not really important :).


I then chose a couple of vCPUs and 1GB of memory.  For the network config I upped the number of NICs to two and selected both BackNet and FrontNet.

I gave it a 5GB OS drive and completed the wizard.  I then went into the VM's properties, mapped the CD-ROM drive to the ISO I had previously uploaded, and set the drive to be connected at power on.  I then powered on the VM and opened the console.

During boot-up it prompted me to enter the adaptor names for the WAN (FrontNet) and LAN (BackNet) interfaces.  If you don't know the adaptor names, you can disconnect them one at a time and it tells you the name of the one that drops.  After completing this step boot-up finished and I was at the firewall console.  At this point the firewall is running as if it had been booted from CD (in this case the ISO), so there is no way to make changes permanent, but there is an option to install the firewall to disk.

Once that install was completed, I assigned one of my public IPs to the WAN interface and 10.0.0.1 to the LAN interface.


I wanted to test connectivity from the internal network, so I deployed a temporary Windows VM there (I won't go into too much detail about that process at this stage).  I found I couldn't access the Internet, but I could access the firewall's web configuration page.
Reviewing the configuration on that page I could see there was no default gateway listed for the WAN interface.  After adding this in, the test VM on the 10.0.0.x network could access the Internet perfectly.
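For what it's worth, the check I was doing from the test VM boils down to something like this (purely illustrative; in practice I just used a browser and the firewall's web page):

```python
# Minimal outbound connectivity check from a VM on the BackNet.
# If DNS resolution or the TCP connect fails, traffic isn't making it
# out through the firewall (e.g. the missing WAN default gateway above).
import socket

addr = socket.gethostbyname("www.google.com")        # DNS resolution via the firewall
with socket.create_connection((addr, 80), timeout=5):
    print("Outbound connectivity OK via", addr)
```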

Tuesday 29 May 2012

Configuring Networking

Let's take an initial look at the network config of the virtual environment.


You can see there is just a single network, and that network can talk to the outside world.  I've only got 4 IPs on this network, plus I want some form of firewall in front of all Windows machines.  So firstly I renamed the "VM Network" to "FrontNet" and enabled promiscuous mode (setting it to Accept), so that whatever I place on this network has full inbound and outbound communication with the rest of the Internet.

Next, I went through the Add Network wizard, creating a vSphere standard switch and un-ticking all physical network adaptors.  This means machines on this network have no ability to communicate with the Internet (at least via VMware).  I labelled this network "BackNet".  The idea is that I can deploy virtual appliance firewalls, giving them an interface on both networks (the firewalls will be the only machines with interfaces on both BackNet and FrontNet).  I can then configure the VMs on the BackNet with a default gateway of the internal interface of the firewall(s), forcing all outbound traffic through the firewall(s), where I can control what goes in and out.
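I did all of this through the vSphere client, but for the curious, the equivalent scripted against the vSphere API would look roughly like the sketch below (pyVmomi, with placeholder host details and switch name; illustrative rather than something I actually ran):

```python
# Rough sketch: create an internal-only vSwitch and "BackNet" port group via
# the vSphere API (pyVmomi). Host address, credentials and switch name are
# placeholders - I actually did this through the vSphere client GUI.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.example.com", user="root",
                  pwd="********", sslContext=ctx)

host = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True).view[0]
net = host.configManager.networkSystem

# A standard switch with no physical uplinks bound to it has no path to the
# Internet - exactly the isolation wanted for BackNet.
net.AddVirtualSwitch(vswitchName="vSwitch1",
                     spec=vim.host.VirtualSwitch.Specification(numPorts=128))
net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="BackNet", vlanId=0, vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy()))

Disconnect(si)
```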

You'll notice I used the term firewall(s).  My general plan is to follow as many best practices as possible with this lab, except where doing so would incur additional costs.  A number of firewall appliances can provide high availability when deployed in an active/passive configuration.  However, as the focus of this lab is Microsoft software, I'll probably deploy a single firewall for now and possibly look at making it HA at a later point.

Once this configuration was complete, the network configuration of the lab appeared as below:

Sunday 27 May 2012

A first look at the virtual environment

I placed the order for the dedicated server early on a Saturday morning.  On the Redstation website it states:
Order by Monday 12pm - Receive by Tuesday 9am
Order by Friday 12pm - Receive by Monday 9am

So I wasn't expecting anything to happen until at least Monday/Tuesday the following week.  I was, however, pleasantly surprised to find an email in my mailbox by 20:15 the same day stating my server was ready.  Less than 16 hours to provision a custom server, and at the weekend no less. Pretty impressive.

Logging into their control panel, I was presented with two methods for accessing the server: a web interface to connect via Dell's iDRAC management card, and a second login address for connecting via the vSphere client.  I reviewed the config over the iDRAC connection; it's a pretty typical remote management interface showing a server summary, along with a remote KVM utility.  It shouldn't be something I need to use much, as most tasks should be achievable via VMware.

Accessing the VMware management interface I was presented with the link for downloading the vSphere client.  Download and installation were quick and went without a hitch, and I was soon accessing the server via the standard VMware client.

I reviewed the specification and configuration.  All appeared correct as per the order, except I wasn't seeing the 2TB SAS drive.  I raised a ticket via the Redstation control panel, and an engineer was quickly assigned who advised it looked like the drive had been missed in the build.  After confirming I was happy for the server to be rebooted, the drive was installed and the issue was quickly resolved.

One concern I had with the order was whether the two management interfaces would be on addresses from my pool of 4 IPs.  I was pleased to learn this wasn't the case; I'm sure many hosting providers will offer, say, 4 IPs but then deploy a configuration that consumes a number of them, limiting how the pool can be used.

I was now in possession of a server whose hardware build was complete, and could start looking at my software deployment.

Choosing a hypervisor

As part of the process of ordering the dedicated machine I had to decide on a hypervisor.  Again my Microsoft background made my first thoughts tend towards Hyper-V.  I knew for certain I wanted the host OS installed as part of the build done by Redstation; the remote management interfaces would allow me to install my own host OS, but this is usually achieved by mounting an ISO from your local machine, and although technically possible, mounting an ISO over a consumer-grade Internet connection is likely to try anyone's patience, particularly if that ISO is multiple GB in size.

Interestingly, Redstation offered an install of VMware ESXi / vSphere 5.0 Hypervisor for free.  This would have the fairly significant benefit of allowing me to use VMware's Solution Exchange to download pre-built virtual appliances for functions like firewalling, load balancing, or creating a VPN entry point to the lab. Of course, I could probably achieve a similar configuration with ISA/TMG and NLB, but in my experience, products designed for these specific purposes provide a better administration experience and better functionality than that Microsoft combination in these circumstances.  Going this route also lets me gain some experience closer to what a typical network administrator usually deals with, which is always a useful thing.

So in the end I decided to go the ESXi / vSphere 5.0 route.  I can always rebuild that section of the network and go the Microsoft route at a later point; if I were to do that, it would probably end up being an experience more akin to what goes on in the wild (I'd imagine it's more common to see TMG deployed in front of an existing network than deployed first with a new network built up behind it).

Deciding on somewhere to host a lab

Having decided to go ahead with this lab build, my first task was to locate somewhere I could host it.  I decided early on not to deploy at home: I didn't want to mess around with dynamic DNS, wanted more than a single public IP, didn't want to get those individually from an ISP, and wanted to avoid a potentially large initial outlay for the hardware.  My primary criterion was cost.  I'm doing the lab build off my own back, so despite the experience I should be able to gain from the exercise, I don't want to end up spending a small fortune or having to cut the lab short because of spiralling costs.

Having a Microsoft background, and generally being of the view that a primarily Microsoft lab should be deployed the Microsoft way, I first took a look at Microsoft's Azure platform.  They're currently offering a free 90 day trial which would allow a reasonable level of evaluation.  The trial is limited to 750 compute hours per month, which is roughly one VM running around the clock, so as I was planning on deploying a number of VMs I would have to keep spinning them up and down to avoid running up costs.  Ideally I wanted to avoid this: it would mean spending a fair amount of time simply getting the lab into a consistent state every time I wanted to make changes, not to mention the time to test everything was running as expected after each spin-up in case anything came up out of order; without that I might end up assuming changes I had just made were causing an issue and waste time looking for problems with a specific change.
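To put that 750-hour limit in context, a quick bit of arithmetic (the VM count here is just a hypothetical lab size, not a final figure):

```python
# 750 free compute hours per month is roughly one VM running around the clock.
hours_in_month = 24 * 31        # 744 hours in a 31-day month
vm_count = 6                    # hypothetical lab size - not a figure from this build
needed = vm_count * hours_in_month
print(needed, "hours needed if always on, vs 750 in the trial")  # 4464 vs 750
```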

I didn't rule out Azure at this stage though, and looked around for other factors that might make it suitable or unsuitable.  There's quite an in-depth post over on MSDN detailing why it's a bad idea to run domain controllers on Azure.  It basically comes down to the issue that Microsoft can't guarantee a consistent state of the DC if there's a hardware failure.  There's also a second MSDN article stating that the Domain Controller role will simply not work due to the lack of support for UDP traffic.

Having ruled out Azure I decided to take a look at Amazon's EC2 offering.  I was really after somewhere that would take account of the very low CPU / disk utilisation I was expecting in the lab, in order to keep costs as low as possible.  It didn't take me long to come across what I see as the biggest issue with EC2: the complexity of the pricing model.  It was very difficult to come up with a realistic expectation of what this deployment would end up costing.

I wanted to get a better idea of costs, so I then decided to take a look at Rackspace's cloud offering.  They have a fairly good cost calculator, which gave figures of around £38 per VM per month.  For the number of VMs I was looking at this was going to end up a little steep, so again I would have had to look at spinning the VMs up and down each time I wanted to use the lab, which I wanted to avoid.  Rackspace also have a fairly good article explaining the differences between Rackspace and EC2.  Obviously, expect that article to be skewed somewhat in favour of Rackspace, but some of the pitfalls pointed out in the EC2 offering were problematic enough for me to drop EC2 from my list of potential hosts.

I then considered getting a dedicated server as a host and running / managing my own virtualisation layer.  This had the potential to be a better solution in terms of cost, as whatever I'd be paying the hoster wouldn't include licensing for guest OSes; I could use my own TechNet membership for those, which covers lab use.  It also means I could oversubscribe the host (compared to what would be the case for a normal production system) as I wasn't too worried about performance.  This option offered me a way to take account of the lower utilisation levels I was expecting, compared to what a hoster offering guest VMs directly to the public would need to provision (and thus account for in their pricing).

Staying with Rackspace for the time being, I started looking at their dedicated server offerings; pricing was going to start around £598 per month for a 12GB quad core system.  The main things Rackspace push as differentiators from their competitors are their "fanatical support" and their service/availability levels, and I presume the cost of providing these is reflected in their pricing.  For a lab build these are really not big concerns of mine, so I decided to start shopping around to see if I could find something much cheaper.

The criteria I was looking for were a UK hoster who allowed some limited customisation of their offering, ideally with an online cost calculator to avoid the back-and-forth of emails with a sales department.

After some searching I came across Redstation, and after going through their online calculator I had come up with a system with the below spec for only £134.99 per month (+VAT):

  • Processor: Quad Core 3.20GHz HT Sandy Bridge (E3-1230)
  • Memory: 16GB DDR3 ECC
  • Hard Drives: 2 x 240GB Intel Solid State + 2TB 7,200 RPM SAS
  • RAID: H200 RAID Controller, RAID 1
  • IP Allocation: 4 Usable IP Addresses (RIPE /29)
  • Remote Management: DRAC Enterprise (Virtual KVM) + Dedicated Port
  • Bandwidth: 100Mbps Unlimited Usage

I was also particularly impressed by the fact there was no minimum term on the above system.  The SSDs and SAS drive would also give me two storage tiers, allowing me to optimise disk performance for different VMs.  Searching around online I was unable to find anyone else offering a comparable system for a comparable price.  There are some hosters with cheaper systems, but they all seemed to have static configurations, which would mean compromising some aspect of performance.

The final task before I opted for this configuration was to look around online to see if there were any limitations to Redstation's service that weren't necessarily obvious from their website.  The only thing I came across was the fact that Redstation block outbound traffic to the Internet on port 25.  They offer a relay server to their customers, which all outbound mail needs to go through.  This isn't much of an issue for this lab deployment, but for a production system it would mean a few limitations to some Exchange functionality.  For example, message tracking would never show outbound emails reaching the recipient server, you couldn't take advantage of opportunistic TLS for outbound mail, nor could you enforce TLS for specific remote domains.

The only issue this was going to cause for me was that I have a few scripts that do things like MX validation of remote domains, and these probably wouldn't work, but that is a minor compromise compared to the extra costs or hardware limitations I was looking at with other hosters.  Happy with the above specification, I took the plunge and ordered the dedicated server.


Saturday 26 May 2012

First Post

I've been wanting to spend some time building a lab, running through various test scenarios, and exploring the technical possibilities of various Microsoft products for a while now.  Well, I've finally gotten around to it, and over the coming weeks I'll be deploying a range of primarily Microsoft software into a test environment, with the main focus being on Exchange.

Along the way I'll be documenting my progress on this blog.  Hopefully I'll be able to impart some useful tips and tricks to anyone who's interested, and I'll no doubt hit a few issues, which I'll also document along with their resolutions.

There will also be a number of decisions to be made along the way, and I'll try to include the technical reasoning behind them.  This scenario (a lab build) may not be what anyone reading is looking to do, but some of the reasoning may well feed into others' decision-making processes.

Although I'm quite familiar with Exchange Server, some of the other products I'll be taking a look at (such as the newest version of System Center) will be new to me, so this is also a learning exercise.

Well, that's the intro over with.  In the next post I'll cover choosing the base hardware and what I'll be running this deployment on.