Sunday 27 May 2012

Deciding on somewhere to host a lab

Having decided to go ahead with this lab build, my first task was to find somewhere to host it.  I decided early on not to deploy at home: I didn't want to mess around with dynamic DNS, I wanted more than a single public IP and didn't want to buy these individually from an ISP, and I wanted to avoid a potentially large initial outlay on hardware.  My primary criterion was cost. I'm funding the lab build myself, so despite the experience I stand to gain from the exercise I don't want to end up spending a small fortune, or having to cut the lab short because of spiralling costs.

Having a Microsoft background, and generally being of the opinion that a primarily Microsoft lab should be deployed the Microsoft way, I first took a look at Microsoft's Azure platform.  They're currently offering a free 90-day trial, which would allow a reasonable level of evaluation.  The trial is limited to 750 compute hours per month, which means (as I was planning on deploying a number of VMs) I would have to keep spinning the VMs up and down to avoid running up costs.  Ideally I wanted to avoid this: it would mean spending a fair amount of time simply getting the lab into a consistent state every time I wanted to make changes, not to mention the time spent testing that everything was running as expected after each spin-up in case anything came up out of order.  Without that testing I might assume changes I had just made were causing an issue and waste time hunting for problems with a specific change.
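To see why that 750-hour cap rules out an always-on lab, here's a quick back-of-the-envelope calculation (the VM counts are hypothetical, just for illustration):

```python
# Rough check of how many always-on VMs fit within the Azure trial's
# 750 compute-hours-per-month allowance. VM counts are hypothetical.
TRIAL_HOURS = 750  # compute hours included per month in the trial


def hours_needed(vm_count, days=31):
    """Compute hours consumed by vm_count VMs running 24/7 for a month."""
    return vm_count * 24 * days


print(hours_needed(1))  # 744 -- one always-on VM almost exhausts the allowance
print(hours_needed(8))  # 5952 -- even a small always-on lab is far over budget
```

Even a single VM left running round the clock nearly uses the whole allowance, so a multi-VM lab would have to be powered down whenever it wasn't in use.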

I didn't rule out Azure at this stage though, and looked around for other factors that might make it suitable or unsuitable.  There's quite an in-depth post over on MSDN detailing why it's a bad idea to run domain controllers on Azure.  It basically comes down to the fact that Microsoft can't guarantee a consistent state for the DC if there's a hardware failure.  There's also a second MSDN article stating that the Domain Controller role simply will not work due to a lack of support for UDP traffic.

Having ruled out Azure, I decided to take a look at Amazon's EC2 offering.  I was really after somewhere that would take account of the very low CPU and disk utilisation I was expecting in the lab, in order to keep costs as low as possible.  It didn't take me long to come across what I see as the biggest issue with EC2: the complexity of the pricing model.  It was very difficult to come up with a realistic expectation of what this deployment would end up costing.

Wanting a better idea of costs, I then decided to take a look at Rackspace's cloud offering.  They have a fairly good cost calculator, which gave figures of around £38 per VM per month.  For the number of VMs I was looking at, this was going to end up a little steep, so again I would have had to spin the VMs up and down each time I wanted to use the lab, which I wanted to avoid.  Rackspace also have a fairly good article explaining the differences between Rackspace and EC2.  Obviously, expect that article to be skewed somewhat in favour of Rackspace, but some of the pitfalls it points out in the EC2 offering were problematic enough for me to drop EC2 from my list of candidate hosts.

I then considered getting a dedicated server as a host and running and managing my own virtualisation layer.  This had the potential to be a better solution in terms of cost, as whatever I'd be paying the hoster wouldn't include licensing for guest OSes; I could use my own TechNet membership for these, which covers lab use.  It also meant I could oversubscribe the host (compared to what would be acceptable for a normal production system), as I wasn't too worried about performance.  This option offered me a way to take advantage of the lower utilisation levels I was expecting, compared to what a hoster offering guest VMs directly to the public would need to provision (and thus account for in their pricing).
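As a rough illustration of what that oversubscription looks like in practice (the host size, VM names, and memory figures below are all hypothetical, not the actual lab plan):

```python
# Hypothetical memory plan for a dedicated virtualisation host: lab VMs
# sit idle most of the time, so the RAM assigned across guests can safely
# exceed the host's physical RAM in a way a production box couldn't.
HOST_RAM_GB = 16  # hypothetical host size
vm_ram_gb = {"DC1": 2, "DC2": 2, "EX1": 6, "EX2": 6, "SQL1": 4}  # hypothetical guests

assigned = sum(vm_ram_gb.values())
print(f"{assigned}GB assigned on a {HOST_RAM_GB}GB host "
      f"({assigned / HOST_RAM_GB:.2f}x oversubscribed)")
```

A public cloud hoster has to assume every guest may use what it's sold, so their per-VM pricing can't take advantage of this kind of headroom.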

Staying with Rackspace for the time being, I started looking at their dedicated server offerings; pricing was going to start around £598 per month for a 12GB quad-core system.  The main things Rackspace push as differentiators from their competitors are their "fanatical support" and their service and availability levels, and I presume the cost of providing these is reflected in their pricing.  For a lab build these are really not big concerns of mine, so I decided to shop around to see if I could find something much cheaper.

The criteria I was looking for were a UK hoster that allowed some limited customisation of their offering, ideally with an online cost calculator to avoid the back-and-forth of emails with a sales department.

After some searching I came across Redstation, and after going through their online calculator I came up with a system with the below spec for only £134.99 per month (+VAT).

Processor: Quad Core 3.20GHz HT Sandy Bridge (E3-1230)
Memory: 16GB DDR3 ECC
Hard Drive: 2 x 240GB Intel Solid State + 2TB 7,200 RPM SAS
RAID: H200 RAID Controller - RAID 1
IP Allocation: 4 usable IP addresses (RIPE /29)
Remote Management: DRAC Enterprise (Virtual KVM) + dedicated port
Bandwidth: 100Mbps, unlimited usage

I was also particularly impressed that there was no minimum term on the above system.  The SSDs and SAS drive would also give me two storage tiers, allowing me to optimise disk performance for different VMs.  Searching around online, I was unable to find anyone else offering a comparable system at a comparable price.  There are some hosters with cheaper systems, but they all seemed to have static configurations, which would mean compromising on some aspect of performance.
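To put some numbers on the comparison, here's a quick break-even sketch using the two figures quoted earlier (the £38 Rackspace estimate per cloud VM per month, and the £134.99 Redstation server; both treated as ex-VAT and the comparison ignores guest licensing, which TechNet covers anyway):

```python
# Break-even between a dedicated server and per-VM cloud pricing,
# using the ex-VAT monthly figures quoted in the post.
DEDICATED_GBP = 134.99  # Redstation dedicated server, per month
CLOUD_VM_GBP = 38.0     # Rackspace cloud estimate, per VM per month


def cloud_cost(vm_count):
    """Monthly cost of running vm_count always-on VMs in the cloud."""
    return CLOUD_VM_GBP * vm_count


# First VM count at which the cloud works out dearer than the dedicated box
break_even = next(n for n in range(1, 100) if cloud_cost(n) > DEDICATED_GBP)
print(break_even)  # 4 -- from four always-on VMs up, the server is cheaper
```

For a lab of more than a handful of always-on VMs, the dedicated box wins comfortably, before even counting the oversubscription headroom it allows.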

The final task before opting for this configuration was to look around online for any limitations to Redstation's service that weren't obvious from their website.  The only thing I came across was that Redstation block outbound traffic to the Internet on port 25.  They offer a relay server to their customers, which all outbound mail needs to go through.  This isn't much of an issue for a lab deployment, but for a production system it would mean a few limitations on some Exchange functionality.  For example, message tracking would never show outbound emails reaching the recipient server, and you couldn't take advantage of opportunistic TLS for outbound mail, nor could you enforce TLS for specific remote domains.

The only issue this was going to cause me is that I have a few scripts that do things like MX validation of remote domains; these probably wouldn't work.  But that's a minor compromise compared to the extra costs or hardware limitations I was looking at with other hosters.  Happy with the above specification, I took the plunge and ordered the dedicated server.
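The check such scripts rely on essentially boils down to a TCP probe of a remote MX host on port 25, which is exactly what Redstation's filtering blocks. A minimal sketch (this is not the actual script; the MX lookup itself is omitted, so the hostname is assumed to have come from a DNS query):

```python
import socket


def port_reachable(host, port=25, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    From inside a network that filters outbound port 25, this returns
    False for any remote MX host even when that host is perfectly
    healthy -- which is why MX-validation scripts break behind the block.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

From a Redstation server, `port_reachable("mx.example.com")` would report failure for every external mail host, so the scripts would need rewriting to test via the relay instead.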
