Wright Hassall Infrastructure Refresh

During 2009 we are replacing our entire IT infrastructure.

Sunday 1 February 2009

Anatomy of an HP Blade Server

You've all heard the hype about Blade Servers, but I'm sure some of you may be wondering what they look like, inside and out. Well, here it is.

As you can see, it is very narrow, not very tall and probably two-thirds the depth of a regular server. The design is very, very neat and simple. 16 x DDR2 RAM slots at up to 8GB per slot give the potential for 128GB of RAM in each physical server (we've gone for 16GB for now). 2 x quad-core processors per blade at up to 2.7GHz (the spec we've gone for) and 2 x on-board 10Gbit NICs (Network Interface Cards) mean heavily specified but very small, efficient hardware.

On top of this spec you can also specify additional connectivity hardware in the form of 'Mezzanine' cards, which are essentially PCI-Express slots in a different form factor. There are 2 of these slots per Blade and we have utilised both slots on all of our Blades.

Slot 1 is a 2-port 1Gbit NIC (for VMware VMotion and High Availability traffic).
Slot 2 is a 2-port 4Gbit Fibre Channel card (for fast connection to the Fibre Channel SAN).
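
To make the per-blade build easier to picture, here's a minimal sketch in plain Python (my own illustration, nothing to do with HP's tooling; the type and field names are made up) that just captures the figures above and does the capacity arithmetic:

# A rough model of one of our blades, using only the figures quoted above.
# The BladeSpec type and its field names are illustrative, not HP terminology.
from dataclasses import dataclass, field

@dataclass
class BladeSpec:
    ram_slots: int                 # DDR2 slots per blade
    max_gb_per_slot: int           # largest DIMM each slot will take
    fitted_ram_gb: int             # what we have actually installed
    cpus: int                      # physical processors per blade
    cores_per_cpu: int             # quad-core parts
    onboard_10g_nics: int          # built-in 10Gbit NICs
    mezzanine: dict = field(default_factory=dict)  # slot number -> card fitted

blade = BladeSpec(
    ram_slots=16,
    max_gb_per_slot=8,
    fitted_ram_gb=16,
    cpus=2,
    cores_per_cpu=4,
    onboard_10g_nics=2,
    mezzanine={
        1: "2-port 1Gbit NIC (VMware VMotion / High Availability)",
        2: "2-port 4Gbit Fibre Channel (SAN fabric)",
    },
)

ram_ceiling = blade.ram_slots * blade.max_gb_per_slot   # 16 x 8GB = 128GB
total_cores = blade.cpus * blade.cores_per_cpu          # 2 x quad-core = 8 cores

print(f"RAM fitted: {blade.fitted_ram_gb}GB (ceiling {ram_ceiling}GB)")
print(f"Cores per blade: {total_cores}")
for slot, card in sorted(blade.mezzanine.items()):
    print(f"Mezzanine slot {slot}: {card}")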

Thursday 29 January 2009

New Toys (blade servers) up and running!

Well, it's finally happened. Wright Hassall have some top-quality servers in the building. This kit is widely regarded as perfect for what we are trying to achieve, plus a lot more. Lots of connectivity options, lots of flashing lights and enough fans to rival a small aircraft at the nearest airstrip! When those fans run full bore, they shift some serious air...

The outline of the gear is as follows:

2 blade enclosures (housing 9 blade servers), a nice big SAN (Storage Area Network) and a fast tape drive, all attached to the new infrastructure over 4 redundant, FAST links.

Specs and configuration to follow.

Enjoy.

Saturday 24 January 2009

Penultimate Configuration

It's all in and working nicely. Documentation of the overall configuration is being produced by the consultants as I type. Below is a quick outline of how it all fits together:

Server Room (Core)
a. 2 core switches in the server room which the servers will plug into directly.

b. 2 x 10Gbit Ethernet links between them in trunking mode, creating 20Gbit of throughput, with Spanning Tree Protocol so that one link can go down and the two switches still have full communication.

c. 4 x power supplies in each core switch. The switches can function fully on 2 power supplies, so we have massive redundancy here.

d. 2 x UPS-backed power feeds on separate phases, one to each core switch. One phase can fail, taking one core switch out with it, and no loss of connectivity will be experienced anywhere.

e. 5 x 10Gbit fibre links from EACH core switch, one to each floor (10 links in total), providing fully redundant links to every data room.

f. Routing for all 8 VLANs.

g. 1 x 8Gbit trunked Ethernet link into the old switch, which currently has the old servers and printers on it. Soon to be retired.

h. 4 x 10Gbit Ethernet links to the forthcoming 2 x blade enclosures, housing 9 x blade servers (7 x production, 2 x spare). There's a quick tally of these links after this list.
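
For anyone totting up those numbers, here's a tiny Python sketch (purely my own arithmetic on the figures in the list above, not anything pulled from the switch configuration) that tallies the links hanging off the core:

# Tally of the core-layer links listed above. Illustrative arithmetic only.
core_links = [
    # (description,                          count, Gbit per link)
    ("inter-core trunk",                         2, 10),
    ("fibre uplinks to the five data rooms",    10, 10),
    ("trunked link to the old switch",           1,  8),
    ("links to the two blade enclosures",        4, 10),
]

total_links = sum(count for _, count, _ in core_links)
total_gbit = sum(count * speed for _, count, speed in core_links)

for name, count, speed in core_links:
    print(f"{count:>2} x {speed}Gbit  {name}")
print(f"{total_links} links, {total_gbit}Gbit of raw capacity at the core")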

Data Rooms (Tree Branches)
a. 1 x switch in each data room

b. 4 x power supplies in each switch, all on a single UPS-backed phase.

c. 2 x 10Gbit fibre links from the switch in each data room, one into each of the core switches, giving 10Gbit of throughput. Spanning Tree Protocol again allows one fibre pair to break without any interruption to that data room (the little sketch after this list checks exactly this).

d. Between 48 and 144 x 1Gbit links in each data room, one to each desktop machine (currently old workstations, so 90% are only running at 100Mbit until the workstations are replaced).

e. 1 x VLAN per data room, routed at the core in the server room.
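
The point of all those doubled-up links is that any single core switch, or any single fibre run, can fail without cutting a floor off. Here's a minimal Python sketch that models the cabling described above as a little graph and checks exactly that; the switch names are made up for illustration and none of this is taken from the real configuration:

# A toy model of the cabling above: two core switches trunked together,
# plus one data-room switch per floor with a fibre link to EACH core.
# We knock out one device or one link at a time and confirm everything
# left can still see everything else. Names are illustrative only.
CORES = ["core-1", "core-2"]
FLOORS = [f"floor-{n}" for n in range(1, 6)]   # five data rooms

links = {frozenset(CORES)}                      # the inter-core trunk
for floor in FLOORS:
    for core in CORES:
        links.add(frozenset((core, floor)))     # redundant fibre pair

def reachable(start, live_links):
    """Flood-fill over whatever links survive."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for link in live_links:
            if node in link:
                (other,) = link - {node}
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
    return seen

def still_connected(dead_node=None, dead_link=None):
    survivors = {n for n in CORES + FLOORS if n != dead_node}
    live = {l for l in links if l != dead_link and dead_node not in l}
    start = next(c for c in CORES if c != dead_node)
    return survivors <= reachable(start, live)

for core in CORES:                               # lose a whole core switch
    print(f"{core} down: everything else still reachable ->",
          still_connected(dead_node=core))

print("any single link down: everything still reachable ->",
      all(still_connected(dead_link=l) for l in links))

On the topology as described, every single-failure case comes back True, which is exactly the redundancy story the design is going for.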

All of the above equipment is PoE (Power over Ethernet) certified, with sufficient power supplies to run it immediately, ready for a potential upgrade of the phone system to VoIP (Voice over IP). This would mean ridding ourselves of potentially 50% of the cabling in each data room!

Our Illustrious Leader.


Matt (our IT Director), the man who made it all possible, has managed to fund this project through hours of finance consultancy meetings, begging letters, promises of a better world and general all-round persuasion, and has agreed to be photographed. Well, not exactly, but hey.

The look of horror on his face may be due to the sight of the equipment we removed from our production environment. I still can't believe we used this stuff in anger and it didn't totally fail! None of this equipment is managed, none of it routed, nothing. We can all breathe a sigh of relief now!



Server Room Photos And Serious Bandwidth...

Ok, so it's time for some photos of the jewel in the crown, the Server Room.

Cabling will be tidied up on a rainy day, but ultimately this is finished now.

Enjoy.