

Help with mezzanine card choice

Postby Guest » Thu Apr 03, 2008 9:43 am

I am having a hard time figuring out which mezzanine card to use in our half-width blade servers.  We currently run vSphere 4 on rack-mount servers with three four-port NICs installed, for a total of 12 interfaces available to ESX.  We do this for redundancy and increased bandwidth.  If I use the 10 Gb mezzanine card, it looks like it only presents two 10 Gb NICs to the ESX server.  Even though the bandwidth is great, that is a far cry from the 12 NICs we have for redundancy.  The virtual NIC card looks good because it can present up to 128 "physical" interfaces to the ESX server.  I need interfaces for my LAN traffic, my iSCSI SAN traffic, and my console ports.  Which would be my best option?

 

Thanks for any help you can provide.

 

Ken

Guest
 


Re: Help with mezzanine card choice

Postby Guest » Thu Apr 03, 2008 11:03 am

Ken,

 

In vSphere 4, assigning more than one physical NIC to a vSwitch increases redundancy, but it does not increase the bandwidth available to any single VM or VMkernel port.  If four physical 1 Gb NICs are assigned to a virtual switch, a given virtual port is pinned to one uplink; once that 1 Gb NIC reaches a sustained 1 Gb rate, the other NICs cannot pick up the overflow until the sustained rate drops.  With a 10 Gb NIC you won't hit that ceiling until your traffic peaks at 10 Gb.  VMware vSwitches provide concurrent bandwidth, not aggregate bandwidth.
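If it helps to picture "concurrent, not aggregate": with the default port-ID teaming, each virtual port is pinned to a single uplink, so one VM can never push more than that uplink's line rate no matter how many uplinks sit under the vSwitch.  A rough Python sketch of that ceiling (the round-robin pinning and the link speeds are simplified assumptions, not the real teaming algorithm):

[code]
# Toy model of vSwitch teaming: each vNIC is pinned to exactly one uplink
# (default "route based on originating virtual port ID"), so a single VM's
# throughput is capped by that one uplink, not by the sum of all uplinks.

def pin_vnics(vnics, uplinks):
    """Round-robin pinning of vNICs to uplinks (simplified assumption)."""
    return {vnic: uplinks[i % len(uplinks)] for i, vnic in enumerate(vnics)}

def ceiling_per_vm(pinning, uplink_gbps):
    """A VM's ceiling is the line rate of the single uplink it is pinned to."""
    return {vnic: uplink_gbps[uplink] for vnic, uplink in pinning.items()}

vms = ["vm%d" % i for i in range(6)]
four_by_one = {"vmnic%d" % i: 1 for i in range(4)}    # 4 x 1 Gb uplinks
two_by_ten  = {"vmnic%d" % i: 10 for i in range(2)}   # 2 x 10 Gb uplinks

for label, uplinks in (("4 x 1 Gb", four_by_one), ("2 x 10 Gb", two_by_ten)):
    pinning = pin_vnics(vms, list(uplinks))
    print(label, ceiling_per_vm(pinning, uplinks))
# With 4 x 1 Gb no single VM ever exceeds 1 Gb; with 2 x 10 Gb the per-VM
# ceiling jumps to 10 Gb, even though the 4 x 1 Gb setup has "4 Gb" in total.
[/code]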

 

From a redundancy perspective, each of your 10 Gb NICs on the UCS platform is pinned to one of the IOMs in the chassis.  Each IOM has four uplink ports, so there are eight ports in total that the NICs can use.  If you let UCS handle the port pinning, it will tolerate IOM port failures and migrate the traffic to another IOM port.  Each IOM is also connected to a different 61xx fabric interconnect, so an entire leg of the system can fail without disrupting traffic to your servers.  In a failure scenario you could lose a full 6120, one IOM, and three of the four ports on the surviving IOM and still have an active link.
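A quick back-of-the-envelope check of that worst-case scenario (pure Python; the "leg is usable" test is a simplification of the real UCS pinning and failover logic, not a model of it):

[code]
# Two legs (fabric A and B), each with one 6120 and one IOM with 4 ports.
# The blade's NIC-A can only reach fabric A, NIC-B only fabric B.  A NIC has
# a working path while its 6120 is up and at least one of its IOM ports is up.

fabrics = {
    "A": {"6120_up": True, "iom_ports_up": 4},
    "B": {"6120_up": True, "iom_ports_up": 4},
}

def active_links(fabrics):
    return [leg for leg, state in fabrics.items()
            if state["6120_up"] and state["iom_ports_up"] > 0]

# Lose a full 6120 and its IOM on fabric A ...
fabrics["A"]["6120_up"] = False
fabrics["A"]["iom_ports_up"] = 0
# ... plus three of the four ports on the surviving IOM on fabric B.
fabrics["B"]["iom_ports_up"] = 1

print(active_links(fabrics))   # ['B'] -> the blade still has one active link
[/code]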

 

The Palo adapter can present virtual 1 Gb interfaces to your VMs, but it still relies on the same two 10 Gb links to the chassis, so it offers no real advantage from a redundancy perspective.

 

On my demo system I have pulled a full leg of the system without causing any problems for my ESX hosts.  After running ESX hosts with 12+ 1 Gb NICs, the 10 Gb solution is amazing: there are considerable performance benefits for iSCSI/NFS storage and for overall vMotion responsiveness.

 

Hope this helps.

 

Jeff

Guest
 

Re: Help with mezzanine card choice

Postby Guest » Thu Apr 03, 2008 11:43 am

Jeff,

 

Thanks for the information; it was helpful.  I still question the redundancy, though.  With only two interfaces presented to the ESX server, I would need to use one for my LAN and one for my iSCSI SAN traffic.  In doing so I would not have redundancy or failover should an interface go down.  Am I just not getting it?

Guest
 

Re: Help with mezzanine card choice

Postby Guest » Thu Apr 03, 2008 1:22 pm

Ken,

 

With the 10 Gb solution you assign both 10 Gb NICs to a single vSwitch.  You then use port groups/VLANs to segregate your iSCSI traffic onto its own isolated VLAN.  Prior to 10 Gb, the best practice was to isolate iSCSI traffic onto its own physical network so it had full bandwidth.  With 10 Gb, iSCSI can live on the same physical adapters as your production traffic, with port groups/VLANs keeping it isolated.  I am running iSCSI storage in this configuration and have had no issues with it; with DCE I am actually getting much better iSCSI performance.
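For reference, this is roughly what that layout looks like if you script it against the vSphere API with pyVmomi instead of clicking through the client.  The host name, credentials, vmnic names, port group names and VLAN IDs below are placeholders, so treat it as a sketch rather than a drop-in script:

[code]
# Sketch: one vSwitch backed by both 10 Gb uplinks, with VLAN-tagged port
# groups isolating LAN, iSCSI and management traffic.  All names, VLAN IDs
# and credentials are placeholders for illustration only.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx01.example.com", user="root", pwd="password")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]                              # first ESX host found
net = host.configManager.networkSystem

# One vSwitch with both 10 Gb NICs as uplinks (vmnic0/vmnic1 assumed)
net.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic0", "vmnic1"]),
    ),
)

# VLAN-isolated port groups on the same vSwitch (VLAN IDs assumed)
for name, vlan in (("LAN", 10), ("iSCSI", 20), ("Management", 30)):
    net.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name=name, vlanId=vlan, vswitchName="vSwitch1",
            policy=vim.host.NetworkPolicy(),
        )
    )

Disconnect(si)
[/code]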

 

Thanks,

Jeff

Guest
 

Re: Help with mezzanine card choice

Postby Guest » Thu Apr 03, 2008 2:41 pm

Some thoughts to help you:

 

1)  Do you need FC?  If so, forget Oplin/Intel and think Menlo/QLogic/Emulex or Palo/Cisco.

2)  Don't worry about redundancy; it is very well catered for with just two NICs connecting to separate redundant fabrics with a consistent configuration (e.g. same VLANs).

3)  Having fewer NICs is a Good Thing, not a Bad Thing.

4)  If you choose Menlo with the 2 x 10GbE, you will most likely create one vSwitch with two uplinks and port groups for each of your VLANs: COS, VMs (however many you need), VMotion, etc.

5)  You can only apply QoS (guarantees and other settings) per virtual interface, not per VLAN, so if you want to get fancy then Palo is your option.

6)  Most UCS customers are quite happy with 2 x 10GbE NICs ... it's just different, and better.

 

:-)

 

Any Qs?

Guest
 
