Updating the Testbed - External Infrastructure

Between the setup of the original testbed and the beginning of the update process, some NAS vendors also approached us to evaluate rackmount units with 10 GbE capability. This meant that the ZyXel GS2200-24 switch we had been using in our testbed would no longer pass muster. Netgear graciously accepted our request to participate in the testbed upgrade by providing the ProSafe GSM7352S-200, a 48-port Gigabit L3 managed switch with built-in 10 GbE support.

In the first version of the testbed, we had let the ZyXel GS2200-24 act as a DHCP relay and configured the main router (Buffalo AirStation WZR-D1800H) to hand out DHCP addresses to all the NAS units, machines and VMs connected to the switch. In essence, it was a live network in which the VMs and the NAS under test could also access the Internet. With the GSM7352S, we decided to isolate the NAS testbed completely.

The first port of the Netgear ProSafe GSM7352S is connected to the ZyXel switch and acts as the management port. The switch behaves as a DHCP client on this port and obtains a management IP address from the Buffalo router. Ports 1 through 12 remain part of the default VLAN, and clients connected to these ports obtain their IP addresses (of the form 192.168.1.x) via relay from the main router. Ports 13 through 50 are members of a second VLAN, and a DHCP server issuing addresses of the form 192.168.2.x is associated with that VLAN. No routes exist between the 192.168.1.x and 192.168.2.x subnets.
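Since the isolation between the two subnets is the whole point of this configuration, a quick scripted check is handy. The sketch below, run from a client on the 192.168.2.x VLAN, simply pings one address on each subnet and flags anything unexpected. The gateway and router addresses are assumptions for illustration, as are the Windows-style ping flags; it is a sanity check, not part of the testbed's actual tooling.

```python
# Minimal sketch: sanity-check the VLAN isolation from a client on the
# 192.168.2.x test VLAN. The addresses below are assumptions for
# illustration, not documented values from the testbed.
import subprocess

def reachable(host: str, count: int = 2, timeout_s: int = 2) -> bool:
    """Return True if `host` answers ICMP echo (Windows-style ping flags)."""
    result = subprocess.run(
        ["ping", "-n", str(count), "-w", str(timeout_s * 1000), host],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    checks = {
        "192.168.2.1": True,   # assumed DHCP server / switch interface on the test VLAN
        "192.168.1.1": False,  # Buffalo router on the management side: should NOT respond
    }
    for host, expected in checks.items():
        ok = reachable(host)
        status = "OK" if ok == expected else "UNEXPECTED"
        print(f"{host}: reachable={ok} (expected {expected}) -> {status}")
```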

The GbE port associated with the host OS of our testbed workstation is connected to Port 2 of the ProSafe GSM7352S, which allows us to log into the workstation via Remote Desktop from our main network. The NAS under test is connected to ports 47 and 48, which are set up for link aggregation via the switch's web UI. For NAS units with 10 GbE ports, the plan is to connect them to ports 49 and 50 and aggregate those in a similar manner.
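Link aggregation only pays off when several flows are in flight, since the switch typically hashes each flow onto a single member link. The following is a minimal sketch of how aggregate throughput could be probed with parallel iperf streams; it assumes iperf is installed on the client and that an iperf server is listening on the NAS at a hypothetical 192.168.2.100, neither of which is documented in the original setup.

```python
# Minimal sketch, not the testbed's actual benchmark suite: drive several
# parallel iperf streams at the NAS's aggregated link and report each
# stream's summary line. Note that with the usual per-flow LACP hashing,
# streams from a single source IP may all land on one member link; flows
# from several client VMs are needed to load both ports.
import subprocess

NAS_IP = "192.168.2.100"   # hypothetical address of the NAS under test
STREAMS = 4
DURATION_S = 30

procs = [
    subprocess.Popen(
        ["iperf", "-c", NAS_IP, "-t", str(DURATION_S), "-f", "m"],
        stdout=subprocess.PIPE, text=True,
    )
    for _ in range(STREAMS)
]

for i, p in enumerate(procs):
    out, _ = p.communicate()
    # The last line of iperf's client output carries the bandwidth summary.
    summary = out.strip().splitlines()[-1] if out and out.strip() else "no output"
    print(f"stream {i}: {summary}")
```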

All the VMs and the NAS itself sit on the same subnet and can talk to each other while remaining isolated from the external network. Each VM is connected both to the host's internal management network (in the 10.0.0.x subnet) and to the switch (in the 192.168.2.x subnet), so we are able to run all the benchmarks within the isolated network from a Remote Desktop session on the host OS.
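Because every VM is dual-homed, benchmark runs can be coordinated from the host over the 10.0.0.x management network while the traffic itself hits the NAS on the isolated 192.168.2.x subnet. The sketch below illustrates that split; the VM names, addresses, benchmark command, and ssh-based dispatch are all hypothetical, since the article does not describe the actual orchestration tooling.

```python
# Minimal sketch of fanning out benchmark runs from the host OS: each VM is
# reached over the internal 10.0.0.x management network, while the benchmark
# traffic itself targets the NAS on the isolated 192.168.2.x VLAN. All names,
# addresses, and the ssh-based dispatch are assumptions for illustration.
import subprocess

NAS_DATA_IP = "192.168.2.100"        # hypothetical NAS address on the test VLAN

VMS = {
    # VM name: management address on the host-only internal network
    "vm01": "10.0.0.11",
    "vm02": "10.0.0.12",
    # ... remaining client VMs
}

BENCH_CMD = f"run_benchmark --target {NAS_DATA_IP}"   # hypothetical client-side command

def dispatch(vm: str, mgmt_ip: str) -> int:
    """Start the benchmark on one VM over the management network."""
    print(f"[{vm}] dispatching over {mgmt_ip}: {BENCH_CMD}")
    result = subprocess.run(["ssh", f"bench@{mgmt_ip}", BENCH_CMD])
    return result.returncode

if __name__ == "__main__":
    for vm, mgmt_ip in VMS.items():
        rc = dispatch(vm, mgmt_ip)
        print(f"[{vm}] exit code {rc}")
```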
 

Comments

  • extide - Thursday, November 29, 2012 - link

    It is not very clear whether you are actually using the 1TB RevoDrive simply as a 1TB HDD, or if you are indeed using it with the acceleration software and the SSD caching the 1TB HDD.

    So, how is that bit set up?

    Honestly, if I were you guys, I would set things up a bit differently. Since all the VMs are almost identical, you could save a lot of space and make much better use of that 100GB of SSD cache. Use differencing VMDK files, so instead of having 13 copies of a 64GB VMDK, you have one copy of the 64GB VM, along with 13 VMDKs that store the "differences". This way you could probably fit everything into 100GB and either just store it on the SSD natively or use the SSD as an accelerator for the 1TB HDD, but it would have pretty much everything the VMs need/use stored on the SSD. Now, exactly how you set this up varies based on what VM app you are using, but I know it is possible with ones like VMware and Oracle VirtualBox (which is free!).

    What do you think? You could also apply this sort of concept to the rest of the VMs and condense the storage down significantly: use one big SSD for the main file, and then several other SSDs for the difference files, perhaps 4 difference files per 64GB SSD.
  • ganeshts - Thursday, November 29, 2012 - link

    extide, Thanks for the comments.

    No, we aren't using the acceleration software with the RevoDrive Hybrid, as it works only for boot disks.

    I am reading up on Hyper-V differencing disks [ http://technet.microsoft.com/en-us/video/how_to_cr... ], and it definitely looks like a better way to go about the process. I will experiment with the differencing method and see whether things get simplified in terms of storage requirements while also retaining ease of use.
  • yupsay - Friday, November 30, 2012 - link

    I've been using differencing disks like that, saving space and improving performance a bit. One of the downsides to note is that since differencing disks are dynamic, under Windows 2008 R2 they grow in chunks of 2 MB at a time, while on Windows 2008 it would be 512 KB. Problems start when you've got multiple machines running and expanding their VHD footprint. Look out for fragmentation.
  • eanazag - Thursday, November 29, 2012 - link

    I'd like to know when Netgear is going to support 10 GbE over cat 6 Ethernet copper. I have some new Intel copper X-540 T-2 10 GbE NICs and the switch market is incredibly weak for these. Fiber is getting decent attention, but not Ethernet. I'd love to see even low port count switches ~8 ports. I don't care if it takes a whole 1U to pull off. I don't care about LACP today. Give me even a dumb switch. VLANs would be nice. I just need a switch though. I have them direct connected at the moment and I am really losing out on use scenarios.

    All the switches (Dell and some other vendors) that support 10GbE over copper Ethernet Cat 6/a cable are $10,000+ for 24 ports.

    I have the NICs setup between ESXi and Nexenta iSCSI. I am trying to push the NAS on low counts of data streams (ie. low number of VMs to take advantage of the caching and RAID capabilities of Nexenta).
  • pablo906 - Friday, November 30, 2012 - link

    A few weeks ago, the only 10GbE-over-copper switches slated for the near future were from Cisco. This may have changed, but I doubt it. Whenever Motorola comes up with an integrated design incorporating the feature, you'll see a ton of other vendors suddenly supporting it.
  • jhh - Friday, November 30, 2012 - link

    One of the problems is that no one is making 10G switch chips with only 4-8 ports. Broadcom's smallest switch has 24 10G ports. Some of their older chips had 24 1G ports and 4 10G ports, but no one has made those into switches with 10GBase-T ports. Broadcom does have some 4x10GBase-T PHY chips coming in 1Q13, which should help, but I doubt the prices will be extremely low. The 24-port gigabit switches had a cost of close to $2000, so we aren't in the $100 switch range. Then, once you factor in warranty expense, R&D recovery, and room for discounts for big customers, the price is quite often 2x or more of the cost, especially for these high-end items.

    The other option is to use SFP+ and direct connect cables, but that doesn't help with the X-540.
  • d3v1on - Thursday, November 29, 2012 - link

    Hi there, I have the same motherboard and was just wondering if the holes line up on the RV 03 case. As I understand it, Asus has decided to include 3 proprietary holes and despite having an EEB compatible case, those 3 holes wouldn't line up.

    Was this an issue when completing this build?
  • ganeshts - Thursday, November 29, 2012 - link

    It was not much of an issue. I remember some holes didn't line up, but the locations were such that it didn't cause any problems related to the stability of the motherboard inside the chassis.
  • d3v1on - Thursday, November 29, 2012 - link

    Thanks heaps Ganesh. Appreciate the quick reply.
