
Wrapping My Head Around The Nexus 1000V – Part 1

February 11, 2011

****Note – I am NOT in any way, shape, or form a VMware expert. I can’t guarantee you that I will be 100% correct in my terminology or representation of VMware, vMotion, vSphere, etc. I apologize in advance. I am just a network guy trying to understand how the Nexus 1000V ties into the VMware ecosystem. I also understand that companies other than VMware are doing virtualization. Please feel free to correct my inaccuracies via the comments.

Paradigm shifts are coming. Some of them are already here. About 5 or 6 years ago I was first introduced to server virtualization in the form of VMware ESX server. For you old mainframe people, you probably weren’t as impressed as I was when I learned about this particular technology.

When it came to VMware, I wasn’t doing anything fancy. I was just using it to host a few Windows servers. When these boxes were physical, they were only using a fraction of their CPU, memory, and disk space. In most cases, they were specific applications that vendors would only support if they were on their own server. From a networking standpoint, there was absolutely nothing fancy that I was doing. All of the traffic from the virtual machines came out of a shared 1 gig port. For me, VMware was a fantastic product in that it allowed me to reduce power, rack space, and cooling requirements.

I realize that some people will take issue with my use of the term “server virtualization”. To some, software and hardware virtualization are different animals. For the purposes of non-VMware people like myself, the fact that I used VMware to reduce the physical server sprawl means that I refer to it as “server virtualization”.

Fast forward to today. It is getting harder and harder to find a company that isn’t doing some sort of server virtualization. It isn’t just about reducing physical server footprint and maximizing CPU and memory resources. These days, you can achieve phenomenal uptime rates due to things like vMotion. For those who are unfamiliar with vMotion, it is a service within VMware that can move a running virtual machine from one physical host (i.e. ESX/ESXi server) to another. This can happen ahead of hardware maintenance or impending problems on the physical host itself, because of additional CPU/memory resource requirements, or for other reasons that the VMware administrator deems important.

Today, from a networking standpoint, there are 3 options when it comes to networking inside the VMware vSphere 4 ecosystem:

vNetwork Standard Switch – 1 or more of these standard switches reside on a single ESX host. This would be the vSwitch in older versions of ESX. This is basically a no-frills switch. Think of this as managing switches without the use of VTP. You have to touch a lot of these switches if certain VLANs reside on multiple ESX hosts.

vNetwork Distributed Switch – 1 or more of these will reside in a “Datacenter”. By “Datacenter”, I am not referring to a physical location. Rather, in VMware lingo, it is a logical grouping of ESX clusters (made up of ESX hosts). This is the equivalent of running VTP across a network of Cisco switches. You can make changes and have them show up on each ESX host that is part of the “Datacenter”. This particular switch type has several advantages over the standard switch in terms of feature availability. It also allows you to move virtual machines between physical servers via vMotion and have the network policies associated with each machine follow it to the new host.

Cisco Nexus 1000V – Similar to the distributed switch, except it was built on NX-OS and you can manage it almost like you would any physical Cisco switch. It also has a few more features that the regular VMware distributed switch does not have.
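To give you a feel for what “manage it almost like a physical Cisco switch” means, here is a rough sketch of a port profile on the 1000V. The profile name and VLAN number are made up purely for illustration; the idea is that the network team defines the profile in the familiar NX-OS CLI and it shows up in vCenter as a port group that the VMware admin can assign to virtual machines.

! hypothetical port profile - the name and VLAN are examples only
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled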

That’s the basic overview as I understand it. What I had been struggling with was the actual architecture behind it. How does it work? I can look at a physical switch like the 3750 or 6500 and get a fairly decent understanding of it. Not the level I would like to have, but I understand that vendors like Cisco don’t want to give away their “secret sauce” to everyone that comes along and asks for it.

As luck would have it, my company has purchased several instances of the Nexus 1000V and last week, I was able to spend a day with a Cisco corporate resource and one of the server/storage engineers my company employs. I didn’t realize how deficient I was in the world of VMware until I got into a room with these 2 guys and we started talking through how we would design and implement the Nexus 1000V. I kept asking them to explain things over and over. In the end, a fair amount of pictures on the white board caused the light bulb in my head to go active. I still have much reading to do, but for now I understand it a LOT more than I did. Now, let’s see if I can have it make sense to you. 🙂

The Nexus 1000V is basically made up of 2 parts: the VEM and the VSM. If we were to map these 2 things to actual hardware pieces, the VEM (Virtual Ethernet Module) would be the equivalent of a line card in a switch like the Nexus 7000 or a Catalyst 6500. In essence, this is the data plane. The second piece is the VSM (Virtual Supervisor Module). This is the same as the supervisor module in the Nexus 7000 or Catalyst 6500. As you probably already guessed, this is the control plane piece.

Here’s where it gets a bit crazy. The VSM can support up to 64 VEMs per 1000V. You can also have a second VSM that operates in standby mode until the active one fails. In theory, you have a virtual chassis with 66 slots. In the Nexus 1000V CLI, you can actually type a “show module” and they will all show up. Each ESX host will show up as its own module. Will you ever have 64 VEMs on a single VSM? Maybe. However, there are limitations around the Nexus 1000V that make that unlikely.
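To make that a little more concrete, the output of “show module” on the VSM looks roughly like this. This is a simplified sketch from memory rather than a capture from a real system, so treat the column details loosely: slots 1 and 2 are the active and standby VSMs, and each ESX host running a VEM shows up starting at slot 3.

Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------  -----------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok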

The VEM lives on each ESX server, but where does the VSM reside? It resides in its own guest VM. You actually create a separate virtual machine for the VSM when installing the Nexus 1000V. That guest VM resides on one of the ESX servers within the “Datacenter” that the Nexus 1000V controls. You access that guest VM just like you would a physical switch in your network by using the CLI. Once the VSM is installed, the network person can go in via SSH or Telnet and configure away.
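As an example of what some of that initial configuration looks like, here is a rough sketch of the piece on the VSM that ties it to vCenter. The connection name, IP address, and datacenter name are made-up values; the point is simply that this is ordinary NX-OS-style configuration entered over SSH.

! hypothetical vCenter connection - names and addresses are examples only
svs connection MyVC
  protocol vmware-vim
  remote ip address 192.168.1.10
  vmware dvs datacenter-name MyDatacenter
  connect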

Those are the basic components of the Nexus 1000V. There are other things that need to be mentioned, such as how communication happens from the guest VM perspective to the rest of the network and vice versa. Additionally, we need to discuss the benefits of using the Nexus 1000V over the standard VMware distributed switch. There’s a lot more to it than just the management aspect. I will cover that in part 2. Additionally, I plan on doing a write-up on the Nexus 1010 appliance. This allows you to REALLY move the control plane piece out of the VMware environment and put it on a box with a Cisco logo on it.


Nexus 7010 Competitors – Part 2

October 15, 2010

**** Please note that these are my own thoughts and observations and should not in any way be taken to be the opinion(s) of my employer. Additionally, this is a rather long post, so please bear with me. I promise not to waste your time by babbling incessantly about irrelevant things.

Finally! After many hours spent sifting through vendor websites and reading various documents, I have finished my comparison. If there’s one thing I came away with in this process, it’s that some vendors are better than others at providing specifics about their platforms. By far, Juniper was the best at providing in-depth documentation on their hardware and software. Although Cisco has a ton of information out there about the Nexus 7000, I found that a lot of it was more on the architecture/design side and less on the actual specifics of the platform itself. Some vendors still hide documentation behind a login that only works with a valid support contract. In my opinion, that’s not a good thing. Most people research products before they decide to buy, so why put up roadblocks for people like myself trying to do some initial research?

I’ve read MANY brochures, white papers, data sheets, third party “independent” tests (meaning a vendor paid for a canned report that gives a big thumbs up to their product), and other marketing documents in the past couple of weeks. I did not actively seek out conversations with sales people about these products. I did have a couple of conversations around them, and not all the people I talked to were straight sales people; some were very technical. However, I wanted to go off the things that the websites were advertising. Once the list is narrowed down to 2 or 3 platforms, the REAL work begins with an even deeper dive into those platforms.

I wish I could display the whole thing on this website and have it look pretty. Unfortunately, I don’t know how to do that and make it look nice. Remember, I get paid for networking stuff and not my web skills! With that in mind, I have attached a PDF file of my comparison chart. I have the original in Excel format, but WordPress wouldn’t allow me to upload it. If you want a copy, I can certainly e-mail it to you. You can send me your e-mail address via a direct message on Twitter. I can be found here.

What IS included in the spreadsheet.

I would love to say that I did all of this work for the benefit of my fellow network engineers, but I would be lying if I said that. I built this out of a specific need that my employer has or will have in the coming months/years. Because of that, some of the features that were important to me may not be important to you. If you find yourself wondering why I included something, just chalk it up to it being something that I considered a requirement. Having said that, it would be selfish not to share this information with you, so take it for what it’s worth.

When it comes to the actual numbers of things like fan trays and power supplies, I tend to build out the chassis to the full amount it will hold. If it can take 8 power supplies, I will probably use 8. Same with fabric modules. I like to plan with the belief that I will fully populate the chassis at some point, so I want to have enough power, throughput, and cooling on board to handle any new blades. All chassis examined have the ability to run on less than the maximum number of power supplies.

When it comes to throughput rates, you have to distinguish between full duplex numbers and half duplex numbers. Vendors don’t always specify which is which, so you have to dig through a lot of documentation to figure out what they are really saying. Thankfully, marketing people tend to favor the larger numbers, so more often than not the number given is full duplex. In the case of slot bandwidth, I used the half duplex speed. The backplane numbers are all full duplex.
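A quick made-up example of why this matters: if one vendor advertises 80 Gbps per slot as a full duplex number, that is really 40 Gbps in each direction. A competitor quoting 46 Gbps per slot as a half duplex number is actually offering more per-direction bandwidth, even though the headline figure looks smaller. You have to get everything into the same terms before the comparison means anything.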

What IS NOT included in the spreadsheet and why.

If I were to include every single thing these switches support, the spreadsheet would be 10 times bigger than it already is. There are quite a few things that I consider to be basic requirements. These basic things were left out of the sheet to avoid cluttering it up with things you probably already know. For example, does the switch support IPv6? This should be a resounding yes. If it doesn’t, why in the world would I even consider it? The same can be said of routing protocols. They all should support OSPFv2 and RIPv2 at a minimum. Most, if not all, support IS-IS and BGP as well. It is also worth pointing out that I may not even need this switch to run layer 3. I am looking for 10Gig aggregation and am not necessarily concerned about anything other than layer 2. All of these switches also support QoS. Perhaps they each do things a little differently, but the basics are still the basics and I don’t really need a billion different options when it comes to QoS. That may change in a few years, but for now, I am not looking at running anything other than non-storage traffic over these switches.

I think you see my point by now. I could go on and on about what isn’t included. If it is something well known like SSH for management purposes, I don’t need to include it in the list. It’s a given.

Special note on the TOR (Top of Rack) fabric extension.

While I primarily need 10Gig aggregation, another bonus is the ability to have 1Gig copper aggregation as well. However, I don’t want it all coming back to the chassis itself. The Nexus 7010 has the ability, through the Nexus 5000s (of which I already own several), to attach Nexus 2000 series fabric extenders that function as top of rack switches (although a fabric extender isn’t REALLY a switch). This is a nice bonus feature, as I can aggregate a lot of copper connections back to 1 chassis without all the spaghetti wiring that is commonly seen with 6500s and 4500s. In the case of Brocade and Force10, they actually offer the TOR extensions as nothing more than MRJ-21 patch panels. With 1 cable (which is about the width of a pencil) per 6 copper ports, the amount of wiring coming back to the chassis is reduced tremendously.
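As a rough illustration of the math, at 1 cable per 6 copper ports, a rack’s worth of 48 copper connections would come back to the chassis over just 8 of those pencil-width cables instead of 48 individual patch cords.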

Additionally, there is no power consumption at the top of the rack like there is with the Nexus 2000s, and it is a direct link to the top of rack connections, unlike the Nexus model, where I have an intermediate 5000 series switch in between.

One final note. The HP/H3C A12508 is listed on the HP site as the A12508, but when you click into the actual product page, it is listed as the S12508. The two designations are used interchangeably and refer to the same chassis. I have chosen to use A12508 as the model number as much as possible in this post, but my previous post that mentioned the various switches used the letter “S” instead of “A”.

I plan on posting a few more thoughts on this process as it pertains to specific platforms. I was awed by several of the platforms, not just by the hardware itself, but by the approach the company behind it is taking to the data center in general. Any of these platforms will do the job I need them to do. Some will do that job a lot better than others. As for cost, I have only seen numbers on a few of the platforms. That’s important, but not the most important thing. You can read my previous post on this for more clarification on my thought process.

Remember that I am not claiming to be an expert in regards to any of these platforms. I have done many hours of research on them, but there is a chance that some information in this PDF file will be wrong. If you see any glaring errors, please let me know. I promise you won’t hurt my feelings. If anything is marked “Unknown”, rest assured that I looked at every possible piece of literature on the website that I could reasonably find. If you managed to read this far in the post, the file is below. Enjoy!

Nexus 7010 Comparison – PDF File

*****Update – The Juniper 8200 series does support multi-chassis link aggregation. It just requires another piece to make it work. The XRE200 External Routing Engine gives the 8200 this capability. Thanks to Abner Germanow from Juniper for clarifying that!