
Tech Field Day 5 Is Over. Now What?

February 14, 2011

I made it back to Nashville before noon on Saturday. A cross-country red-eye flight with a short layover in Atlanta put me into Nashville just in time. I was able to get a few hours with my kids, dinner with my wife and a bunch of friends from church, followed by dessert and more socializing with all those church friends over at my house. Sunday was full with church, time spent with my father explaining what this San Jose trip was all about (he was very interested in it all), a Cub Scout hike with my son, and more church. I’m still exhausted. I feel like I haven’t slept in days. I’ve had a nagging cough that air travel made worse, and the weather is now 50 degrees warmer than it was when I left last week for California. My co-worker left my company to go work for a well known hardware vendor. His last day was Friday, while I was in San Jose. As luck would have it, we had a major data center outage Friday afternoon. I spent my remaining hours in San Jose on the phone and glued to my laptop staring at switch configs. I didn’t get to say proper goodbyes or even enjoy the final meal with everyone else, as I was constantly jumping on and off a conference bridge to deal with the problems in the data center back home. In the end, the problem turned out to be something outside of my control, so it was an extra kick in the teeth from the data center gods. In spite of it all, I feel like a million bucks!

Let me tell you why.

1. I love technology. – I love it to the core of my being. There is no greater joy for me than to immerse myself in the 1’s and 0’s of networking and consume mass quantities of information. I’ve never understood people who do what I do for a living but have no real interest in technology outside of 8 to 5, Monday through Friday. Maybe that sounds somewhat elitist. Maybe that’s not a realistic attitude to have. I get paid to learn. That’s the coolest thing in the world. I guess I just recognize that opportunity for what it is and want to be around people who think the same way.

I have been a part of IT groups before where a core group of us had similar attitudes about the world of technology. We would feed off of each other, and our efficiency and skill sets advanced much faster than they did in the environments where few people shared that drive and desire. Things change and our careers take us other places. Over time, you start to shift back to what is normal for everyone else. You no longer look at Friday afternoon as an inconvenience because you have to put the toys away and go home for two days. You no longer wake up Monday morning excited to go into work. For a couple of days last week, I got that spark back.

Now, I don’t want you to think I have a depressing life. I LOVE my life. I love what I do for a living. I love just about everything about my life, and I work in a cubicle! My point is that I was in the midst of a large group of technology zealots once again. Over the next couple of days, I would either witness or take part in countless discussions about networking, storage, virtualization, backups, or systems in general. These were discussions with people who were well versed in their respective areas. People who actually thought about technology, as opposed to parroting talking points gleaned from a vendor slide deck. Some of them were published authors. I have a book-collecting addiction, so being around authors rates pretty high on my scale of coolness.

2. I love talking to vendors. – My typical exposure to vendors is via their sales channel or a third-party reseller/integrator. This time, I was able to go straight to the source. I liked the fact that the companies I was exposed to at Tech Field Day 5 ranged from the very large, like Symantec and HP, to the very small, like Drobo and Druva. I also saw companies that fit in between those two groups, like Xangati, Infoblox, and NetEx. I like talking to the vendors because they all want to differentiate themselves from one another. This means that, in general, they have differing points of view on how to solve a problem. By understanding each vendor’s approach, you can make a more informed decision.

I live on the corporate side of IT. If I make a recommendation regarding the network, I need to make sure it is the BEST one possible. Yes, it takes a lot of time and effort, but choices around hardware and software need to be treated with more care than one uses when selecting a brand of breakfast cereal at the grocery store. I’ll talk to just about any vendor that lives within the network space. No matter how insignificant the product or company may seem, I want to know what they do. There is no such thing as being too prepared when it comes to making decisions about your network.

That was Tech Field Day in a nutshell for me: lots of discussions with my peers and lots of discussions with vendors. For now, I am still trying to digest it all. Two full days’ worth of briefings and discussions will take a bit to sink in. If anything, I have a sincere desire to shore up my virtualization and storage knowledge. I just have to find the time to fit it in. Networking on its own is enough to keep me busy for years to come!

I met some really great and SMART people at this event. Several of them I already knew from Twitter, and others I knew from reading their blogs prior to this event. The rest were affiliated with vendors, so I had never heard of them, except for some of the people from the larger companies. My RSS feed list has grown by quite a few entries as a result of this trip.

If I could give any advice regarding this kind of event, it would be this: go register to be a Gestalt IT Tech Field Day delegate. Do it NOW. If you love technology, if you love talking about technology, and if you want to mix it up with vendors in their own back yard, this is the event for you. I was taken care of very well by Claire and Stephen. Nothing was overlooked. Every single vendor that presented seemed genuinely interested in us being there. Nothing was off limits in terms of what you could ask. Of course, there’s no guarantee they are going to answer it; the vendors still have to protect their intellectual property, and rightfully so. Never in a million years would I have imagined that I would be able to engage someone like the CEO of Symantec, ask a direct question, and get a direct answer. I also wouldn’t have imagined myself ever talking to the CEO and CTO of a company like Druva, yet I spent at least 15 minutes talking with them about their company, social media, and other similar things at the Computer History Museum. Without a doubt, it was one of the high points of my trip to San Jose. I could go on and on about other moments, but it wasn’t my intention to ramble on in this post.

Oh, and lest I forget to tie into the title of this post, I should answer the question: “Now what?” Well, I still have to finish preparing for the CCIE Route/Switch lab. However, I find myself wanting to give equal time to ramping up in the VMware and storage networking worlds. I spent several days in the midst of some storage and virtualization experts. What can I say? They made me a convert. Or maybe I just want to understand a bit more of what they were talking about if I ever run into them again. 🙂 In the near future, I want to write a bit about the various vendors. In particular, I will focus on Xangati, HP, Infoblox, and NetEx, since they have more of a network-ish focus and that’s my area. That’s not to say that I won’t comment on the others. I really enjoyed the data deduplication talk from Symantec!

I cannot say thank you enough to everyone who made this event possible. Stephen Foskett played the role of our fearless leader very well. Claire was the driving force behind the scenes, making sure everything went off without a hitch. The audio/visual crew produced some very high quality material even in the face of several technological glitches. The vendors were very gracious in hosting all of us, and I appreciate their interaction from the presentation standpoint as well as their active Twitter presence. Bonus points to Xangati for the bacon and chocolate espresso beans! As for the delegates, well, I am humbled to have been among you. Some of you are used to interfacing with these companies at this level; I, personally, am not. I do look forward to reading your writing and hope to run into you again at some point!

*****Disclaimer*****
As a Tech Field Day delegate for Gestalt IT, my flights, hotel room, food, and transportation were provided by the vendors that presented during this event. This was not provided in exchange for any type of publicity on my part, and I am not required to write about any of the presentations or vendors. I received a few “souvenirs” from the vendors, which were limited to t-shirts, water bottles, pens, flash drives, notepads, and bottle openers.


Wrapping My Head Around The Nexus1000v – Part 1

February 11, 2011

****Note – I am NOT in any way, shape, or form a VMware expert. I can’t guarantee that I will be 100% correct in my terminology or representation of VMware, vMotion, vSphere, etc. I apologize in advance. I am just a network guy trying to understand how the Nexus 1000V ties into the VMware ecosystem. I also understand that companies other than VMware are doing virtualization. Please feel free to correct my inaccuracies via the comments.

Paradigm shifts are coming. Some of them are already here. About five or six years ago, I was first introduced to server virtualization in the form of VMware ESX Server. You old mainframe people probably weren’t as impressed as I was when I learned about this particular technology.

When it came to VMware, I wasn’t doing anything fancy. I was just using it to host a few Windows servers. When these boxes were physical, they were only using a fraction of their CPU, memory, and disk space. In most cases, they ran specific applications that vendors would only support on their own dedicated server. From a networking standpoint, there was absolutely nothing fancy going on. All of the traffic from the virtual machines came out of a shared 1Gb port. For me, VMware was a fantastic product in that it allowed me to reduce power, rack space, and cooling requirements.

I realize that some people will take issue with my use of the term “server virtualization”. To some, software and hardware virtualization are different animals. For the purposes of non-VMware people like myself, the fact that I used VMware to reduce physical server sprawl means that I refer to it as “server virtualization”.

Fast forward to today. It is getting harder and harder to find a company that isn’t doing some sort of server virtualization. It isn’t just about reducing the physical server footprint and maximizing CPU and memory resources. These days, you can achieve phenomenal uptime rates thanks to things like vMotion. For those who are unfamiliar with it, vMotion is a VMware feature that can move a running virtual machine from one physical host (i.e., an ESX/ESXi server) to another. This can happen to balance CPU/memory resource requirements, to evacuate a host for maintenance, or for other reasons that the VMware administrator deems important. (As I understand it, recovering from an outright hardware failure on a host is the job of VMware HA, which restarts the virtual machine elsewhere rather than moving it live.)

Today, from a networking standpoint, there are three options for networking inside the VMware vSphere 4 ecosystem:

vNetwork Standard Switch – One or more of these standard switches reside on a single ESX host. This would be the vSwitch in older versions of ESX. It is basically a no-frills switch. Think of it as managing Cisco switches without the use of VTP: if certain VLANs reside on multiple ESX hosts, you have to touch a lot of individual switches (see the IOS sketch after this list).

vNetwork Distributed Switch – One or more of these will reside in a “Datacenter”. By “Datacenter”, I am not referring to a physical location. Rather, in VMware lingo, it is a logical grouping of ESX clusters (comprised of ESX hosts). This is the equivalent of running VTP across a network of Cisco switches: you can make changes once and have them show up on each ESX host that is part of the “Datacenter”. This switch type has several advantages over the standard switch in terms of feature availability. It also allows you to move virtual hosts between multiple servers via vMotion and have the network policies associated with that host follow it to its new home.

Cisco Nexus 1000V – Similar to the distributed switch, except it is built on NX-OS and you can manage it almost like you would any physical Cisco switch. It also has a few features that the regular VMware distributed switch does not.
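
Since I leaned on the VTP analogy twice above, here is a minimal IOS-style sketch of what I mean (the switch name and VLAN number are made up for illustration):

! With VTP, you define a VLAN once on a server-mode switch...
CoreSwitch(config)# vtp mode server
CoreSwitch(config)# vlan 200
CoreSwitch(config-vlan)# name WebFarm
! ...and every client-mode switch in the VTP domain learns VLAN 200 automatically.
! Without VTP (the standard vSwitch situation), you log into every switch
! (read: every ESX host) and repeat the VLAN definition by hand.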

That’s the basic overview as I understand it. What I had been struggling with was the actual architecture behind it. How does it work? I can look at a physical switch like the 3750 or 6500 and get a fairly decent understanding of it. Not to the level I would like to have, but I understand that vendors like Cisco don’t want to give away their “secret sauce” to everyone who comes along and asks for it.

As luck would have it, my company has purchased several instances of the Nexus 1000V, and last week I was able to spend a day with a Cisco corporate resource and one of the server/storage engineers my company employs. I didn’t realize how deficient I was in the world of VMware until I got into a room with these two guys and we started talking through how we would design and implement the Nexus 1000V. I kept asking them to explain things over and over. In the end, a fair number of pictures on the whiteboard caused the light bulb in my head to go on. I still have much reading to do, but for now I understand it a LOT more than I did. Now, let’s see if I can make it make sense to you. 🙂

The Nexus 1000V is basically comprised of two parts: the VEM and the VSM. If we were to map these two pieces onto actual hardware, the VEM (Virtual Ethernet Module) would be the equivalent of a line card in a switch like the Nexus 7000 or a Catalyst 6500. In essence, this is the data plane. The second piece is the VSM (Virtual Supervisor Module). This is the same as the supervisor module in the Nexus 7000 or Catalyst 6500. As you probably already guessed, this is the control plane piece.

Here’s where it gets a bit crazy. The VSM can support up to 64 VEMs per 1000V. You can also have a second VSM that operates in standby mode until the active one fails. In theory, you have a virtual chassis with 66 slots. In the Nexus 1000V CLI, you can actually type “show module” and they will all show up; each ESX host appears as its own module. Will you ever have 64 VEMs on a single VSM? Maybe. However, there are limitations around the Nexus 1000V that make that unlikely.
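
To give you an idea of what that looks like, here is a rough mock-up of “show module” output on a VSM (the hostname and module count are invented and the formatting is from memory, so treat it as a sketch rather than gospel):

n1000v# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------  -----------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V         active *
2    0      Virtual Supervisor Module        Nexus1000V         ha-standby
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok

Slots 1 and 2 are the active and standby VSMs; every slot after that is an ESX host running a VEM.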

The VEM lives on each ESX server, but where does the VSM reside? It resides in its own guest VM. You actually create a separate virtual machine for the VSM when installing the Nexus 1000V. That guest VM resides on one of the ESX servers within the “Datacenter” that the Nexus 1000V controls, and you access it just like you would a physical switch in your network: via the CLI. Once the VSM is installed, the network resource can go in via SSH or Telnet and configure away.
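
As a small taste of “configuring away”, here is what defining a port profile might look like (the profile name and VLAN number are invented; as I understand it, a port profile is roughly the NX-OS answer to a VMware port group, so consider this a sketch rather than a recipe):

n1000v# configure terminal
! Define a policy template for guest VM-facing virtual Ethernet ports.
n1000v(config)# port-profile type vethernet WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shutdown
! Publish the profile to vCenter as a port group of the same name.
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# state enabled

If I have this right, once the profile is enabled it shows up in vCenter as a port group that the VMware administrator can assign to guest VMs. That split is a big part of the appeal: the network team owns the switch configuration, and the server team simply consumes it.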

Those are the basic components of the Nexus 1000V. There are other things that need to be mentioned, such as how communication happens from the guest VM to the rest of the network and vice versa. Additionally, we need to discuss the benefits of using the Nexus 1000V over the standard VMware distributed switch; there’s a lot more to it than just the management aspect. I will cover that in part 2. I also plan on doing a write-up on the Nexus 1010 appliance, which allows you to REALLY move the control plane out of the VMware environment and put it on a box with a Cisco logo on it.

The Many Hats of the Network Engineer

November 18, 2010

Remember when the network field wasn’t so complicated? Think back to the early 1990s. Wireless for enterprise users was in its infancy. Firewalls seemed to be a bit easier to administer. Virtualization was limited to the mainframe community. A T-1/E-1 cost a billion dollars a month and could provide Internet connectivity for thousands of users. Voice was still confined to its own cable plant, and the PBX was humming along using TDM. RIPv1 was still pretty popular. Hubs made packet captures easy to obtain, but broadcast storms constantly took down segments of the network. Storage involved connecting an external disk array to a server via a SCSI cable. ISDN was what the rich people used at home for Internet access. You know. The good old days.

Well, it seems that a lot has changed since then. While I have no desire to go back to those days, I do miss the simplicity. Or at least what seems simple compared to today. Let’s take a look at what your typical enterprise network person has on their plate. Keep in mind that in some environments, these people also have systems-related duties such as Active Directory administration, Linux/Unix administration, e-mail, databases, etc.

Routing – Static, OSPF, EIGRP, and BGP

Switching – STP and its variants (RSTP, MST, PVST), link aggregation (port channels/EtherChannels)

Wireless – APs (antenna types), controllers, extras (location services, management), 802.11a/b/g/n

Circuits/WAN – T-1s, DS-3s/T-3s, OC-3/12/48 (SONET), Metro Ethernet, ISDN (yes, it’s still out there), Frame Relay (yep, that one too), MPLS

Voice – call routing, phone (station) administration, voice mail, conferencing (audio and video), PRIs, DIDs, signaling, codecs, voice gateways

Other Services – Multicast, load balancing, firewalls, IPS, VPN, WAN optimization, content filters (web, e-mail), network management platforms, QoS, packet capture analysis (e.g., Wireshark, tcpdump), storage networking

Does that about sum it up? Yes, some of those things were being done back in the ’90s, and in some cases even earlier. However, a lot of them are relatively new. Maybe you don’t have to touch all of those things. Maybe you do. For some of the service provider technologies (MPLS, SONET), you may never have to administer that end, but if you’re buying those services, you had better be familiar with them. Perhaps your organization is large enough to break out the security side of things, or the voice side. Maybe you have a dedicated storage group that handles the storage network. If you are lucky, you may even have a dedicated wireless engineer or two, depending on the size of your wireless deployment.

It is a monumental task to become proficient in all of those areas, but wait; there’s more. Many people in the network space also have to become data center/facility engineers, focusing on the following:

Monitoring – temperature, humidity, water leak, smoke, power load levels

Cooling – BTU calculations, hot/cold aisle design, airflow on hardware

Power – Circuit requirements, UPS requirements, generator requirements

Cabling – Sub-floor, above the rack, CAT-5/6/7 differences, patch panel choices/locations, SM and MM fiber differences

Space Requirements – Rack deployments: 2-post, 4-post, full height, half height

Think that’s all? Well, the past few years have added some additional requirements, and more are coming. Things such as:

Virtualization – It has been around for at least five years now in enterprise environments. It’s not going away, and without newer hardware/software from networking vendors, you can’t see what’s going on inside the server farm.

The Return to Layer 2 in the DC – TRILL, and every vendor’s particular flavor of it, aims to resolve the inefficiencies of Spanning Tree and turn your network switches into an intelligent fabric, similar to what storage networks have today via Fibre Channel.

Consolidation of Storage and Data/Voice Traffic – It happened to voice about 10 years ago. Now it is happening to storage. Everything will be on one wire in a matter of years.

Traditional Endpoint Death – No longer will the desk phone, desktop, and laptop rule the network. Cellular phones, tablets, and other compact devices will show up on wireless networks in even greater numbers than they do today. Congratulations, corporate wireless person. You just became a Google, Apple, Microsoft, BlackBerry, HP, Cisco, and Avaya engineer for their mobile product sets.

IPv6 – And you thought planning IPv4 deployments was interesting? The migrations to IPv6 are going to be interesting, too. NAT and 6to4/4to6 tunnels will become commonplace until IPv4 is gone. I realize this is already happening, or has happened, in many other parts of the world. However, in the US, there’s still a LOT of work to be done.
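
For the curious, a basic 6to4 tunnel on a Cisco router can be as simple as the sketch below. The addresses are documentation examples I made up; the 2002::/16 prefix is derived from the router’s public IPv4 address (203.0.113.1 in hex is CB00:7101), which is assumed to live on GigabitEthernet0/0.

interface Tunnel0
 ipv6 address 2002:CB00:7101::1/64
 tunnel source GigabitEthernet0/0
 tunnel mode ipv6ip 6to4
! Send traffic for all other 6to4 sites out the tunnel.
ipv6 route 2002::/16 Tunnel0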

Now, I realize that nobody is going to be an expert in all of these areas. I also know that many employers are not going to require you to even be familiar with all of these things. With hosted data centers, you may never have to deal with a data center build-out, and power and cooling may never be an issue for you. I also know that there are plenty of good consultants out there who specialize in one or more of these areas. Of course, nobody stays at the same company forever, so what you do at company X today doesn’t mean you won’t do a bunch of other things at company Z tomorrow. I guess the point I am trying to make is that our jobs are only going to become more complex in the years to come. The amount of hardware we use may decrease, but the functions within that hardware will increase. I can see a day in which something like WAN optimization is built into the router itself, and I don’t mean via a service module; I mean built into the processors or ASICs themselves. Of course, that’s assuming we’re still using TCP at that time. I don’t even want to contemplate what wireless will be like after 802.11n, because it makes my head hurt just trying to understand how 802.11n works today with multiple antennas.

Start looking at the blueprint for something like the Cisco CCIE Route/Switch (insert any other track as well) or Juniper JNCIE exam and you’ll find that it only covers a portion of what you need to know in this day and age. Anyone who has been involved in that process from start to finish knows how much information you have to absorb to pass. For those who don’t know: it is a TON. Yikes! Still want the job? Maybe becoming a specialist isn’t such a bad idea after all.

Nexus 7010 Competitors – Part 2

October 15, 2010

**** Please note that these are my own thoughts and observations and should not in any way be taken as the opinion(s) of my employer. Additionally, this is a rather long post, so please bear with me. I promise not to waste your time by babbling incessantly about irrelevant things.

Finally! After many hours spent sifting through vendor websites and reading various documents, I have finished my comparison. If there’s one thing I came away with from this process, it’s that some vendors are better than others at providing specifics about their platforms. By far, Juniper was the best at providing in-depth documentation on their hardware and software. Although Cisco has a ton of information out there about the Nexus 7000, I found that a lot of it was on the architecture/design side and less on the actual specifics of the platform itself. Some vendors still hide documentation behind a login that only works with a valid support contract. In my opinion, that’s not a good thing. Most people research products before they decide to buy, so why put up roadblocks for people like myself trying to do some initial research? I’ve read MANY brochures, white papers, data sheets, third-party “independent” tests (meaning a vendor paid for a canned report that gives a big thumbs up to their product), and other marketing documents in the past couple of weeks. I did not actively seek out conversations with sales people about these products. I did have a couple of conversations around them, and not all the people I talked to were straight sales people; some were very technical. However, I wanted to go off what the websites were advertising. Once the list is narrowed down to two or three platforms, the REAL work begins with an even deeper dive into each one.

I wish I could display the whole thing on this website and have it look pretty. Unfortunately, I don’t know how to do that and make it look nice. Remember, I get paid for networking, not my web skills! With that in mind, I have attached a PDF file of my comparison chart. I have the original in Excel format, but WordPress wouldn’t allow me to upload it. If you want a copy, I can certainly e-mail it to you. You can send me your e-mail address via a direct message on Twitter. I can be found here.

What IS included in the spreadsheet.

I would love to say that I did all of this work for the benefit of my fellow network engineers, but I would be lying if I said that. I built this out of a specific need that my employer has, or will have, in the coming months/years. Because of that, some of the features that were important to me may not be important to you. If you find yourself wondering why I included something, just chalk it up to it being something I considered a requirement. Having said that, it would be selfish not to share this information with you, so take it for what it’s worth.

When it comes to the actual numbers of things like fan trays and power supplies, I tend to build out the chassis to the full amount it will hold. If it can take 8 power supplies, I will probably use 8. Same with fabric modules. I like to plan with the belief that I will fully populate the chassis at some point, so I want to have enough power, throughput, and cooling on board to handle any new blades. All of the chassis examined have the ability to run on less than the maximum number of power supplies.

When it comes to throughput rates, you have to distinguish between full duplex numbers and half duplex numbers. Vendors don’t always specify which is which, so you have to dig through a lot of documentation to figure out what they are really saying. Thankfully, marketing people tend to favor the larger numbers, so more often than not, the number given is full duplex. For example, a slot that can move 40 Gbps in each direction at the same time might be quoted as “80 Gbps” full duplex or “40 Gbps” half duplex, so comparing one vendor’s full duplex figure against another’s half duplex figure doubles the apparent gap. In the case of slot bandwidth, I used the half duplex speed. The backplane numbers are all full duplex.

What IS NOT included in the spreadsheet and why.

If I were to include every single thing these switches support, the spreadsheet would be 10 times bigger than it already is. There are quite a few things that I consider to be basic requirements, and those were left out of the sheet to avoid cluttering it up with things you probably already know. For example, does the switch support IPv6? This should be a resounding yes. If it doesn’t, why in the world would I even consider it? The same can be said of routing protocols. They should all support OSPFv2 and RIPv2 at a minimum, and most, if not all, support IS-IS and BGP as well. It is also worth pointing out that I may not even need this switch to run layer 3; I am looking for 10Gig aggregation and am not necessarily concerned about anything other than layer 2. All of these switches also support QoS. Perhaps they each do things a little differently, but the basics are still the basics, and I don’t really need a billion different options when it comes to QoS. That may change in a few years, but for now, I am not looking at running anything other than non-storage traffic over these switches.

I think you see my point by now. I could go on and on about what isn’t included. If something is well known, like SSH for management purposes, I don’t need to include it in the list. It’s a given.

Special note on the TOR (Top of Rack) fabric extension.

While I primarily need 10Gig aggregation, another bonus is the ability to have 1Gig copper aggregation as well. However, I don’t want it all coming back to the chassis itself. The Nexus 7010 has the ability, through the Nexus 5000s (of which I already own several), to attach Nexus 2000 series fabric extenders that function as top of rack switches (although a fabric extender is not REALLY a switch). This is a nice bonus feature, as I can aggregate a lot of copper connections back to one chassis without all the spaghetti wiring that is commonly seen with 6500s and 4500s. In the case of Brocade and Force10, the TOR extensions are nothing more than MRJ-21 patch panels. With one cable (about the width of a pencil) per six copper ports, the amount of wiring coming back to the chassis is reduced tremendously.

Additionally, there is no power consumption at the top of the rack like there is with the Nexus 2000s, and it is a direct link to the top of rack connections, unlike the Nexus model where I have an intermediate 5000 series switch in between.

One final note. The HP/H3C A12508 is listed on the HP site as the A12508, but when you click into the actual product page, it is listed as the S12508. These names refer to the same chassis. I have chosen to use A12508 as the model number as much as possible in this post, but my previous post that mentioned the various switches used the letter “S” instead of “A”.

I plan on posting a few more thoughts on this process as it pertains to specific platforms. I was awed by several of the platforms, not just by the hardware itself, but by the approach the company is taking to the data center in general. Any of these platforms will do the job I need them to do. Some will do that job a lot better than others. As for cost, I have only seen numbers on a few of the platforms. Cost is important, but not the most important thing. You can read my previous post for more clarification on my thought process.

Remember that I am not claiming to be an expert on any of these platforms. I have done many hours of research on them, but there is a chance that some information in this PDF file is wrong. If you see any glaring errors, please let me know. I promise you won’t hurt my feelings. If anything is marked “Unknown”, rest assured that I looked at every piece of literature on the vendor’s website that I could reasonably find. If you managed to read this far, the file is below. Enjoy!

Nexus 7010 Comparison – PDF File

*****Update – The Juniper 8200 series does support multi-chassis link aggregation. It just requires another piece to make it work. The XRE200 External Routing Engine gives the 8200 this capability. Thanks to Abner Germanow from Juniper for clarifying that!