I have a new home!

I’ve decided to shed the limitations of a free hosting service and actually pay for hosting. While I wish I could just grab a domain with “Network Therapy” in the title and be done with it, it is already taken. Evidently there are legitimate uses of “network therapy” that have nothing to do with 1’s and 0’s. Therefore, I decided to make the jump to an entirely new domain name. I figure it is better to do it now while my post count is relatively low. From now on, you can find me at www.insearchoftech.com. I can do a bit more now that I am paying for it, so I hope to post a bit more frequently. I am by no means a website whiz kid, so I opted to keep using WordPress.

Categories: Uncategorized

HP Networking – Part 2 (More Vision…)

If you haven’t read my first post on HP Networking, you can read it here. That post covered the marketing aspect. In this second post, I wanted to talk about the technical approach HP is taking. However, so much information was presented before the technical networking talk that I couldn’t cover it all in the first post. Therefore, this post will also be more marketing-type content. Sorry to those of you who hate marketing, but at least I have no slide deck to torture you with.

Let me give you a rundown of the 4 different speakers we listened to from HP; I probably should have covered this in the first post. I mention these people just to let you know how much information we had to consume within the several hours HP presented to us. If you want to see the presentations I saw, you can watch the videos here. The HP videos are the last 2 in the list. That’s 3 hours’ worth of content from HP alone!

Over the course of several hours with HP during Tech Field Day 5, there were 4 different speakers. Frances Guida led off with the overall HP strategy. Jeff DiCorpo gave a very interesting talk on HP’s de-duplication approach with respect to storage. I purposely avoided writing about that because there were storage professionals within the Tech Field Day delegation who are far more capable of covering it than I am as a non-storage guy. Jay Mellman kicked off the networking marketing pitch. Finally, Jeff Kabul spent the remainder of the time in a technical discussion on HP networking. Jeff is a technical marketing engineer with more emphasis on the technical than the marketing (his words).

Now that you have an idea for the presenter lineup, let me pick up where I left off in my first post…….

Throughout the presentations from HP, you REALLY get the feeling that they only look at Cisco as their competition. Everything was framed in the context of pulling share away from Cisco, or doing things better than Cisco. In light of that, it was no surprise when Jay Mellman mentioned that all 6 of HP’s main data centers are Cisco-free. I think they are really proud of that fact, and maybe they should be. Is there any better way to show your customers, or potential customers, that you are serious about your networking products than to “eat your own dog food” in your production environment?

Then, it got REALLY interesting. Jay alluded to a recent Gartner report entitled “Debunking the Myth of the Single-Vendor Network” in which Gartner states that it is cheaper to have more than one vendor supply your network gear. Jay mentioned that Cisco has made people very lazy about correct network design and that bringing in a second vendor forces an organization to do proper network design. I am going to assume that was a reference to some of the proprietary things Cisco has developed, like EIGRP and HSRP.

One of the delegates, Tom Hollingsworth (@networkingnerd), asked Jay what the difference was between proper network design and lazy network design. Tom mentioned that ProCurve had historically been edge-centric and that perhaps HP felt switching decisions should be made closer to the edge, as opposed to Cisco, which puts more emphasis on the core. Jay stated that Cisco does that because they make a lot more money selling core switches than they do edge switches. According to Jay, when it comes to Cisco pushing core switching, quote: “It is as much a business model as it is an architectural model.”

HP believes they have a better approach to architecture than Cisco. Maybe they feel that way when compared to the other networking vendors, but again, I get the feeling they are only interested in being better than Cisco. They also believe people are going to do more evaluation than they have in the past.

HP realizes they aren’t going to hit a bunch of home runs and get forklift upgrades from Cisco to HP. They are just looking to get a foot in the door. Maybe they will win a few deals outright, but for the most part, they will have to squeeze their way into Cisco dominated networks piece by piece. BMW was a good example for them. What started out as a small wireless project in a few dealerships blew up into HP getting a piece of the BMW enterprise infrastructure. HP isn’t the only vendor to work the “foot in the door” angle. I’ve talked to several networking vendors in the past year and they are all trying this approach. Get a box or two in the datacenter or on the edge and slowly grow their presence over time. To me, that’s the best strategy. Let an organization get comfortable with you. Then, when there’s a problem and a vendor like Cisco cannot solve it, you get to ride in on the white horse and save the day with your product that CAN solve the problem.

With all of this talk of HP believing they did things better than Cisco, an opportunity to ask HP about voice, or unified communications, came up and I took it. I asked Jay if HP was going to do anything in the realm of voice. Granted, they have an existing product from 3Com called VCX, but in light of HP’s increasing relationship with Microsoft around unified communications, I didn’t have a good feel for what HP was going to do. The voice/UC offering from Cisco is pretty solid from a stability and feature standpoint, so it would be harder for HP to chip away at that sector than it would be in the realm of switching.

HP has decided they don’t want to be in the voice business long term. Jay indicated that unified communications (i.e. voice) is, and I quote, “bifurcating into applications and infrastructure”. Kudos to Jay for using an obscure word like “bifurcating”. To be quite honest, I had to look it up.🙂 It means “the splitting of a main body into two parts”. HP has taken the approach that voice is nothing more than an application. They want to focus on the infrastructure that provides transport for that voice traffic, but they don’t want to be involved in developing the platforms that manage and create the voice traffic. Their goal is to identify areas like voice that they consider applications and work with third parties. While I tend to agree that it makes more sense for HP networking to focus on the infrastructure, it seems to me that HP is one of those companies that could actually put out a voice solution that would work. They have all of the pieces to make it happen: networking, server hardware, applications expertise, etc. Perhaps doing that would take several years of development on their part, and they obviously want to remain focused on other things.

I have covered everything (minus the storage de-duplication talk) up to the technical discussion from HP. In the next post, I will jump into the nerdier things. There was so much meaty information in the discussions leading up to the technical presentation that I wanted to re-hash the points I found most interesting. The more time I spend in the industry, the more interested I get in the non-technical side of the different vendors out there. That’s not to say that I don’t like the very technical things, because I do. I just think that if you are going to devote a substantial amount of time to learning a vendor’s technology (and we all do), you need to make sure that technology is going to be around for more than a year or two. Understanding where company XYZ’s focus is will go a long way in determining what you need to focus on and what you need to let go the way of the dinosaur.

So……next post on HP will be more technically focused and this time I mean it.🙂

*****Disclaimer: As a delegate for Tech Field Day 5, my flight, food, lodging and transportation expenses were paid for in part by HP. I am under no obligation to write anything regarding HP, either good or bad. Anything I choose to write reflects my opinions, and mine alone. **********

Categories: hp, vendors

HP Networking – The Vision (As I Understand It)

March 4, 2011

As part of Tech Field Day 5, I got a chance to sit in on multiple briefings from HP. I was very interested to hear about their particular product set and how it fits within the data center. The following are my thoughts on HP’s networking solution.

According to HP, one of the biggest problems facing their customers is “IT sprawl”. As a result of this sprawl, silos are created. The servers end up in a server group. Storage ends up in a storage group. The same goes for the network, database, security, and so forth. Silos, in the opinion of HP, are a bad thing. They cause you to lose sight of the bigger picture.

I don’t know that I agree with that. Silos in and of themselves are not a bad thing. It takes a fairly high degree of technical ability to oversee just one of those previously mentioned areas in a decent-sized enterprise network. I fail to see how you could have anything but silos. I know there are people in the industry who think architects should not have a specialty and should be able to design anything at a high level. I call those people crazy. As you go further down the chain into engineering, support, and implementation/deployment, the level of technical ability in a specific area becomes really important. It isn’t realistic to have people functioning within multiple silos unless the level of technical proficiency you require isn’t that great. As for the big picture, that’s what management is for. My job is to ensure the network is running. That’s a tough enough job in itself. Perhaps I misunderstood what HP was trying to say. The only cross-silo entity I want to see is the help desk.

I have been in environments where you took the various tiers and put them all together under one common manager. Instead of putting all the network people together, you put the support people together, the implementations people together, the engineers together, the architects together, etc. The problem with this approach is that I always needed to interact more with people in my networking silo than with people who were in the same tier as me but were storage, server, or security focused. I worked more with people outside of my group than with people within my group. Perhaps other people have different experiences, but from an efficiency standpoint, I favor the silo.

That was just within the first 10 minutes of the HP pitch, and I wouldn’t expect to hear much of a difference if another large vendor were presenting. Sprawl is a HUGE problem that things like virtualization have helped address. So what is it about HP that makes them different? Why should you choose them over another vendor when it comes to a networking solution? In HP’s view, there are 3 reasons why.

1. Strong IP in all domains of IT. – You can’t really argue this one. HP has products in just about every major sector of IT. They believe that the only way to present an overall working solution to the customer is to have a fundamental understanding of all things IT. They have a LOT of smart people working for them (as do ALL major vendors), and those people produce a variety of products that make money as well as make our lives easier from a technology standpoint. Check out this link for some proof of that: http://h30507.www3.hp.com/t5/Data-Central/HP-Labs-Releases-2010-Annual-Research-Report/ba-p/88265

2. Open integration – HP continually hammered away at this point throughout their presentations. Everything they do, they want to be open and standards based. This was their attempt to contrast themselves with Cisco, which people constantly harp on for all of its proprietary protocols and technology. The problem with preaching the “standards” and “openness” mantra is that you had better go to great lengths to ensure there isn’t a hint of anything proprietary in any of your hardware or software. For the most part, HP can make that claim. However, if you dig deep enough, you’ll find that HP has proprietary implementations of certain things. I don’t necessarily think it is that bad of a sin to have some proprietary element to your architecture. Key word being “some”. Juniper is doing it. Cisco, of course, does it. Brocade does it. They all pretty much do it in one form or another. I think you can reach a point where you are so “standards” focused that you end up like the United Nations. It’s a great idea, but let’s face it: nobody goes to the UN expecting them to do anything in an expedient and efficient manner.

I will say this about HP’s desire for open integration: they want to meet the needs of their customers in as many areas as reasonably possible. For example, in the realm of storage, HP can integrate with Fibre Channel, iSCSI, and FCoE. In short, they want to give you options.

3. Services approach – Basically, wherever you want to do business, HP will work with you. If you want everything on your local premises, they’ll help out. Outsourced environment? They can help with that too. Even if you are looking at cloud providers, HP can assist with that.

During HP’s presentation, their head of marketing for networking, Jay Mellman, said some things that interested me greatly. Jay said the following, and I am paraphrasing:

“HP has to produce first-class technology and HP will never get away with taking second-hand infrastructure and slapping it together. Other business lines (server, storage) are counting on HP networking to produce a quality product or they’ll get the product elsewhere.”

Maybe I misunderstood, but the impression I got was that if the networking group produces slop, the other parts of the company won’t use it. In other words, it looks like they only eat their own dog food if it tastes good.

Jay had some more thoughts that he shared with us. He said that it is not about a gold plated network or 100% uptime anymore. As far as customers go, that’s a given. What it is about is the following:

“How do I deliver the right set of services to my customer at a given point in time with the right security at the right cost and then tomorrow morning flip it to a different set of services?”

HP wants to be number 1 in networking. They lead in every other one of their sectors like servers and laptops. They have the marketing know-how and a growing number of people out there who are getting tired of paying Cisco’s premium. The question is, do they have the right technology to pull it off? I’ll leave you with that question to ponder. My next post will focus less on the philosophical marketing stuff and more on the technology that HP is bringing to the table. Stay tuned……

*****Disclaimer: As a delegate for Tech Field Day 5, my flight, food, lodging and transportation expenses were paid for in part by HP. I am under no obligation to write anything regarding HP, either good or bad. Anything I choose to write reflects my opinions, and mine alone. **********

Categories: hp, vendors

Thoughts on Infoblox

February 20, 2011

As part of Tech Field Day 5, I received a briefing from Infoblox on their product line. They have some interesting products that revolve around making your life easier in the realm of network services management and network device management. While the products in and of themselves are compelling, the names affiliated with this company are just as interesting.

The VP of Architecture at Infoblox is none other than Cricket Liu. Anyone who has delved into BIND or Microsoft DNS should be familiar with Cricket. I read “DNS and BIND”, which Cricket co-authored with Paul Albitz, well over 10 years ago. It’s an industry standard text as far as DNS goes.

In addition to Cricket Liu, another name affiliated with Infoblox, albeit indirectly, is Terry Slattery. Those of us in the network world who keep up with the Cisco CCIE program should be familiar with Terry. He’s CCIE number 1026. Essentially, he’s the first person to pass the lab. CCIE 1025 belongs to Stuart Biggs, who wrote and administered the first CCIE test. The room the first lab was in happened to be numbered 1024. Terry Slattery is the guy who founded Netcordia and created NetMRI. Netcordia was acquired in May of 2010 by Infoblox.

A third name you probably aren’t familiar with is Stuart Bailey. He’s the founder and CTO of Infoblox. As he himself said during the session with Tech Field Day, he came straight out of academia at the University of Illinois at Chicago and founded Infoblox in 1999.

Infoblox has a fairly straightforward value proposition. Organizations are spending countless hours deploying and administering DNS, DHCP, IP address management, and network configuration/policy management solutions. They aim to solve that with a couple of different products.

First, we have IPAM for Microsoft DNS/DHCP. IPAM is their IP address management product and it does 3 core things:

1)      Manage IP address usage. – With a fair amount of eye candy, you can see the status of your entire IP addressing space on your network. By giving you visual maps of IP address usage, you can quickly find the gaps. Need an address allocation of 45 IPs? You can find a group that large rather easily (the quick bit of subnet math after this list shows the size of block you would be hunting for).

2)      Manage Microsoft DNS servers. – IPAM can manage all of your Microsoft DNS servers in a central location.

3)      Manage Microsoft DHCP servers. – In a large organization, you might have dozens of DHCP servers. Additionally, you may be concerned about failover capabilities and want to ensure every location has a backup DHCP server provisioned in the event of a failure. IPAM can take care of that for you from a central administrative site.
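
As an aside, here is a minimal sketch of the subnet math behind that “45 IPs” example. This is not Infoblox code, just plain Python I put together for illustration: it finds the smallest IPv4 prefix length with room for a requested number of hosts.

```python
import math

def smallest_prefix_for(hosts):
    """Return the smallest IPv4 prefix length with room for `hosts` usable addresses."""
    needed = hosts + 2                       # add the network and broadcast addresses
    return 32 - math.ceil(math.log2(needed)) # round up to the next power of two

print(smallest_prefix_for(45))   # -> 26, i.e. a /26 (64 addresses, 62 usable)
```

A /26 comfortably covers the 45 addresses, and spotting a free block that size in your address space is exactly the sort of gap the IPAM maps are meant to make obvious.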

Second, we have NetMRI. This product came with the acquisition of Netcordia in 2010. NetMRI does what other products like SolarWinds Orion NCM and HP Network Automation software do: it manages the configuration state of your various network devices. With its ability to talk to multiple vendors, there isn’t a lot that NetMRI cannot do. It does several things, but here are the core ones:

1)      Archive device configurations. – If you lose a device due to hardware failure, you are probably going to want to put the same configuration on the replacement device. NetMRI can ensure that device configuration backups are done on a regular basis. Any changes made to those devices are logged and over time, you can see what changes were made, who made them, and when they were made. This comes in handy when you need to know specifically when a certain change was made. You won’t always get that from the device itself. Perhaps Juniper devices running JunOS are an exception to the rule as I believe they store a large number of previous configurations on the device. However, if that device is dead, that won’t do you any good unless the configurations are stored on some kind of removable flash memory.

2)      Deploy mass changes to devices. – Let’s say your organization has 500 switches on the network and you need to change the NTP settings. Do you want to do that manually? Do you want to build a script to automate it? For most network people, those are not options. There will always be people out there who excel in automation and can write a script in Perl or some other language, extract the device list from a file, and make the changes (there’s a rough sketch of that kind of script after this list). For the rest of us, you use something like NetMRI.

3)      Enforce device policies. – Whether it is firewalls, switches, or routers, you typically have certain things that are always configured on your devices. Some of these are done for security purposes. Others are done for network stability. Imagine that you have a strict requirement for an access list to be applied to all Internet facing interfaces. If someone were to come along and remove that access list from an Internet facing interface, as long as you have a policy configured to enforce that requirement, NetMRI would change the interface configuration back to the way it was before someone changed it. It could then notify you that a policy violation had occurred.

4)      Automatic device configuration. – This goes hand in hand with the policy enforcement, but is worth discussing since the benefit here has to do with initial deployment. Imagine a company that has a bunch of remote sites that are relatively similar in nature. Retail, healthcare, and hospitality are a few industries that fit this scenario. If I can simply apply an IP address to a device along with a local user account or SNMP strings, I can have NetMRI do the rest. Why spend time configuring a dozen switches when it can be done through pre-defined policies? How much is that time savings worth to the company?
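
To make the “roll your own script” alternative from item 2 concrete, here is a minimal sketch of what that looks like without a tool like NetMRI. It assumes SSH-reachable, Cisco-style switches and uses the third-party paramiko library; the hostnames, credentials, and NTP addresses are placeholders, and the crude sleep-based pacing is exactly the kind of fragility a real product takes care of for you.

```python
import time
import paramiko

DEVICES = ["switch01.example.com", "switch02.example.com"]   # hypothetical inventory
COMMANDS = [
    "configure terminal",
    "no ntp server 10.0.0.1",     # old NTP server (placeholder)
    "ntp server 10.0.0.2",        # new NTP server (placeholder)
    "end",
    "write memory",
]

def push_commands(host, username, password):
    """Open an interactive SSH session to one device and send the change."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    shell = client.invoke_shell()
    for cmd in COMMANDS:
        shell.send(cmd + "\n")
        time.sleep(1)             # naive pacing; no prompt detection or error checking
    client.close()

for device in DEVICES:
    push_commands(device, "admin", "changeme")
```

Multiply that by 500 switches, then add error handling, logging, and an audit trail, and the appeal of letting a purpose-built tool do it becomes clear.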

Infoblox appliances are able to interface with each other in what is known as “Grid Technology”. You can create a small ecosystem of Infoblox products and have them interact with each other. The main focus of the grid appears to be survivability. Multiple appliances can communicate with each other and provide redundancy. If one appliance fails, other appliances in the grid can take over. Every indication I got from the in-person sessions, as well as research in their documentation, leads me to believe that this is strictly related to IPAM. NetMRI can run on a physical or virtual appliance. Although I know it interacts well with IPAM, I don’t think it is a part of the survivable grid.

One final product worth mentioning is IPAM Insight. Although it is designed to map out your network and give you better insight into the connections, one of the side benefits is that it gives you the ability to track down IP addresses and MAC addresses to an individual switch port. I would assume this is a function built into NetMRI, but maybe not. It is built in to some of the competing products. Anyone who has chased down a MAC address that is flapping would instantly see the value in something like this.

What’s the value in all of this?

To be rather simplistic, the value prop from Infoblox is “time”. How much is your engineer’s time worth? Or, to be more brutally honest, how many fewer engineers would you need if you had centralized IP, DNS, DHCP, and network device configuration management? How much is a properly documented network worth?

If you are already in a highly structured environment with defined IP subnets and standard device configurations, you might not see much value in what Infoblox provides. My personal opinion is that no matter the size or state of your network, NetMRI is a solid tool that should be looked at. If you already use one of the competing packages (SolarWinds Orion NCM, HP Network Automation software/Cisco NCM, CiscoWorks LMS, etc.), there’s probably not going to be a compelling reason for you to switch to NetMRI. All of those products tend to do the same thing with some minor variations. As for the IP, DNS, and DHCP management, it will only be beneficial in those environments where good practices and documentation do not exist. If your environment is VERY large and you have a million different hands in the pot, IPAM might be a good thing. You’ll be able to lock things down a bit easier, as well as use one central location for administration. If you have everything laid out properly in your Microsoft Active Directory environment, you’ll probably have a hard time selling this to management. The native tools from Microsoft do a decent job of providing usable information. Fortunately for Infoblox, there are tons of those environments that are not managed properly.

Let me know in the comments if you agree, disagree, or need to point out any errors.

*****Disclaimer: As a delegate for Tech Field Day 5, my flight, food, lodging and transportation expenses were paid for in part by Infoblox. I am under no obligation to write anything regarding Infoblox, either good or bad. Anything I choose to write reflects my opinions, and mine alone. **********

Categories: vendors

Tech Field Day 5 Is Over. Now What?

February 14, 2011

I made it back to Nashville before noon on Saturday. A cross-country red-eye flight with a short layover in Atlanta put me into Nashville just in time. I was able to get a few hours with my kids, dinner with my wife and a bunch of friends from church, followed by dessert and more socializing with all those church friends over at my house. Sunday was full with church, time spent with my father explaining what this San Jose trip was all about (he was very interested in it all), a Cub Scout hike with my son, and more church. I’m still exhausted. I feel like I haven’t slept in days. I’ve had a nagging cough that air travel made worse, and the weather is now 50 degrees warmer than when I left last week to go to California.

My co-worker left my company to go work for a well known hardware vendor. His last day was Friday, when I was in San Jose. As luck would have it, we had a major data center outage Friday afternoon. I spent the remaining hours in San Jose on the phone and glued to my laptop staring at switch configs. I didn’t get to say proper goodbyes or even enjoy the final meal with everyone else, as I was constantly jumping off and on a conference bridge to deal with the problems in the data center back home. In the end, the problem turned out to be something outside of my control, so it was an extra kick in the teeth from the data center gods. In spite of it all, I feel like a million bucks!

Let me tell you why.

1. I love technology. – I love it to the core of my being. There is no greater joy for me than to immerse myself in the 1’s and 0’s of networking and consume mass quantities of information. I’ve never been one to understand people who do what I do for a living and have no real interest in technology outside of 8 to 5 Monday-Friday. Maybe that sounds somewhat elitist. Maybe that’s not a realistic attitude to have. I get paid to learn. That’s the coolest thing in the world. I guess I just recognize that opportunity for what it is and want to be around people who think the same way.

I have been a part of IT groups before where a core group of us had similar attitudes regarding the world of technology. We would feed off of each other, and our efficiency and skillsets advanced much faster than in all the other environments I have been in, where not a whole lot of people shared the same drive and desires. Things change and our careers take us other places. Over time you start to shift back to what is normal for everyone else. You no longer look at Friday afternoon as an inconvenience because you have to put the toys away and go home for 2 days. You no longer wake up Monday morning excited to go into work. For a couple of days last week, I got that spark back.

Now, I don’t want you to think I have a depressing life. I LOVE my life. I love what I do for a living. I love just about everything about my life, and I work in a cubicle! My point is that I was in the midst of a large group of technology zealots once again. Over those couple of days, I would either witness or take part in countless discussions regarding networking, storage, virtualization, backups, or systems in general. These were discussions with people who were well versed in their respective areas. People who actually thought about technology as opposed to parroting talking points gleaned from a vendor slide deck. Some of them were published authors. I have a book-collecting addiction, so being around authors rates pretty high on my scale of coolness.

2. I love talking to vendors. – My typical exposure to vendors is via their sales channel or a third party reseller/integrator. This time, I was able to go straight to the source. I liked the fact that the companies I was exposed to at Tech Field Day 5 ranged from the very large, like Symantec and HP, to the very small, like Drobo and Druva. I also saw the companies that fit in between those 2 groups, like Xangati, Infoblox, and NetEx. I like talking to the vendors because they all want to differentiate themselves from one another. This means that in general, they have differing points of view as to how to solve a problem. By understanding each vendor’s approach, you can make a more informed decision.

I live on the corporate side of IT. If I make a recommendation regarding the network, I need to make sure it is the BEST one possible. Yes, it takes a lot of time and effort, but choices around hardware and software need to be treated with more care than one uses when selecting which brand of breakfast cereal to buy at the grocery store. I’ll talk to just about any vendor that lives within the network space. No matter how insignificant the product or company may seem, I want to know what it is they do. There is no such thing as being too prepared when it comes to making decisions about your network.

That was Tech Field Day in a nutshell for me: lots of discussions with my peers and lots of discussions with vendors. For now, I am still trying to digest it all. Two full days’ worth of briefings and discussions will take a bit to sink in. If anything, I have a sincere desire to shore up my virtualization and storage knowledge. I just have to find the time to fit it in. Networking on its own is enough to keep me busy for years to come!

I met some really great and SMART people at this event. Several of them I already knew from Twitter, and I had read some of their blogs prior to this event. Others were affiliated with vendors, so I had never heard of them, except for some of the people from the larger companies. My RSS feed list has grown by quite a few entries as a result of this trip.

If I could give any advice regarding this kind of event, it would be this: go register to be a Gestalt IT Tech Field Day delegate. Do it NOW. If you love technology, if you love talking about technology, and if you want to mix it up with vendors in their own back yard, this is the event for you. I was taken care of very well by Claire and Stephen. Nothing was overlooked. Every single vendor that presented seemed interested in us being there. Nothing was off limits in terms of what you could ask. Of course, there’s no guarantee they are going to answer it. The vendors still have to protect their intellectual property, and rightfully so. Never in a million years would I have imagined that I would be able to engage someone like the CEO of Symantec, ask a direct question, and get a direct answer. I also wouldn’t have imagined myself ever talking to the CEO and CTO of a company like Druva. I spent at least 15 minutes talking with them about their company, social media, and other similar things at the Computer History Museum. Without a doubt it was one of the high points of my trip to San Jose. I could go on and on about other incidents, but it wasn’t my intention to ramble on in this post.

Oh, and lest I forget to tie into the title of this post, I should answer the question: “Now what?” Well, I still have to finish preparing to take the CCIE Route/Switch lab. However, I find myself wanting to give equal time to ramping up in the VMware and storage networking worlds. I spent several days in the midst of some storage and virtualization experts. What can I say? They have made me a convert. Or maybe it’s just that I want to understand a bit more of what they were talking about if I ever run into them again.🙂 In the near future, I want to write a bit about the various vendors. In particular, I will focus on Xangati, HP, Infoblox, and NetEx. They have more of a network-ish focus, and that’s my area. That’s not to say that I won’t comment on the others. I really enjoyed the data deduplication talk from Symantec!

I cannot say thank you enough to everyone who made this event possible. Stephen Foskett played the role of our fearless leader very well. Claire was the driving force behind the scenes making sure everything went off without a hitch. The audio/visual crew produced some very high quality stuff even in the face of several technological glitches. The vendors were very gracious in hosting all of us. I appreciate their interaction from the presentation standpoint as well as their active Twitter presence. Bonus points to Xangati for the bacon and chocolate espresso beans! As for the delegates, well I am humbled to have been among you. Some of you are used to interfacing with these companies at this level. I personally, am not. I do look forward to reading your writings and hope to run into you again at some point!

*****Disclaimer*****
As a Tech Field Day delegate for Gestalt IT, my flights, hotel room, food, and transportation were provided by all of the vendors that presented during this event. This was not provided in exchange for any type of publicity on my part. I am not required to write about any of the presentations or vendors. I received a few “souvenirs” from the vendors, which were limited to t-shirts, water bottles, pens, flash drives, notepads, and bottle openers.

Wrapping My Head Around The Nexus1000v – Part 1

February 11, 2011

****Note – I am NOT in any way, shape, or form a VMware expert. I can’t guarantee that I will be 100% correct in my terminology or representation of VMware, vMotion, vSphere, etc. I apologize in advance. I am just a network guy trying to understand how the Nexus 1000V ties into the VMware ecosystem. I also understand that companies other than VMware are doing virtualization. Please feel free to correct my inaccuracies via the comments.

Paradigm shifts are coming. Some of them are already here. About 5 or 6 years ago I was first introduced to server virtualization in the form of VMware ESX server. For you old mainframe people, you probably weren’t as impressed as I was when I learned about this particular technology.

When it came to VMware, I wasn’t doing anything fancy. I was just using it to host a few Windows servers. When these boxes were physical, they were only using a fraction of their CPU, memory, and disk space. In most cases, they were specific applications that vendors would only support if they were on their own server. From a networking standpoint, there was absolutely nothing fancy that I was doing. All of the traffic from the virtual machines came out of a shared 1 gig port. For me, VMware was a fantastic product in that it allowed me to reduce power, rack space, and cooling requirements.

I realize that some people will take issue with my use of the term “server virtualization”. To some, software and hardware virtualization are different animals. For the purposes of non-VMware people like myself, the fact that I used VMware to reduce the physical server sprawl means that I refer to it as “server virtualization”.

Fast forward to today. It is getting harder and harder to find a company that isn’t doing some sort of server virtualization. It isn’t just about reducing the physical server footprint and maximizing CPU and memory resources. These days, you can achieve phenomenal uptime rates thanks to things like vMotion. For those who are unfamiliar with vMotion, it is a feature within VMware that can move a running virtual machine from one physical host (i.e. ESX/ESXi server) to another. This can happen when a host needs to be evacuated for maintenance or impending hardware trouble, when additional CPU/memory resources are needed, or for other reasons that the VMware administrator deems important.

Today, from a networking standpoint, there are 3 options when it comes to networking inside the VMware vSphere 4 ecosystem:

vNetwork Standard Switch – 1 or more of these standard switches reside on a single ESX host. This would be the vSwitch in older versions of ESX. This is basically a no-frills switch. Think of it as managing switches without the use of VTP: you have to touch a lot of these switches if certain VLANs reside on multiple ESX hosts.

vNetwork Distributed Switch – 1 or more of these will reside in a “Datacenter”. By “Datacenter”, I am not referring to a physical location. Rather, in VMware lingo, it is a logical grouping of ESX clusters (comprised of ESX hosts). This is the equivalent of running VTP across a network of Cisco switches. You can make changes and have them show up on each ESX host that is part of the “Datacenter”. This particular switch type has several advantages over the standard switch in terms of feature availability. It also allows you to move virtual machines between multiple hosts via vMotion and have the network policies associated with that machine follow it.

Cisco Nexus 1000V – Similar to the distributed switch, except it is built on NX-OS and you can manage it almost like you would any physical Cisco switch. It also has a few more features that the regular VMware distributed switch does not have.

That’s the basic overview as I understand it. What I had been struggling with was the actual architecture behind it. How does it work? I can look at a physical switch like the 3750 or 6500 and get a fairly decent understanding of it. Not the level I would like to have, but I understand that vendors like Cisco don’t want to give away their “secret sauce” to everyone that comes along and asks for it.

As luck would have it, my company has purchased several instances of the Nexus 1000V, and last week I was able to spend a day with a Cisco corporate resource and one of the server/storage engineers my company employs. I didn’t realize how deficient I was in the world of VMware until I got into a room with these 2 guys and we started talking through how we would design and implement the Nexus 1000V. I kept asking them to explain things over and over. In the end, a fair number of pictures on the whiteboard made the light bulb in my head go on. I still have much reading to do, but for now I understand it a LOT more than I did. Now, let’s see if I can make it make sense to you.🙂

The Nexus 1000V is basically composed of 2 different parts: the VEM and the VSM. If we were to map these 2 things to actual hardware pieces, the VEM (Virtual Ethernet Module) would be the equivalent of a line card in a switch like the Nexus 7000 or a Catalyst 6500. In essence, this is the data plane. The second piece is the VSM (Virtual Supervisor Module). This is the same as the supervisor module in the Nexus 7000 or Catalyst 6500. As you probably already guessed, this is the control plane piece.

Here’s where it gets a bit crazy. The VSM can support up to 64 VEMs per 1000V. You can also have a second VSM that operates in standby mode until the active one fails. In theory, you have a virtual chassis with 66 slots. In the Nexus 1000V CLI, you can actually type a “show module” and they will all show up. Each ESX host will show up as its own module. Will you ever have 64 VEMs in a single VSM? Maybe. However, there are limitations around the Nexus 1000V that make that unlikely.
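
For illustration, here is roughly what that looks like. This is a mock-up from memory rather than a capture from a real 1000V, so the exact columns and wording will vary by version, but the idea is that the two VSMs and each VEM (one per ESX host) all appear as slots in the same virtual chassis:

```
n1000v# show module
Mod  Module-Type                      Model        Status
---  -------------------------------  -----------  -----------
1    Virtual Supervisor Module        Nexus1000V   active *
2    Virtual Supervisor Module        Nexus1000V   ha-standby
3    Virtual Ethernet Module          NA           ok
4    Virtual Ethernet Module          NA           ok
```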

The VEM lives on each ESX server, but where does the VSM reside? It resides in its own guest VM. You actually create a separate virtual machine for the VSM when installing the Nexus 1000V. That guest VM resides on one of the ESX servers within the “datacenter” that the Nexus 1000V controls. You access that guest VM just like you would a physical switch in your network by using the CLI. Once the VSM is installed, the network resource can go in via SSH or Telnet and configure away.

Those are the basic components of the Nexus 1000V. There are other things that need to be mentioned, such as how communication happens from the guest VM perspective to the rest of the network and vice versa. Additionally, we need to discuss the benefits of using the Nexus 1000V over the standard VMware distributed switch. There’s a lot more to it than just the management aspect. I will cover that in part 2. Additionally, I plan on doing a write-up on the Nexus 1010 appliance. This allows you to REALLY move the control plane piece out of the VMware environment and put it on a box with a Cisco logo on it.

Be A Part Of History

January 27, 2011

[Image: Henry V – courtesy of Wikipedia]

He that outlives this day, and comes safe home,
Will stand a tip-toe when this day is nam’d,
And rouse him at the name of Crispian.
He that shall live this day, and see old age,
Will yearly on the vigil feast his neighbours,
And say ‘To-morrow is Saint Crispian.’
Then will he strip his sleeve and show his scars,
And say ‘These wounds I had on Crispian’s day.’
Old men forget; yet all shall be forgot,
But he’ll remember, with advantages,
What feats he did that day. Then shall our names,
Familiar in his mouth as household words-
Harry the King, Bedford and Exeter,
Warwick and Talbot, Salisbury and Gloucester-
Be in their flowing cups freshly rememb’red.
This story shall the good man teach his son;
And Crispin Crispian shall ne’er go by,
From this day to the ending of the world,
But we in it shall be remembered-
We few, we happy few, we band of brothers;
For he to-day that sheds his blood with me
Shall be my brother; be he ne’er so vile,
This day shall gentle his condition;
And gentlemen in England now-a-bed
Shall think themselves accurs’d they were not here,
And hold their manhoods cheap whiles any speaks
That fought with us upon Saint Crispin’s day.

Such are the words that William Shakespeare penned in “Henry V”. They come from the Saint Crispin’s Day speech that King Henry V gives prior to the Battle of Agincourt in 1415, where the English defeated the French and King Henry ended up with a French princess named Catherine as one of the spoils of war. Although the speech from Shakespeare is made up, it is still a beautiful combination of words that expresses the pride the English soldiers would feel in the years after the battle. Others might forget what went on, but the soldiers would never forget. They would be a part of history. Which brings me to the point of this post……

The other day Jeremy Stretch mentioned this on Twitter:

“regardless of your thoughts on IPv6 adoption, it’s a pretty interesting time to be a networker”

That’s putting it mildly and it got me thinking about the changes going on in networking these days.

1. IPv6 Transition – Certainly you have heard of IPv6 and the coming IPv4 address exhaustion. If not, you need to get out more.

2. Virtual Networking – With the explosion of VMware and other virtualization vendors in the past several years, a fair amount of traffic is cruising around “virtual” switches inside physical servers. Guess what? You still have to manage it. You still have to secure it.

3. Wireless Explosion – Everything is wireless today. Cameras, printers, phones, tablets, laptops, and other wireless capable devices are growing in number each year. If you aren’t familiar with wireless, you better be soon.

There’s more. Storage traffic riding over the same wire as voice, video, and data. How about link encryption on your internal switch/router infrastructure? Don’t forget the rush to flatten datacenter networks to L2 courtesy of TRILL or each vendor’s implementation of it.

Some difficult and interesting days lie ahead. Difficult and interesting from the standpoint that we’ll have to implement things that we haven’t been doing for years and years. This is new ground for many of us. With the right amount of due diligence and a couple of heavily padded blocks of time from various consultants, it will all get done. Fast forward to a few years down the road. Like King Henry said in Henry V:

Then will he strip his sleeve and show his scars,
And say ‘These wounds I had on Crispian’s day.’
Old men forget; yet all shall be forgot,
But he’ll remember, with advantages,
What feats he did that day.

and

And gentlemen in England now-a-bed
Shall think themselves accurs’d they were not here,
And hold their manhoods cheap whiles any speaks
That fought with us upon Saint Crispin’s day.

We’ll all have scars, but they’ll be scars we can be proud of. This is an interesting time to be in networking, but I wouldn’t want it any other way. I foresee changes like these flushing out people who are not ready for the paradigm shifts that are coming or are already here.

New blood will come into the field. You’ll be able to guide them and mentor them and show them your scars from the IPv4, every server was physical, no wireless, TDM PBX, Frame Relay was king days. Then, they’ll produce a fake smile as you bore them with stories of how many CAT5 patch cables you have made in your past career and then they’ll mock you when you’re not around. Kind of like how we mock Thomas Watson and his inability to predict the demand for computers.

Categories: career, learning