Monday, July 13, 2015

Core Network Replacement Part 1

I read a timely post by Tom Hollingsworth (@NetworkingNerd) about writing, and I realized that I have not written anything on my blog in quite some time. I could write down a list of excuses, but what is the point in that? Most others have the same or similar excuses. And when did I actually get this posted?

I decided to capture some of the thought processes and steps that have gone, and are still going, into the network core replacement at $DayJob.

History:

The last core network replacement was in 2007. Link to the vendor press release: http://www.thefreelibrary.com/Indiana+Tech+Builds+High+Performance+Campus+Network+With+Force10...-a0168505385

That update brought 10 gig between several buildings on campus and a push to 1 gig to the computer labs. It also moved us away from a very Cisco-centric network; it was new and different. The design contained a Force10 E300 as well as a handful of S50 "classic" switches. The E300 held 3 line cards: an 8-port 10gig card, a 24-port 1gig SFP card, and a 48-port 1gig copper card. At the time this seemed reasonable, with room for growth. The 8 ports of 10gig were not completely populated at first, but moving from 1gig between buildings to 10gig seemed like a huge jump. The 1gig connections were not being saturated, so 10gig was a super highway.
After a couple of years a few flaws showed up:
1. My S50s didn't make the cut for running FTOS and continue to run STOS.
2. Some VLAN troubles between STOS devices and non-Force10 gear.
3. End of Sale / Dell purchase.
4. Account forgotten.

In the ensuing years I found HP ProCurve switch gear to be suitable and cost effective for use in my campus and branch offices. For the last few building projects I used HP 5400 series switches, either standalone or in a VRRP pair. I figured that whenever the next core upgrade came around, that might be a good starting point, possibly with the 8200 series (due to its multiple "supervisors").

With the last building we built, Brocade offered a solution with their ICX 6610 and 6450 switches. I was intrigued by the performance in a 1 RU form factor. Being able to stack the switches across 10gig Ethernet links was very useful as the closets changed from 3 to 4 due to design changes to the building. I had to compromise on the redundancy of each closet because of the change in cable paths, and one closet ended up carrying more than the initial design called for. Since I wasn't stuck with a fixed chassis, I was able to shift one switch to the other closet. The use of high-performance 1 RU switches showed its value.
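The stack setup itself is simple on the FastIron side. A minimal sketch, assuming factory-default units already cabled together on their stacking ports (the exact stacking ports vary by model):

  ! on the unit that will become the active controller
  stack enable
  ! then, from privileged exec, discover and number the other cabled units
  stack secure-setup
  ! verify unit IDs, roles, and stacking link state
  show stack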

Current Selection:

So the ICX 6610 seemed to offer a redundant, scalable, cost-effective solution for the network core: stack multiple units to expand the available 10gig ports. I was a bit concerned, though, about having to stack several switches just to scale out the 10gig ports while leaving the rest of the ports on each unit unused.


Enter the ICX 7750: 6 40gig ports and 48 10gig ports. Put that in a redundant pair and that is a lot of 10gig ports in 2U of switches. It may be more than I need at this point, but the nice thing about SFP+ ports is that you can use 1gig SFPs in them. So this is the direction that I went.
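A rough sketch of what the pair looks like, as a two-unit stack over the 40gig ports with hitless failover; the interface number and the need to pin the speed for a 1gig optic are my assumptions here, not the final config:

  ! same stacking setup as the 6610s, just over the 40gig ports
  stack enable
  ! let the standby unit take over without a reload if the active unit fails
  hitless-failover enable
  ! a 1gig SFP in an SFP+ port may need its speed set by hand, depending on optic and code release
  interface ethernet 1/1/1
   speed-duplex 1000-full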


The next post will be a quick step-through of the process that I used to swap them out.
