Nexus 3000 Family – is it a great low-cost core switch?

I have generally shied away from recommending that businesses deploy a Nexus 3000 core in their data centres. One reason has been platform and feature support, and the other the NX-OS release trains; for example:

The Nexus 3K was designed for low-latency trading environments, where Cisco competes with Arista, and as such low latency is king. This means the feature set (and, depending on the model, the silicon) is not as advanced as on the traditional enterprise platforms such as the N5K and N7K, and features like FCoE and OTV have never been road-mapped for the N3K. Not surprisingly, the N3K also follows its own release train for NX-OS updates and patches.

That was enough to convince me that this switch should do what it’s good at and largely be left in the environment it was designed for. Each time a customer asked me whether they should deploy it as a core switch, my first reaction was “err, probably not”…until recently.

In this particular environment the requirements were fairly straightforward (they mostly are anyway) and could be summed up as:

  • We need 1 & 10G L2 switching on 96 ports at the core
  • We need an IGP and static routing, and a little policy overlaid on that IGP
  • We need high availability across the core, with load sharing & fault tolerance
  • We want to continue to use our existing management tools to manage the environment

In comes the N3K. The small set of requirements above is fairly generic, but could be typical of a thousand medium-sized businesses out there today. Given those requirements, the final solution ended up looking like this:

  • A pair of Nexus 3K 48-port 1/10G switches in a vPC cluster (a vPC configuration sketch follows this list)
  • Peripherals (firewalls, blade and rack servers, routers) connected in via vPC port channels split across both boxes
  • HSRP and EIGRP with static route redistribution, and some smarts thrown in for fun (see the routing sketch below)
  • Standard management with SSH, NTP, SNMPv2, ACLs and role-based access control (see the management sketch below)
  • Plenty of bandwidth, with a 40G vPC peer-link between the switches and 20G north/south to the blade servers
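
To make the vPC piece concrete, here is a minimal NX-OS sketch of the kind of configuration involved. The domain ID, keepalive addresses, interface numbers and port-channel numbers are hypothetical placeholders rather than values from the actual build, and the options available will depend on the N3K model and NX-OS release.

    feature lacp
    feature vpc
    !
    vpc domain 10
      peer-keepalive destination 192.168.100.2 source 192.168.100.1 vrf management
      peer-gateway
    !
    ! 4 x 10G members bundled into the 40G vPC peer-link
    interface Ethernet1/45-48
      channel-group 1 mode active
    interface port-channel 1
      description vPC peer-link
      switchport mode trunk
      vpc peer-link
    !
    ! Example downstream vPC to the blade chassis (2 x 10G per switch = 20G north/south)
    interface Ethernet1/1-2
      channel-group 10 mode active
    interface port-channel 10
      description Blade chassis uplink
      switchport mode trunk
      vpc 10

The same vPC port-channel pattern repeats for the firewalls, rack servers and routers hanging off the pair.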
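
The routing side follows a similar pattern: HSRP on the SVIs for first-hop redundancy, EIGRP as the IGP, and the static routes pulled in through a route-map (NX-OS insists on a route-map when redistributing). The sketch below uses an invented VLAN, addressing plan, AS number and prefix, and assumes the appropriate routing licence is installed on the N3K.

    feature interface-vlan
    feature hsrp
    feature eigrp
    !
    ! Example SVI; this switch is the preferred HSRP gateway for VLAN 10
    interface Vlan10
      no shutdown
      ip address 10.1.10.2/24
      ip router eigrp 100
      hsrp 10
        preempt
        priority 110
        ip 10.1.10.1
    !
    ! Only redistribute the statics we intend to (hypothetical prefix)
    ip prefix-list STATICS seq 5 permit 10.99.0.0/16
    route-map STATIC-TO-EIGRP permit 10
      match ip address prefix-list STATICS
    !
    router eigrp 100
      redistribute static route-map STATIC-TO-EIGRP

On the second switch the same SVIs would carry a lower HSRP priority, which is one common way of spreading active gateways across the pair for load sharing while either box can take the full load on its own.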
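
Management stays deliberately plain so the existing tooling keeps working. A rough sketch is below; the NTP server, community string, role and user names are placeholders I have made up for illustration, and the exact SNMP and RBAC options should be checked against the NX-OS release in use.

    ! SSH is on by default on NX-OS; shown here for completeness
    feature ssh
    !
    ntp server 192.0.2.10 prefer use-vrf management
    !
    ! Read-only SNMPv2c community (placeholder string)
    snmp-server community netmonitor ro
    !
    ! Role-based access control: a read-only operations role and an account mapped to it
    role name ops-readonly
      rule 1 permit read
    username noc-user password Example!123 role ops-readonly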

In summary, if you don’t need the bells and whistles of advanced DC technology such as Unified Fabric, L2 multipathing and clever DCI features, and can live with a standard L2/L3 design and topology with a known (and constrained) port count, then the N3K may be the switch for you.

Topology Overview