
Nexus 3000 Family – is it a great low-cost core switch?

I have generally shied away from recommending that businesses deploy a Nexus 3000 core in their data centres. One reason has been platform and feature support; the other, the NX-OS release trains. For example:

The Nexus 3K was designed for low-latency trading environments, where Cisco competes with Arista, so low latency is king. This means the feature set (and, depending on the model, the silicon) is not as advanced as on the traditional enterprise platforms such as the N5K and N7K, and features such as FCoE and OTV have never been road-mapped for the N3K. Not surprisingly, the N3K also follows its own release train for NX-OS updates and patches.

That was enough to convince me that this switch should do what it's good at and largely be left in the environment it was designed for. Each time a customer asked me whether they should deploy it as a core switch, my first reaction was “err, probably not”… until recently.

In this particular environment the requirements were fairly straightforward (they mostly are anyway) and could be summed up as:

  • We need 1 & 10G L2 switching on 96 ports at the core
  • We need an IGP and static routing, and a little policy overlaid on that IGP
  • We need high availability across the core, with load sharing & fault tolerance
  • We want to continue to use our existing management tools to manage the environment

In comes the N3K. The small set of requirements above is fairly generic and could be typical of a thousand medium-sized businesses out there today. Given those requirements, the final solution ended up looking like this:

  • A pair of Nexus 3000 48-port 1/10G switches in a vPC cluster
  • Peripherals dual-connected via vPC port-channels split across both switches (firewalls, blade and rack servers, routers)
  • HSRP and EIGRP with static route redistribution, and some smarts thrown in for fun (see the configuration sketch after this list)
  • Standard management with SSH, NTP, SNMPv2, ACLs and role-based access control
  • Plenty of bandwidth with a 40G vPC peer-link between switches, and 20G north/south to the blade servers
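To give a flavour of what that looks like in NX-OS, here is a minimal configuration sketch for one of the two switches. The VLANs, IP addresses, port numbers, EIGRP AS number and names are illustrative assumptions rather than the customer's actual values; switch B would mirror this with its own addresses and a higher role priority. The SVI, HSRP and STP priorities are shown in a later sketch, and the management ACLs are omitted for brevity.

  ! Switch A - illustrative values only
  feature vpc
  feature lacp
  feature interface-vlan
  feature hsrp
  feature eigrp

  ! vPC domain - switch B mirrors this with source/destination reversed
  vpc domain 10
    role priority 1000                              ! lower value = primary peer
    peer-keepalive destination 192.168.100.2 source 192.168.100.1
    peer-gateway
    auto-recovery

  ! 40G vPC peer-link built from 4 x 10G members
  interface Ethernet1/45-48
    switchport
    switchport mode trunk
    channel-group 1 mode active
    no shutdown

  interface port-channel1
    switchport
    switchport mode trunk
    spanning-tree port type network
    vpc peer-link

  ! Example downstream vPC to the blade enclosure (2 x 10G per switch = 20G north/south)
  interface Ethernet1/1-2
    switchport
    switchport mode trunk
    channel-group 20 mode active
    no shutdown

  interface port-channel20
    switchport
    switchport mode trunk
    vpc 20

  ! EIGRP with static route redistribution (NX-OS requires a route-map on redistribute)
  route-map STATIC-TO-EIGRP permit 10

  router eigrp 100
    redistribute static route-map STATIC-TO-EIGRP

  ! Day-to-day management: SSH, NTP, SNMPv2 and role-based access
  feature ssh
  ntp server 10.0.0.10 prefer
  snmp-server community example-ro ro
  username ops-user password Example-Pass-1 role network-operator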

In summary, if you don’t need the bells and whistles of advanced DC technology such as Unified Fabric, L2 multi-pathing and clever DCI features, and can live with a standard L2/L3 design and topology with a known (and constrained) port-count capacity, then the N3K may be the switch for you.

Topology Overview


Spanning Tree & Virtual Port-Channels

There are a few basic things you should know about the Spanning Tree Protocol (STP) when creating virtual port-channels (vPCs) with NX-OS in your data centre.

Enable STP as normal on all links. If a dual-attached switch cannot form a port-channel, attach it using STP instead and ensure only non-vPC VLANs are carried on the links to that switch.
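As an illustration, a trunk towards such an STP-attached switch might look like the below, where VLAN 300 is an assumed non-vPC VLAN that is kept off all vPC port-channels.

  ! Legacy switch that cannot run a multi-chassis port-channel, attached to each
  ! N3K on its own link - carry only non-vPC VLANs (here, VLAN 300) on these trunks
  interface Ethernet1/10
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 300
    spanning-tree port type normal
    no shutdown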

Always keep the STP root bridge, vPC primary peer and HSRP active role aligned on the same chassis. Do not split or try to load-share VLANs using STP root and HSRP odd/even VLAN load sharing, as this may cause packet loss in certain scenarios.
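One way to achieve that alignment is sketched below. The VLAN range, priorities and addresses are illustrative assumptions; the point is simply that switch A wins all three elections while switch B is secondary everywhere (feature hsrp, interface-vlan and eigrp are assumed enabled as per the earlier sketch).

  ! Switch A - intended STP root, vPC primary and HSRP active for all VLANs
  spanning-tree vlan 1-3967 priority 4096
  vpc domain 10
    role priority 1000                 ! lower value wins the vPC primary election
  interface Vlan100
    no shutdown
    ip address 10.0.100.2/24
    ip router eigrp 100
    hsrp 100
      preempt
      priority 110
      ip 10.0.100.1

  ! Switch B - secondary everywhere, no per-VLAN load sharing
  spanning-tree vlan 1-3967 priority 8192
  vpc domain 10
    role priority 2000
  interface Vlan100
    no shutdown
    ip address 10.0.100.3/24
    ip router eigrp 100
    hsrp 100
      preempt
      priority 90
      ip 10.0.100.1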

STP is distributed; that is, the protocol continues running on both vPC peer devices. However, the configuration on the vPC peer device elected as the primary device controls the STP process for the vPC interfaces on the secondary vPC peer device.

The three configuration lines below are recommended on all devices in the STP domain (with one caveat on port types, noted after the list).

  1. spanning-tree mode rapid-pvst
  2. spanning-tree pathcost method long
  3. spanning-tree port type network default
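The caveat: spanning-tree port type network default makes every port a network port, and NX-OS runs Bridge Assurance on network ports by default, so any port facing a device that does not send BPDUs (servers, firewalls and the like) should be explicitly overridden to edge or it risks being blocked as BA-inconsistent. A minimal example, reusing the assumed blade-facing port-channel from the earlier sketch:

  ! Host and firewall facing vPC port-channels overridden to edge so that
  ! Bridge Assurance (implied by "port type network default") does not block them
  interface port-channel20
    spanning-tree port type edge trunk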

vPC devices are managed independently, and separate instances of the network protocols exist on each vPC peer. During vPC domain setup, one peer is elected as primary; this peer is responsible for running STP on all the vPC ports of the vPC domain. So, from the perspective of STP, a vPC is logically a simple port-channel located on the primary vPC peer switch. The state of the vPC member ports located on the secondary peer is controlled remotely by the primary.

BPDUs can be exchanged on all the physical links belonging to a vPC. Note that some STP information about the vPCs is still available on the secondary vPC peer, but it is just replicated from the primary.
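A few show commands are handy for verifying all of this once the vPC domain is up; the exact output varies by NX-OS release, but these are the usual starting points:

  show vpc role                             ! which peer won the primary election
  show vpc brief                            ! peer-link, keepalive and per-vPC consistency status
  show vpc consistency-parameters global
  show spanning-tree summary                ! confirm rapid-pvst, pathcost long and port type defaults
  show spanning-tree vlan 100               ! root bridge and port roles for an example VLAN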