Category Archives: Switching

Nexus 3000 Family – is it a great low-cost core switch?

I have generally shied away from recommending that businesses deploy a Nexus 3000 core in their data centres. One of the reasons has been platform / feature support, and the other the NX-OS release trains. For example:

The Nexus 3K is designed for low-latency trading environments, so that Cisco can compete with Arista, and as such low latency is king. This means the advanced features (and maybe the silicon, depending on the model) you get will not be as advanced as on the traditional enterprise platforms, i.e. N5K, N7K etc. Features such as FCoE, OTV etc. have never been road-mapped for the N3K. Not surprisingly, then, the N3K also follows its own release train for NX-OS updates and patches.

That was enough to convince me that this switch should do what it's good at and mainly be left in the environment it was designed for. Each time a customer asked me whether they should deploy this as a core switch, my first reaction was “err, probably not”… until recently.

In this particular environment the requirements were fairly straightforward (they mostly are anyway) and could be summed up as:

  • We need 1 & 10G L2 switching on 96 ports at the core
  • We need an IGP and static routing, and a little policy overlaid on that IGP
  • We need high availability across the core, with load sharing & fault tolerance
  • We want to continue to use our existing management tools to manage the environment

In comes the N3K. The small set of requirements above is fairly generic, but could be typical of a thousand medium-sized businesses out there today. Given those requirements, the final solution ended up looking like:

  • A pair of Nexus 48-port 1/10G switches in a vPC cluster
  • Peripherals connected in via vPC port channels split across boxes (FW’s, Blade, rack, Rtr’s)
  • HSRP, & EIGRP with static route redistribution and some smarts thrown in for fun
  • Standard management with SSH, NTP, SNMPv2, ACLs and role based access controls
  • Plenty of bandwidth with a 40G vPC peer-link between switches, and 20G north/south to the blade servers
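The HSRP/EIGRP piece of that design can be sketched as below (NX-OS; the VLAN number, addresses, AS number and prefix-list/route-map names are all hypothetical, not taken from the actual build):

feature interface-vlan
feature hsrp
feature eigrp

interface Vlan100
  no shutdown
  ip address 10.0.100.2/24
  ip router eigrp 100
  hsrp 100
    preempt
    priority 110
    ip 10.0.100.1

ip prefix-list STATIC-ROUTES seq 5 permit 10.99.0.0/16
route-map STATIC-TO-EIGRP permit 10
  match ip address prefix-list STATIC-ROUTES

router eigrp 100
  redistribute static route-map STATIC-TO-EIGRP

Note that NX-OS insists on a route-map when redistributing, which is a handy place to hang the "little policy" mentioned in the requirements.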

In summary, if you don’t need the bells and whistles of advanced DC technology such as Unified Fabric, L2 Multi-pathing, & clever DCI stuff, and can live with a standard L2, L3 design and topology with a known (& constrained) port count capacity, then the N3K may be the switch for you.

Topology Overview


IP Local Area Mobility & IP network re-addressing

Cisco LAM enables a host to move to another subnet but keep its original IP address, and still be reachable across the network.

A router/layer 3 switch configured for LAM determines whether there are directly connected hosts that do not belong to the local IP subnet.

When this router sees traffic from a host that does not match the configured subnet, the router installs an ARP entry for the mobile host. The router then also installs a host route that points toward this interface.

This feature can be useful during a LAN refresh or DC relocation when migrating from old IP ranges to new. If a device is overlooked but LAM is configured, that device will still be reachable even though it sits in the wrong IP subnet. The router's ARP cache can then be inspected to see what ARP bindings exist, and the missed device can be re-configured.
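When hunting for overlooked devices, the bindings and injected routes can be checked directly on the router; on IOS, LAM-installed host routes are flagged with the “M – mobile” code in the routing table:

show ip arp
show ip route mobile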

To configure LAM:

int vlan 20
ip address 192.168.1.1 255.255.255.0
ip mobile arp

To add security (only permit allowed/known mobile host ranges):

int vlan 20
ip address 192.168.1.1 255.255.255.0
ip mobile arp access-group 1

access-list 1 permit 172.16.255.0 0.0.0.255

When using LAM, a router periodically checks that the mobile host is still there by querying it with ARP requests. This ensures that the installed (and potentially redistributed) host route is still valid. The mobile host ARP keepalive timers can be altered with the command:

ip mobile arp timers [keepalive minutes] [hold-time minutes]

The default LAM ARP keepalive is 300 seconds (5 minutes), with a 900-second (15-minute) hold timer.
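The timer arguments are given in minutes, so the defaults above equate to timers 5 15. To tighten this to, say, a 2-minute keepalive and 6-minute hold time (values picked purely for illustration), and keep the ACL check from earlier, the interface config becomes:

int vlan 20
ip address 192.168.1.1 255.255.255.0
ip mobile arp timers 2 6 access-group 1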

HP 1910 Switches – Intuitive Web Interface?? I think not

HP 1910 Series Switches

According to the specs on the 1910, the web interface is intuitive – which implies simple and easy to use. Maybe it is, if you have never logged on to any other switch in your life… ever.

The 1910 isn’t a bad switch, by the way; for the $$$ it's actually a winner once you have figured out how to configure the thing. Having configured a few (hundred) switches in my time, across many operating systems including ProVision, NX-OS, IOS, EOS, and Comware to name a few, I have only been left frustrated and disappointed with two products before: the Cisco 500 series switch, and the HP 1910. Both of these had one thing in common – a slow and fairly annoying HTTP UI.

So, let's make life easy and get straight into CLI setup and config of the 1910. Log in on the console port with the default user admin and a blank password, then copy and paste the following commands to get into the CLI configuration shell.

_cmdline-mode on
Y
512900
system-view

Now we can configure the switch from the CLI using a familiar command set. In the CLI scrape below I am going to add some VLANs and set interface 1/0/1 as voice/data and 1/0/24 as a dot1q uplink.

#
vlan 1
description Default
vlan 7
description SwitchMgmt
vlan 64
description Native
vlan 34
description Voice
vlan 219
description Data
#
interface GigabitEthernet1/0/1
port link-type trunk
port trunk permit vlan 34 219
port trunk pvid vlan 219
undo voice vlan mode auto
voice vlan 34 enable
poe enable
undo port trunk permit vlan 1
#
interface GigabitEthernet1/0/24
port link-type trunk
port trunk permit vlan 7 34 64 219
port trunk pvid vlan 64
stp edged-port disable
undo port trunk permit vlan 1
undo poe enable
#
return
save
display saved
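Once saved, the result can be sanity-checked from the same system-view shell with a couple of standard Comware display commands, e.g.:

display vlan
display current-configuration interface GigabitEthernet1/0/1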

Deploying configuration this way is sooo much easier. I get the positioning of these types of products, but if the CLI is there already, why not expose it for those that like to do things the easy way?

Spanning Tree & Virtual Port-Channels

There are a few basic things you should know about the Spanning Tree Protocol (STP) when creating virtual port-channels (vPC) with NX-OS in your Data Centre.

Enable STP as normal on all links. If a dual-attached switch is not port-channel capable, then use STP and ensure only non-vPC VLANs are used on that switch.

Always keep the STP root bridge, vPC primary peer and HSRP active router aligned on the same chassis – do not split or try to load-share VLANs using STP root & HSRP odd/even VLAN load sharing, as this may cause packet loss in certain scenarios.

STP is distributed; that is, the protocol continues running on both vPC peer devices. However, the configuration on the vPC peer device elected as the primary device controls the STP process for the vPC interfaces on the secondary vPC peer device.

The three config lines below are recommended on all devices in the STP domain.

  1. spanning-tree mode rapid-pvst
  2. spanning-tree pathcost method long
  3. spanning-tree port type network default

vPC devices are managed independently, and separate instances of network protocols exist on the vPC peers. During the vPC domain setup, a vPC peer is elected as primary. This peer is responsible for running STP on all the vPC ports of the vPC domain. So logically, from the perspective of STP, a vPC is a simple port-channel located on the primary vPC peer switch. The state of the vPC member ports located on the secondary peer is controlled remotely by the primary.

BPDUs can be exchanged on all the physical links belonging to a vPC. Note that some STP information about the vPCs is still available on the secondary vPC peer, but it is just replicated from the primary.
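To ground the above, a minimal per-peer vPC configuration might look like the sketch below (the domain ID, keepalive addressing and port-channel numbers are all hypothetical; the peer-keepalive assumes mgmt0 addressing in the management VRF):

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.168.99.2 source 192.168.99.1 vrf management

interface port-channel1
  description vPC peer-link
  switchport mode trunk
  vpc peer-link

interface port-channel10
  description Downstream blade chassis
  switchport mode trunk
  vpc 10

The same vpc number must be configured on both peers for a given downstream port-channel, and on the switch intended as primary, lowering the STP priority (e.g. spanning-tree vlan 1-4094 priority 4096) helps keep root bridge, vPC primary and HSRP active aligned as recommended above.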