In IPv6 Article 1 we explored the current global rate of IPv6 adoption and concluded that yes, it's real and will be mainstream within 4 years. This raises the obvious question of “when do I need to start planning?” and, of course, the answer is “it depends”:
- The size of your enterprise will determine how long an implementation may take – you can work backwards from there…
- Where do you need v6 initially – the edge, the core? There are several common deployment models, and the right one depends on your specific needs.
- What IPv6 services do you see yourself offering to your clients, or your own business?
- Do you have the base requirements in place, i.e. addressing plans, IPv6 prefixes, v6 policies and standards?
- How complex is your environment – do you have dual ISP connectivity, for example, and will you need highly available, provider-independent address space to route?
The above are a few of the questions you will need to answer in the early stages of planning.
A good place to start is with point 4 – which is essentially proper planning. A lot of organisations skip this step, which of course leads to a broken implementation and higher costs down the line. Some of the elements required here include:
- Apply for your v6 prefix from your RIR, and your BGP ASN if you will need one (hint: get one)
- Design your v6 IP schema and address plan – sounds simple? It's not…
- Update your IPv4 security policies to include the new protocol. IPv6 schemas can be used to enforce policy on bit matching, or this could be a good time to start considering your next-gen firewalling strategy, based on user-authentication policy enforcement, and move away from static, hard-to-scale and hard-to-manage IP-address-based policies.
- Determine your v6 standards, and create a procurement checklist to ensure all new network devices can transport v6 as needed (think PIM for IPv6, OSPFv3, BGP address families, etc.)
- Plan for the network services that will need to be enabled as well; this will cover at least IPv6 DNS and DHCPv6 services
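As a sketch of what the address plan in point 2 might look like – assuming a /48 allocation from your RIR, and using the IPv6 documentation prefix 2001:db8::/32 purely for illustration – a simple hierarchy reserves bits for site and function before carving out /64 subnets:

2001:db8:1234::/48 – organisation allocation (illustrative prefix)
2001:db8:1234::/52 – site A (16 x /52 sites available in the /48)
2001:db8:1234:0000::/56 – site A, user access (16 x /56 functions per site)
2001:db8:1234:0100::/56 – site A, servers
2001:db8:1234:0000::/64 – first user-access subnet (256 x /64 per function)

Encoding site and function into fixed bit positions like this is also what makes bit-matching security policies workable later on.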
If you can't tackle that yourself – and you probably won't have the skills in-house yet – then go get yourself some consultancy to get you going. It will be worth the investment in the long run.
There's a lot to cover even in the high-level steps and considerations for your long-term IPv6 deployment strategy – in the next article we will assume your planning is complete and take a look at how you can roll v6 out in a controlled manner.
Nexus 3000 Family – is it a great low-cost core switch?
I have generally shied away from recommending that businesses deploy a Nexus 3000 core in their data centres. One reason has been platform/feature support, and the other NX-OS release trains; for example:
The Nexus 3K is designed for low-latency trading environments, where Cisco competes with Arista, and as such low latency is king. This means the features (and perhaps the silicon, depending on the model) will not be as advanced as on the traditional enterprise platforms, e.g. N5K, N7K. Features such as FCoE and OTV have never been road-mapped for the N3K either. Not surprisingly, the N3K also follows its own release train for NX-OS updates and patches.
That was enough to convince me that this switch should do what it's good at and mainly be left in the environment it was designed for. Each time a customer asked me whether they should deploy it as a core switch, my first reaction was “err, probably not”…until recently.
In this particular environment the requirements were fairly straightforward (they mostly are anyway) and could be summed up as:
- We need 1 & 10G L2 switching on 96 ports at the core
- We need an IGP and static routing, and a little policy overlaid on that IGP
- We need high availability across the core, with load sharing & fault tolerance
- We want to continue to use our existing management tools to manage the environment
In comes the N3K. Now the small set of requirements above is fairly generic, but could be typical of a thousand medium-sized businesses out there today. Given those requirements, the final solution ended up looking like:
- A pair of Nexus 48 port 1/10G switches in a vPC cluster
- Peripherals connected in via vPC port channels split across boxes (firewalls, blade and rack servers, routers)
- HSRP & EIGRP with static route redistribution, and some smarts thrown in for fun
- Standard management with SSH, NTP, SNMPv2, ACLs and role based access controls
- Plenty of bandwidth with a 40G vPC peer-link between switches, and 20G north/south to the blade servers
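As a minimal sketch of the vPC and HSRP pieces of that design – interface numbers, VLANs and addresses below are illustrative, not from the actual deployment:

feature vpc
feature hsrp
feature interface-vlan
vpc domain 10
peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel1
switchport mode trunk
vpc peer-link
interface port-channel10
switchport mode trunk
vpc 10
interface vlan 100
ip address 10.1.100.2/24
hsrp 100
ip 10.1.100.1

The second switch mirrors this configuration with its own addresses, and downstream devices dual-homed into port-channel 10 simply see one logical switch.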
In summary, if you don't need the bells and whistles of advanced DC technology such as Unified Fabric, L2 multi-pathing and clever DCI features, and can live with a standard L2/L3 design and topology with a known (and constrained) port-count capacity, then the N3K may be the switch for you.
Cisco LAM (Local Area Mobility) enables a host to move to another subnet but keep its original IP address, and still be reachable across the network.
A router/layer 3 switch configured for LAM determines whether there are directly connected hosts that do not belong to the local IP subnet.
When this router sees traffic from a host that does not match the configured subnet, the router installs an ARP entry for the mobile host. The router then also installs a host route that points toward this interface.
The feature could be useful during a LAN refresh or DC relocation when migrating from old IP ranges to new: if a device is overlooked but LAM is configured, that device will still be reachable even though it is in the wrong IP subnet. The router's ARP cache can then be inspected to see what ARP bindings exist, and the missed device can be re-configured.
To configure LAM:
int vlan 20
ip address 192.168.1.1 255.255.255.0
ip mobile arp
To add security (only permit allowed/known mobile host ranges):
int vlan 20
ip address 192.168.1.1 255.255.255.0
ip mobile arp access-group 1
access-list 1 permit 172.16.255.0 0.0.0.255
When using LAM, a router periodically checks to ensure that the mobile host is still there by querying it with ARP requests. This ensures that the redistributed route is still valid. The mobile host ARP keepalive times can be altered with the command:
ip mobile arp timers [keepalive minutes] [hold-time minutes]
The LAM default ARP keepalive is 300 seconds (5 minutes), with a 900-second (15-minute) hold timer.
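As an illustrative example – the timer values and EIGRP AS number below are assumptions, not recommendations – you could tighten the timers to 2 minutes keepalive / 6 minutes hold, redistribute the mobile host routes into the IGP so the rest of the network can reach the moved host, and then verify:

int vlan 20
ip mobile arp timers 2 6 access-group 1
router eigrp 1
redistribute mobile metric 10000 100 255 1 1500

show ip arp
show ip route mobile

The redistributed LAM entries show up as /32 host routes with the mobile route source, so a quick look at the routing table tells you which overlooked devices are still sitting on old addressing.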