Cisco Announces “The Network. Intuitive.”

With content courtesy of Cisco Systems

Last year I broke down Cisco DNA – Digital Network Architecture – in an article called “Beyond Marchitecture” because, quite frankly, it was a ton of marketing with little substance.

This year at Cisco Live! 2017, Cisco has done it the right way: a new campaign, backed by the technical prowess we expect from Cisco and launched with all the big names and big programs we expect. This was well thought out, and if this is what Chuck Robbins is going to bring to the table at Cisco Systems, there should be some big things ahead.

In a series of interviews with different business units, it was revealed that the “Handcuffs are off” and departments have been given the ability to innovate, collaborate and tear down the silos.  This new program demonstrates that.

The Network. Intuitive.


First, get past the grammar-related issues of the new DNA campaign and realize that it is not “The Network Intuitive”, it is “The Network. Intuitive.” – punctuation matters here.

The key to understanding “The Network. Intuitive.” is in two powerful words.

Intent

As announced by Chuck Robbins in the Cisco Live keynote, they want you to power your network with business intent. No more programming VLANs or setting up routing, but truly going into a unified console and telling it what you want to do.

“A computer will do what you tell it to do, that may be totally different from what you had in mind” — Quote Unknown

The idea that “Machine A” can talk to “Server B”, and “User Y” can talk to “System X”, without anyone worrying about the underlying infrastructure is where they are going.
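To make the contrast concrete, here is a tiny illustrative sketch (my own, not Cisco's actual API or schema): the old way is a pile of per-device commands, while intent is a single declarative statement that a controller is expected to translate into device configuration. The group names, ports and addresses below are invented.

```python
# The old way: imperative, device-by-device configuration (illustrative only).
legacy_config = [
    "switch1: vlan 110 name POS; interface Gi1/0/12; switchport access vlan 110",
    "fw1: permit tcp 10.1.110.0/24 host 10.2.20.5 eq 1433",
]

# The intent-based way: one declarative statement of business intent.
# A controller would be responsible for translating this into VLANs,
# routes and ACLs on every device involved.
intent = {
    "source_group": "point-of-sale-terminals",   # hypothetical group names
    "destination_group": "payment-database",
    "action": "allow",
    "service": "tcp/1433",
}
```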

This is a construct, not a product, but unlike DNA-2016, there is a strong technical basis for this idea.

Context

Intent does not do you any good unless you have context in your network. We need to understand who is where, and what they are, before we can set our intent against that object.

It is a bit of a chicken-and-egg problem: how do we secure, route and prioritize our network if we do not know what the traffic is, who it belongs to and what they are trying to do? Today, context generally comes from things like IP addresses and subnets. In DNA-2017, this context comes from Cisco ISE.

The Network. Intuitive. Infographic


The latest info-graphic from Cisco really does provide a good overview of this new architecture.

The underlying technology for this new intuitive network is SD-Access – Software-Defined Access. Think of “ACI – Application Centric Infrastructure”, but user-centric: we make our decisions and policies and apply them to users, and where they are is unimportant.

SD-Access Building Blocks


I want to help build the SD-Access story for you, so you can understand how this technology comes together. Like last year’s DNA announcement, SD-Access is a reference architecture, but there are bespoke technologies around it.

Transport Layer – Network

At the very basic transport layer, SD-Access relies on a few switch options that are available today: it is supported on the Catalyst 9K, 3650, 3850, 4500E, 6500/6800 and the Nexus 7K. Wireless options are the 3800, 2800 and 1560 access points, and the 8540, 5520 and 3504 controllers.

The new one at this party is the Catalyst 9000, developed by the team at Cisco with the new DopplerD series CPU, with tons of power and support for ETA – Encrypted Traffic Analytics. Please see my future blog post on the Catalyst 9000 series.

These devices do all the transport and implementation of policy in the background of SD-Access, and move the bits around your network.


Understanding the Campus Fabric

The underlay network transports your traffic from place to place; this is what makes up your campus fabric. It gives you true virtual networking to the endpoints through encapsulation, not just through VLANs anymore. The idea is to segregate the forwarding plane from the services plane: why should our physical network dictate how traffic flows around our network, and how can we add capabilities without massive complexity?

[Video: TechWiseTV – A Deeper Look at Software-Defined Access]

If you want me to sit here and admit that this is as easy as the old VLANs and IP addresses in your network – it simply is not. However, the security, control and simplicity once it is implemented are worth it, as are the automation and contextual data you will receive.

The transport does not need to be complex: by delivering features through an overlay, the underlay network – the hardware – does not need to be complex either.

LISP – Locator/ID Separation Protocol – Layer 3

This decouples location from identity. Think of the old way for a moment: we knew the switch port and the IP address or subnet, and we had a weak idea of the context of a user – who and where they were. LISP takes the IP and the location and segregates them so that the two are not tied together anymore.

LISP is like DNS for packets: when a switch needs to forward packets from place to place, LISP tells the network device the locations and routes required, using a map server or resolver. This could be an IOS device or a virtual machine somewhere. LISP allows a device to live in any place on the network. Getting in and out of the LISP environment is done via a tunnel router, or “xTR”.

This is what provides mobility of devices around your network: even if a user moves to another building or another floor, the IP address of that user does not change – they just move from place to place, and the mapping system keeps track of where that user is.
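A minimal sketch of that mapping idea, assuming nothing about Cisco's actual implementation: endpoint identifiers (EIDs) are looked up to find routing locators (RLOCs), and when a device roams, only the mapping changes. The addresses below are invented for illustration.

```python
# Toy LISP-style mapping system: EID (who you are) -> RLOC (where you are now).
mapping_db = {
    "10.20.1.15": "192.0.2.1",   # hypothetical user EID behind the building-A xTR
    "10.20.1.42": "192.0.2.7",   # another user, currently behind building B
}

def resolve(eid):
    """What an ingress tunnel router would ask the map server/resolver for."""
    return mapping_db.get(eid)

# The user walks over to building B: their IP (the EID) never changes,
# only the locator in the mapping database is updated.
mapping_db["10.20.1.15"] = "192.0.2.7"
print(resolve("10.20.1.15"))   # -> 192.0.2.7
```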

VXLAN – Layer 2

Wait, why is VXLAN showing up in the access layer? Well, LISP is really a layer 3 technology; it ensures that packets can route. But what if we have users across multiple layer 3 areas that need layer 2 connectivity? What about multicast and broadcast traffic?

VXLAN provides the transport of our layer 2 traffic across our campus fabric.
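To show what that actually looks like on the wire, here is a rough Scapy sketch of VXLAN encapsulation: the original layer 2 frame rides inside IP/UDP (port 4789) between fabric nodes, with a VNI identifying the virtual network. The addresses and VNI are made up, and this is the encapsulation concept only, not SD-Access configuration.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original (inner) frame: two hosts that believe they share a layer 2 segment.
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
        IP(src="10.20.1.15", dst="10.20.1.42")

# The outer headers added by the fabric edge: plain IP/UDP between the two
# tunnel endpoints, with a VXLAN header carrying the virtual network ID (VNI).
outer = IP(src="192.0.2.1", dst="192.0.2.7") / \
        UDP(sport=49152, dport=4789) / \
        VXLAN(vni=5001)

packet = outer / inner
packet.show()   # dump the full encapsulated packet structure
```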

Transporting Policy with Cisco TrustSec

We can now add contextual information into the VXLAN headers through “SGTs”, or Scalable Group Tags. We use TrustSec so that we can apply policies against objects based not on their IP, but on their identity. Instead of using the IP address, we use the SGT to tell the rest of the network who owns this packet so we can make security decisions. The SGT is applied by ISE, and access lists and rules are then applied against security groups; users are placed in those groups within ISE.
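Conceptually, the policy then becomes a small matrix keyed on tags rather than addresses. A toy sketch (tag numbers and group names are invented, not real ISE output):

```python
# Toy SGT policy matrix: decisions are keyed on (source tag, destination tag),
# not on IP addresses, so the policy follows the user wherever they roam.
sgt_policy = {
    (10, 20): "permit",   # Employees (SGT 10) -> Intranet-Servers (SGT 20)
    (30, 20): "deny",     # Guests (SGT 30)    -> Intranet-Servers (SGT 20)
    (10, 40): "permit",   # Employees (SGT 10) -> Printers (SGT 40)
}

def enforce(src_sgt, dst_sgt):
    """Anything without an explicit entry falls back to deny."""
    return sgt_policy.get((src_sgt, dst_sgt), "deny")

print(enforce(30, 40))   # guest to printer -> "deny" (no rule, so the default applies)
```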

Identity Layer – Context

This is where the context comes in. ISE – Identity Services Engine – is used to create network identity for objects, users and systems. I know what some of you are thinking: “Oh no – ISE”. Have you taken a look at ISE 2.1+? They have vastly improved the experience. There is no question that adding ISE will complicate your life, but it is the contextual engine that provides the data you need to secure your network. There is no avoiding ISE anymore; you will need to have it in your life, and in your network.


There are benefits here. Once ISE is implemented, all of your network devices start to see things like user activity, firewalls show user names rather than systems, you can start deploying policy against groups of objects, and network authentication becomes very easy. Your wireless network becomes easier to manage from a security perspective.

Interface Layer – Intent

This is the real veggies. DNA Centre is the new package for the APIC-EM platform. This is Cisco’s single-pane-of-glass attempt at a UI front end for your network, and the intent is a single pane of glass for your ENTIRE network.


This is where your contextual groups from ISE, like users and servers, meet the policy you want to create. There is no denying the interface is a little “Meraki”-like; clearly they borrowed some design concepts. All of the complex components of SD-Access meet here in DNA Centre and are then pushed out to the rest of your network. The automation from DNA Centre will handle everything for you, from dealing with ISE to programming those Catalyst switches. This is the automation layer: set the intent you want, and automation will turn it into action down on your hardware layer. Worried about all this VXLAN and LISP stuff? No worries, DNA Centre will help you here.

[Video: Cisco SD-Access – Campus Fabric with DNA Center Automation & Assurance]

NDP – Network Data Platform

There is no shortage of data about our networks; we have NetFlow, Syslog and any number of tools to deliver data. In the coming months, as we get a better look into the new Network Data Platform, we will learn how it will help correlate network data and provide analytics. This is where the proverbial “lead into gold” promise is supposed to deliver. For me this is a wait-and-see item; right now there just isn’t enough information out there, so for now that is all I have to say. This is still very early.

 

More to come in future posts about Catalyst 9000 and DNA Centre, NDP and ETA.

 

 

With content courtesy of Cisco Systems

 

ThousandEyes – Mean Time to Innocence in minutes

Back in August at Networking Field Day 12, we had a very interesting presentation from ThousandEyes. My takeaway was excitement about what I saw, but I was cautiously optimistic – I have been told about groundbreaking new tools before – so I held off until I could actually go back and put this into a real production environment. Is this “yet another tool”, or can it deliver real value?

We have all been there; let me know if you have heard these complaints before:

  1. The Network Is Down
  2. The Internet Is Slow
  3. <insert name of cloud product> is horrible
  4. Our Internet Performance to Site XX from YY is Poor – Fix it now!

The Network Is Down / Slow

This statement is universal – what people REALLY mean is that their perception of the network is poor. This could be caused by just about anything: poor server performance, poor application performance, saturated storage arrays, and yes, it could also be the network. The problem is, why is “the network” the default gateway for blame? We cannot fix that mindset.


Cloud Complications

“The Cloud” complicates things for network teams. Previously, the internet was where you went to get things that did not belong to you. Now, with services like Microsoft Office 365, Salesforce, Azure and AWS, what was previously external is used as an internal application, and while the application team is happy to offload its applications to a cloud provider or hosted application, it means that network staff are being forced to accept and support a series of routers, switches and infrastructure they have no control over.

Traceroute, Ping and Monitoring

Over the years we have all learned to use all sorts of tools to troubleshoot our networks. Traceroute came along in 1988; it sends probes with increasing TTL values and uses the ICMP replies that come back to report the response time to each hop along the way. 1988 is a long time ago.
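The mechanism is simple enough to sketch in a few lines of Scapy (run with root privileges; this is a bare-bones illustration of how traceroute-style probing works, not a replacement for the real tool):

```python
from scapy.all import IP, ICMP, sr1

def simple_traceroute(dst, max_hops=20):
    """Send probes with increasing TTL; each hop where the TTL expires
    answers with an ICMP Time Exceeded, revealing its address."""
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=dst, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")                          # hop did not answer
        elif reply[ICMP].type == 11:                        # ICMP Time Exceeded
            print(f"{ttl:2d}  {reply.src}")
        else:                                               # echo reply: destination reached
            print(f"{ttl:2d}  {reply.src}  (destination)")
            break

simple_traceroute("8.8.8.8")
```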

The major problem with these types of tools is that they are probes; some use ICMP, TCP and even UDP to probe and test the network. Many of these tools do not detect things like tunnels and load balancers, and while your probe might show one route, traffic using a different protocol might take a different route. They were designed for a different time, and yet most legacy tools still rely on them.

The bottom line is that these tools do not provide you with complete data, and without complete data you are making poor decisions. They commonly deliver inaccurate or incomplete path information.

ThousandEyes – Smarter Network Data

“Legacy products are built for controlled environments, ThousandEyes is designed for chaos” – Mohit Lad – CEO/Co-Founder – ThousandEyes

By deploying sensors all over the internet in a SaaS model, ThousandEyes brings together sensor data from all over the world. When you combine that with agents within your own infrastructure, you start to get a unique view and insight into performance and availability.

The Enterprise Agent provides you with an internal vantage point: performance of your ISP, your WAN and your application traffic. Stop me if you have heard this before, but here is the difference: combine that with the many data centre and cloud agents that ThousandEyes deploys all over the globe, and you can now monitor and troubleshoot global availability and performance.

Deployment of Enterprise Agents is very easy: they can run on any virtual platform, inside a Docker container, on bare-metal Linux, or even within a Cisco IOS virtual container. We deployed these in a matter of minutes and had data coming in within less than an hour.

[Screenshot: end-to-end metrics for a web site]

Above is a typical end-to-end metric for a web site – stay with me – lots of tools provide this; nothing groundbreaking here. The problem is that we don’t know WHY things are slow. Next step? Blame the network. Not so fast.

Visualize Your Path

[Screenshot: path visualization]

This is your typical path visualization – but the tool built it from the path of the probes, not from a network map you provided. Realize that the hops in the middle are not hops that you own or control, yet using proprietary technology, ThousandEyes can probe, test and analyze the health, wellness and performance of those hops.

Catch your ISP

[Screenshot: ThousandEyes outage detection]

The above is a very typical scenario: people are calling, some users complain that things are slow, but others state that things are just fine. Take the far-end node starting in 157. If you look, Chicago users would be fine, and St. Louis users would be fine some of the time, depending on the path their traffic takes.

No question this scenario would end up on the desk of the network engineer, and without a tool like this it would be near impossible to pinpoint the problem, because the problem is intermittent. In this case I can call my ISP and not only tell them the problem – I can even allow them temporary access to ThousandEyes if they want to see the data I have on their network.

Mean time to innocence?   Minutes.    Gone are the days where you check your infrastructure over and over again while the ISP feigns innocence.

Reverse Path?  YES! – Still innocent!


Even the reverse path can be troubleshot. In this case, due to issues beyond the network engineer’s control and within the external network, voice traffic is returning via multiple routes, and not all of them are healthy. Normally this might result in chasing after your SIP provider, troubleshooting gateways, and looking at your internal LAN. Instead, open a ticket with the ISP and go grab a coffee, Mr. Network Engineer – this one isn’t you either.

Path Troubleshooting On Steroids

It is almost impossible for me to show you how great this is with still screenshots, because you can zoom in and out, view individual nodes and look at peering – so here I have queued up the demo of the amazing path view capability, courtesy of our team at Networking Field Day 12, and a demo from Nick Kephart, Sr. Director of Product Marketing.

BGP Path Visualization and Internet Outage Detection

Again, no words for this – you have to see it. In this clip we show BGP Path Visualization: the ability to see the path from one site to another and figure out exactly what is going on across that path, including troubleshooting BGP routing.

In this case, Internet Outage Detection is used to detect packet loss that happened in the past and troubleshoot it now. Have you ever had someone say, “yeah, it was horrible yesterday but it is fine now”? Wouldn’t it be great to be able to actually see what happened? Was it you? Was it someone else?

 

ThousandEyes Launches Endpoint Agent

This is big – in fact it is so big that one of our delegates literally jumped up in the middle of the presentation and screamed “TAKE MY MONEY”; he was that excited about how much this was going to help him. The room was honestly full of people with their jaws dropped.

There are other vendors creating “endpoint” agents, but here is the difference: this one goes to the network. It does not start off by blaming the network; it continues the “Mean Time to Innocence” theme and helps to prove exactly where the problem is. The same level of detail we get with the entire ThousandEyes suite is extended all the way to the endpoint, and I can drill down ALL THE WAY to BGP if I want, from the application to the network layer, even WiFi – and not only that, retrospectively.

Remember my comments about Cloud?     It gets worse!

  1. Users working from home
  2. Users working from a hotel
  3. Public WiFi
  4. Corporate WiFi
  5. VPN

How are we supposed to find the true answer? Worse than that, normally the complaint is something like this:

“So yesterday I was working in SalesForce and all of a sudden things were slow, I cannot get work done” – “Oh where were you?”  – “I was using public WiFi at an airport in Anchorage” —  the typical response is “I cannot troubleshoot that”

You can! With Endpoint Agent, we can look at end users at home, at work and on public WiFi, and compare them. Is it really the application? Is it a large internet outage? Perhaps a few users just happen to be on poor wireless; what might look like something could be a coincidence – but without data, how do you know?

So how do we get it onto workstations? You can push it out; it is lightweight and works as a browser plugin and a system service, the updates are all automatic, CPU consumption is less than 1%, and less than 40 MB of memory is ever used. The license is NOT NODE-LOCKED, so you can move your licenses around. Let’s say you have 1,000 users but you don’t want to monitor them all: you could monitor a select 100, or perhaps you have a few users who complain all the time and you push it only to them. Helpdesk agents could keep a few licenses on hand and deploy them as necessary.

Endpoint Agent – DEMO

Before we get into the demo, we start with this: a client computer on WiFi, with internet access – oh look, a proxy – and the internet site on the far end.

[Screenshot: Endpoint Agent user session through a proxy]

The bottom line: I cannot explain this in a blog post; the only thing I can tell you is that you have to see it to believe it. In this demo they show a real scenario of what it looks like to troubleshoot real issues. The key phrase here is DRILL DOWN!

 

 

Innovations in Micro Segmentation

Thoughts on Segmentation and SDN

The entire point of micro-segmentation is to segregate individual network applications and provide them with separation from each other and from the rest of the network.

In the olden days, we had firewalls – OK, we still have those – and many customers had outside/inside/DMZ designs. Sadly, there are still organizations that run outside/inside firewalls, map outside IPs to inside IPs using NAT, and think they have a firewall.

As things got better, people started realizing we need to protect the inside of the network from a box that might get attacked, so we put those boxes in DMZs (it drives me nuts how “DMZ” is misused; it is really just poor education).

BYOD, laptops, and users who do not know any better result in nastiness being brought into your network via the “walk net”, or users managing to download some kind of malware or virus. The bottom line is that the biggest security threat on your network is probably on the inside.

There are many different security standards imposed on different industries: PCI for payment cards, NERC CIP for electrical utilities, NIST and a barrage of ISO standards. These standards know something many do not – like I said, the biggest security threat is on the inside.

So we need to start protecting the network from itself. Many clients started putting firewalls and IDS between users and servers, and that was difficult and expensive. A router that routes at line rate at layer 3 is significantly less costly than a firewall at the same performance.

What about protecting servers from servers?

SDN, micro-segmentation, ACI, VXLAN, NSX, OpenFlow – all different terms, some vendor-specific, but all talking about the same basic concept: software-defining the network, giving us more granular control of packet flows from device to device, or object to object, in our environment.

Micro-Segmentation – The Simple Explanation

There is a very easy way to understand micro-segmentation. Your network started as “allow all packets” and is now “deny all packets” – that is it, nothing more complicated.

“Wait, doesn’t that mean I need rules for EVERYTHING now?” – Yes, you do.

“That’s a lot of work!” – Yes it is, but once you have done it, you are good.
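To put the flip into concrete terms, here is a toy allow-list evaluator (group names and ports are invented): everything is denied unless a rule explicitly permits it, which is exactly why you suddenly need a rule for every legitimate flow.

```python
# Default-deny in miniature: only flows with an explicit rule are permitted.
allowed_flows = {
    ("web-tier", "app-tier", 8443),    # hypothetical application tiers
    ("app-tier", "db-tier", 5432),
    ("admin-jump", "app-tier", 22),
}

def is_allowed(src_group, dst_group, dst_port):
    return (src_group, dst_group, dst_port) in allowed_flows

print(is_allowed("web-tier", "db-tier", 5432))   # False: web may not reach the DB directly
```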

Why am I writing this?  Well there are some new ideas…   Read on.

I am no expert.

First, I am no expert on this topic, so I am simply writing about what I have learned so far; this really is an emerging market. The other important point I want to make is that I am not writing about every possible option. There are tons of dev-heavy SDN and/or micro-segmentation options out there, and I am no developer. OpenStack-type concepts really scare me, and they scare a lot of professionals (many are afraid to admit it).

This is my opinion after years in telecommunications and information technology, feel free not to agree with me and sound off in the comments.

Do I need this?

I don’t know – do you? Really, ask yourself. I feel ACI/SDN/NSX/NVGRE – pick your term – is a solution for a problem not many clients actually face today. In the service provider market this is a big deal for customer segregation, network automation and orchestration, but I don’t think even large enterprises will run out and deploy these solutions any time soon. Why spend $1 million on something that costs me $50K a year to do by hand? On the other hand, if you are in a regulated environment, this might solve a lot of security problems for you, or perhaps you want a network with the highest levels of security. Either way, some of these more mainstream solutions are big and expensive to deploy and will not be done quickly.

The use cases for SDN type technologies in my opinion are still evolving at this time.   I know one thing, the barrier to entry is cost, time and complexity.   Even if you wanted to deploy micro-segmentation to only a single app – it has traditionally been very expensive to do – until now.

The Need For a Gateway

Most, if not all, SDN or micro-segmentation systems use some kind of encapsulation: VXLAN for VMware NSX and Cisco ACI, NVGRE for Hyper-V.

The problem is that once we want to leave our virtualized/SDN/micro-segmented network, we need to speak standard protocols to client devices, routers and other devices that are not within the scope of our micro-segmented system.

Some solutions have the de-encapsulation features built right into the fabric (Cisco), and some, like Illumio, are a totally different story because they do not use encapsulation at all. For some, like NSX and NVGRE, this means some kind of gateway, and that gateway can be a single point of failure depending on your design. Some of these gateways are hardware, and some are software.

Cisco ACI

Having watched for some time, Cisco has realized one thing: SDN is a bit of a mess. It is a little like me handing you a box full of mechanic’s tools and asking you to build a car with no automotive knowledge.

The solution from Cisco is ACI – Application Centric Infrastructure – which is a fancy name (in this writer’s opinion) for “managed SDN”: you program business intent, and it tells the network how to achieve it. The basis of Cisco’s new DNA architecture is the same idea, “program intent” instead of the traditional “program behavior”. The mechanism is called contracts – basically, “I have a contract that says I can speak to you in a certain way”. No contract, no talkie.
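As a rough mental model (my own toy sketch, not the actual APIC object model), a contract is something one endpoint group provides and another consumes; traffic is allowed only where the two meet. The EPG names and ports below are invented.

```python
# Toy model of the contract idea: provide + consume a contract, or no communication.
contracts = {"web-to-app": {"ports": {8443}}}

provides = {"app-epg": {"web-to-app"}}   # the app tier offers this contract
consumes = {"web-epg": {"web-to-app"}}   # the web tier is allowed to use it

def can_talk(src_epg, dst_epg, port):
    """No shared contract covering the port means no communication."""
    shared = consumes.get(src_epg, set()) & provides.get(dst_epg, set())
    return any(port in contracts[name]["ports"] for name in shared)

print(can_talk("web-epg", "app-epg", 8443))   # True
print(can_talk("web-epg", "app-epg", 22))     # False: no contract, no talkie
```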

ACI uses a segregated control plane inside a cluster of boxes called the APIC – the Application Policy Infrastructure Controller – for the command and control of ACI, but it isn’t in the data path. You can actually shut down the APIC and the network will still mostly function.

The Cisco ACI solution is, in my OPINION, the best way to do it for big data centres that are greenfielding it: virtualize the network using network hardware, at the network, in silicon, to ensure performance. It also does not rely on any kind of gateway to talk to the rest of the non-ACI world – that capability is inherent in the system, eliminating this nasty single point of failure.

The downfall is that you have to have all Nexus 9000 series switches to run Cisco ACI, you must move to a spine-leaf architecture, and it is not exactly a plug-and-play solution. Brownfield deployment of ACI is no small task, and can only reasonably be accomplished by installing ACI and then gouging massive security holes into it to keep applications working while you slowly lock it down (kind of the inverse of the point). Not to mention the investment is huge. If you have multiple data centres, your problems just got even more complex.

VMware NSX and Hyper-V NVGRE

These systems rely heavily on software-based platforms to make them function, and while they do integrate directly with their hypervisor platforms, they rely on software (or certain hardware vendors) to handle the movement of data between the virtualized network and the rest of the world.

The NSX world calls this an “NSX Edge”; in Hyper-V it is called a network virtualization gateway. Either way, if you lose this device, you are in trouble.

There are many management VMs associated with both NSX and Hyper-V, and losing some of them will cause massive network problems.

I am not a big fan of this way of doing things – it brings a lot of complexity, so you had better be getting some kind of offsetting benefit for all this hard work and, in my opinion, risk. Let the network handle the network.

Illumio – A Different Approach

At the recent Networking Field Day 12 event, we had a great talk from Illumio, and these guys are really thinking differently. Every operating system has really good security capabilities baked right in, so instead of reinventing the network wheel, why not orchestrate the tools we already have?

The architecture is called the Adaptive Security Platform and is made up of two components.

VEN – Virtual Enforcement Node – software running on each host or virtual machine. It understands all communications on that host and is used to build the application dependency map. It also completes tasks on the platform itself, tracks data about who is talking to whom, and enforces the policies on that specific host/VM. Think of this as your data plane.


PCE – Policy Compute Engine – an on-premises or cloud box that takes all of the information from the individual nodes to create the relationship graphs, and then pushes policies down to the VENs. This is the control plane of Illumio.

There are three things Illumio does…

Illumination

Understanding the relationships between applications, hosts and other applications is something no IT department knows – OK, that is a rash generalization – 99.999% of them. Every time I get into a micro-segmentation or even VLAN segregation discussion where firewalls are involved, it goes: “OK, tell us all your flows so I can program the new firewall” – yeah right, forget it. Even Cisco’s ACI platform cannot really help you with that. One of the great features of Illumio is the ability to see what is talking to what. This feature alone could be used for many different purposes, including helping you map out application dependencies or graphs for the deployment of traditional SDN or ACI solutions. That, in turn, helps you build the policies for your network.

[Screenshot: Illumio ASP Illumination]

They do this by generating a communications graph.
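The idea is easy to picture: collect flow records from every host and fold them into a graph of who talks to whom, on which port. A minimal sketch with invented flow data (not Illumio's actual format):

```python
from collections import defaultdict

# Hypothetical flow records reported by agents: (source host, destination host, port).
flows = [
    ("web01", "app01", 8443),
    ("web02", "app01", 8443),
    ("app01", "db01", 5432),
]

# Fold the flows into an adjacency map - the raw material for a dependency graph.
graph = defaultdict(set)
for src, dst, port in flows:
    graph[src].add((dst, port))

for host, peers in sorted(graph.items()):
    print(host, "->", sorted(peers))
# app01 -> [('db01', 5432)]
# web01 -> [('app01', 8443)]
# web02 -> [('app01', 8443)]
```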

 

Enforcement

Enforce policy on all devices by orchestrating the enforcement mechanisms already built into each operating system.    Build policy to match business intent.

“Segmentation in our vernacular is the enforcement of policy at the host; it is not a network construct,” says Matthew Glenn of Illumio.

Install the PCE, and start building policy based on your Illumination data. Once that policy is created, the PCE will extrapolate the required rule sets for each host and start pushing them out. You can diff your changes and roll back quickly if you run into problems.

Tamper detection will alert you if someone tries to shut down the VEN or tries to modify the iptables or filtering component. The system operates in a double-trust model, with rules for traffic both in AND out. Even if a single application is attacked or the VEN is disabled, that only allows that box to get out; other boxes still wouldn’t accept its traffic, and with the shun feature the system can even recognize there is a problem with that host and push out rules to block the box that was shunned.

Roadmapped features include alerting on rules that are no longer being used, to help keep your policies from getting bloated with old entries.

Deploying additional hosts of an application is simple: build the new host and push out the pre-made policy for that application. You can even test and monitor policies before deployment to make sure you do not cause any adverse problems.

 

SecureConnect

Encrypt data between workloads easily with a single click. You simply tell Illumio which workloads you want to encrypt data between, and it handles all the hard work of dealing with the encryption and authentication issues. I won’t spend a ton of time on this – it is a cool feature, but for me it is just a “nice to have”. It basically eliminates the manual effort of setting up these IPsec connections.

Illumio Extends Beyond the Physical Data Centre

The other use cases for Illumio are enormous. It can simply do things that normal SDN/ACI/NSX solutions cannot do:

  • Secure services in branch locations
  • Protect devices across physical locations
  • Get a holistic view of apps and their security and relationship regardless of geography, and then create policies to protect them
  • Deploy into cloud services like Amazon AWS, Microsoft Azure or Long View OnDemand

Go Brownfield

This doesn’t involve changing any networking – in fact, it doesn’t even need the network team; an application or server team could deploy Illumio without even talking to the network department. Obviously that is not something I would recommend, but it does mean you could use it for single-application needs too. Perhaps you have a single application or environment that needs heavy security but you do not want to move it to a new VLAN, or you need security between boxes across a WAN. Perhaps you are deploying a new application and you want to micro-segment it on the network now, without having to go through the trouble of deploying micro-segmentation to the entire network. Or new regulatory requirements force you to beef up security in a short amount of time with additional box-to-box encryption or policy support.

My Final Thoughts

These guys at Illumio are thinking differently. I really get the idea that everyone else in the SDN world seems to think people will just throw down their infrastructures and rebuild in this fancy new SDN/ACI/NSX – whatever – world, like we all have time for that. I know many organizations that may never have the time, energy or money to do that, but they all need security. I like companies that think differently and push the edge, and if Illumio can get their message to the right customers, this may actually help provide a wide range of customers with great security without tearing out everything they have. Fanboy? Yeah, I think so. What about running this down to the desktop, even? They apparently have done that. What is next for Illumio? Not sure, but I would keep an eye on them.

 

Networking Field Day 12 – Announced


I am pleased to have been selected as a delegate for Networking Field Day 12. For those who are not familiar with the team at www.techfieldday.com and their amazing online content: Steve, Tom and the team work very hard to bring you top-notch technical content.

Think Cisco Live-type presentations in a significantly smaller environment. The best part is that you get to watch online and submit Q&A. So make sure you book some time August 10-12 for presentations from all of the vendors listed below.

[Image: Networking Field Day 12 vendor list]

I will post a link to the live feed the day we go live at Network Field Day 12!

In the meantime – here is the schedule for the event.

[Image: Networking Field Day 12 schedule]

More about Tech Field Day…

 

Veeam Launches Cloud Connect

As a delegate for Tech Field Day Xtra at Cisco Live this year, I was pleased to sit in on a presentation from Veeam about their new Cloud Connect product.

Previously available only to large enterprises, rapid DR response times, DR data centre space and IP mobility were things that smaller organizations could only dream of. Veeam is responding to that need.


First, a reminder of the rule:

3 – Copies

2 – Different Media

1 –  Off Site

 

We have a few challenges to getting this data “Off-Site”.  Many are still using tape,  but more and more people want to get this data off-site automatically, and more often.

Many organizations are trying to reduce RTO – Recovery Time Objective.     How fast can we get back online after a serious problem?

Here is a quick intro into Veeam Cloud Connect by Clint Wyckoff @clintwyckoff –

 

The RTO Challenges

“15 minutes” is a common theme these days. With current technology this is pretty easy to do – on site. Once we decide that, for whatever reason, we want to recover off site, we have a few challenges:

  1.    Backup Copies – that data has to be off site, we have to get it there
  2.    Data Availability – That data has to be AVAILABLE.   No tapes stored in a vault or a box, and nothing that we have to “restore” in order to bring it online
  3.    Connectivity

I want to discuss a few options we have for #3….

Assuming you have data centre space – either yours or rented – there are two common options:

1) Over the WAN – Different IP – This has all sorts of challenges: application issues, hostname resolution, firewall considerations, and NAT if the service is published. There are some tools out there that help you with this, but it has always been a bit of a dog’s breakfast.

2) Over the WAN – Same IP – This gets complicated fast; your choices are to move the entire subnet, use a protocol like OTV (expensive on the hardware side), or some other method.

Option 1 is what we have been doing for years,  various tools have tried to make it easier (Think DoubleTake) but it was very hard to get working, and you need infrastructure – real infrastructure on the far end.

Option 2 is expensive and complex – not something many customers want to invest time, money and resources in.

 

 

Veeam NEA


Without any “geekery”, without OTV or VPN links, Cloud Connect with the NEA – Network Extension Appliance – allows your virtual machines to power back up at the DR data centre with zero effort by the customer. The IP does not change: the application comes up, and the Network Extension Appliance simply transports the traffic destined to and from that VM back to the customer site. It operates as a proxy ARP on site for the IP and MAC of the server.
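Proxy ARP itself is a simple trick, and a rough Scapy sketch shows the mechanism; this is purely an illustration of proxy ARP, not how Veeam's appliance is implemented, and the addresses and MAC are invented. The appliance answers ARP requests for the failed-over VM's IP with its own MAC, so hosts on the original LAN keep sending locally and the appliance tunnels the traffic onward.

```python
from scapy.all import ARP, Ether, sniff, sendp

PROTECTED_IP = "10.1.1.50"            # hypothetical IP of the VM now running at the DR site
APPLIANCE_MAC = "02:00:00:aa:bb:cc"   # hypothetical MAC of the on-site appliance interface

def answer_arp(pkt):
    # Reply to "who-has 10.1.1.50" with the appliance's MAC so that local
    # hosts ARP-resolve the moved VM to the appliance instead of timing out.
    if ARP in pkt and pkt[ARP].op == 1 and pkt[ARP].pdst == PROTECTED_IP:
        reply = Ether(dst=pkt[Ether].src, src=APPLIANCE_MAC) / ARP(
            op=2, hwsrc=APPLIANCE_MAC, psrc=PROTECTED_IP,
            hwdst=pkt[ARP].hwsrc, pdst=pkt[ARP].psrc)
        sendp(reply, verbose=0)

sniff(filter="arp", prn=answer_arp)   # requires root; runs until interrupted
```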

The reverse replication can happen, and then when ready you can fail back.

This brings the benefits of very large-scale, enterprise-level availability to SMB-sized customers, with a personal level of control.

You don’t need any special network gear, storage or servers.   You don’t even need to own data centre space.    You purchase resources from a Veeam Cloud Connect provider, and your service is up and running in shared infrastructure.

Reduced Operating Costs

This means reduced operating costs: you are not paying for dedicated DR infrastructure at your provider, your machines are not running and consuming resources, and the product is designed for “pay as you grow”, so you can start small and grow without significant capital outlay.

Wrap

This is a great idea. The complexities of the network connectivity alone associated with the traditional method make many shy away, and when you add in the Veeam backup product, which is already well respected in the industry and now provides off-site recovery with the click of a mouse, in my opinion Veeam has a winner here.

DEMO

Watch below as Veeam provides a great demo of the product while the Tech Field Day team asks the hard questions.