Cisco has recently released a new video on the anatomy of a hack. Most people think hackers are script kiddies sitting in their basement (or their grandmother’s basement), wearing a hoodie, writing “scripts” and damaging infrastructure.
There is no question the script kiddies are out there – but organized crime and foreign governments have become the real bad actors.
This video is actually pretty realistic, and many of my friends and colleagues have gone through these exact scenarios at work, or at their clients.
While I was critical of some of Cisco’s marketing videos in previous blog entries – this one is bang on.
I have updated info on how to actually FIX this yourself at this link:
https://cantechit.com/2017/04/22/ford-still-not-delivering-carplay-to-2016-vehicles/
When Ford announced CarPlay in the 2016 F-150 – it wasn’t quite ready for prime time, but they sure spent a lot of time talking about it and marketing the idea to get customers to purchase trucks from them. For the record – I am one of those people, so this might come off a little ranty.
Demos hit YouTube as early as January 2016 showing how Sync 3 with Apple CarPlay would work.
The statement from Ford was clear – announced on the show floor at CES in January 2016, where Ford even demonstrated Apple CarPlay and Android Auto.
In Ford’s own marketing document they state “future, over-the-air updates via Wi-Fi will help ensure it keeps up with the latest technology” – but to date there has been only a single update, from 1.0 to 1.01 – and that required a dealer visit. The 2.0 update, which is purported to provide Apple CarPlay and Android Auto support, will again require a dealer visit, we are told. More broken promises.
“In North America, Ford is making Apple CarPlay and Android Auto available on all 2017 vehicles equipped with SYNC 3, starting with the all-new Ford Escape. Owners of 2016 vehicles equipped with SYNC 3 will have an opportunity to upgrade later in the year” – A statement that was actually revised.
Sales people – including the ones that sold me my Ford F-150 – promised the upgrade by “end of summer.” Ford is now shipping 2016 F-150s (same model year as mine) with Sync 3 version 2.0 – with Apple CarPlay and Android Auto enabled. So they seem more interested in delivering new cars to new customers – and forgetting about the promises they have already made to existing customers. All of the text related to “end of summer” has now been changed to “end of year.”
Sources also tell us that a hardware upgrade costing $50-$300 (unsure of the split between the part and labor) will also be required. It seems the USB hub unit in the vehicle doesn’t work with CarPlay and will require replacement (Android Auto users are fine) – a cost that sales people did not disclose to buyers.
Social media marketing teams from Ford operate accounts on various forums – including f150forum.com, under the name “FordIVTeam” (Ford In-Vehicle Technology Team) – but have since clammed up on the issue, with many owners mad they were sold promises that Ford did not deliver. That social media account continues to point people to a website that basically teases consumers about all the features they paid for – but are not getting.
It would seem Ford used the flashy lasso of cool technology to rope in a lot of buyers – unfortunately the failure to deliver may result in unhappy consumers – but I guess they already got our money.
For those who think “Who the hell is VXi” – no kidding. They are not very well known, and two years ago at #CLUS I ran into their tiny little booth. I was actually a big fan of one of their products, so I was disappointed to see such a small booth – this company really needed some marketing muscle!
First, these guys make the best headset I have ever owned. This little baby is the BlueParrott (by VXi) B250-XT – a Bluetooth headset with insane battery life and the best darn noise cancelling on the market today – PERIOD.
I am not kidding when I say high performance – I drive in a Jeep TJ, no top and no doors – and I can actually take phone calls. How about a modified Subaru STi with a loud exhaust – no problem, have a business call, everyone thinks I am in the office.
The only complaint? You look like an air traffic controller wearing it – but I will be honest, for the performance – bring it on!
Everyone I tell about this wants one – everyone loves them. The biggest problem – no marketing!
Last year they launched the revamped B350 version of the product, but now big news from the VXi camp.
VXi Acquired By Jabra / GN
Jabra has acquired VXi Corporation, inclusive of the VXi and BlueParrott brands. The idea is that they will share channels and gain portfolio. Personally, I think this has got to be about VXi’s IP – because no headset works like the BlueParrott – nothing. From the news release: “It also gives GN Audio the opportunity to leverage VXi’s best-in-class expertise within “high noise” communication environments”
This is also about marketing and channel space. Quote: “We are delighted to have reached an agreement with VXi. The acquisition further strengthens our position on the North American market, where we have shown strong progress in recent years. We will build on VXi’s strong presence and reputation in the US and combine it with the international reach and professionalism of GN Audio and the Jabra team,” said Paul Hamnett, President for GN Audio in North America.
This is great news – someone like VXi with great products needs the power and marketing arm of someone like GN/Jabra. My only hope is that what made VXi great – is not lost at GN.
VXi Launches B450 Flagship Bluetooth Headset
A few new features on this next-gen B450 BlueParrott headset. First, the charging cord: the old B250 had a barrel connector, which was a pain – because I had to use THAT charge cord. They have changed it to Micro-USB, which means all my existing charge points and commodity charge cables can be used. There are more buttons, which can also be programmed for functions I want. The close-mic noise suppression design is still there. They have added VoiceControl to the headset itself – I have this feature on one of my other headsets and never use it. I will be honest, I just use Siri on the iPhone, or the speech recognition on Android – I’m not sure this feature is really required, but as a check box against the competition – it is there.
The ear pad is WAY more comfortable than the B250. Yes, the unit is larger, but more comfortable – hey, you already looked like an ATC operator with the B250, nothing is changing, and now it is more comfortable. My only concern with the extra size is portability: before, I could kinda fold it up and fit it in my bag, and I am not sure this will be as portable.
As of this writing I have not had a chance to try the B450, and I have only had a chance to try wearing a B350 – right now no B450s exist here in Canada. I am trying to get my hands on one, and when I do, I will get you a side-by-side review right here on the blog.
Update: After posting, additional details and clarification were provided, and as a result edits were made to make things a little more correct.
This came out of the blue for me today – clearly I am off my game – but today Meraki is launching “Meraki Vision”.
Not a Traditional Offering
In the analog days we had coax cameras wired to capture cards; now IP cameras send H.264 and JPEG streams to NVRs, or Network Video Recorders. Even companies like QNAP and Synology offer NAS devices that will record from a myriad of cameras, both expensive and cheap.
Difficult Technology
There is no question existing technologies are difficult, and everyone has their own proprietary way of dealing with it. Even Ubiquiti, who were selling a very successful series of standards-based cameras, took heat when they installed proprietary software on them in order to force people to use their NVR platform – a few years later they reversed that decision.
Different codecs, different stream types, different camera features and a mixture of protocols have made this difficult to deploy. With today’s announcement Meraki has decided to disrupt this way of doing things – and eliminate the traditional NVR (storage) and VMS (management) platforms as they typically exist today.
The goal at Meraki is to expand beyond networking – first with the MC74 line I previously wrote about, and now into the security camera world. With solid state storage becoming cheaper and an existing extensible management platform, Meraki is able to provide a cloud managed security product line.
High End Specifications
Two cameras will be offered at launch: the MV21 indoor model at $1,299 (List USD), powered by 802.3af PoE, and the MV71 outdoor model at $1,499 (List USD), which requires 802.3at PoE+. The outdoor model will have a heated chassis.
Both will feature a 5-megapixel sensor and 720p HD recording, with a 3-10mm vari-focal lens for a flexible field of view – wide angle where appropriate, or zoomed in for long shots.
Cameras do support IR illumination up to 100 feet and have good low light performance.
Wall Mount, Pole Mount and Bracketed mounts will be available at launch
Cloud licenses, which include all hardware support, will run $300/year/camera, with terms of up to 10 years at significant discounts.
Video Wall, Motion Search, Granular User Access
No NVR Required – Dashboard!
Meraki’s MV line will not require any NVR on site, and no Video Management Software (VMS) either – all of that will live in the Meraki Dashboard. They have no interest in following the old way.
Each camera will have 128GB of on-board storage – in Meraki’s estimation, about 20 days of footage. To eliminate the need for centralized storage, the camera performs motion indexing and thumbnail storage on the camera itself. Metadata uses about 50Kbps of bandwidth; viewing bandwidth depends on how many cameras you are viewing and a few other factors. Cameras will allow local access (the question is, will it be a standardized stream you could send to another NVR or another monitoring app?).
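Those numbers pass a quick sanity check. Here is a back-of-napkin calculation (my arithmetic, not Meraki’s) of the average recording rate that 128GB over roughly 20 days implies:

```python
# Sanity-check the on-board storage claim: 128 GB lasting ~20 days
# implies an average recording rate of roughly 0.6 Mbps, which is
# plausible for 720p video, especially with motion-based retention.
storage_bits = 128e9 * 8           # 128 GB expressed in bits
seconds = 20 * 24 * 3600           # 20 days in seconds
avg_rate_mbps = storage_bits / seconds / 1e6
print(f"Average recording rate: {avg_rate_mbps:.2f} Mbps")  # ~0.59 Mbps
```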
You can create different layouts right on the Meraki Dashboard and provide multi user access to give individual users access to only their cameras.
One of the best features of the Dashboard addresses what was honestly my first concern – streaming all these cameras to the cloud. The cameras do their own storage, and for live view the dashboard figures out if you are local to the camera; if you are, the streams are delivered directly from the camera to your workstation. If you are not local, the streams are proxied via the dashboard.
Individual cameras have motion search capability built in.
Justin’s Take
This would appear to be a very complete offering for a first launch. As someone who has actually built camera systems in the past, it is missing only a single thing – a PTZ camera offering. We need cameras that can do patrols, cameras that auto-zoom, and long lenses for outdoor surveillance. The platform is a very good start, and I do hope that Meraki has even more offerings coming down the pipe for this line.
The few customers I have spoken to regarding this today all said they want the ability to record the video somewhere else. If someone smashes the camera, you would normally get the video of the smash – and then black. In this case you get nothing, since all content lives on the camera.
This isn’t the end of it, either. If we are doing security cameras and phones now – I am willing to bet card access, building security, and other IoT plans are in the works over there. It would make the most sense to have a single platform to manage all of these things. How about a Meraki NAS with cloud backup? Desktop Meraki backup services? The ideas for things cloud managed are endless.
The question is – how big are they going to get? How far will Meraki take this?
Some time ago I was talking about Meraki maybe being re-branded – I could see it already: “Cisco Cloud Networking” or “Prime Networking”. It wasn’t something I was looking forward to – I would rather Meraki be left to their own devices (pun intended). This little green skunk works in California is quickly turning into the one-stop networking shop.
I want to get my hands on one of these things as quickly as possible – when I do, I will bring it to you live.
iOS 10 will be available on September 13th – which means that on that day your network is going to get hammered. 100 employees, even on a 1Gbps internet link, could wreak havoc on your network when they all start downloading the iOS 10 update at work.
Why at work? Limited download speeds at home, limited bandwidth at home, bored at work – whatever it is, each time a huge iOS update is announced, I get calls about slow networks. This is especially important for Guest and Public Access internet services – stadiums, ice rinks, recreation centres – or as many think of it ‘That spot I go to download!’
Protect Quality of Experience With Meraki
The last thing we need is this new Apple download getting in the way of the quality of the experience for your business apps and real users.
Meraki offers a few options for helping with this, and it is as easy as a few dashboard changes. If you are using a mixed MX/MR environment, I recommend doing this both at the wireless and at the edge, especially because desktops can pull the update as well.
Remember – layer 3 rules are always processed before layer 7 rules, so this is only a tip; you might be adding this to your existing rule set, so take care. You may need to add this to group policy if you have deployed individual group policies based on VLAN or AD group.
Capture The Right Traffic!
Meraki categorizes iTunes updates as “Music”, so to throttle this properly we actually need to use the Music category – but many Apple updates also come down using an application that identifies simply as Apple.com. To be sure we catch this, we should create two rules so that we are catching ALL of the traffic types. Technically Apple could use different methods to distribute the new update, and we do not know what they will use or how it will be categorised.
Users also have the option of downloading the update to their PC – which might technically be iTunes traffic. We cannot look into the future, so I plan to catch this traffic by creating a few rules.
An intelligent way to do this is to look at how many users you think you have and then throttle based on a calculated amount: if you have a 100Mbps internet link, 200 Apple devices, and want updates to use about 50Mbps max, you could give 256Kbps per user. Remember, traffic shaping is applied per session.
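Here is a quick helper for that math (my own sketch, not a Meraki tool): pick the slice of the pipe you are willing to give to updates and divide by the expected device count.

```python
def per_user_kbps(link_mbps: float, update_share: float, devices: int) -> float:
    """Per-device limit (Kbps) that keeps update traffic within
    update_share of the total link, assuming every device downloads
    at once (the worst case on launch day)."""
    budget_kbps = link_mbps * 1000 * update_share
    return budget_kbps / devices

# 100 Mbps link, give updates at most half the pipe, 200 Apple devices:
print(per_user_kbps(100, 0.5, 200))  # -> 250.0, so a 256 Kbps limit is close
```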
These rules should be added to both the traffic shaping for your wireless, and for the MX device if you have both.
Throttle apple.com
This traffic identifies as “Application: Apple.com” so we need to create the appropriate rule.
In this case I am going to limit each user to 256Kbps. I don’t want to totally prevent it from working – but I don’t want 200 devices eating my network – 200 x 256Kbps is 51Mbps!
Throttle “Music”
Sometimes iTunes traffic identifies as Music.
So we need to ensure we capture that traffic as well – the nature of Apple’s environment makes it difficult to figure out how they will distribute this. Once again, as above, we will limit it to 256Kbps.
That is it! You are protected against the onslaught of Apple Update madness.
The entire point of micro-segmentation is to segregate individual network applications and provide them with separation from each other and from the rest of the network.
In the olden days, we had firewalls – OK, we still have those – and many customers had outside/inside/DMZ zones. Sadly there are still organizations who run outside/inside firewalls, simply NATing outside IPs to inside IPs, and think they are protected.
As things got better, people started realizing we need to protect the inside of the network from a box that might get attacked, so we put those boxes in DMZs. (It drives me nuts how “DMZ” is misused – it is really just poor education.)
BYOD, laptops, and users that do not know any better result in nastiness being brought into your network via the “walk net”, or users manage to download some kind of malware or virus – the bottom line is that the biggest security threat on your network is probably on the inside.
There are many different security standards that are imposed on different industries, PCI for payment card, NERC/CIP for electrical utilities, NIST and a barrage of ISO standards. These standards know something many do not – like I said, the biggest security threat is on the inside.
So we need to start protecting the network from itself. Many clients started putting firewalls and IDS between users and servers, and that was difficult and expensive. A router that routes at line rate at layer 3 is significantly less costly than a firewall at the same performance.
What about protecting servers from servers?
SDN, Micro Segmentation, ACI, VXLAN, NSX, OpenFlow – all different terms, some vendor specific, but all talk about the same basic concept — Software Defining The Network. Giving us better granular control of packet flows from device to device or object to object in our environment.
Micro-Segmentation – The Simple Explanation
There is a very easy way to understand micro-segmentation: your network started as “Allow all packets” and now is “Deny all packets” – that is it, nothing more complicated.
“Wait doesn’t that mean I need rules for EVERYTHING now?” — Yes you do.
“That’s a lot of work!” — Yes it is, but once you do it once, you are good.
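To make that concrete, here is a minimal sketch of the “deny all, then allow” model rendered as iptables rules – the hosts, ports and tiers are hypothetical, and real micro-segmentation platforms generate the equivalent of this for every workload, which is exactly why the tooling matters:

```python
# Hypothetical three-tier app: users -> web -> db. Default deny,
# then one explicit rule per allowed flow.
ALLOWED_FLOWS = [
    # (source CIDR, destination port, protocol)
    ("10.0.1.0/24", 443, "tcp"),    # users to the web tier
    ("10.0.2.10/32", 5432, "tcp"),  # web tier to the database
]

def render_rules(flows):
    lines = ["iptables -P INPUT DROP"]  # the new baseline: deny all packets
    for src, port, proto in flows:
        lines.append(
            f"iptables -A INPUT -s {src} -p {proto} --dport {port} -j ACCEPT"
        )
    return "\n".join(lines)

print(render_rules(ALLOWED_FLOWS))
```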
Why am I writing this? Well there are some new ideas… Read on.
I am no expert.
First, I am no expert on this topic – so I am writing simply what I have learned so far, and really this is an emerging market. The other important point I want to make is that I am not writing about every possible option; there are tons of dev-heavy SDN and/or micro-segmentation options out there, and I am no developer. OpenStack-type concepts really scare me, and they scare a lot of professionals (many are afraid to admit it).
This is my opinion after years in telecommunications and information technology, feel free not to agree with me and sound off in the comments.
Do I need this?
I don’t know — do you? Really, ask yourself. I feel ACI/SDN/NSX/NVGRE – pick your term – is a solution for a problem not many clients actually face today. In the service provider market this is a big deal for customer segregation and network automation and orchestration, but I don’t think even large enterprises will run out and deploy these solutions any time soon. Why spend $1 million on something that costs me $50K a year to do by hand? On the other hand, if you are in a regulated environment, this might solve a lot of security problems for you – or perhaps you want a network with the highest levels of security. Either way, the more mainstream solutions are big and expensive to deploy and will not be done quickly.
The use cases for SDN type technologies in my opinion are still evolving at this time. I know one thing, the barrier to entry is cost, time and complexity. Even if you wanted to deploy micro-segmentation to only a single app – it has traditionally been very expensive to do – until now.
The Need For a Gateway
Most if not all SDN or micro-segmentation systems use some kind of encapsulation: VXLAN for VMware’s NSX and Cisco ACI, NVGRE for Hyper-V.
The problem is – once we want to leave our virtualized / SDN / micro-segmented network – we need to speak standardized protocols to client devices, routers and other devices that are not within the scope of our micro-segmented system.
Some solutions have the de-encapsulation features built right into the fabric (Cisco), and some, like Illumio – well, that is a totally different story, because they do not use encapsulation at all. For others, like NSX and NVGRE, this means some kind of gateway – and that gateway can be a single point of failure depending on your design. Some of these gateways are hardware, and some are software.
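For the curious, the encapsulation itself is simple. Here is a sketch of the 8-byte VXLAN header from RFC 7348, built in Python – the result rides inside UDP to port 4789, with the outer IP/UDP headers left to the sending stack:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (0x08 = VNI present),
    3 reserved bytes, the 24-bit VNI, then 1 more reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!B3xI", 0x08, vni << 8)

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    # The returned bytes become the UDP payload sent to port 4789.
    return vxlan_header(vni) + inner_ethernet_frame

print(vxlan_header(5000).hex())  # 0800000000138800 -> VNI 5000 = 0x001388
```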
Cisco ACI
Having watched this space for some time, Cisco has realized one thing – SDN is a bit of a mess. It is a little like me handing you a box full of mechanic’s tools and asking you to build a car with no automotive knowledge.
The solution from Cisco is ACI – Application Centric Infrastructure – which is a fancy name (in this writer’s opinion) for “managed SDN”: you program business intent, and it tells the network how to achieve it. The basis of Cisco’s new DNA architecture is this whole idea of “program intent” instead of the traditional “program behavior”. The mechanism is called contracts – basically, “I have a contract that says I can speak to you in a certain way” – no contract, no talkie.
ACI uses a segregated control plane inside a cluster of boxes called the APIC – the Application Policy Infrastructure Controller – for the command and control of ACI, but it isn’t in the data path. You can actually shut down the APIC and the network will still mostly function.
The Cisco ACI solution is, in my OPINION, the best way to do it for big data centres that are greenfielding: virtualize the network using network hardware – at the network, in silicon – to ensure performance. It also does not rely on any kind of gateway to talk to the rest of the non-ACI world; that capability is inherent in the system, eliminating this nasty single point of failure.
The downfall is that you have to have all Nexus 9000 series switches to run Cisco ACI, and you must move to a spine-leaf architecture – and it is not exactly a plug and play solution. Brownfield deployment of ACI is no small task, and can only reasonably be accomplished by installing ACI and then gouging massive security holes in it to make applications work while you slowly lock it down (kind of the inverse of the point). Not to mention the investment is huge. If you have multiple data centres – your problems just got even more complex.
VMware NSX and Hyper-V NVGRE
These systems rely heavily on software-based platforms to make them function, and while they do integrate directly with their hypervisor platforms, they rely on software (or some hardware vendors) to handle the movement of data between the virtualized network and the rest of the world.
The NSX world calls this an “NSX Edge”; in Hyper-V they call it a network virtualization gateway. Either way, if you lose this device – you are in trouble.
There are many management VMs associated with both NSX and Hyper-V, and losing some of them will cause massive network problems.
Not a big fan of this way of doing things – it brings a lot of complexity, so you had better have some kind of offsetting benefit for all this hard work and, in my opinion, risk. Let the network handle the network.
Illumio
At the recent Networking Field Day 12 event, we had a great talk from Illumio, and these guys are really thinking different. Every single operating system has really good security capabilities baked right in – so instead of re-inventing the network wheel, why not orchestrate the tools we already have?
The architecture is called the Adaptive Security Platform and is made up of two components.
VEN – Virtual Enforcement Node – software running on each host or virtual machine. It understands all communications on that host and is used to build the application dependency map. It also completes tasks on the platform itself, tracks data about who is talking to whom, and enforces the policies on that specific host/VM. Think of this as your data plane.
PCE – Policy Compute Engine – an on-premises or cloud box that takes all of the information from the individual nodes to create the relationship graphs, and then pushes policies down to the VENs. This is the control plane of Illumio.
There are three things Illumio does…
Illumination
Understanding the relationships between applications, hosts, and other applications is something no IT department knows – OK, that is a rash generalization – 99.999% of them. Every time I get into a micro-segmentation or even VLAN segregation discussion where firewalls are involved, it’s “OK, tell us all your flows so I can program the new firewall” — yeah right, forget it. Even Cisco’s ACI platform cannot really help you with that. One of the great features of Illumio is the ability to see what is talking to what. This feature alone could be used for many different purposes – including helping you map out application dependencies or graphs for the deployment of traditional SDN or ACI solutions. This will then help you build the policies for your network.
They do this by generating a communications graph.
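A toy version of that illumination idea (purely illustrative – Illumio’s real data model uses labels, roles and services, and I am inventing the hosts here) is just building an adjacency map from observed flow records:

```python
from collections import defaultdict

# Observed flow records: (source host, destination host, destination port)
flows = [
    ("web-01", "app-01", 8080),
    ("web-02", "app-01", 8080),
    ("app-01", "db-01", 5432),
]

graph = defaultdict(set)
for src, dst, port in flows:
    graph[src].add((dst, port))  # edge: src talks to dst on this port

# The resulting graph is your application dependency map -- the input
# you need before you can write a single sensible policy.
for src in sorted(graph):
    for dst, port in sorted(graph[src]):
        print(f"{src} -> {dst}:{port}")
```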
Enforcement
Enforce policy on all devices by orchestrating the enforcement mechanisms already built into each operating system. Build policy to match business intent.
“Segmentation in our vernacular is the enforcement of policy at the host, it is not a network construct” says Matthew Glenn of Illumio
Install the PCE engine, and start building the policy based on your illumination data. Once that policy is created, the PCE will extrapolate the required rule sets for each host and start pushing them out. You can diff your changes, and roll back quickly if you run into problems.
Tamper detection will alert you if someone tries to shut down the VEN or modify the iptables or filtering component. The system operates on a double trust model – there are rules for traffic both in AND out. Even if a single application is attacked or the VEN is disabled, that only allows that box to get out; other boxes still wouldn’t accept its traffic. With the shun feature, the system can even realize there is a problem with that host and push out rules to the other hosts to block the shunned box.
Road-mapped features include alerting on rules that are not being used anymore, to help you keep your policies from getting bloated with old policies.
Deploying additional hosts of an application is simple, build the new host, and push out the pre-made policy for that application. You can even test and monitor policies before deployment to make sure you do not cause any adverse problems.
SecureConnect
Encrypt data between workloads easily with a single click. You simply mark the workloads you want to encrypt traffic between, and Illumio handles all the hard work of dealing with the encryption and authentication issues. I won’t spend a ton of time on this – it is a cool feature, but for me it is just a “nice to have”. It basically eliminates the manual effort of setting up these IPsec connections.
Illumio Extends Beyond the Physical Data Centre
The other use cases for Illumio are enormous. It can simply do things that normal SDN/ACI/NSX solutions cannot do:
Secure services in branch locations
Protect devices across physical locations
Get a holistic view of apps and their security and relationship regardless of geography, and then create policies to protect them
Deploy into cloud services like Amazon AWS, Microsoft Azure or Long View OnDemand
Go Brownfield
This doesn’t involve changing any networking – in fact, this doesn’t even need the network team; an application or server team could deploy Illumio without even talking to the network department. Obviously that is not something I would recommend, but it does mean you could use it for single-application needs too. Perhaps you have a single application or environment that needs heavy security, but you do not want to move it to a new VLAN, or you need security between boxes across a WAN. Perhaps you have a new application you are deploying and you want to micro-segment it on the network – now – without having to go through the trouble of deploying micro-segmentation to the entire network. Or perhaps new regulatory requirements force you to beef up security in a short amount of time with additional box-to-box encryption or policy support.
My Final Thoughts
These guys at Illumio are thinking different. I really get the sense that everyone else in the SDN world thinks people will just throw out their infrastructures and rebuild in this new fancy SDN/ACI/NSX – whatever – world, like we all have time for that. I know many organizations that may never have the time/energy/money to do that – but they all need security. I like companies that think differently and push the edge, and if Illumio can get their message to the right customers, this may actually provide a wide range of customers with great security without tearing out everything they have. Fan boy? Yeah, I think so. What about running this down to the desktop even? They apparently have done that. What is next for Illumio? Not sure – but I would keep an eye on them.
Malware is everywhere. Symantec reported more than 430 million new unique malware variants in 2015, 36% more than the year before.
Here are some additional statistics to explain how serious the Malware issue is right now (Statistics courtesy of Symantec’s 2016 Internet Security Threat Report)
One new zero day vulnerability was found every week in 2015 – double the number from 2014
Half a billion (500 million) personal records were lost or stolen in 2015
Spear-Phishing campaigns targeting employees increased 55 percent in 2015
Ransomware increased 35 percent in 2015
20.8 billion connected devices are predicted by 2020 – and all of these are at risk for malware.
As traffic moves from branch to branch around your environment, we have a few challenges. This traffic may not traverse firewalls and IPS devices; malware protection is common at the edge but not at the branch. Branch offices also sometimes have limited security features – perhaps they only have a small ISR.
Cisco is using its recent acquisition of OpenDNS to help block 90%+ of malware. The architecture is called “Cisco Umbrella Branch”.
“What if the malware goes direct to an IP?” – At this point, that is not covered, but it is something they are working on. DNS powers most malware, so when you add in OpenDNS protection, we can short-circuit a significant amount (Cisco says 90%) of it. A good security strategy includes multiple methodologies – this is one more, and it is quick, short-circuits a lot of malware with limited programming, and comes at low cost.
With direct internet access becoming less expensive – and customers moving to VPN technologies as high speed internet becomes significantly cheaper than WAN services – end users are accessing the internet directly from the branch.
Intelligence in the Cloud
Cisco, along with OpenDNS, has created an intelligent cloud to manage all of this data. Using all of these data points, they can validate the safety of web sites in real time without having to update any kind of local database. As every query is sent, if a domain is found to be malicious by the Cisco security cloud, it will be marked as bad very quickly in OpenDNS – and you are protected.
How it works
On Cisco ISR 4000 devices, the ISR registers to the cloud, a secure tunnel is created, and then it is ready for DNS queries to be filtered by the OpenDNS cloud via the Cisco Umbrella Branch connector. The Stealthwatch Learning Network will also provide NetFlow-based security analysis.
The intelligence is all in the OpenDNS cloud, and the verdict of the DNS lookups is forwarded to the ISR. All ISR configuration for DNS is managed by the connector once it is enabled.
Keep in mind this is in addition to the rest of the OpenDNS feature set that you will also receive like URL filtering.
All DNS queries are intercepted and filtered by the Cisco Umbrella Branch connector – users and servers do not have to use the ISR as their DNS server. You can have users or servers pointing at internet DNS; the ISR will intercept the query, tunnel the request to OpenDNS, and return the response.
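You can see the DNS-layer verdict for yourself by querying the OpenDNS resolvers directly. A minimal sketch using dnspython – the domain is a placeholder, and the exact block behaviour (NXDOMAIN versus a block-page address) depends on the policy applied to your account:

```python
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]  # OpenDNS

try:
    answer = resolver.resolve("example.com", "A")
    for rr in answer:
        print("Verdict: resolves to", rr.address)
except dns.resolver.NXDOMAIN:
    # Depending on policy, blocked domains may fail to resolve or may
    # resolve to an OpenDNS block page instead of the real address.
    print("Verdict: domain does not resolve")
```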
A great future idea from @ghostinthenet – Jody Lemoine: it would be cool if the ISR created a dynamic access list based on good verdicts from OpenDNS lookups, so a positive response to a DNS lookup would be required before you would even be allowed out of the office.
The Demo
The team at Tech Field Day has a great demo video on the Cisco Umbrella Branch technology in technical detail.
I have deployed the Meraki MX series many times, along with the MR access points. One of the most popular articles I have written to date was Meraki Guest Access – The Better Way, an article about another way to deploy guest access in the network with fine-grained policies, across perhaps multiple networks.
In one of my recent deployments I had a customer who wanted to tunnel all guest traffic back to an MX – similar to how his existing legacy wireless system does it – so that he could send that traffic to a dedicated connection OUTSIDE the firewall. Basically, the idea is that we want guest traffic to never get anywhere near the corporate network. We also had multiple sites in play across a L3 WAN, so simple VLAN segregation would not work. (Yes, yes, I know there are other ways to do it, but we are keeping it simple here.)
Meraki MR has the ability to L3 or VPN tunnel traffic back to an MX – but be aware of the following warning and important design considerations.
This configuration is designed for use with an MX in passthrough/concentrator mode, tunneling to an MX in NAT mode is not supported.
This warning comes from the Meraki web site, right where it discusses the various modes in the MR. The problem is – it will not stop you from trying, and even in NAT mode the “Wireless Concentrator” options still show up in the MX config screens. It even tries to work if you configure it, and in some cases it actually functions – but it is not supported.
Important MR L3 Tunnel Caveats
1) Only Pass through / Concentrator mode is supported
As mentioned above, even though it might appear to let you configure it – and while I have had it working at clients before – it is not supported. In this mode many core MX features are disabled; for this reason, I would not buy the advanced security license for an MX dedicated as a concentrator. Those features do not really function if you are using the device primarily as a concentrator (they do work if traffic is traversing through the device interface to interface).
2) Content Filtering is not supported in passthrough mode
While layer 7 filtering is a component of the wireless access point, web page content filtering by category is an MX function – and in passthrough mode the traffic from the MRs doesn’t really pass through the MX, so the content filter is skipped. Funnily enough, URL blacklists do still work, but the categories do not.
3) No DHCP
You don’t get a DHCP server in this mode, which means you need some kind of DHCP for your guest users – your edge device or a switch could handle this. DHCP requests are tunnelled back to the MX and broadcast at the MX, so you can have a remote DHCP server for this.
4) Tunnels can only terminate on the “Internet” interface
If you are trying to do this in NAT mode (which you shouldn’t be doing), this will trip you up. Either way, understand how it works: the MR contacts the Meraki Dashboard and reports the public IP it is on, the MX does the same, and then the VPN tunnel is created between the two devices using those IPs as a baseline. So this traffic is really designed to go to the internet. You can override this behaviour in case your MX is on the inside of the network (has a private IP on the INTERNET interface): if you go into the MX wireless concentrator screen you can put an internal IP on the MX and make it take the “inside” route if you want. Your mileage may vary here. However, if you try to use NAT mode and force the APs to use the “inside” interface of the MX — forget it — that will not work. The VPN process in the MX isn’t listening on the inside interface, only on the outside – and again, NAT mode is not supported.
5) SSIDs with down tunnels do not transmit
If your MR cannot open a tunnel to the MX, the SSID will NOT transmit. Keep this in mind: if you do not see the SSID broadcasting out of your access point, that is a really great indicator you have a tunnel problem.
You might need 2 MX Devices
So some might ask, “Wait – in some designs I might need 2 x MX devices to achieve what I want to do then: one in passthrough to terminate my tunnels, and one at the edge?” — Yes, that is correct. As the MX you use for the tunnel termination cannot do content filtering on that traffic – and it also cannot provide DHCP – you will need another device to get involved in this case. Another MX would be the right solution. If you are smart about the way you deploy the VLANs on the second MX, you could create different SSIDs with different security zones, and it would be quite easy to manage it all as well.
Watch out for hair-pinning
You may run into some hair-pinning issues with this design, so be careful of your packet flows. It’s possible that you could end up going out your firewall, back in, and then back out again. Packet Capture is your friend here.
Use Packet Capture to Confirm
When troubleshooting the tunnel creation on the MR, take packet captures on the AP while pressing the “test connectivity” button in the SSID configuration – you should see the MR attempting to bring up a tunnel with the MX. Do the same on the MX interface as well to see if there are responses. Isn’t it great that we can take “remote” PCAPs on this platform?
I hope this provides everyone with some important rules when it comes to this design, and tips on architecture for your next project.
It has been a few weeks since the end of Cisco Live 2016. I was originally targeting my blog post to land right after the event – to catch all of that post-event excitement.
Instead, I wanted my post to be more of a retrospective: how I feel about the event, and where the benefits are for ME.
Each Experience is Different
If you asked 25 people how Live was for them and what their plan was, you would get 25 different answers. I have a few goals at each Cisco Live event that I attend.
1. Network with colleagues and good friends
This one is HUGE for me. In life, business and technology there are no better resources than those you have around you. I have met some amazing people at Cisco Live. True technology visionaries – people who really do think differently, and people who think abstractly.
On the surface it sounds like a kegger party or some kind of mass social event, but it is nothing like that – unless you were a fly on the wall for the conversations that we have with each other, it is simply impossible to comprehend. I swear that when this group is together, a high speed multi-gigabit connection (obviously some kind of mGig / NBase-T connection 😉) is created, and ideas, thoughts and challenges are transferred at high speed between individuals.
The biggest take away I get from this group is inspiration – a few years ago it inspired me to look within myself, and forge ahead with new ideas. Every year I get new perspectives on technology and my life.
This is the large family of “Live Friends” but this year they really did graduate in my own personal mind from Live Friends to Live Family.
2. Get the update
What is the focus for 2016/2017? What is the new technological focus – yes, from a Cisco perspective (see my Cisco DNA series), but more importantly, what is hot? I mean really hot. Is it IoT technology (slow uptake, but this is starting to actually grab hold), new wireless technologies (802.11ac Wave 2?), new management platforms?
What about SDN? Years ago at Live I remember watching demos on “OpenFlow” and thinking “That’s interesting, but no mass adoption yet”. The key is to see what is coming.
This is your chance to hit up some sessions and get up to date on — whatever it is you need updating on. Don’t leave before Q&A – that could be your chance to spark up an amazing conversation with someone really smart.
3. Find a path for this year
So this really is my secret: #CLUS helps you find your competitive edge. If you want to stay competitive in the marketplace, be the “go-to guy/gal”, and keep life interesting – you need to stay ahead. Cisco Live, unlike any other event, shows you what is coming down the tubes, and in great detail.
Perhaps this year you are planning a big data centre migration and want to design a new state of the art architecture. Maybe you want to build a business plan to revolutionize the way your company uses wireless to drive revenue.
Whatever you are planning for the next 12 months – start planning it at Cisco Live, simply because the resources available to you are outstanding.
4. Geek-Out
If you are passionate about new and cool technology, this place is pretty awesome for eye candy: virtual reality switch configuration, big transport trucks full of radio gear, model trains connected to IoT devices. Let’s be honest for just a second – take some time to yourself and go play. It will be the best release your brain has had in a while – and this type of release is inspiring; it will help you release the kid that is stuck inside all of us.
My 2016 Cisco Live Take Away
OK, my intention wasn’t to write another “here are some tips” post – the event is past – but those are the things that I focus on.
For 2016, my goals were exactly what I mentioned above. That being said, the event was 2 days too short for me to get everything I wanted – but there is no way my body could have handled 2 more days in Las Vegas.
Most sessions will be up on http://www.ciscolive365.com in coming weeks, so if you missed a session don’t fret, it will be there.
For this year, it is time to understand Cisco DNA (that is why I am writing my Cisco DNA series) as customers will come looking for it, and Cisco is pushing significant marketing dollars down the pipe on it.
Apple integration is going to be big for collab in the next year, even on the wireless front I can see this being a big deal as well. This “Apple thing” is going to be big for Cisco. Keep your eye on it. Spark + Apple + Video + Wireless = something innovative, I can just feel that.
The second place I go is the World of Solutions – and this year it was massive – I mean massive. I could have spent my entire day in that room, each day, and still not spent the time I wanted to. This goes back to value: it is almost impossible not to get good value out of going to Cisco Live – even on just a social / explorer pass.
Now we forge into the last half of 2016, with a new focus, feeling pumped and ready for what is ahead. See you in 2017.
This is a game changer, and this will be a long blog post. Cisco is flipping the script on QoS. Quality of Service – will now become Quality of Experience. This isn’t a marketing term either. Come along for a ride as I explain.
First, some references: the amazing team at Tech Field Day – www.techfieldday.com – and the Cisco team who presented at Tech Field Day Extra at Cisco Live this year provided so much insight. As I talk about this, I will provide links to videos, or to specific parts of that presentation. Some of my graphics have been pulled from that content. Tim Szigeti is an amazingly knowledgeable professional and a true leader in the field, and Ramit Kanda provides an amazing demo of this great new technology.
A history lesson…
QoS… Since the day I took the Cisco CVOICE course, I have been learning about protocols and methods of quality of service. The construct is simple – we need the important stuff to go first. Quickly this became a topic even the top network professionals – CCIEs – couldn’t handle.
Cisco Enterprise has a vision: “Transform our customers’ businesses through powerful yet simple networks” — powerful… yes… simple… not so far…
As networks became constricted in bandwidth (mostly in the WAN) we needed a way to constrain less important traffic. The start of QoS was in the VoIP world – as people like me (hard core telephony guys from the TDM days) started to work on VoIP, we wanted circuit switched performance over packet switched networks. Zero packet drops, little jitter and delay.
We started with ToS (Type of Service) – a small field in the IPv4 header that gave us some bits we could set. 3 bits should be enough for anyone — yeah right, just like “640KB should be enough for anyone”. For most enterprises 8 classes is enough – but for service providers, not so much.
Then there were vendors who treated ToS and DSCP bits differently, or put them into different queues and gave them different treatment.
QoS is second only to routing in the network when it comes to adoption – but how many customers are deploying it properly? Stay with me – we have new tools for you.
“It takes [us] 4 months and $1M to push a QoS Change… ” says a Wall Street Financial company.
“It took us 3 months to deploy a 2 line ACL change across 10K devices, which slowed down onboarding of our Jabber application” – says a Cisco Network Architect
QoS is Too Hard
“With QOS – the #1 TAC case report – is missing or incorrect classification and marking” – says Tim Szigeti – Cisco Systems
In a recent discussion with a group of CCIEs, and some others whose knowledge I also greatly respect, they all agreed: “QoS is too difficult – just get more bandwidth.” Let me provide some illustration. This is the way a 2P6Q3T router would classify these categories into queues.
As I go across my network, each device I have has a different QoS architecture.
Let me save you – don’t bother reading the below graphic – you get the point. Can you, as a professional, trap and trace a packet as it flows across the network to ensure it is getting the treatment you want? Can you design how to deploy a new application into this many different queuing mechanisms? Do you even want to?
What if I wanted to provide QoS for all 1400 applications that a network device supported?
Here is a hint you don’t want to do that.
“We have done more to advance QoS technology in the last year, than in the last 10” says Tim Szigeti from Cisco Systems.
So Cisco made it better – but this is still too much.
Cisco Validated Design – classification, marking, and best practices in 2 lines of code. This is a huge day for QoS design. It will be consistent across ROUTERS AND SWITCHES – all products, all lines – so even if you are doing this in the CLI, this is good news. Cisco is moving to a single design in hardware as well in the future: 5 queuing structures will be the future – but still only a single reference design. Why can they not create a single structure? Cost. However, now it has a reference design.
More Bandwidth Does Not Solve It!
HOLD THAT THOUGHT – no, more bandwidth does not solve QoS problems. It might sound like it does on the surface – let’s dig down a bit.
“Bandwidth and Utilization is not an accurate way of assessing if there is a QoS Problem” – says Tim Szigeti of Cisco Systems
Security – As a construct, QoS has a place: we can limit risky traffic, questionable traffic or scavenger traffic so that it cannot overwhelm our network and shut us down, and we can slow the speed of attacks.
Cost – You cannot simply add bandwidth forever – your costs would simply continue to go up and up. On that note, until now it has been cheaper in some situations to deploy more bandwidth than to configure QoS – but that does not address the security concern or…
Buffers – That’s right, buffers. Micro-bursts – even with the highest performance switching ASIC, at 1% port utilization a micro-burst can cause traffic to be dropped.
Cisco DNA – Automation
If you recall, in my recent article we talked about automation being at the heart of DNA. If we want to make things simpler, automation is the only answer.
Wait a second – isn’t this SDN? No, this is automation! Most SDN solutions – including Cisco’s own ACI – involve a forklift upgrade.
Cisco APIC-EM for QoS works with existing networks (brownfield!). You can even abort an APIC-EM EasyQoS deployment at any time. So if you deploy EasyQoS as I am about to show you – but decide afterwards that you do not like it – you can remove it. Even if you made other network changes later, it tracks every single change and will set back exactly what it changed, for QoS and QoS only.
“People that are really serious about software should build their own hardware” (Alan Kay – 1982) that is why Cisco developed the UADP (Unified Access Data Plane – Code Name Doppler) and the QFP (QuantumFlow Processor – Code Name Yoda)
This is all about controlling and automating that high performance hardware, and pushing that configuration down to the network in a consistent way.
Wait – a second ago did you not say many of the queue architectures are different? How do you address that?
EasyQOS – The APIC-EM Secret Weapon for Quality of Experience
Why is this important? The idea is simply this: EasyQoS allows you to program BUSINESS INTENT into your network. You tell the EasyQoS application in APIC-EM how you want traffic to be treated, classified and prioritized. The APIC will figure out how to apply that business intent against all of the various QoS architectures in the routing and switching platforms that you have.
QoE via EasyQOS – How It Works
It goes without saying – this is an APIC-EM app. So – go and get APIC-EM installed, and then come back.
The key architectural thing you need to understand is that 3 policy constructs are used here to abstract 12 classes. You will see that in a minute.
Step 1: Create a scope
Create a scope in APIC-EM, and then add the appropriate devices to the scope.
Step 2: Define Applications
Within EasyQoS there are 1,300+ applications pre-defined, plus you can define your own applications based on a variety of factors.
Each application is assigned a traffic class.
You really want to create “favourites” here: within the interface you can “star” and mark your applications as favourites – this is a good way to track which apps you are actually creating policies for.
Step 3: Define Policy
We need to apply these applications to a policy, within the policy we have classes of traffic – but think of this as business intent – not QoS.
There are three basic classes; you simply drag and drop each application into the appropriate one.
Business Relevant – This has 10 classes within it based on the application, but do not worry: the APIC will automatically assign business-relevant apps to an appropriate class. This is all under the covers (see the sketch after this list for how the abstraction maps out).
Default – Traffic you don’t really care about, this is your Best Effort class
Business Irrelevant – This is your scavenger class
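Here is a sketch of how those three buckets abstract the twelve classes, with the DSCP markings from the RFC 4594-based design that Cisco’s QoS guidance follows. The exact class names and markings EasyQoS programs are Cisco’s; this mapping is my illustration of the idea:

```python
# The 12-class model (RFC 4594 / Cisco QoS design guidance) and the
# DSCP value typically used for each class.
DSCP = {
    # "Business Relevant" fans out into ten classes under the covers:
    "VOICE": 46,                    # EF
    "BROADCAST_VIDEO": 40,          # CS5
    "REALTIME_INTERACTIVE": 32,     # CS4
    "MULTIMEDIA_CONFERENCING": 34,  # AF41
    "MULTIMEDIA_STREAMING": 26,     # AF31
    "NETWORK_CONTROL": 48,          # CS6
    "SIGNALING": 24,                # CS3
    "OAM": 16,                      # CS2
    "TRANSACTIONAL_DATA": 18,       # AF21
    "BULK_DATA": 10,                # AF11
    # "Default" and "Business Irrelevant" round out the twelve:
    "BEST_EFFORT": 0,               # DF
    "SCAVENGER": 8,                 # CS1
}

INTENT = {
    "business-relevant": [c for c in DSCP
                          if c not in ("BEST_EFFORT", "SCAVENGER")],
    "default": ["BEST_EFFORT"],
    "business-irrelevant": ["SCAVENGER"],
}
```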
Step 4: Apply Policy
The policy push uses various types of connections – today it uses SSH – and YES, you can validate the commands before they are sent.
Any interface changes are detected via SNMP, or through polling every 30 minutes in case you change things by hand; the policy changes themselves are sent out immediately.
If during the provisioning you realise something is wrong, or something fails – the APIC tracks every transaction on every device. You can abort a provisioning half way through – and it will back out each individual change.
Operational Features
Now we have this running – and we have some other cool tools that make our lives easier.
History
The first is a history engine: any changes will be tracked so you can see how the policy has changed over time. If you make changes and then realise you had an adverse effect, a simple fix is to hit “Rollback” — keep in mind, this could be across 500 devices on the network. The old way, you spend a month making QoS changes – only to realise those changes are detrimental – and then you spend a month removing them. In APIC-EM you can make, and roll back, these types of changes in literally minutes. Huge cost and time savings here.
Dynamic QoS
This one is pretty crazy sounding, but for VoIP and video we cannot always track calls by application – they are encrypted or dynamic.
So the way this works is: Jabber or Lync sends a call setup, the APIC is informed of the call, and the APIC sends a NEW QoS policy — for just that call — to all the network devices in the path.
If you are reading this and thinking “So you are telling me my QoS Config is going to be modified every time someone makes a call” — Yes that is exactly what I am saying. I am not sure I am on board with this idea – that is a lot of dynamic network changes. Cisco says “it works!”
Show Me The Money – Path Flow Analysis
This is the most compelling part of APIC-EM EasyQOS. Bar None – Hands Down – Mic Drop.
You can perform Path Flow Analysis on every device – instantly – including:
Interface stats
QoS stats
ACL rules blocking traffic
Step 1: Input the path trace data
Step 2: Flow Visibility
Prepare to be blown away. Here is the application flow – it even looks inside CAPWAP tunnels. If you had to do this by hand, you would have to do it per flow, on every single device. Setting that up alone would take you hours; then you would have to analyze the data, then remove that config.
The APIC-EM does all of this for you – in seconds.
Device health, performance stats, packet loss, DSCP values, jitter, even routing protocol information, router CPU level and memory use. If you are troubleshooting a network – this is literally gold. “All hail the packet – for it runs on the network” – did Denise Fishburne herself call someone up and help them build this? They should call this the APIC-EM Network Detective!
Here is a great example of an ACL block – imagine if you had 200-300 ACLs on this device; finding the one that is causing problems would take you forever.
Even asymmetric flows – every device, every hop. Even if you didn’t use EasyQoS, this alone is worth the time to deploy APIC-EM.
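Because the APIC-EM REST APIs are published, you can even drive path trace from a script. A hedged sketch based on the flow-analysis endpoints as I understand them from the DevNet learning labs – verify the exact paths, payloads and response fields against your controller version, and note the address and credentials are placeholders:

```python
import time
import requests

APIC = "https://apic-em.example.com"  # placeholder controller address

# 1) Authenticate: exchange credentials for a service ticket.
ticket = requests.post(
    f"{APIC}/api/v1/ticket",
    json={"username": "admin", "password": "secret"},
    verify=False,
).json()["response"]["serviceTicket"]
headers = {"X-Auth-Token": ticket}

# 2) Kick off a path trace between two endpoints.
flow_id = requests.post(
    f"{APIC}/api/v1/flow-analysis",
    json={"sourceIP": "10.1.1.10", "destIP": "10.2.2.20"},
    headers=headers, verify=False,
).json()["response"]["flowAnalysisId"]

# 3) Poll until the trace completes, then print each hop in the path.
while True:
    result = requests.get(
        f"{APIC}/api/v1/flow-analysis/{flow_id}",
        headers=headers, verify=False,
    ).json()["response"]
    if result["request"]["status"] != "INPROGRESS":
        break
    time.sleep(2)

for hop in result.get("networkElementsInfo", []):
    print(hop.get("name"), hop.get("ip"))
```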
Watch the last few minutes of our video from Tech Field Day and be BLOWN AWAY. A room of CCIEs clapping tells you how amazing this is.
Prove it – with Validation of Experience (VoE)
The functional architecture of Validation of Experience is an analytics engine. I would like to put a caveat on this discussion – this is still a bit of a proof of concept. There is limited actual capability that you can deploy at this moment – but this is the functional way it will work.
Functional Layer 1 – Instrumentation
Collect all the right things – no silent drops in hardware – and collect all the relevant metrics, right down to the application layer if we can. As an example – Jabber. This means not just network information, but application-level metrics like video or audio frame drops. If we want to monitor experience, we need to go all the way to layer 7.
Functional Layer 2 – On-Device Analytics
We may not need to collect and return everything, but some of these metrics are critical. So we need to analyse them on the device, decide what is critical, and then return that.
Functional Layer 3 – Telemetry
Get the critical information off the device – we don’t want that data sitting there; we need to collect it into the analytics platform (Cisco is still working on the analytics platform). SNMP/MIBs are simply not enough.
Functional Layer 4 – Real-Time Monitoring
We need to get alerts in real time, not in an hour. If we make a change and cause a negative effect on the network, we need to know now. Real-time monitoring of application experience and performance.
Functional Layer 5 – Scalable Storage and Efficient Retrieval
Store these analytics somewhere, with an interface to access the data. Scalable storage – even in the cloud. All the information from all of the devices in the same location. This is key: without a complete picture from all devices and applications in the network, we cannot validate or analyze the true experience of the user.
Functional Layer 6 – Analytics
Correlation of data now results in information about network quality. We can identify where problems are in the network or applications.
Functional Layer 7 – Troubleshooting
Now we can identify the root cause of problems with the network. Remember the quote from earlier – the #1 QoS TAC ticket is incorrect classification and marking.
The holy grail – find the root cause – and fix it.
Summary – Justin’s Opinion
So, after all of that – what do I think about this? Game changer. The troubleshooting tools save hours and hours of time. One of my colleagues mentioned “Mean Time to Innocence” (MTTI) – how long it takes to prove it wasn’t the network at fault. With path flow analysis like this, we can prove the network out in seconds.
The ability for us to take BUSINESS INTENT and map it to technology in an intelligent, automated way is how this will program the network to “intrinsically know what the business needs, and then just do it” — that is delivering on the promise of the marchitecture.
QoS has been way too difficult for way too long; we NEED this type of tool. The cool part is that the REST APIs are all published, so other vendors are already starting to take advantage of EasyQoS in their own applications. I cannot wait to see what comes out of Cisco DevNet. Just imagine the packet analysis and tracing tools that could use this troubleshooting engine in interesting ways.
We are not fully there, or fully baked, yet. VoE is still a bit conceptual. The holy grail for me would be the following:
Program Business Intent via EasyQOS – Quality of Experience
Monitor my network for experience, provide validation of experience alerts.
When problems occur either automatically fix them – or recommend changes.
We are not far from this – the team at Cisco says “it’s in the pipeline”
My recommendation – if you are not up to speed with APIC-EM, you had better start, because networks have finally burst the bounds of our brains when it comes to understanding everything that is going on, and you need this automation in order to tackle these complex network and application needs.
References
Tech Field Day Extra – 2016 – Cisco APIC-EM Controller Discussion
Tech Field Day Extra – 2016 – Cisco Validation of Experience with Tim Szigeti
Tech Field Day Extra – 2016 – APIC-EM EasyQoS Demo