Kelly Herrell, Brocade
Kelly takes us back to 2013 and the "software defined anything" hype. Was it really there? Was anything real that year? In 2013 the real work on the Software Defined Networking concept started!
In 2013 SDN started
Last year it was all about the world's largest customers, the largest buyers, demanding that vendors come up with a software defined product that works for them. The message was clearly that the industry wasn't yet giving customers what they actually needed. Remember that the customers define what vendors should aim for, not the other way around!
The first thing that happened was a protocol, and the talk was all about "OpenFlow". Once it was clear the protocol was in place, hardware (the product) was needed to go with it. And now we're moving toward the "platform era" in SDN.
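As a rough illustration of the OpenFlow idea (a sketch only: the field names and action strings below are made up, not the actual OpenFlow wire format), the switch becomes a flow table of match/action rules that a controller populates, and forwarding turns into a table lookup:

```python
# Toy model of the OpenFlow split: the controller installs
# match/action rules; the switch just looks them up per packet.

def make_flow_table():
    return []

def add_flow(table, priority, match, actions):
    """Install a rule; the highest priority wins on overlapping matches."""
    table.append({"priority": priority, "match": match, "actions": actions})
    table.sort(key=lambda r: -r["priority"])

def lookup(table, packet):
    """Return the actions of the highest-priority matching rule."""
    for rule in table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["actions"]
    return ["drop"]  # table-miss behaviour in this sketch

table = make_flow_table()
add_flow(table, 100, {"in_port": 1}, ["output:2"])
add_flow(table, 10, {}, ["send_to_controller"])  # catch-all rule

print(lookup(table, {"in_port": 1}))  # ['output:2']
print(lookup(table, {"in_port": 3}))  # ['send_to_controller']
```

The point of the protocol era was exactly this separation: the forwarding decision lives in a table the switch executes, while the logic that fills the table lives elsewhere.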
This is when you start to look at the full set of abstractions: from the infrastructure that carries out the actions, to the controller that eases the management pain of it, to the orchestrator above that, which actually provisions this infrastructure.
When looking at the 7-layer OSI model it's important to realise that switching is stateless and everything above it is stateful, since you need to be able to control what is happening all the way up to layer 7, the application. The platform needs to have a rich set of services.
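The stateless/stateful distinction can be sketched in a few lines (a toy example; the packet fields and the firewall are invented for illustration, not a real product): a switch decides per-frame from a table, while a layer-4-and-up service has to remember what happened between packets.

```python
def switch_forward(mac_table, frame):
    """Stateless: the decision depends only on this frame and the table."""
    return mac_table.get(frame["dst_mac"], "flood")

class StatefulFirewall:
    """Stateful: inbound replies are allowed only for flows we've seen."""
    def __init__(self):
        self.connections = set()

    def handle(self, pkt):
        if pkt["dir"] == "out":
            self.connections.add((pkt["src"], pkt["dst"]))
            return "allow"
        # Inbound is allowed only if it mirrors a known outbound flow.
        return "allow" if (pkt["dst"], pkt["src"]) in self.connections else "deny"

print(switch_forward({"aa:bb": 1}, {"dst_mac": "aa:bb"}))  # 1

fw = StatefulFirewall()
print(fw.handle({"src": "10.0.0.5", "dst": "8.8.8.8", "dir": "out"}))  # allow
print(fw.handle({"src": "8.8.8.8", "dst": "10.0.0.5", "dir": "in"}))   # allow (reply)
print(fw.handle({"src": "1.2.3.4", "dst": "10.0.0.5", "dir": "in"}))   # deny (unsolicited)
```

This is why the platform above the switching layer needs that "rich set of services": everything above layer 2 carries state around.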
The controller level
You need heterogeneous support at the controller level. This level needs to be open, which means it's about choice: being able to support a multi-vendor environment, but also to accommodate third-party controllers. Before the end of the year we'll see new architectures that put a controller hierarchy in place, because of the way Service Providers are already building their networks.
You have to be clear about how and what you're trying to orchestrate, because orchestration always has to be about choice and about multi-vendor support.
On top of all this lies the ecosystem. Different vendors can have different stacks, but it has to be open and adaptable. Suppose you want one extra feature from another vendor: you should be able to plug that feature right into the existing stack and make use of it. It has to be an open system, so it can be adapted optimally to what each and every customer eventually needs and wants.
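A minimal sketch of what such pluggability could look like (the `Stack` class and the feature names are hypothetical, not any vendor's actual API): features register against a common interface, so a third-party one drops into the same stack as a native one.

```python
class Stack:
    """Toy 'open stack': features plug in regardless of which vendor wrote them."""
    def __init__(self):
        self.features = {}

    def plug_in(self, name, handler):
        """Register a feature under a name; callers don't care where it came from."""
        self.features[name] = handler

    def run(self, name, *args):
        return self.features[name](*args)

stack = Stack()
stack.plug_in("route", lambda dst: f"native route to {dst}")
# A hypothetical third-party feature plugs into the very same stack:
stack.plug_in("firewall", lambda pkt: "deny" if pkt == "bad" else "allow")

print(stack.run("route", "10.0.0.0/24"))
print(stack.run("firewall", "bad"))
```

The design choice that matters here is the shared interface: as long as both vendors speak it, the customer, not the vendor, decides what goes into the stack.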
Software and servers interconnect
In the early days each server had one task and controlling it was easy, since it was physical. Nowadays, with VMs, lots of different traffic enters the physical layer and software needs to work out which VM should process which data. Everything virtual is probably THE best example of where "software defined" comes from: you've now virtualized everything to run within a server, so the software that manages it all needs to be in place! (think vSwitch, for example)
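As a toy illustration of the vSwitch role (the MAC addresses and port names below are made up), the software's job is essentially to map each incoming frame's destination to the right VM's virtual port:

```python
def make_vswitch():
    return {}  # destination MAC -> virtual port of the owning VM

def attach_vm(vswitch, mac, vm_port):
    """Wire a VM's virtual NIC into the software switch."""
    vswitch[mac] = vm_port

def deliver(vswitch, frame):
    """Return the VM port a frame goes to, or flood when unknown."""
    return vswitch.get(frame["dst_mac"], "flood")

vsw = make_vswitch()
attach_vm(vsw, "52:54:00:00:00:01", "vm1-eth0")
attach_vm(vsw, "52:54:00:00:00:02", "vm2-eth0")

print(deliver(vsw, {"dst_mac": "52:54:00:00:00:02"}))  # vm2-eth0
print(deliver(vsw, {"dst_mac": "52:54:00:00:00:99"}))  # flood
```

One physical NIC, many VMs: the demultiplexing that a cable and a port used to do is now done entirely in software, which is the "software defined" point of the paragraph above.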
Open Platform Revolution
The beginning is there and you can already see the various layers interfacing with each other, but it's now up to the industry to provide that open platform inside the server.
What if a new type of buyer enters the market? Or what if customer behavior changes? The cloud and service providers need to adapt in order to accommodate these new, powerful customers. They're the ones that are going to do the bulk of the buying!
The law of the three Twos in the Telco world
20 million dollars, 2 years, and T(w)oo late. These very structured, large companies just took way too much time to get anything going. With a software layer on top of it all, a simple push of a button could get things done that used to take days, weeks, even months to accomplish. The TTT model is also known as the Elephant Mating Model: it involves a whole lot of yelling and screaming and takes two years to produce results.
OS, controller, orchestration: it's all software that's needed to enter the cloud. Some really interesting new business models will emerge out of this. The next two years will certainly decide where this whole "software defined" trend is leading us.