Decoding the Symbols and Anecdotes of Open Innovation and Transparent Collaboration ~ A Platform Perspective

The Red Hat Summit is one of those technologist community events that over the years has earned a reputation for open innovation and transparent knowledge sharing. It is therefore natural to see techies from around the globe coming together with a common vision of sharing, learning and growing together. One can consider it a one-stop shop for every industry that leverages the power of software to grow: Lockheed Martin's F-35 design, ExxonMobil's deep-sea exploration, DT's IT digital platforms, Delta Air Lines' journey to open platforms and innovation, the digitization of banking and financial systems, and telcos striving to evolve through the power of openness and innovation. Each story carries a lesson of its own, and the industry shares a joint responsibility to obviate the issue of vendor lock-in; the customer success stories from different industries support this collective goal for the benefit of all.

However, with such an influx of information, and hearing perspectives from different markets and different mindsets, it is easy to lose track of what is a marketing theme and what is actually possible. It is therefore important to decode the symbols and anecdotes as I grasped them at this year's Red Hat Summit, held in Boston, U.S. from May 7 to May 9, 2019, as follows.


Promoting the Spirit of Open Source through Collaboration and Belief

Open source, in spirit, stands on the principle that the sum of collective effort will outweigh the sum of individual efforts. In fact, open source is nothing more than collaboration and a belief in sharing that should benefit all. This collaboration opens new channels and opportunities in every industry we operate in by reaching toward new possibilities.

What I liked most was a symbolic depiction of the U.S. Apollo program, which became possible only after we set our sights high, on something that was obviously beyond the technological possibilities of the day.



Making money through transparent innovation

During the last few years we have seen many companies joining the open source bandwagon, yet the industry has still not been able to benefit fully from all of the promised achievements. I think this is because we still need to collectively enforce the idea that if you take, you have to give, and to embrace the fact that it is all about dependency, not independence, which makes the ecosystem the ultimate winner of this transformation.

Open source is not only about products but also partnerships

Bringing People and technology together

All the benefits highlighted above can be achieved only if we have platforms and tools that enable industries and teams to come together and explore opportunities together. Hence the platform must have the following characteristics.

  • Accepted unanimously across all industries
  • Supports collaboration between different teams using open and shared principles
  • Open enough to steer new innovation
  • Free enough to experiment and grow with; after all, this is essential for the dollarization of technology

Platforms and tools are essential to materialize digital transformation

Team empowerment

Despite all the lucrative promises, why have many industries (specifically telcos) not really been able to reap the full benefits? I think it is because, to realize truly ambitious targets, we must empower teams with tools and trust them. This is the only way to move from an industry dominated by a few vendors to an open one dominated by the best ideas and the best solutions.

Culture and trust are useless without capability, and vice versa


Smooth Operations

Being innovative and reaching for the moon is one thing; running smooth operations is something else. In particular, troubleshooting open source is more difficult than one can imagine. This is an area that requires thoughtful analysis and support.

Alignment of operations will decide how quickly we can get rid of old infrastructure

Transfer technology equity to less innovative industries

This question has been put to me by many teams, in summits, workshops and interviews, everywhere. We all know that, specifically in the telco industry, only the Tier 1 CSPs have the pockets and muscle to rise to the real challenge. As an industry we owe a collective responsibility to transfer the equity of what we have learnt, in the form of clear reference architectures and tools that are programmable, customizable and less costly.

To put this in perspective, edge and distributed cloud is a big opportunity that requires public web-scale cloud companies like AWS, Google, Azure and others to pass innovation on to less innovative industries, which need tooling for clear use cases such as AI, ML, deep learning, AR and VR, to mention a few.

As an industry we owe a collective responsibility to build something that can be consumed by less innovative industries and clients


Client success stories: IBM, Delta, ExxonMobil, Lockheed Martin

Red Hat Summit 2019

Ideas Exchange - The Cube


Published by

Saad Sheikh  ☁

A “Dev-Net-Ops” Framework for Digital Telcos (DSPs) ~ Telco Application Playbooks




From vendor-defined tools to telco application CI/CD playbooks

Business adoption of DevOps is accelerating at a stupendous pace. In recent work with Gartner we concluded that by 2022 an expected 75% of such projects will fail to deliver business value as the DevOps idea is put into practice. This means that Comm-SPs really need to put a realistic framework in place to ensure we adopt DevOps in steps, using realistic use cases that deliver real value to the business. This paper is about how the author thinks the industry should address this problem. It describes key practices and how they can benefit the telco world, which has conventionally relied on separate development and operations to meet its stringent service-quality requirements in a heavily regulated market.

I think that once telco applications are based on cloud-native and microservices architectures it will be easy to reuse many repositories and tools from IT, but what about the situation today? Where shall we start, with technology at the heart of transformation; what is step zero on the ground? I think it still lies in vision and approach: do not chase fancy marketing in a frenzy, but define a clear use case. For example, to introduce DevOps in a telco, can we select a simple open source application and manage it in the cloud with a toolchain, through a digital marketplace, from inception to delivery across its life cycle?


For Comm-SPs, what is the right definition of DevOps for the network, or Dev-Net-Ops? The answers can be many, as most telcos want to introduce a pure IT concept into the telco world. However, the fact is that the development pipeline of telcos today is totally controlled by vendors and is not visible on community hubs or commercial-ready repositories. So, to introduce Dev-Net-Ops in the network, telcos should prepare a customized DevOps framework that can help them automate the networks, enable agile service delivery and, most importantly, be open enough to introduce nimble players into the networks.

Then there is the commercial aspect. When I started to build my career in IT a decade ago, we were always trying to build solutions where products and licenses were procured from vendors while the service was managed in house. However, telcos have operated in a more vendor-dominated mode for decades, and it is for this very reason that service cost is eating a huge part of their CAPEX. A systematic approach to DevOps, scoped in the right direction, will help solve these issues.

Current NFV/SDN pain points related to DevOps use cases

We started NFV/SDN with high expectations of multi-vendor solutions; half a decade down this difficult road, we still find it hard to operationalize the network. The list of Comm-SP pain points for transformation can be huge, and adding operational complexity without DevOps-driven automation and analytics makes the issue even more complex. For the purposes of this paper, however, the framework can be summarized along the following points.


(1) End-to-end system integration is still not a purely decoupled offering from vendors. In a multi-vendor project an operator assumes the service will be led by the integrator, but ultimately finds itself mid-way, having to procure services from all players and deal with them all the way. We believe a focus on test environments and validation, as defined by Dev-Net-Ops, can help solve both the technical and the commercial issues.

(2) VNF onboarding is still a siloed exercise. For a first-time VNF certification and onboarding we can digest more cost and time, but subsequent VNFs of the same type and scale must be a clone, accessible at zero service cost. After all, what we want to achieve is the network offered as code: a programmable network. Dev-Net-Ops can enable this.

(3) TaaS, or testing as a service, needs to be based on open source tools and automated in CI/CD; in other words, a telco should be able to customize the toolchain and use a variety of tools from different vendors. A correct Dev-Net-Ops will help us solve this.

(4) On top of everything, DevOps, with its reliance on breaking the code, automation and frequent builds, seems to contradict the telco world's strict requirements for highly reliable, performance-accelerated services.

From Automation to a DevOps Pipeline

We all agree that telco networks are still not automated, unlike their IT and cloud counterparts; in fact they require a lot of manual tweaking, whether for integration, deployment, testing or, worst of all, troubleshooting. So one might ask whether automation can fix these issues.

“Twitter user @ tweeted this perspective on code reviews: “10 lines of code = 10 issues. 500 lines of code = looks fine.” There simply are not enough people with enough time for manual processes to adequately oversee application delivery”


*Courtesy of Ramyram GitHub

The answer to this type of question is not definitive, because the telco industry's development pipeline is still controlled and tested by vendors. So for us as a telco it makes a lot of sense to address those automation issues that help us use a set of tools enabling agile integration and fast testing, and that nevertheless help solve issues in an automated manner. So what should be the characteristics of such a toolchain?

  • Simple to use
  • Simple to integrate with both cloud and legacy networks
  • Supports programmable interfaces that are secure
  • Supports native coherence with IT automation tools such as Jira, JFrog, Jenkins and GitHub
  • Puts orchestration first as the way to achieve automation, rather than focusing too much on the layers below
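To make the toolchain idea concrete, here is a minimal, purely hypothetical sketch of a Dev-Net-Ops pipeline runner that chains onboarding, validation and deployment stages and fails fast. The stage names, context fields and artifact path are illustrative assumptions, not any vendor's toolchain or real API.

```python
# Hypothetical sketch of a minimal Dev-Net-Ops pipeline runner.
# Stage names, context fields and the artifact path are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Stage:
    name: str
    run: Callable[[Dict], bool]  # returns True on success

def onboard_vnf(ctx: Dict) -> bool:
    # e.g. push the VNF package to an artifact repository (JFrog-style)
    ctx["artifact"] = f"repo/{ctx['vnf']}-{ctx['version']}.tar.gz"
    return True

def validate_vnf(ctx: Dict) -> bool:
    # e.g. run automated certification tests in a TaaS environment
    ctx["tests_passed"] = 42  # placeholder result
    return ctx["tests_passed"] > 0

def deploy_vnf(ctx: Dict) -> bool:
    # e.g. hand over to the orchestrator (NFVO) rather than raw scripts
    ctx["deployed"] = True
    return True

def run_pipeline(stages: List[Stage], ctx: Dict) -> bool:
    for stage in stages:
        ok = stage.run(ctx)
        print(f"{stage.name}: {'OK' if ok else 'FAILED'}")
        if not ok:
            return False  # fail fast; fix and rerun in the next iteration
    return True

pipeline = [
    Stage("onboard", onboard_vnf),
    Stage("validate", validate_vnf),
    Stage("deploy", deploy_vnf),
]
ctx = {"vnf": "vIMS", "version": "1.0.0"}
assert run_pipeline(pipeline, ctx)
```

The point of the sketch is the shape, not the stubs: each stage is a replaceable unit, which is what lets a telco swap in open tools per stage instead of one closed vendor chain.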

Why do I think automation alone will not solve the issues of the telco cloud journey? For two main reasons: first, most applications are still fat; second, too much reliance on vendors, especially on the Dev side, simply means complete automation is not possible, and if we achieve a fake automation defined by a vendor, it will simply not deliver the desired results.

I think that, at least for now, CI/CD/CT that enables smart FCAPS is the key thing to introduce into telco networks. This should be the first step, and as we will see it relies strongly on developing and using many of these tools in operator test environments, enabling them to smoothly roll out next-era services. A culture of shared responsibility, as defined in the Scaled Agile Framework (SAFe), supports the aim of DevOps to tear down silos and build shared toolsets, vocabularies and communication structures focused on one goal: delivering value rapidly and safely.

Bringing Synergy with IT for Telco Dev-Net-Ops

“DevOps automation can be a complex and arduous task as toolchains emerge from the vast number of tools used to deliver applications in faster more agile ways. I&O leaders require a top down view to designing and implementing successful toolchains. Tool chains are often built from stand-alone disconnected tools that are available to I&O leaders in their organization” Gartner


*Courtesy of colleague Mateo Červar

With mature DevOps in IT, and NFV/SDN bringing a lot of open source tools, the value chain requires organic growth, meaning it can reuse existing tools and leverage them alongside new ones. Secondly, it should support mapping many tools, say from open source, and packaging them behind one simple-to-use interface, like Nokia Matrix. In other words, the DevOps toolchain must itself be orchestrated.

Why I do not speak about culture

It is a toilsome exercise to work out how a big telco should evolve to DevOps, and the sad part is that too much reliance on culture and processes, without solving the technology architecture, is like pouring water into a tank that is broken at the bottom: the more water you put in, the more leaks out. So the question of the technology framework and toolchain has to be solved first.


In fact, as we try to apply traditional DevOps concepts in Comm-SPs, the main goal should be building a DevOps environment that solves operational issues without requiring a major change to the operators' existing operational environments. Then there is the rigidity of resources, which is far greater in telcos than in IT. For example, hardware in IT is cheap, making it possible to roll out a PoD at short notice and to apply a pure IaaS model.

Can a Comm-SP adopt the same? The current infrastructure selection, triggered by the application itself, and the high cost of NFVI make it more difficult to build an open DevOps solution. Obviously these areas do not strictly depend on DevOps, but they do decide how Dev-Net-Ops will finally roll out in the networks. I think these points also define the value chain of orchestration versus pure automation: in most cases pure Ansible cannot automate an NFV environment as well as an open orchestration solution can.

Cost of Dev-Net-Ops

In the legacy world, and still in the NFV transition, the question of the cost impact of DevOps arises. The first point is that once the toolchain is ready and working, we still need to optimize it continuously. So a Comm-SP should carefully select the use case, to make DevOps scale to business requirements and to ensure we do not invest in something we cannot quantify as delivering business results.


DevOps Monitoring

This is a BOQ cost item which obviously lies outside the DevOps umbrella, but it is so vital for Comm-SPs that DevOps simply cannot deliver value if it is not addressed. I personally feel that developing a DevOps toolchain, especially the CI/CD toolchain, is exactly what telco teams require. In our latest user story with delivery teams, we learned that for new feature or new product development it is vital to have all items visible across the pipeline, with clear ownership and progress; after all, this is what we have all seen on GitHub for open software. Today the industry stands at two extremes: Redmine and Jenkins, which give control and process with no integrated toolchain, and vendor toolchains like Huawei NICS or Nokia Matrix, which are simply not open and, in the worst case, not easy to optimize across iterations. This is an important area in which STC has been participating and influencing partners and the industry, and I hope building a framework will speed Dev-Net-Ops adoption across the industry.


Role of vendors in Telco Dev-Net-Ops

Comm-SPs today comprise networks with many vendors, and this situation will become more prevalent as procurement follows the principles of the open source supply chain. Hence the need to build a framework, followed by a reference architecture agreed among all stakeholders, is key to success. This commitment to continuous planning, integration, testing and deployment will deliver rapid innovation through collaboration. Continuous delivery in a multi-vendor environment requires the automated integration and testing of all service components to ensure high reliability of service. Complexity grows exponentially as the number of service components and their individual versions increases. Synchronizing delivery from multiple vendors is essential. Portions of the operator's infrastructure must also be open to allow testing and verification of new solutions. New channels for the automated collection of customized operational feedback on demand will also allow development teams to improve their components based on production data. This chapter of the paper is based on the author's discussions with the Nokia sales and marketing team.

Automated Testing

The most important factor for a successful DevOps transformation is selecting the right use case, and nothing delivers better results than automating the testing. Its advantages are many: for example, it allows a Comm-SP not to worry about release management or about whatever changes in the infrastructure. CI and API-based toolchains will definitely help solve these issues. DT is one operator that has really focused on this case, and it certainly helped its early adoption of DevOps.


*Ericsson public document titled NFV DevOps Life cycle automation PoC
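As a rough illustration of what automating the testing can look like in CI, here is a hedged sketch of a regression check against a northbound API. The endpoint paths, response fields and the stub standing in for the real API are all hypothetical; a real run would call an NFVO's REST interface instead of `fake_nbi_call`.

```python
# Illustrative sketch of CI-driven API regression checks for a release.
# Endpoint names, fields and the stubbed API are hypothetical.

def fake_nbi_call(endpoint: str) -> dict:
    # Stub standing in for a real northbound API (e.g. an NFVO REST call)
    responses = {
        "/vnf_instances": {"status": 200, "items": ["vIMS-1", "vEPC-1"]},
        "/ns_descriptors": {"status": 200, "items": ["core-ns"]},
    }
    return responses.get(endpoint, {"status": 404, "items": []})

def check_endpoint(endpoint: str) -> bool:
    """One automated check: HTTP status OK and payload non-empty."""
    resp = fake_nbi_call(endpoint)
    return resp["status"] == 200 and len(resp["items"]) > 0

def run_regression(endpoints: list) -> dict:
    """Run every check and report pass/fail per endpoint, CI-style."""
    return {ep: check_endpoint(ep) for ep in endpoints}

results = run_regression(["/vnf_instances", "/ns_descriptors", "/missing"])
assert results["/vnf_instances"] is True
assert results["/missing"] is False
```

Because the checks are data (a list of endpoints), adding coverage for a new release is a one-line change, which is exactly the property that frees the operator from worrying about release management.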

Dev-Net-Ops to offer Security as a Service

Web-scale architectures and early adopters already see the advantage of offering the pipeline as a service. Disney offers DevOps as a service for developers, but for telcos, NFV, so constrained by security, will benefit more from a DevOps use case of security as a service.


Why do I think so? Because in NFV/SDN, with so many integration points and thousands of API calls in a complex multi-vendor solution, assurance and monitoring become a costly initiative for a company. In a recent study we found that the cost of deploying and managing security solutions for telco DCs is almost the same as deploying the solution itself, and I sincerely believe that DevOps on this front, with red and blue teams, will solve this issue.

DevOps for Comm-SPs: Open Source Solutions and Analytics

There are two more areas I want to address in this thought paper. The first is a question to the industry: can we really rely on closed or fat vendors to offer us OSS solutions? I certainly would not support that, because they are simply not good at it and do not know how to control, manage and support it. I think all of these vendors want to package certain OSS versions from the community under a fixed tool of theirs, so a new release requires a lot of changes; simply stated, this means a big commercial and financial hit for a company expecting a new release every two weeks. Can tools built on frameworks like JFrog or SaltStack solve this issue?


The second main question is how to deliver automation cases based on data analysis, such as cross-layer correlation, data mining and ML. Simply stated, both of the above cases must be solved as DevOps use cases; if we adopt vendor solutions, we see a great risk of lock-in. A detailed study of these models will certainly help Dev-Net-Ops achieve real early wins for a Comm-SP trying to evolve to a target 2020+ digital architecture.

I sincerely hope that through this thought paper we have shared our vision of what DevOps for the telco DSP will be. All comments are welcome, to help us understand the telco issues together and to refine and redefine Dev-Net-Ops 2.0. I hope such a thought paper will be followed by collaborative engagement, shared responsibility and standards alignment, if such a transformation is ever to create a real value chain for the industry as a whole.


  • Juniper: Automation with JUNOS
  • Gartner: How to Navigate Your DevOps Journey
  • GitHub
  • For a deeper dive into application release automation features and functionality, see the Forrester report "Vendor Landscape: Application Release Automation Tools"
  • Nokia: DevOps for the Telco World
  • Ericsson: DevOps in Verizon, a public document

Open Networking: So Who Am I and What Exactly Do I Do?



I think our professional life should be a journey that helps us rise from a fountain to become a lake, and then evolve into a stream that finally finds its way to the big sea.

This bigness and fullness can only be achieved if one believes in comradeship and sharing, and this is why I have started and maintain my blog and sharing page here.

So who am I? Obviously my aim here is not commercial, because I have a nice job for a living. However, I do expect to contribute to the community and to learn in a two-way manner. This is what I have strived for throughout my career.

In a few words, I can summarize my career as follows.

I am a senior architect with a passion for architecting and delivering solutions focused on business adoption of cloud and automation, covering both the telco and IT markets. With 15+ years of diverse experience in ICT, including 10 years in telco and 5 years across ICT (telco and IT), I have proven my capabilities as a solution architect, system integrator and transformation advisor. I focus on defining NFV/SDN/cloud solutions based on web-scale architectures, HA, reliability and horizontal scale, with a passion for mapping business requirements to robust end-to-end technology architectures that meet clients' requirements in the most cost-effective manner, and an ability to solve complex problems through architectural homogeneity and converged solutions. My work in carrier digital transformation has focused on platforms, including both OpenStack and container clouds, carrier-grade NFV, MANO, DevOps CI/CD, 5G and edge platforms, TaaS platforms, and the journey towards unified clouds, including transition strategies for successful migration to the cloud.

  • Telco Domain:  NFV/5G/Core /EPC /VAS /Wireless (GSM /UMTS /LTE )/IMS /VoLTE /VoWiFi/MPLS & IP
  • IT/Cloud and Orchestration Domain: Cloud, DC virtualization, IT Applications deployment, NFVI including Server, Storage, SAN, SDS , CI/CD DevOps , NFVO and Orchestration ,MEC and Edge Clouds including 3rd party App development
  • Networking: Infrastructure Networking and SDN

For freelancing or knowledge sharing you can reach me at

Current Work Profile

Currently my work affiliation is with STC, as Chief Architect consultant for the NFV, SDN, Cloud and 5G program, especially focused on building solutions that serve the company's evolution towards the Kingdom's 2030 vision, centered around cloud and network automation.



Current Consulting Profile

As Chief Architect and President Advisor, I lead the company's consulting and services business in 5G, cloud and enterprise solution offerings, working together with 25+ partners and contributing my part to building the future of the telco, IT and enterprise business in the Middle East.




NFV/SDN: Platform or Application Driven? Set the Right Focus to Reach the Final Goal


As we enter Industry 4.0, and many world leaders at forums such as the WEF talk about their countries' commitments and support for transformation, it is evident we are entering an era of disruption at scale. Yet the true benefits of CSP transformation are still not quantified, and it is not certain that what is told in theory has the same significance in practice.

So where is the real problem? We still remember that NFV was initially set up to reduce the CAPEX of CSPs facing declining revenues, but later we found that not every white box can fulfil the requirements, and NFV COTS cost is tenfold the cost of IT servers, so the CAPEX dream was never conceived correctly. Recently ETSI even formed new ISGs using OPEX as the main driver for NFV, but is this the real issue? It actually depends on how NFV is conceived.


Across the whole industry there are two approaches to taking this journey.


Build the platform first, or build the application first. The proponents of the former are mainly the cloud companies, who want to see the application as an IT service, which does not fit well with telco services; after all, in CSPs the revenue comes from the application, not from the platform.

The latter camp claims the application is the key and the platform must be built for it. In the short term this looks more appealing: every CSP wants to virtualize a PNF, so a heavy vendor leading with the application seems to make more sense. But what lies beneath the iceberg is the question of whether we are building a platform that can meet future requirements for at least the next 5-10 years. Unfortunately many CSPs are not long-sighted on this, primarily for one reason: they have never faced such an issue. A good example is Facebook. If Mark finally decided to take FB into the communication industry (and the signs are quite strong), how would they build? Obviously, they would make a platform irrespective of the service. I am not saying the service is not important; I am saying the platform must serve every service.


An example will make this easy to grasp. Vendor X has its VNF (VDU design); if you ask how to develop the platform for it, it will give requirements for NUMA placement, pass-through, bonds and forwarding-plane design that can limit the future hosting of another vendor's VNF. Similarly, for DC L0, the COTS/SAN dimensioning will be inaccurate: for example, based on m1.tiny and m1.large flavor sizes and VM placement, the last time I observed a multi-vendor scenario it could lead to using no more than about 65% of your infrastructure. This is a big compromise.
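The utilization point can be sketched with a back-of-envelope placement model. The sketch below first-fits a mix of large and small VM flavors onto identical hosts; the host capacity and flavor sizes are illustrative assumptions (not vendor figures), chosen only to show how fragmentation from mixed flavors leaves hosts partially empty, landing utilization around two-thirds rather than near 100%.

```python
# Back-of-envelope sketch: first-fit placement of mixed VM flavors onto
# identical hosts, illustrating how fragmentation caps utilization.
# Host capacity and flavor sizes are illustrative assumptions.

HOST_VCPUS = 40  # hypothetical compute host capacity

def first_fit(vms, host_capacity):
    """Place each VM on the first host with room; open new hosts as needed."""
    hosts = []  # each entry = vCPUs already used on that host
    for vm in vms:
        for i, used in enumerate(hosts):
            if used + vm <= host_capacity:
                hosts[i] += vm
                break
        else:
            hosts.append(vm)  # no host fits: open a new one
    return hosts

# A multi-vendor mix: large data-plane VMs plus a few small ones
vms = [24, 24, 24, 24, 6, 6]
hosts = first_fit(vms, HOST_VCPUS)
utilization = sum(hosts) / (len(hosts) * HOST_VCPUS)
print(f"hosts: {hosts}, utilization: {utilization:.1%}")
```

With these numbers four hosts are opened for 108 vCPUs of demand against 160 of capacity, i.e. utilization of 67.5%, in the same ballpark as the 65% figure observed above; real NUMA and anti-affinity constraints would typically push it lower still.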


So it looks promising to build the platform first, irrespective of the service. I do understand that some limitations will arise based on service type; for example, media and control planes require different HA and AZ designs. But at scale, NEs from the same service class must be supported by a common reference architecture, especially if, along the journey, the same platform will be opened to IT and to programmers. The key is to control the platform, not the application.


In a later blog I will explain how the platform can be planned, especially for the converged ICT case, and how it will help you size the DC and reduce CAPEX. For now, let us look at the key functions of an ICT platform.


  • Unified O&M

Rally, Ansible, third-party tools, DriveTrain, vROps, Functest, Dovetail: how to unify them into a simple one-click architecture is key to realizing configuration automation. Similarly, the platform must support the same cluster for both telco and IT DCs as one platform.


  • Supports all performance profiles (not just high performance)

Instead of only high performance, the platform must support all applications with their different performance requirements.


  • Multi-tenancy (with minimal HA/HGs)

vApps from both IT and telco can be onboarded using the same process and standardized APIs. This looks difficult until microservices architectures are in place, around 2020.



  • Auto scaling

The use of AI in NFV/SDN in the coming years is only possible if auto-scaling works well in a scaled network, I mean across many DCs where resources are pooled. I do not see a quick solution soon, because the super-VIM architecture still needs to fit well with the concept of hyper-scaling. The current auto-scaling as defined in the Open-O standardization looks problematic, and the auto-scaling parameters in the VNFD of VNF1 can be very different from those of VNF2; at least this is our experience. I think the ONAP Beijing release will address this somewhere along the road, as confirmed to me by the Confluence team last week.
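To illustrate why per-VNFD scaling parameters complicate a shared auto-scaling engine, here is a minimal threshold-based sketch. The parameter names and values are illustrative assumptions; real VNFDs (e.g. ETSI SOL001/TOSCA scaling aspects) express this differently per vendor, which is exactly the divergence described above.

```python
# Minimal sketch of threshold-based VNF auto-scaling decisions.
# Parameter names and values are illustrative, not a real VNFD schema.

def scale_decision(load: float, instances: int, params: dict) -> int:
    """Return the new instance count after one scaling evaluation."""
    if load > params["scale_out_threshold"] and instances < params["max"]:
        return min(instances + params["step"], params["max"])
    if load < params["scale_in_threshold"] and instances > params["min"]:
        return max(instances - params["step"], params["min"])
    return instances  # within the dead band: hold steady

# Two VNFs with different scaling parameters, as often seen in practice
vnf1 = {"scale_out_threshold": 0.8, "scale_in_threshold": 0.3,
        "step": 1, "min": 2, "max": 6}
vnf2 = {"scale_out_threshold": 0.6, "scale_in_threshold": 0.2,
        "step": 2, "min": 4, "max": 12}

assert scale_decision(0.85, 2, vnf1) == 3   # VNF1 scales out by 1
assert scale_decision(0.65, 4, vnf2) == 6   # VNF2 scales out by 2
assert scale_decision(0.65, 2, vnf1) == 2   # same load, VNF1 holds steady
```

Note that the same 65% load triggers scale-out for VNF2 but not for VNF1: an NFVO sharing pooled resources across both has to reconcile these per-VNF policies, which is the hard part.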


  • Distributed DC

The main theme is how NFV will work if, for the same VNF, the VMs are placed in different DCs: how will the service scale, and how will the NFVO optimize NFVI resources in real time? This idea needs more refinement, because one VIM controls only one DC infrastructure at this time.


Sincerely, the list is quite long, but the above five are the key points for a platform if it is to meet at least the next five years' requirements. My next blog will try to work out how to actually plan the NFVI, and whether the VNF's server/storage requirements alone are enough.


Sheikh is the Chief Architect Consultant for NFV, SDN and Telco Cloud at Saudi Telecom Company, the biggest ICT operator in the Middle East, and is always interested in the disruptive technologies driving the industry's transformation. The author hails from a telco CSP background and has worked in the telco cloud domain since 2013, including with Amazon, Huawei, Mirantis, VMware, Red Hat and others. The comments in my writings are my own and shall not be considered as having any relation to or binding on my employer.

Understanding Asymmetrical NFVI Port Design Requirements in OpenStack/OPNFV

In all open NFV solution deployments we encounter asymmetrical port/bandwidth planning in the NFVI. As an example, in Huawei solutions based on the CX310, CX910 or CX912 switch modules, or in HP FlexNetwork designs, the downlink ports are normally about twice (2X) the uplink ports (1X). But since traffic moves from server to vNIC to switch module and outward in both the UL and DL cases, why is this so?

The answer lies in NFV's horizontal and vertical traffic scenarios: south-north traffic takes the uplink, and east-west traffic takes the downlink. The uplink and downlink are asymmetric for the following reasons.

1)   East-west traffic is usually greater than north-south because transactions are multiplied during computation. In an NFV-based system, once a request is received from the north-south direction, the system needs to communicate with the computing modules to compute the final result. During this process many internal transfers take place, and the execution thread spans multiple computing modules before the final response is delivered in the uplink direction.

2)   The downlink handles not only the computational tasks assigned to the server but also the auxiliary processes relevant to chassis and computing-module health. These include heartbeat and management transactions, which run in the background but consume switch bandwidth in parallel with computing transactions, so additional throughput needs to be factored in.

3)   Computing-node redundancy requires that two ports are used per node. As a result, there must be twice as many ports on the downlink.

4)   If we enable micro-segmentation, e.g. in VMware, ACL and security analysis need to be performed on each compute host before traffic leaves the server. All of this overhead lands on the east-west, and hence the downlink, traffic.

5)   Finally, in OpenStack HCI or high-performance computing, many functions are split from the controller to the host, such as DVR, DLR (VMware), and Neutron functions spread across host and control nodes. This architecture delegates more processing-related tasks to the compute nodes, and an architecture with more processing also requires a lot of message exchange across the direct pipe between compute nodes, hence asymmetric traffic between uplink and downlink.

These are some of the key reasons why the two directions are asymmetric, and why both the traffic and the ports need to be planned this way to satisfy the fast data stack requirements stimulated by NFV use cases, especially for VNFs involving the data plane.
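The five reasons above can be folded into a rough dimensioning sketch. The multipliers below are illustrative assumptions (a 2x east-west amplification, 10% management/heartbeat overhead, and redundancy modeled simply as doubled downlink capacity), not measured values; the point is only to show how the factors compound into a downlink requirement several times the north-south demand.

```python
# Rough dimensioning sketch for asymmetric uplink/downlink planning.
# All multipliers are illustrative assumptions, not measured values.

def downlink_bandwidth(north_south_gbps: float,
                       east_west_multiplier: float = 2.0,
                       mgmt_overhead: float = 0.10,
                       redundancy: int = 2) -> float:
    """Estimate downlink capacity (Gbps) from the factors above:
    east-west amplification of north-south demand, management and
    heartbeat overhead, and dual-port redundancy per compute node
    (modeled here, simplistically, as doubled capacity)."""
    east_west = north_south_gbps * east_west_multiplier
    with_overhead = east_west * (1 + mgmt_overhead)
    return with_overhead * redundancy

# 10 Gbps of north-south demand -> required downlink capacity
need = downlink_bandwidth(10.0)
print(f"downlink capacity needed: {need:.1f} Gbps")
```

Under these assumptions 10 Gbps of north-south traffic implies 44 Gbps of downlink capacity, consistent with the roughly 2:1 (or greater) downlink-to-uplink port ratio described above.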

About the author: Sheikh is a Huawei Middle East Senior Architect for NFV, Telco Cloud and SDN, with a focus on ICT service delivery through telco DevOps and on defining the road to the future 5G core network. He is always interested in the disruptive technologies driving the industry's transformation.

CMM modeling to build a True Telco Cloud for Carriers (Empirical vs Hypothetical)

Saad Sheikh

NFV | SDN | Telco Cloud | DevOps


An Integrated Approach from Model Introspection to the Architecture Transformation of CSPs

As organizations embrace the importance of digitization to meet the requirements set by the Industry 4.0 evolution, there are certain targets every business, whether small or big, wants to achieve: to be nimble, efficient, open, scalable and high-performance. Since the word "cloud" sounds like a one-stop solution to all of the above, every organization, whether we like it or not, wants to evolve to the cloud, though each enterprise's targets may vary: for a bank it can be centralized control and operational agility; for a CSP it can be a vision-2020 push to compete with OTTs and nimble players; and the situation is even more complex for vendors like Huawei, whose future is driven by customer requirements to update product, service and solution offerings accordingly.

Believe it or not, every business and every segment is evolving this way, and cloud is central to it. As an industry reference we can look at Red Hat's official annual reports in recent years and the blogs of Jim Whitehurst explaining the rise of cloud companies in this segment; Mirantis is not far behind. On the services side there is still considerable turmoil: for your information, AWS reportedly captured 75% of niche consultancy-services revenue in the Australian market in 2016-2017 (internet source). One can safely say the cloud companies are in a state of flux: the cloud company wants to put on a telco vendor's hat, and telco vendors want to put on the hat of IT companies. Everybody wants to tell the customer's CXOs and strategists that they know better than the others, while clearly missing one notion: how to best create value for customers?

I think answering this without prejudice and with balance is as important as developing a solution, because if the evolution partner is only promoting its own solution, there is no way it can support the organization in reaching its target architecture and meeting the minimum criteria of the enterprise continuum, as we used to do when building the future Network 2020 for a CSP.

So in this paper I want to bring some dimensions of how to make a robust design of a Telco Cloud and how to smoothly integrate it into the live environment. I will cover the domains of NFV, Telco Cloud, and SDN, their integration, and Agile delivery based on a Telco DevOps approach. This has been my focus area for many years, and I want to share my own ideas as a network architect with other network architects. The purpose is also to indirectly propose an approach for any open system integrator to execute such a transformation in the most open manner.

So first, as architects we must list the different dimensions along which an IT cloud differs from a Telco Cloud, to benchmark a transparent index by which to select, evaluate, and optimize.

1.  Solution Selection

How do you select the best products for your network? I think there is no single answer; the clearest answer is driven by requirements, not only technical ones (SoCs, proposals, KPIs) but, most importantly, the commercial terms and the commitment of partners to strive to meet customer requirements and deliver smooth migrations. Not only delivery is important; the support-service SLA matters just as much. We all know that the traditional RPO/RTO model, as used in IT services, cannot handle all CSP requirements. Believe it or not, IT companies' reliance on so many partners and the total separation of hardware selection from application services is the biggest concern for a CSP; as a matter of principle, the partner who can combine the different layers will deliver maximum advantage for the enterprise.

2.  Maturity of Solution

In the legacy world all vendors developed in one direction: lock the standard first, then do a pilot project for solution hardening, then go for mass rollout. The only issue is that in the NFV world, with standards refreshing every six months, it is not the technology but the speed with which we harness it that matters. The main question becomes: when a bug arrives against such a fast-changing standard, who will fence it off and block it from the CSP environment? A partner's capability in ecosystem development, open labs, EANTC labs (my favorite initiative), and the Plugtests (the fourth one, in which we are directly participating and through which we are evolving) will define the value chain, ensuring that openness does not mean non-standard and solving the one main issue with innovation over all these years.

3.  Use Case development  

Rome was not built in a day, as the adage goes, and so it goes in the open world: the solution must be use-case based, and cloud is no exception. For a CSP, especially a Tier 1, I think the key NFV Telco Cloud use cases are as follows:

Lock the interface specifications, especially Ve-Vnfm-vnf/em

Standardize the onboarding APIs and image parameters

Simulate a value chain end to end through NFVO/Tacker, as done in the OSM community with vCPE, vIMS, and vBNG

I think in the IT cloud too much of the work talks only to the code and, as a matter of principle, cannot address how the cloud must integrate with the telco environment.

So, as we see it, our targets for the cloud journey over the next 1-2 years must be:

Target 1: any new VNF or application, from any vendor or built in-house, can be onboarded on any cloud in one month at most.

Target 2: any copy or clone, as we used to call it, must take no more than two days: one day for onboarding, followed by another day for tuning and validation through Tempest, Rally, cloud performance, cloud availability, and so on, to mention a few.

In the re-architected world the capability continuum must converge on something like 60-90 minutes for any application, but this target needs standardization of interfaces and obviously requires the re-architecting process to be completed first.

4.  Represent a Telco Device as Code

The same adage applies here: for a CSP, especially a Tier 1, the key is to know the steps of modeling a telco PNF as code. It seems simple, but the involvement of so many communities and standards bodies also makes it complex. We need to refer to point #6 to answer this part.

5.  How to develop Networking Solution for a Telco Cloud

For an IT cloud this seems like a standard, simple exercise: design the underlay and overlay the traffic using BGP/MPLS in an agile way. Contrail, which is a market leader, gives a good overview of this in the Contrail community updates. However, as we move to the Telco Cloud, things no longer stay the same, and the whole story must start from the POD, or what my transport friends call fabric design. Sorry for mixing POD with fabric, but the fact is that this is an important idea in a Telco Cloud. For a Telco Cloud based on OpenStack, a POD should be self-contained, with APIs (controllers), compute nodes, and storage, as well as any network nodes, including the infrastructure. The whole idea is that east-west and north-south traffic is segmented and controlled in such a way that east-west traffic never leaves the POD. It also helps if PODs are set up as availability zones; this way, workloads that are east-west intensive can be assigned to PODs where their east-west data will not have to leave the POD. POD sizes should be small, with plenty of horizontal scaling. A good paper from Big Switch explains this part.
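The POD-containment idea above can be sketched in a few lines: workloads that exchange heavy east-west traffic are grouped, and each group is pinned to one small POD (availability zone) so its traffic never crosses the fabric spine. A minimal illustration, with hypothetical workload names and a hypothetical POD capacity:

```python
def assign_pods(ew_groups, pod_capacity):
    """ew_groups: lists of workloads that talk east-west to each other.
    Returns a mapping workload -> POD name, never splitting a group,
    so E-W traffic stays inside one POD/availability zone."""
    placement, pod_idx, free = {}, 0, pod_capacity
    for group in ew_groups:
        if len(group) > pod_capacity:
            raise ValueError("group larger than POD; scale PODs horizontally")
        if len(group) > free:          # current POD cannot hold the group whole
            pod_idx, free = pod_idx + 1, pod_capacity
        for wl in group:
            placement[wl] = f"pod-az-{pod_idx}"
        free -= len(group)
    return placement

# Hypothetical VNF components grouped by east-west affinity:
placement = assign_pods([["vIMS-P", "vIMS-S"], ["vEPC-UP", "vEPC-CP", "vPCRF"]],
                        pod_capacity=4)
```

The same grouping would then map onto Nova host aggregates and availability zones, so the scheduler enforces it.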

Moving up, the next things that come are the bridges br-int, br-ex, and br-tun; a bridge is actually deployed through the Neutron ML2 OVS mechanism driver.

To keep the design simple, the management-zone entities such as Mirantis MCP, Red Hat overcloud, and Ceph can use standard ML2 bridging with OVS, which is also the default configuration.

For VNFs, mature VNF vendors propose to the industry that the bridge not be used, because most HPC VNFs will use provider networks rather than tenant networks for the workload. As this requires the VNF to access the fabric network directly, there is no need to present the vNIC to the bridge; this is why in the OVS-DPDK scenario there is no need to deploy a bridge on the computes, and with SR-IOV it is simpler still. Also, per community recommendations, it is suggested not to deploy standard OVS alongside OVS-DPDK on the same host, because the kernel OVS bridge based on the ML2 plugin makes br-int a single transfer point and can impede the performance of a Telco Cloud. From OpenStack Pike onward the community can really work on a standard configuration and architecture design for the whole cloud; for now this issue is left to the community.
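The per-VNF datapath choice described above can be expressed as a simple policy table. A sketch, where the packet-rate thresholds are illustrative rather than community-mandated:

```python
def pick_datapath(vnf_profile):
    """Choose the host datapath for a VNF, per the design notes above:
    data-plane-heavy VNFs (e.g. vBNG) go to SR-IOV on provider networks,
    moderate packet rates go to OVS-DPDK (no kernel br-int in the path),
    and control/signalling VNFs can stay on standard kernel OVS."""
    mpps = vnf_profile["pps_millions"]      # millions of packets per second
    if mpps >= 10:
        return "sr-iov"     # NIC passthrough, line rate, no vSwitch at all
    if mpps >= 1:
        return "ovs-dpdk"   # userspace OVS; don't mix with kernel OVS per host
    return "ovs"            # kernel OVS via the ML2 plugin is fine

choice = pick_datapath({"name": "vBNG", "pps_millions": 12})  # hypothetical VNF
```

In practice the decision also weighs NIC/CPU budget per customer, as discussed below.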

In a later blog I will talk about how to optimally plan your NFV DC for any VNF and any cloud: deciding on optimum flavors, OVS modes, and bridge modes. As a matter of fact, you may come to architect a solution where, for customer 1, deploying every VNF as SR-IOV saves more NIC/CPU resources than in scenario 2, where the deployment follows the VNF principle, e.g. OVS/EVS for IMS and SR-IOV for BNG. This is itself a detailed topic which the author believes needs to be covered in depth, and it will be discussed separately.

For networking, BGP will be the key, with SDN as well as without it. One of the most important architectural considerations is that SDN controllers, and Contrail in particular, use BGP to integrate both the future cloud and the non-cloud parts of the network, and OpenStack offers BGPaaS, so the DC design should take this into account.


6.  Telco DevOps is not IT DevOps

Back in 2011 a webinar introduced me to the strange term "DevOps." Over the years I have come to the conclusion that since NetOps is NE-focused while DevOps is IaC-focused, meaning IT only wants code abstraction and automation, the two cannot be the same. In my recent talks with leading community architects and mentors I have come to the following conclusions:

a.  Linux scripting is key: you cannot do DevOps unless the CSP frees its operations from infrastructure operations, and that means going the scripting way.

b.  The cloud must support multi-cloud. I mean that Telco DevOps must watch the future of OpenStack-plus-container deployments; using open-source tools such as Spinnaker and Rally is a nice way to go.

c.  Link to the community: Mirantis MCP is a great example of this, using Salt formulas, metadata-model standardization, and artifact pulling through Gerrit and Jenkins; it is a nice way to automate your Telco Cloud.

d.  These are all IT terms; what about telco? Telco DevOps must consider TOSCA, HOT, YANG, and YAML script automation and, obviously, their abstraction. You can follow my profile on Jenkins and the Code Club talks about this. But code is useless without logic: how to successfully model your telco NEs in software and scripts is the real challenge. The second challenge is how Telco DevOps will join hands with the IT cloud to automate the whole ecosystem through a single point of access, which is the orchestrator.
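To give a concrete taste of the HOT side of this, the skeleton of a Heat template for a single VDU can be generated programmatically. A minimal sketch; the VDU, image, and flavor names are placeholders, and HOT is YAML on disk (a dict is used here so the structure is easy to inspect):

```python
import json

def minimal_hot(vdu_name, image, flavor):
    """Render the skeleton of an OpenStack Heat (HOT) template for one VDU."""
    return {
        "heat_template_version": "2016-10-14",   # a valid HOT version string
        "description": f"Minimal HOT sketch for {vdu_name}",
        "resources": {
            vdu_name: {
                "type": "OS::Nova::Server",       # standard Heat resource type
                "properties": {"image": image, "flavor": flavor},
            }
        },
    }

# Hypothetical names for illustration:
tpl = minimal_hot("vdu-cscf-0", "vims-image", "m1.telco")
print(json.dumps(tpl, indent=2))
```

The real challenge, as noted above, is the NE model that feeds templates like this, not the template itself.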

e.  Finally, to guide the transformation you need to move from PMI-PMP to PMI-ACP, as I did a year ago, with the teams, processes, and tools to support an Agile process. The target: any new requirement can go from idea to offering in two weeks at most. However, it seems like a long journey to get there.

7.  Automated testing

For an IT cloud it is a good idea to automate everything, but for a Telco Cloud the right place to start is test automation. If an NFV project can automate 50% of its tests and, on top of that, automatically validate the whole environment after any infrastructure change, that is an ideal basis for a unified Telco Cloud. A nice benchmark to start the automation journey in the hill-climb phase would be:

  • New DC clone (75 min)
  • Cloud verification (65 min)
  • VNF deployment (55 min)
  • FR/NFR testing (20 min)
  • Continuous monitoring using Aodh, Gnocchi, ELK, and Nagios is a nice point from which to start the Telco Cloud journey
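The benchmark numbers above can be treated as a stage budget for the automation pipeline; a trivial sketch that checks a measured run against it (stage names mirror the list, the measured durations are invented):

```python
# Hill-climb budget from the list above, in minutes.
BUDGET = {"dc_clone": 75, "cloud_verify": 65, "vnf_deploy": 55, "fr_nfr_test": 20}

def check_run(measured):
    """Compare measured stage durations (minutes) against the budget.
    Returns (total_minutes, list of stages that overran their budget)."""
    overruns = [s for s, limit in BUDGET.items() if measured.get(s, 0) > limit]
    return sum(measured.values()), overruns

# A hypothetical pipeline run: one stage (vnf_deploy) overruns its 55-min budget.
total, late = check_run({"dc_clone": 70, "cloud_verify": 60,
                         "vnf_deploy": 58, "fr_nfr_test": 15})
```

A real pipeline would pull these timings from the CI system rather than a dict.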

Finally, the cloud transformation partner must have a solution continuum to support such a service, say 50+ scenarios and 1,000+ test cases, from which a project customer can select and optimize.

8.  Multi-Site Cloud Design

What makes the Telco Cloud different from an IT cloud is the way the service is orchestrated and healed: cross-DC operation, backups, and network scaling, to mention a few. As a Telco Cloud design foundation, Tricircle and its shared-network design are key to planning a Telco Cloud; a nice new community project, Trio2o, which addresses Nova and Cinder extension across sites, is another good idea. One important concept is the SAN or Ceph, as it is the data container of all instances; for solution selection, any SDS will work as long as it can handle both structured and unstructured data.

9.  SLA Design

It is very common to describe the Telco Cloud as five nines versus three nines for the IT cloud, but how this is realized is the key to successful realization. The guidelines below are a nice place to start:

a.  In OpenStack, use HAProxy, Memcached, and Redis to provide HA

b.  VNF blueprints must support HA by domain and by service

c.  Service-pool or load-sharing design to control HA per telco service

d.  VM and host-aggregate design
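For point (a), a minimal HAProxy front end for an OpenStack API VIP looks roughly like this; the IPs and controller names are placeholders, and a production layout also needs keepalived (or similar) to manage the VIP itself:

```
listen keystone_api
    bind 10.0.0.10:5000          # VIP for the identity service (placeholder IP)
    balance source               # keep a client on one backend
    option httpchk GET /v3       # Keystone health probe
    server ctrl-1 10.0.0.11:5000 check inter 2000 rise 2 fall 5
    server ctrl-2 10.0.0.12:5000 check inter 2000 rise 2 fall 5
    server ctrl-3 10.0.0.13:5000 check inter 2000 rise 2 fall 5
```

The same pattern repeats for every OpenStack API endpoint, with Memcached and Redis backing the stateful pieces behind it.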


10.  HPC Cloud

HPC, or high-performance clouds, are the way NFV should be built. NUMA design is the key on the NFVI: will NUMA be crossed, how to adequately match the CPU cores, and which PCI pass-through option to use, e.g. DPDK for VNF 1 and SR-IOV for VNF 2. Similarly, dynamic huge-pages design is key: although it is a good idea to plan uniform huge pages, in our experience this results in a big compromise in optimum resource selection, and an architect must take this into account.
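In OpenStack terms these NFVI knobs land in Nova flavor extra specs. A sketch that builds them; the key names (`hw:numa_nodes`, `hw:mem_page_size`, `hw:cpu_policy`) are the standard Nova ones, while the values chosen here are just an example:

```python
def telco_flavor_extra_specs(numa_nodes=1, page_size="1GB", dedicated_cpus=True):
    """Build Nova flavor extra specs for an HPC/NFV workload:
    NUMA topology, huge pages, and pinned (dedicated) vCPUs."""
    specs = {
        "hw:numa_nodes": str(numa_nodes),     # avoid crossing NUMA unless asked
        "hw:mem_page_size": page_size,        # back guest RAM with huge pages
    }
    if dedicated_cpus:
        specs["hw:cpu_policy"] = "dedicated"  # 1:1 pCPU-to-vCPU pinning
    return specs

# Example: a two-NUMA-node flavor for a data-plane VNF.
specs = telco_flavor_extra_specs(numa_nodes=2)
```

The dict would then be applied to a flavor (e.g. via `openstack flavor set --property`), so the scheduler and libvirt enforce the placement.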

11.  Integration Tool Set

IT applications, as a principle, are standalone, and the Vn-Nf interface alone is enough to make them work, while for the NFV cloud there are 10+ interfaces such as NFVI, Ve-Vnfm, Or-Vi, Or-Vnfm, Os-Ma, and Se-Ma.

How to standardize these interfaces and APIs for smooth onboarding and resource orchestration, and how to develop the corresponding toolchain, is the key to transformation.

12.  IT Cloud Migration vs Telco Cloud Migration

In a typical IT cloud migration you only need to normalize and study DB replication and instance states, but for a telco migration the list can be exhaustive, including meeting customer KQIs, PNF-to-VNF migration, and managing the operations evolution; customer experience and smooth evolution are the keys to success.

13.  Multiple Hypervisor Support

In an IT cloud we primarily talk about Xen, while for the NFV cloud ETSI talks about KVM in its reference architecture; but the VMware stake is just as important, and the design must consider how to run KVM alongside VMware. Driver testing, validation, and pooling of both are key to expanding the cloud. At this phase of the industry this may be difficult, but for Industry 4.0 this will be the target reference architecture.

14.  Open Platform is the Real Problem in Telco Cloud

An IT cloud is disruptive and very open; it is open based on APIs. A Telco Cloud, by contrast, requires a lot of standardization: the ETSI reference architecture, Plugtests, EANTC, community work, etc.

15.  EPA and Building High-Performance Clouds

I remember the inclusion of EPA (Enhanced Platform Awareness) in OpenStack and its use case for 4K video, the famous one in the community: preparing different hardware for different use cases, making the best use of memory, disk, RAM, smart NICs, etc. A company named Mellanox has done a hell of a job on this work. To sum up, in the future converged cloud the following will be key to deploying a true NFV cloud:

a.  NUMA and CPU affinity need focus: how to assure that workloads are placed in the right NUMA node; you may need to align the VNF, NUMA topology, OVS/bridge, and CPU/NIC in the design.

b.  TLB buffer size and the associated huge-pages normalization for a target cloud.

c.  For CPU pinning the main focus must be the PMD threads, their dimensioning criteria, and their allocation across different VMs and vNICs.

I personally think there is an inherent Pandora's box in the community here, especially for OVS workloads, that needs to be addressed; for SR-IOV it seems OK, as NIC pass-through can be used to achieve line speed.
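For point (c), OVS-DPDK pins its PMD threads via a CPU bitmask (`other_config:pmd-cpu-mask`); computing that mask from a core list is a one-liner worth getting right:

```python
def pmd_cpu_mask(cores):
    """Hex bitmask for OVS-DPDK's other_config:pmd-cpu-mask.
    Pick cores on the same NUMA node as the DPDK NIC, and keep them out of
    Nova's pinned-CPU pool so PMD threads and VNF vCPUs never collide."""
    mask = 0
    for core in cores:
        mask |= 1 << core      # set the bit for each dedicated host core
    return hex(mask)

# e.g. dedicate host cores 2 and 3 to PMD threads:
print(pmd_cpu_mask([2, 3]))    # 0xc
```

The result would be applied with `ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xc`.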

16.  Architecture Transformation

For the architecture you must consider migration, as the ultimate target is to move services from the as-is to the to-be architecture. The key principle of the IT cloud is to be app-focused: all customization is done on the app, not the cloud, which makes a public cloud very simple to manage and automate. Conversely, the Telco Cloud is a different world: PNF migration requires a detailed analysis of service requirements, and then customization of the cloud (EPA, HPC, THP, etc.) to meet the telco service needs. This is a very important point for the architecture.

Conclusion for the CXO Office and Chief Architects 

Finally, I want to wrap up this paper with the summary that the Telco Cloud is not the IT cloud. As a network architect you must quantify the real requirements and match them with both the solution maturity and the service capability of the partner. To summarize, the dimensions that make the Telco Cloud different from the IT cloud are MVI, IoT, KPIs, performance, elasticity, security, HA, service SLA/KQI, the NFV assurance part, and smooth operations. For a Tier 1 that wants to transform there are further areas to look at, such as how to transform through DevOps, process, tools, and skills; all of this needs to be evaluated from the telco-service point of view. I think the reason why, despite many commercial Telco Clouds, the migration has not yet happened is not technical: it is because the architecture did not consider the fact that this change is not about the technology; the enterprise architecture and operations will bear the biggest impact. This is why a strong partner with IT skills and a telco mind is the key to removing the impediments and achieving quick wins for the business.

Closing the discussion: what we infer from this paper is that IT cloud concepts no longer apply linearly to the Telco Cloud, especially if you are architecting a cloud that will in future host vApps seamlessly on a common DC. Similarly, the concepts of Python and NetFlow alone no longer apply where you need service abstraction and automation from a business and CRM perspective. Especially considering the future OSS, the business use case needs to be modeled into the service catalog with one-click instantiation, so you can see the concepts are not the same there either. Similarly, a scalable VNF is not the same idea as scaling in the IT cloud, which is mere VM and resource expansion. Those who are new to this part may need to learn statistics first, especially how LMA and DLS algorithms really support network automation. Seen altogether, NFV is a system while the cloud is a platform, so the concepts of E2E QoS, HA, SLA, security, FCAPS, and building a flatter architecture will be the key objectives of the CXO office.

Nevertheless, no architecture is complete unless security is built in. We all know the cloud took a long time to be accepted by end users for this reason, and as a matter of principle security needs to be designed in rather than designed out, because open interfaces also mean that everyone knows the language for talking to a component, and that advantage can be exploited. API security through HAProxy and image controls can address the high-level issues, but control of tenants will be key, especially for the future converged clouds. The biggest challenge as CSPs open up the cloud will be IP-related, because the self-service case invites malicious attacks from unknowns. Hypervisor security and, in future, per-domain host OS separation in Kubernetes will be the main concerns for CSPs. The network is just opening up: imagine the case where a future software developer from a university is invited to access your cloud to write an application for you. How will the chief security architect accept that? It is not acceptable now, but it will become acceptable as the architecture evolves. The key points for cyber teams to remember along the way are division of domains and duplication of analytics. Confused? I mean iptables plus security groups plus ACLs; but these impact performance for now, so to support such an architecture the NFVI needs to evolve. The Pike release addresses only about 80% of such scenarios, and we do write to the community to ramp up this part; you may know that in the community this topic is now given great importance, and in fact you can join this work too.

The market has just opened and there are many partners along the way. We even know cases where cloud expertise alone is presented by a company as a sign that it understands the evolution; this is a trap. Knowing the service and its assurance is the key, and the best way to survive is to be the most open. For me, open means a tireless effort to bring value to customers and to make the community feel: hey guys, we are together, we are open, and we will solve it together!

Finally, every business has the right to separate the wheat from the chaff, and we should remember that all that glitters is not gold. Best of luck, and see you in the next edition.

This paper cannot be considered complete unless I thank the following:

  1. Ben Silverman, Principal Architect, OnX Cloud: a teacher, a friend, my mentor
  2. Jaakko Vuorensola from Red Hat, a longtime friend and influencer
  3. Ajay Simha, Red Hat chief architect, for the crazy work on the reference architectures
  4. OPNFV, ETSI, and OSM, which are obviously the bible for all this work; solving issues the open-source way is a nice habit to build while evolving from NetOps
  5. Customers and partners: your questions and problems are what define my writings

And obviously the crazy consulting team in Huawei: together we believe that understanding customers' real requirements and building a solution around them is the best way to transform a business and bring long-term value to customers, partners, and the industry.

Sheikh is Huawei Middle East Senior Architect for NFV, SDN, and Telco Cloud with a focus on ICT service delivery through Telco DevOps, on defining the roads for the future 5G core network, and always interested in the disruptive technologies driving industry transformation. The author hails from a telco CSP background and has worked in the Telco Cloud domain since 2013, including with Amazon, Huawei, Mirantis, VMware, and Red Hat. The comments in my writings are my own and shall not be considered as any relation to, or binding on, my employer.

Adapting the Elasticity use case for Scalar Telco Clouds



What are telco applications in software terms? A modular, segmented codebase with defined APIs that can be customized for different functions and requirements. For telco applications with varying requirements, including uneven usage, bandwidth, and spikes during peak periods, built-in elasticity and scalability are crucial for larger POD environments. For such cases the applications should be designed to detect variations in real-time demand for resources, including but not limited to bandwidth, storage, compute, and, for the telco use case, SLA and KPI. However, in the legacy world applications were developed to run on a single machine and require re-coding to adapt to both the scalability and elasticity that the Telco Cloud provides.

The fact that telco applications are persistent in nature makes this scenario more intricate to achieve.

There are two main concepts to understand, elasticity and scalability, in the Telco Cloud.

  1. Scaling horizontally (scale out / scale in):

Those who have delivered an NFV-based Telco Cloud know that this kind of scaling is supported today. What it actually means is that the EMS notifies the VNFM of the current load, in real time or via polling, and the VNFM can decide to add or remove a VM and launch new instances of the application. This does require a policy algorithm to balance the vApp load across all new and old VMs in a controlled fashion. This is the practice normally used in NFV projects today.
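The EMS-to-VNFM loop described above reduces to a threshold policy with hysteresis, so the VNFM does not flap between scale-out and scale-in. A minimal sketch, with illustrative thresholds:

```python
def scaling_decision(load_pct, instances, min_inst=2, out_at=80, in_at=30):
    """Horizontal scaling policy for a VNF: the EMS reports load_pct,
    the VNFM returns the action. The gap between out_at and in_at is
    the hysteresis band that prevents oscillation."""
    if load_pct > out_at:
        return "scale-out"                        # add one more VM of this VNFC
    if load_pct < in_at and instances > min_inst:
        return "scale-in"                         # drain and remove one VM
    return "no-op"

assert scaling_decision(90, 3) == "scale-out"
assert scaling_decision(20, 3) == "scale-in"
assert scaling_decision(20, 2) == "no-op"   # never shrink below the minimum pool
```

A production VNFM would add cooldown timers and per-VNFC policies on top of this core.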

  2. Scaling vertically (scale up / scale down):

If you are not following the industry you may not be aware of what this means for NFV. This concept comes mainly from the AWS Netflix use case for persistent workloads, which AWS supports for large enterprises. It means moving a vApp in real time to a bigger or smaller VM, i.e. resizing the VM.
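In OpenStack terms vertical scaling is a flavor resize, and the core of the policy is just stepping through an ordered flavor ladder. A sketch; the flavor names are hypothetical:

```python
# Ordered ladder of flavors, smallest to largest (names are placeholders).
LADDER = ["m1.telco.small", "m1.telco.medium", "m1.telco.large"]

def resize_target(current, direction):
    """Return the flavor to resize to, or None at the edge of the ladder.
    In OpenStack this would drive a server resize + confirm cycle; for a
    persistent VNF the hard part is doing that without dropping sessions."""
    i = LADDER.index(current)
    if direction == "up" and i + 1 < len(LADDER):
        return LADDER[i + 1]
    if direction == "down" and i > 0:
        return LADDER[i - 1]
    return None

assert resize_target("m1.telco.small", "up") == "m1.telco.medium"
assert resize_target("m1.telco.large", "up") is None   # already at the top
```

The session-continuity problem during the resize is exactly the challenge discussed in the rest of this section.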

  3. The myth of scaling

As we can see, as long as vertical scaling in the telco environment has a Telco DevOps process, as I like to call it, to build, integrate, verify, and test the service in a somewhat automatic way, it makes a perfect case for avoiding the trap of load balancing and validation through a long exercise of manual testing and MOPs followed by the front line.

Incorporating this model through OPNFV test projects like Functest and Yardstick means it can help achieve fast scaling and complete DC expansion more quickly. But obviously the industry needs to consider how to assure that there is no impact on VNF performance, SLAs, or KPIs, and that such scaling happens in real time on persistent workloads.

Currently this case is not supported. Maybe one alternative is to use telco service capabilities to isolate a given VNF first and then apply this model, but it will surely face challenges where a VNF is split across sites in a federated network.

This model will be the key to delivering agility in big telco DCs; model 1, as used now in the NFV world, will be obsolete within 2-3 years. After all, telcos also want to become the Netflix of the industry. :)

It means the NFV world will move one step ahead, from being merely scalable to being elastic for its workloads.

  4. Key challenges

However, evolving to this model requires the industry to agree on and solve some key issues:

  • How to solve FCAPS and the concerns of operations customers
  • How to assure KPIs before and after
  • 100% automated testing after expansion
  • 100% automated performance validation
  • How to introduce the service for workloads, considering there will be running traffic before expansion
  • How to address customer experience for such cases

  5. Watch the road ahead

Finally, to wrap up, we must also think about how soon OpenStack, the unified standard for telco NFV, will support the evolution from PNF to VNF, and subsequently whether Kubernetes can be deployed on it to support scaling through containers. Surely this will eventually solve the issues, but until the telco world's key concerns highlighted above are addressed, the road ahead seems gloomy. Do watch the video below to see the latest happenings in this domain, as a takeaway from the Boston summit.

I would like to know your thoughts on this matter, because for persistent workloads scaling has more challenges than for ephemeral workloads; the journey is challenging but interesting.

This will be the key to success!


Solving the 5G Core Network System Architecture Challenges for smooth Evolution

Huawei MWC2017 White Paper for SOC Core



Since the early 90s the telecom industry has seen a steady progression from 1G through 2G, 3G, and 4G, and now 5G seems just around the corner. The proponents of 5G claim that there is huge potential in the new technology and that it will be the bread and butter of the telecom market for at least ten years. Considering the market worth of 5G, along with the potential opportunities it will create, it is reasonable to get the new network ready to milk the market; however, the many interfaces and highly complex use cases mean a lot of complexity in the integration and value creation for this next-generation solution.

In fact, the great opportunity also comes with big challenges: the inclusion of many verticals in the 5G network will mean a host of challenges, especially with regard to integration with existing technologies and networks and smooth migration. There is also the question of how to build the operations model of such a converged core, which in Huawei we call the SOC (Service-Oriented Core), shared among many verticals.

Actually, the complexity and capacity volumes that 5G networks are about to offer present challenges that require consideration across many domains:

  • Eradicate dependency on any access network or UE while still assuring 100% interworking and backward compatibility
  • NFV to build capacity
  • SDN to program capacity efficiently
  • API exposure to integrate many third parties
  • A unified pane of operations to make all components work as one system
  • An open architecture based on software-industry microservices and SOA

To meet these challenges we need to understand a lot of standards and customize them per customer requirements; this is the key task for the system integrator.

The requirements of 5G core networks will be highly customized in two ways. First, they have to offer services to many kinds of end users: industry verticals, government, healthcare, telcos, media and entertainment. Second, within each vertical they need to offer a customized solution for every customer segment. Hence the traditional fixed use cases cannot apply: the core-network service catalog has to be customized and optimized for each segment and customer.

With speeds of more than 10 Gbps in readiness for end users and devices, it is clear the brain of the 5G network must be based on hyper-scaled NFV clouds powered by the agility of SDN. The enterprise space may also converge with the carrier space to deliver cross-geography provisioning capability through SD-WANs.

How CSPs will deliver the value of well-defined and standardized systems in this complex ecosystem will surely require a lot of work around tools, management systems, automation, and efficiency to meet the new network requirements.

With the overall system becoming more open, the security risk will be greater. In particular, with billions of devices connecting and authorizing through the DCN and third-party platforms, proactive measures will be required to find security breaches, such as building in-house security systems around red/blue teams; it will require security integration and testing in ways far exceeding simple LDAP or IAM measures.

To imagine the change in network architecture properly, it is very important to understand IT and enterprise architecture; after all, the two technologies NFV and SDN are inherited from the world of IT. As IT consultants, we all know that the first principle of IT applications is that they are totally separated from the hardware: the application itself can decide HA, healing, data availability, and restore, with almost no dependency on the underlying layer. In telcos, however, most of the work depends on close coordination between the two layers. Hence the VNF design itself should be adjusted around microservices, based on VNFC (VNF Component) integration through APIs. This is really important for service scaling.

As is the case in the telco industry today, all SLA compliance and KPI settlement is based on the NF functional architecture, with less focus on and control of each user. In 5G, however, with so many users from different segments using the service, the network really has to support NaaS (Network as a Service) to provide an accurately tuned slice for the customer, tied to the offered SLA/KPI. The network slice is a concept that extends beyond tenant provisioning and control, because the slice has to run from the radio resource through IP to tenants to VNFs. In other words, the slice needs to be well contained and well defined, and every node and function has to monitor, manage, and report the slice usage.

What is an API? Recently we have seen huge growth in API integrators and ecosystems; the market itself is worth billions, and we really need to see the advantages APIs will bring to 5G: support for any third-party integration, third-party programming, and finally service modeling and control. This is a perfect new world.

What we know is that the cloud is written in Python, VNFs support YANG modeling with YAML/XML for onboarding, and above all REST APIs are the standard way to retrieve information from each layer. This model of programming the functions enables the DevOps model to be applied to the 5G network, using a software style to improve the service and end-user experience. For example, the NFVO can run Python code to program a service composed of GSO, NFVO, and SDNO elements to deliver a seamless service experience across the data centers; it can also monitor and report the SLA for a customer offering and can integrate with the OSS.
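To make the "NFVO writes Python" idea concrete, here is a toy sketch that composes a cross-DC service as an ordered plan of REST calls against the GSO/NFVO/SDNO layers; every endpoint path here is invented for illustration, not a real product API:

```python
def compose_service_plan(service_id, datacenters):
    """Build the ordered REST calls an orchestrator might issue to stitch a
    service across DCs: instantiate the VNFs per site (NFVO), program the
    inter-DC connectivity (SDNO), then activate end to end (GSO).
    All URLs are hypothetical placeholders."""
    plan = [("POST", f"https://nfvo.{dc}/vnf_instances?service={service_id}")
            for dc in datacenters]
    plan.append(("POST", f"https://sdno/links?service={service_id}"))
    plan.append(("PUT", f"https://gso/services/{service_id}/activate"))
    return plan

# A hypothetical VoLTE service spanning two data centers:
plan = compose_service_plan("voLTE-01", ["dc-east", "dc-west"])
```

A real orchestrator would execute the plan with an HTTP client, check each response, and roll back on failure; the point is only that the service logic becomes ordinary, testable code.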

Multi-tenancy is a key concept in the cloud world, whereby many applications, like VNFs, can be hosted on the same cloud. The key ideas of VPC, cell, region, domain, and AZ allow every VNF to be assured of its share and to be controlled separately, including through the Horizon dashboard and low-level parameter tweaking such as connectivity with Neutron provider networks. It means the future 5G network is really a set of data centers; co-deployment is a key trend in 5G and the start of the journey toward hyper-scaled clouds.

A recent report from Light Reading suggests that the industry will need a layered DC architecture to offer 5G, which means a central DC alone cannot meet the service requirements. This is where the AWS, Google, and Facebook data centers have not delivered, opening a big opportunity for CT vendors to design the layers accurately to meet the service requirements. A DC design that meets the URLLC and eMBB latency and delay requirements is a key enabler for building a future-proof 5G network.

Security measures need to be emphasized in each layer of the network, including the NFVI, cloud, MANO, applications, and the VNFs themselves. Most important is that authentication and its interfaces are decoupled from the other CN functions. Various authentication methods will support plug-and-play in an access-agnostic manner, and the authentication capabilities will be opened up to apps as a new revenue stream.

Though the 5G network seems exciting, it must be clear that it has to live alongside the existing legacy network for at least 3-5 years in mature markets. 5G therefore has to support a number of networking modes to ensure smooth integration with the 3G/4G networks, as well as offloading options. The R13/R14 DECOR/eDECOR interworking will make such a solution possible. However, it seems clear that in the future many UE/device functions must move to the network side to assure the smooth evolution of the whole ecosystem.

It is assumed that in the first phase only the 5G radio, also called the gNB, will be ready, so the first phase must be its integration with the EPC/4G core. This will also help solve a lot of radio-related issues before the evolution to the 5G Core.

Finally, the 5G network will require a lot of organizational adjustment, in terms of both skills and process. The new technology will require accurate modeling of the transformation and automation. To support this wave, CSPs need to acquire a lot of IT skills for the future programmable network. A very close friend and DevOps guru once told me that it is not the code itself but why and how the code is written that will decide the future of organizational roles in CSPs. The Scrum Master is surely going to be the key to gluing the transformation from the as-is situation to the to-be situation, because with so many solutions evolving there is as yet no uniform way to apply a DevOps or IaC cycle to the NFV/SDN network.

I hope you have found this paper useful. I have tried to craft it solely to demystify the technologies and complexities that surround the 5G Core network architecture. It is believed that the NFV, SDN, transformation, DevOps, and Telco Cloud dreams must be achieved before the 5G Core arrives in the market, and it is widely held in the industry that open source is a key contributor to achieving that goal. As a technologist, it is an exciting era in which to watch how these complex and related technologies will guide the future network of 2020.

As we continue to travel, we can expect to find a lot of new challenges along the road, and I will try to keep my audience well informed about what we face and how to find a collaboration model to solve it. Best of luck!


The key 5G Core network components are:

  1. AMF: the Access and Mobility Management Function; the AMF also includes the slice selection functionality
  2. SMF: the Session Management Function
  3. UPF: the User Plane Function, which handles the user/data plane totally separate from the control plane
  4. UDM: Unified Data Management, part of SDM
  5. NRF: the NF Repository Function, to be called by each VNF through an API; a key requirement for building an open and scalable 5G Core network
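The NRF's role is easiest to see in code: functions register themselves, and any other function discovers peers through an API instead of static configuration. This is a minimal sketch; the class, field names, and URIs are illustrative, not the 3GPP service-based interface schema.

```python
class NRF:
    """Toy NF repository: register instances, discover them by NF type."""

    def __init__(self):
        self._registry = {}

    def register(self, nf_type: str, instance_id: str, uri: str) -> None:
        self._registry.setdefault(nf_type, {})[instance_id] = uri

    def discover(self, nf_type: str) -> list:
        """Return the URIs of all registered instances of a given NF type."""
        return sorted(self._registry.get(nf_type, {}).values())

nrf = NRF()
nrf.register("SMF", "smf-1", "https://smf-1.core.local")
nrf.register("SMF", "smf-2", "https://smf-2.core.local")
print(nrf.discover("SMF"))  # both SMF instances, found via the API
```

Because discovery is dynamic, new instances scale in and out without reconfiguring their peers, which is what makes the core "open and scalable".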

References

3GPP TR 23.501

Light Reading

KT Telecom white paper

3GPP TR 23.711

ETSI NFV Phase 2 specifications

OPNFV Danube: the Telco DevOps use case

OPEN-O Mercury: VoLTE and Core Network system integration use cases

Linux Foundation seminar, 4th May, on running a successful open source project

Artifacts for Developing Performance Aware NFV Applications using DPDK

Summarized analysis of selection between key Performance technologies in NFV

Delivering end to end Solutions

What Does the Customer Really Need?

Recently in customer workshops there have been a lot of queries asking us to quantify the performance solution of the NFVI and to position our cloud in comparison with other market propositions such as Red Hat, Mirantis, and CloudBand.

To craft the required solution, we decided to win the customer over by focusing on a best-fit solution instead of just showing product functions. Should we use DPDK, EVS, or SR-IOV? And how do we resolve the compromise of not using the OVS vSwitch while still gaining all its advantages in the cloud?

What are the Options and their analysis

First we will explain what DPDK can achieve, and then what it cannot. Below is the DPDK architecture as documented since the Icehouse release of OpenStack. Clearly the kernel is bypassed by introducing an abstraction layer that makes it possible for user space, i.e. the instance workloads, to reach the NIC using drivers, as seen below.

Traditionally, it is the kernel that processes packets in the OS before they can reach the NIC; normally this requires every packet to pass through all the layers of the OSI stack before it can hit the wire.

A process's virtual memory comprises user space (where the NFV program runs) and kernel space (which services it through the system-call API, traditionally implemented via interrupts), and every such crossing means more processing time. Similarly, default memory pages have a size of 4 KB, whereas huge pages can go up to 1 GB, which is exactly what DPDK uses. Bigger pages mean more TLB hits and less time spent on page lookups and replacement.
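The huge-page point is just arithmetic, so here is a back-of-the-envelope check: how many page mappings (hence TLB entries) are needed to cover a 2 GB packet buffer with 4 KB pages versus 1 GB huge pages? The 2 GB region size is an illustrative assumption.

```python
def tlb_entries(region_bytes: int, page_bytes: int) -> int:
    """Number of page mappings (hence TLB entries) to cover a region."""
    return -(-region_bytes // page_bytes)  # ceiling division

GB = 1024 ** 3
region = 2 * GB

small = tlb_entries(region, 4 * 1024)   # 4 KB default pages
huge = tlb_entries(region, 1 * GB)      # 1 GB huge pages (DPDK-style)

print(small, huge)  # 524288 vs 2 mappings
```

With two mappings the whole buffer fits in the TLB permanently, which is why DPDK backs its memory pools with huge pages.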

So, to sum up, DPDK is a complete user-space implementation with the ability to take advantage of compiler optimizations using the complete instruction set of the hardware and OS.

DPDK: What Is It in the IT World?

The Data Plane Development Kit (DPDK) was initially started by Intel under the BSD open source license, and in 2013 it became an independent open source community. DPDK is a data-plane software development kit that can be used to optimize packet processing. It consists of a set of data-plane libraries and network interface controller (NIC) drivers that can be used to develop fast packet-processing applications on x86 platforms, though it is not limited to x86.

The main DPDK libraries are:

  • Environment Abstraction Layer (EAL): the interface for gaining access to lower-layer resources; it effectively hides the environment from applications.
  • Memory Manager: an API to allocate memory; pools are created in huge-page memory, and a ring is used to store free objects. A ring here is essentially a structure marking which memory objects, like heap entries, are free and which are occupied.
  • Ring Manager: a FIFO API for a lockless, multi-consumer ring structure.
  • Memory Pool Manager: allocates pools of objects in memory.
  • Timer Manager: timer services for DPDK execution units, with the ability to execute functions asynchronously.
  • Poll Mode Drivers (PMD): for 1GbE, 10GbE, and 40GbE NICs and virtualized virtio Ethernet controllers.
  • Queue Manager: implements safe lockless queues, instead of using spinlocks, allowing different software components to process packets while avoiding unnecessary waits.
  • Packet Flow Classification: the DPDK flow classifier implements hash-based flow classification to optimize processing.
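The ring structure several of these libraries rely on can be modeled in a few lines. This is only a toy model of the FIFO semantics of DPDK's rte_ring; the real implementation is lockless C using atomic compare-and-swap on the head and tail indices, which Python cannot reproduce.

```python
class Ring:
    """Fixed-size FIFO ring buffer with head/tail indices."""

    def __init__(self, size: int):
        self._slots = [None] * size
        self._head = 0    # next slot to enqueue into
        self._tail = 0    # next slot to dequeue from
        self._count = 0

    def enqueue(self, obj) -> bool:
        if self._count == len(self._slots):
            return False                      # ring full
        self._slots[self._head] = obj
        self._head = (self._head + 1) % len(self._slots)
        self._count += 1
        return True

    def dequeue(self):
        if self._count == 0:
            return None                       # ring empty
        obj = self._slots[self._tail]
        self._tail = (self._tail + 1) % len(self._slots)
        self._count -= 1
        return obj

ring = Ring(4)
for pkt in ("p1", "p2", "p3"):
    ring.enqueue(pkt)
print(ring.dequeue())  # p1 — strict FIFO order
```

Because a ring is fixed-size and index-based, producers and consumers never allocate memory on the fast path, which is the property DPDK's memory and queue managers exploit.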

Finally, DPDK uses a polling model together with huge pages, so that polling minimizes per-packet processing time compared with traditional interrupts, at the cost of dedicating CPU cores to the poll loop, and hence enables bypassing the kernel.
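Simple arithmetic shows why polling wins at high packet rates. At 10GbE line rate with 64-byte frames (the standard 14.88 Mpps figure), the time budget per packet is far smaller than the cost of taking an interrupt, which is commonly quoted in the microseconds.

```python
LINE_RATE_PPS = 14_880_000          # 10GbE, 64-byte frames, standard figure

budget_ns = 1e9 / LINE_RATE_PPS     # time available per packet
print(round(budget_ns, 1))          # ~67.2 ns per packet

# An interrupt costing even 1 microsecond per packet could never keep up,
# which is why poll mode drivers spin on the NIC queues continuously.
```

This is also the trade-off to remember when sizing the NFVI: PMD cores show 100% utilization by design, whether or not traffic is flowing.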

What Results Can We Achieve?

High performance that can be quantified as follows:

– 600% improvement in PPS per core (256-byte packets)

– 300% improvement in Gbps per core

– 100% improvement in single-trip latency

– Rich network features: VxLAN, security, QoS, Gi-LAN service chaining, and RBAC

– VM live migration

What we can achieve is huge, but it is still not line-speed forwarding. That is why DPDK mainly targets improving processing performance in virtualized applications such as IMS and the circuit-switched core, rather than throughput-intensive applications like the packet core, firewalls, and caching. The compromise of not using the OVS switch architecture will be resolved by introducing service chaining and enabling DC gateways, hence primarily not using the L3 functionality of the OVS switches to build the complete solution. This is also the direction the ETSI NFV Phase 2 master white paper gives the industry for improving the performance of the environment; do check the details of the performance paper on the ETSI website.

In fact, over the last year ETSI has increasingly encouraged EPA (Enhanced Platform Awareness) architectures, and OpenStack flavors already include extra_specs, key-value pairs that can identify specific features desired to accelerate performance. Because the intention here is to stay at the OS and driver layer, EPA can be discussed later, as it requires the VNF, NFVO, VIM, and hypervisor all working together to realize it.
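For a flavor of what those extra_specs look like, here is a sketch of an EPA-oriented Nova flavor. The hw:cpu_policy, hw:mem_page_size, and hw:numa_nodes keys are real OpenStack flavor extra_specs; the flavor name and sizing values are illustrative assumptions.

```python
# Hypothetical data-plane flavor a cloud admin might define so the
# scheduler places performance-sensitive VNFs on suitable hosts.
epa_flavor = {
    "name": "vnf.dataplane.large",
    "vcpus": 8,
    "ram_mb": 16384,
    "extra_specs": {
        "hw:cpu_policy": "dedicated",   # pin vCPUs to physical cores
        "hw:mem_page_size": "1GB",      # back the guest with huge pages
        "hw:numa_nodes": "1",           # keep the VM on one NUMA node
    },
}

# The VIM (Nova scheduler) matches these specs against host capabilities,
# which is exactly the "platform awareness" that EPA refers to.
print(epa_flavor["extra_specs"]["hw:mem_page_size"])
```

A VNF launched with this flavor lands only on hosts that can actually honour the pinning, huge pages, and NUMA placement, rather than on whichever host has free capacity.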

Courtesy of OpenStack

Intel white paper