Application Aware Infrastructure Architecture of Future Enterprise and Telecom Networks


An architect’s perspective in the 2020+ era


The recent global situation, with critical telecom infrastructure and security solutions running in the cloud, has shown even the critics that once-esoteric terms like hybrid cloud, AI, analytics, and modern applications are vital to moving society and the economy forward.

Having actively joined the community through the latest developments in both infrastructure and application evolution, in the telco as well as the enterprise application world, I can safely conclude that the days when infrastructure was engineered or built to serve application requirements are over. On the contrary, with the wide adoption of application containerization and Kubernetes as the platform of choice, the future direction is to design or craft applications that can best take advantage of a standard cloud infrastructure.

Understanding this relationship is the key differentiator between businesses that will fly and those that will crawl to serve the ever-moving parts of the ecosystem: the applications.


Source: Intel public

In this paper, let us investigate some key innovations in infrastructure, both physical and cloud, which are shifting the industry's Pareto share from applications to infrastructure, thereby enabling developers to code and innovate faster.

Industry readiness of containerized solutions

The adoption of microservices and application standardization around the Twelve-Factor App, published by cloud pioneer Heroku in 2012, gave birth to an entirely new industry that has matured far more quickly than virtualization did. A brief description of how it is impacting the market and industry can be found in Scott Kelly's paper on the Dynatrace blog. This innovation is based on the standardization of cloud-native infrastructure and CNCF principles around Kubernetes platforms, aimed at the following key points.


Covid-19 has proved that if there is a single capability necessary for a modern-era business to survive, it is scalability. In recent weeks we have seen millions of downloads of video-conferencing applications like Zoom, Webex, and BlueJeans, and likewise a surge in demand for services in the cloud. Obviously, it would have been an altogether different story if we were still living in the legacy telco or IT world.


Immutable but Programmable

On every new deployment across the application LCM, services are deployed on fresh infrastructure components, all managed via an automated framework. Although containers in the telco space do require stateful and somewhat mutable infrastructure, the beauty of this model is that state is kept out of the infrastructure core and managed at the application and third-party level, ensuring easy management of the overall stack.
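A minimal sketch of this immutable-but-programmable pattern: on each release a brand-new set of instances is created from the desired spec and the old ones are retired wholesale, instead of being patched in place. All names here (`Instance`, `roll_forward`) are illustrative, not a real platform API; state is assumed to live outside the instances.

```python
import itertools
from dataclasses import dataclass
from typing import List

_ids = itertools.count(1)

@dataclass(frozen=True)  # frozen: an instance is never mutated after creation
class Instance:
    instance_id: int
    image: str           # the only configuration baked into the instance

def roll_forward(fleet: List[Instance], new_image: str) -> List[Instance]:
    """Return a fresh fleet built from the new image; the old instances
    are discarded wholesale (state lives outside, e.g. in a database)."""
    return [Instance(next(_ids), new_image) for _ in fleet]

fleet = [Instance(next(_ids), "app:v1") for _ in range(3)]
fleet = roll_forward(fleet, "app:v2")   # every instance replaced, none edited
```

The point of the `frozen=True` dataclass is to make in-place mutation impossible by construction, which is the same guarantee an immutable infrastructure layer gives the automation framework above it.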

Portable with minimum Toil

Portability and ease of migration across infrastructure PoPs is the most important benefit of lifting applications into containers; in fact, the evolution of hybrid clouds is a byproduct that businesses can reap by ensuring application portability.

Easy Monitoring and Observability of Infra

There is a great deal of innovation happening in chipsets, network chips (ASICs), and NOSes (e.g. P4); however, the current state of infrastructure does not allow applications and services to fully capitalize on these advantages. This is why there are so many workarounds and so much complexity around both application assessment and application onboarding in current network and enterprise deployments.

One good example of how container platforms are changing the business experience of observability is Dynatrace, which provides code-level visibility, layer mapping, and digital-experience monitoring across all hybrid clouds.


Source: dynatrace

Composable Infrastructure

There is already a link from platform to infrastructure that will support delivery of all workloads with different requirements over a shared infrastructure. Kubernetes as a platform is already architected to fulfill this promise, but it requires further enhancements in hardware. The first phase of this enhancement is HCI; our recent study shows that in a central DC, using HCI saves 20% CAPEX annually. The further introduction and consolidation of open hardware and open networking, explained in a later section of this paper, will mean services can be built, managed, and disposed of on the fly.

From automated Infrastructure to Orchestrated delivery

Infrastructure and network automation is no longer a myth, and there are many popular frameworks to support it, such as Ansible, Puppet, Jenkins, and Cisco DevNet.

However, anyone who works on IT and telco application design and delivery will agree on the cumbersomeness of both application assessment/onboarding and application management with little infrastructure visibility. This is because the mapping between application and infrastructure is not automated. The global initiatives of both the OSCs and the SDOs prevalent in the TMT industry have primarily focused on orchestration solutions that leverage the advantages of the infrastructure, especially AI/ML-driven chipsets, and use this relationship to solve business issues by ensuring true decoupling between application and infrastructure.


Although the reader may say that platforms like Kubernetes have played a vital part in this move, it simply would not be possible without taking advantage of the physical infrastructure. For example, orchestration on the IT side (primarily driven by K8s) and on the telco side (primarily driven by initiatives like OSM and ONAP) relies on the infrastructure to execute all the pass-through and acceleration required by applications to fulfill business requirements.

In fact, the nirvana state of automated networks is a more cohesive and coordinated interaction between application and infrastructure, under the closed-loop instructions of an orchestrator, to enable delivery of Industry 4.0 targets.
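The closed-loop idea can be sketched in a few lines: the orchestrator repeatedly observes infrastructure telemetry, compares it against the application's intent, and acts on the infrastructure until the two converge. This is a toy model under assumed names (`observe`, `act`, the RPS figures), not any real orchestrator's API.

```python
def closed_loop(intent_rps, observe, act, max_iters=10):
    """Drive the observed capacity toward the application's declared intent."""
    for _ in range(max_iters):
        observed = observe()
        if observed >= intent_rps:       # intent met: the loop is stable
            return observed
        act(intent_rps - observed)       # e.g. scale out, enable acceleration
    return observe()

# Simulated infrastructure: each corrective action adds up to 250 RPS.
state = {"rps": 400}
result = closed_loop(
    intent_rps=1000,
    observe=lambda: state["rps"],
    act=lambda gap: state.update(rps=state["rps"] + min(gap, 250)),
)
```

The loop converges in a few iterations because each action closes part of the gap; real closed-loop assurance adds damping and policy on top of this skeleton.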

Benefiting from the Advantages of Silicon

The advantages of silicon were, are, and will remain the source of innovation in the cloud and 5G era. When it comes to the role of hardware infrastructure in the whole ecosystem, we must look to capitalize on the following.


The changing role of Silicon Chips and Architectures (x86 vs ARM)

The Intel-versus-AMD choice is familiar to many data-center teams. In data centers where performance is the killer criterion, the Intel Xeon family still outperforms AMD, whose advantages of a smaller footprint (7 nm) and a better core/price ratio have not built a rationale for selecting them. Another major direction supporting Intel is its supremacy in 5G, edge, and AI chips, for which AMD has so far failed to bring a competitive alternative. The most important drawback, in the author's view, is sourcing and global presence, which makes the big OEMs/ODMs prefer Intel over AMD.

However, the hi-tech industry's fight to dominate the market with multiple supply options, especially during the recent US-China trade conflict, has put the TMT industry in the tough position of having to consider non-x86 architectures, something no one obviously likes, as that ecosystem is not mature. The author believes an irrational selection will mean future businesses may not be able to capture the advantages coming from disruptors and open industry initiatives like ONF, TIP, O-RAN, etc.

The following points should be considered while evaluating:

  1. Ecosystem support
  2. Use cases (the architecture that supports the most should win)
  3. Business-case analysis to evaluate performance vs. high density; except at the edge and in C-RAN, Intel clearly beats ARM
  4. Aggregate throughput per server
  5. NIC support, especially FPGA and SmartNIC; Intel obviously has the preference here
  6. Cache and RAM: over the years Intel has focused more on RAM and RDIMM innovation, so on the cache side ARM arguably has an edge and should be evaluated; however, not all use cases require it, which makes this a less distinct advantage
  7. Storage and cores: this will be a key differentiator, yet we find neither vendor is good at both, and their ready-made configurations mean we have to compromise one for the other; this will be the killer point for future silicon architecture selection
  8. Finally, ARM's in-built switching modules, which totally bypass the ToR/spine architecture in data centers, may win over proponents of the pre-data-center-architecture era; however, the promise of in-built switching at scale is not well tested. It is a good architecture for dense edge deployments, but in my view it is not recommended for large central data centers.

However, quantitative judgement alone is not enough: Intel's near-total dominance meant it did not deliver the design cadence expected by business, which obviously opened the gates for others. It is my humble belief that in the 5G and cloud era, at least outside the data centers, both Intel and ARM will see deployments, and that both will need to prove their success commercially, so expect to see both Xeon® and Exynos silicon.

FPGAs, SmartNICs and vGPUs

Software architecture has recently moved from C/C++/JS/Ruby to more disruptive Python/Go/YAML schemes, primarily in the business drive to adopt the cloud. Business is addressing these challenges by demanding more and more x86 compute power, but improving efficiency is equally important. As an example, we tested the Intel SmartNIC PAC 3000 family for a long time to validate power and performance requirements for throughput-heavy workloads.

Similarly, video will be a vital service in 5G, but it will require SPs to implement AI and ML in the cloud. The engineered solutions of Red Hat OSP and OpenShift with NVIDIA vGPU mean that data processing which was previously possible only in offline analytics, using static data sources such as CEM and wire filters, can now move to the cloud.



Envisaging future networks that combine the power of all hardware options, silicon chips, FPGAs, SmartNICs, and GPUs, is vital to solving the most pressing business challenges we face in the cloud and 5G era.

Networking Infrastructure


There is no doubt that networking has been the most important piece of infrastructure, and its importance has only grown with virtualization, and then ten-fold with containers, as data centers fight to deliver the best solutions for east-west traffic. Although there are a number of SDN and automation solutions, their performance at scale has really shifted the balance toward infrastructure: more and more vendors are now betting on the advantages of ASICs and NPUs, not only to improve forwarding-plane performance but also to make the whole stack, including fabric and overlay, automated and intelligent, fulfilling the IDN dream using the latest Intel chips with inherent AI and ML capabilities.

The story of how hardware innovation is bringing agility to networks and services does not end here; for example, the use of SmartNICs and FPGAs to deploy SRv6 is a successful business reality today, converging compute and networking around shared, common infrastructure.

Central Monitoring

Decoupling, pooling, and centralized monitoring are the targets to achieve. With so many solutions that are quite different in nature (for example, fabric versus overlay on the networking side), the aim is to harmonize them through the concept of single-view visibility. This means that when an application demands elasticity, hardware does not need to be physically reconfigured; more compute power, for instance, can be pulled from the pool and applied to the application.
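The pooling idea above can be reduced to a small sketch: an elastic application draws cores from a shared pool and returns them when it shrinks, with no physical reconfiguration involved. This is an assumed toy model, not a vendor resource-manager API.

```python
class ComputePool:
    """A shared pool of cores that applications grow into and shrink out of."""

    def __init__(self, total_cores: int):
        self.free = total_cores
        self.allocations = {}            # app name -> cores currently held

    def scale(self, app: str, cores_needed: int) -> bool:
        """Grow (or shrink) an app's share; False means the pool is exhausted
        and real hardware would have to be added."""
        current = self.allocations.get(app, 0)
        delta = cores_needed - current   # negative delta returns cores
        if delta > self.free:
            return False
        self.free -= delta
        self.allocations[app] = cores_needed
        return True

pool = ComputePool(total_cores=64)
pool.scale("vEPC", 16)                   # elastic grow
pool.scale("analytics", 32)              # second tenant from the same pool
pool.scale("vEPC", 8)                    # shrink: cores flow back to the pool
```

Note that shrinking one application immediately frees capacity for another, which is exactly the behavior centralized visibility is meant to expose.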

From hyperscalers to innovators

The dominance of hyperscalers in the cloud is well known, but recently there have been further movements disrupting the whole chain. For example, ONF Open EPC can now be deployed on the OCP platform. Similarly, the TIP OpenRAN initiative is reshaping the whole landscape toward something that was not even under discussion a few years ago.

Since ONF is focused on software and the advantages brought by NOS and P4 programming, I think it is important to talk about OCP. The new innovations in rack design and open networking will define new compute and storage specifications that best meet unique business requirements. Software for Open Networking in the Cloud (SONiC) was built using the SAI (Switch Abstraction Interface) switch-programming API and has unsurprisingly been adopted by Microsoft, Alibaba, LinkedIn, Tencent, and others. The speed of adoption is phenomenal, and new features such as Kubernetes integration and configuration management are being added to the open-source project all the time.

Summary review

Finally, I am seeing a new wave of innovation, and this time it is coming via the harmonization of architecture around hardware, thanks to the efforts of the last few years around cloud, OpenStack, and Kubernetes. However, these types of initiatives will need more collaborative effort between OSCs and SDOs, e.g. the TIP and OCP projects, harnessing the best of both worlds.

However, with the proliferation of so many solutions and offerings, standardization and alignment on common spec definitions for the shared infrastructure is very important.


Source: Adva

Similarly, to ensure innovation delivers on its promise, involvement of the end-user community will be very important. Directions like LFN CNTT, ONAP, ETSI NFV, CNCF, and GSMA TEC are some of the streams that require operator-community-wide support and involvement, to move from the clumsy NFV/cloud picture of the last decade to a truly innovative picture of network and digital transformation. A balanced approach from the enterprise and telco industries will enable the businesses of today to become the hyperscalers of tomorrow.

I believe this is why, after a break, this is the topic I selected to write about. I look forward to any comments and reviews that can benefit the community at large.


The comments in this paper do not reflect any views of my employer; they are solely my analysis based on my individual participation in the industry, with partners, and with business at large. I hope sharing this information with the larger community is the best way to share, improve, and grow. The author can be reached at


How Open Orchestration (OSM Release SEVEN) enhances Enterprise, 5G, Edge and Containerized applications in Production


Source: ETSI



An architect’s perspective from ETSI®, the Standards People


As highlighted in Heavy Reading's latest "End-to-End Service Management for SDN & NFV", all the major Tier-1 telcos are currently refining their transformation journeys to bring standard orchestration and service modeling into their networks. One such standard approach is promised by ETSI OSM, a seed project from ETSI®, the standards people.

In Q4 2019, ETSI OSM shipped Release SEVEN, which addresses the surmounting challenges of bringing CNFs and containerized applications to production: "ETSI OPEN SOURCE MANO UNVEILS RELEASE SEVEN, ENABLES MORE THAN 20,000 CLOUD-NATIVE APPLICATIONS FOR NFV ENVIRONMENTS".

This capability of ETSI® OSM is especially important considering the ushering in of 5G SA architectures and solutions, which have already found their way to market thanks to early work from CNCF and specifically the CNTT K8s specs. OSM brings value to the picture because it allows operators to design, model, deploy, and manage CNFs (in ETSI NFV terms, containerized VNFs) without any translation or re-modeling. It also lets operators experience an early commercial use case of Helm 2.0 integration in their production environments. On top of that, it allows an NS (Network Service) to combine CNFs with existing VNFs or legacy PNFs to deliver complex services in an easy-to-deploy and manageable manner.

In the following part of this paper I will share my understanding of OSM Release SEVEN and sum up the results of the ETSI OSM webinar on this subject held on January 16th, 2020. For details you may refer to the webinar content itself, which can be found

Why Kubernetes is so important for Telco and Enterprise

The telco industry has experienced many pain points in the way the NFV journey was steered, with its focus on migrating existing PNFs to the cloud. K8s offers an opportunity for all platform providers, application vendors, and assurance partners to build something on the modern principles of microservices, DevOps, and open APIs. This has already made its way into telcos' OSS and IT systems; for example MYCOM OSI UPM, OSM, and in fact ONAP are all already based on Kubernetes. The arrival of 5G SA and uCPE branches has driven almost all operators to adapt their networks to use Kubernetes. Further, it is broadly agreed that as CSPs move to the edge, K8s will be the platform of choice.

Foundation for K8S Clusters

Kubernetes makes it simple for applications and CNFs to use APIs in a standard fashion through K8s clusters, which are deployed either from upstream open source or via distros. The early adoption of CNFs in telco largely follows the consumption model of vendor distros, with Red Hat OpenShift, VMware PKS, and Ericsson CCD the most important ones.

Since containers are like floating VMs, the networking architecture, especially that promised by L3 CNI plugins such as Flannel, is an important direction to be supported in platforms, as it is in OSM.

The reusability of the API makes it simple to craft a unique application as build and configuration files, using the artifacts of Pod, Service, cluster, ConfigMap, and PersistentVolume, which are defined in a very standard manner in K8s; by this I mean that all artifacts can be deployed through a single file.
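The "all artifacts through a single file" point can be sketched concretely. Since JSON is a valid subset of YAML, the illustrative Pod/Service/ConfigMap manifests below (names and images are invented) can be emitted as one multi-document YAML stream, separated by `---`, and applied in a single step.

```python
import json

# Illustrative manifests; the names and image tags are hypothetical.
pod = {"apiVersion": "v1", "kind": "Pod",
       "metadata": {"name": "demo-app"},
       "spec": {"containers": [{"name": "app", "image": "demo:1.0"}]}}
service = {"apiVersion": "v1", "kind": "Service",
           "metadata": {"name": "demo-svc"},
           "spec": {"selector": {"app": "demo"}, "ports": [{"port": 80}]}}
configmap = {"apiVersion": "v1", "kind": "ConfigMap",
             "metadata": {"name": "demo-conf"},
             "data": {"LOG_LEVEL": "info"}}

# JSON documents joined by '---' form one multi-document YAML file,
# deployable in one step (e.g. a single `kubectl apply -f`).
single_file = "\n---\n".join(
    json.dumps(m, indent=2) for m in (pod, service, configmap))
```

This is only a sketch of the packaging idea; in practice these artifacts are usually hand-written YAML or templated by Helm, as discussed next.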

ETSI® OSM itself can be deployed using either Helm 2.0 charts or Juju charmed bundles.


Foundation for Helm

Helm gives teams the tools they need to collaborate when creating, installing, and managing applications inside Kubernetes. With Helm, you can:

  • Find prepackaged software (charts) to install and use
  • Easily create and host your own packages
  • Install packages into any Kubernetes cluster
  • Query the cluster to see which packages are installed and running
  • Update, delete, roll back, or view the history of installed packages

Helm makes it easy to run applications inside Kubernetes. For details, please refer to the Helm packages documentation on

In a nutshell, all Day-1 and Day-2 tasks required for CNFs are made possible using Helm and its artifacts, known as Helm charts, including application primitives, network connectivity, and configuration capabilities.
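A sketch of how that lifecycle maps onto Helm CLI verbs: Day-1 is an install, Day-2 is upgrade/rollback/history. Only the command lines are assembled here (the release and chart names are hypothetical, and the Helm 3-style positional syntax is shown for brevity; Helm 2 used `--name` instead); in a real pipeline these would be executed via a subprocess call.

```python
def helm(verb, *args):
    """Assemble a helm command line as an argument list."""
    return ["helm", verb, *args]

# Day-1: initial deployment of the CNF from its chart.
day1_install = helm("install", "my-cnf", "repo/my-cnf-chart",
                    "--namespace", "telco", "--version", "1.0.0")

# Day-2: configuration/upgrade operations against the running release.
day2_upgrade  = helm("upgrade", "my-cnf", "repo/my-cnf-chart",
                     "--version", "1.1.0", "--reuse-values")
day2_rollback = helm("rollback", "my-cnf", "1")   # back to revision 1
day2_history  = helm("history", "my-cnf")
```

Building the commands as lists (rather than shell strings) is the usual way to hand them safely to `subprocess.run`.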

Key Features of OSM Release7

OSM Release SEVEN is carrier grade; below are its key features as per the wiki:

  • Improved VNF configuration interface (a one-stop shop) for all Day-0/1/2 operations
  • Improved Grafana dashboards
  • VNFD and NSD testing
  • Python 3 support
  • CNF support in both options: OSM creates the cluster itself, or relies on OEM tools to provision it
  • Workload placement and optimization (very important for edge and remote clouds)
  • Enhancements in both multi-VIM and multi-SDN support
  • Support for public clouds

How OSM handles deployment of CNF’s

For most telco folks this is the most important question: how will the VNF package be standardized with the arrival of CNFs? Will it mean a totally new package, or an enhancement of the existing one?

Fortunately, OSM's approach is to model the application in a standard fashion, which means the same package can be enhanced to reflect a containerized deployment. At the NS level it can flexibly interwork with VNFs and PNFs as well. The deployment unit used to model CNF-specific parameters is called a KDU (Kubernetes Deployment Unit); the other major change is the K8s cluster under resources. This is important because it covers the most important piece: networking and the related CNI interfaces.
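A simplified, hypothetical sketch of that modeling idea: a KDU entry sits alongside the classic VDUs in the same descriptor, pointing at a Helm chart and a K8s cluster requirement. The field names below are abbreviated from the OSM information model for illustration and should not be read as the exact schema.

```python
# Abbreviated, illustrative descriptor; NOT the verbatim OSM IM schema.
vnfd = {
    "id": "demo-cnf-vnfd",
    "kdu": [{                              # Kubernetes Deployment Unit
        "name": "demo-kdu",
        "helm-chart": "repo/demo-chart",   # Day-0/1 delegated to Helm
    }],
    "k8s-cluster": {                       # cluster requirements,
        "version": ["1.15"],               # including networking
        "nets": [{"id": "mgmtnet"}],       # mapped onto CNI-backed networks
    },
}

def deployment_units(descriptor):
    """Same package style as before: the orchestrator simply reads a new
    unit type (KDU) next to any classic VDUs in the same descriptor."""
    return ([k["name"] for k in descriptor.get("kdu", [])] +
            [v["name"] for v in descriptor.get("vdu", [])])
```

The point is that a package containing only VDUs, only KDUs, or a mix of both is traversed the same way, which is what lets an NS combine CNFs with existing VNFs.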

OSM can deploy the K8s cluster itself via API integration, or rely on third-party tools like OpenShift® or PKS to deploy it on OSM's instructions.

Changes to NFVO interfaces

Just as Or-Vi is used for infrastructure integration with the orchestrator, Helm 2.0 (with 3.0 support coming in the near future) is used for integration with K8s applications. Since the NBI supports mapping of KDUs in the same NSD, the only changes from the orchestration point of view are on the southbound side.

Workload Placement

As per the latest industry standing and the experience shared at KubeCon and Cloud Native Summit Americas, there is a growing consensus that containers are the platform of choice for the edge, primarily due to their robustness, operational model, and lighter footprint. Based on our experience with containers here at STC, a 40% reduction in both CAPEX and footprint can be realized on DCs if the edge is deployed using containers.

However, the business definition of the edge raises a number of questions, the most important of which are workload identification, placement, and migration, especially considering that the edge is a lighter footprint that in future will host carrier mission-critical applications.

Optimization of the edge from a CSP perspective has to address the following: the cost of compute in NFVI PoPs, the cost of connectivity and VNFFG (implemented via SFCs), and constraints on the service such as SLA, KPIs, and slicing.
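Those three factors can be sketched as a toy placement chooser: filter out PoPs that violate the latency SLA, then pick the one minimizing compute plus connectivity cost. All PoP names and figures below are invented for illustration; real placement engines solve a much richer optimization problem.

```python
# Toy model of edge workload placement; every number here is invented.
# Each PoP carries a compute cost, a connectivity/VNFFG cost, and an RTT.
pops = [
    {"name": "central-dc", "compute": 1.0, "connect": 1.0, "rtt_ms": 30},
    {"name": "edge-a",     "compute": 2.5, "connect": 1.0, "rtt_ms": 5},
    {"name": "edge-b",     "compute": 2.0, "connect": 1.2, "rtt_ms": 8},
]

def place(sla_rtt_ms, candidates):
    """Apply the SLA constraint first, then minimise total cost
    (compute in the NFVI PoP + connectivity/VNFFG cost)."""
    feasible = [p for p in candidates if p["rtt_ms"] <= sla_rtt_ms]
    if not feasible:
        return None                       # no PoP can honour the slice SLA
    return min(feasible, key=lambda p: p["compute"] + p["connect"])["name"]

tight = place(10, pops)    # latency-critical slice: forced onto an edge PoP
relaxed = place(50, pops)  # relaxed SLA: the cheapest total cost wins
```

The interesting behavior is that the same workload lands on different PoPs purely as a function of its SLA, which is the essence of the Release SEVEN placement feature described above.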


The issues with upgrades and how OSM addresses them

Compared to earlier releases, the OSM ns-action primitives allow a CNF to be upgraded to the latest release and allow both dry-run and Juju tests to be executed, to ensure the application's performance benchmark remains the same as before. Although this works best for small applications like LDAP, the same is difficult to achieve with more complex CNFs like 5G SA. Through liaison with the LFN OVP program, I am sure the issue will soon be addressed. We as an operator have a plan to validate it on 5G SA nodes.
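The upgrade guard described above can be illustrated with a toy check: benchmark the CNF before the upgrade, exercise the new release in a dry run, and proceed only if the benchmark stays within tolerance. The function names and the queries-per-second figures are invented; this is a sketch of the decision logic, not OSM's implementation.

```python
def safe_upgrade(benchmark, dry_run, tolerance=0.05):
    """Return True (proceed with the upgrade) only if the dry-run
    benchmark is within `tolerance` of the pre-upgrade baseline."""
    baseline = benchmark("current")   # measure the running release
    candidate = dry_run()             # exercise the new release off to the side
    return candidate >= baseline * (1 - tolerance)

ok = safe_upgrade(
    benchmark=lambda rel: 1000.0,     # e.g. LDAP queries/sec on the current CNF
    dry_run=lambda: 980.0,            # new release in dry-run: -2%, acceptable
)
```

With a 5% tolerance, a 2% regression passes while a 10% regression would block the rollout, which is the behavior the dry-run plus test primitives are meant to provide.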


My final thought is that the container journey for CSPs is already a reality, arriving very shortly in 2020+, and the OSM ecosystem supports the commercialization of CNFs through early use cases of 5G SA, enterprise branch uCPE, and most importantly the edge, including MEC, for which OSM seems to be reaching maturity. For details, and to participate, do get involved in the upcoming OSM-MR8 Hackfest in March.

Many thanks to my colleagues, mentors, and industry collaborators Jose Miguel Guzman, Francisco Javier Ramón Salguero, Gerardo García, and Andy Reid for OSM's growth in recent years... See you in Madrid.




Linux Foundation


Addressing Solvency of Open source production and adoption models for TMT and Telco industry



An architect’s perspective



The Technology, Media and Telecom industry, known as TMT, is going through major transformation programs globally. One of the prime recipes of this revolution is its vigor and participation in open source. However, years of experience in open source have revealed some key points to us. For example, most of the industry believes open source is:

  • Open (without defining what "open" means)
  • Cheaper (without a valid business case and TCO analysis)
  • Simple to deploy and use (without analyzing the ecosystem and interworking).

I believe open source is a bandwagon everyone wants to sit on, but also a wagon no one wants to drive, at least in a commercial production environment. Therefore, it is very important to share my views on this.

Understanding the Story of TMT Industry at a Glance 

Pictures are always an easy way to summarize, so let me share some very useful insights from Lumina Networks at the recent #ONS Days Melbourne, showing the business issues open source will address. Thanks to Ildiko Vancsa for sharing this summary.

  • Automate any network with any form factor
  • Resource optimization and utilization
  • Offer NaaS and slicing to verticals (most important)

Attracting business to open source is the most important challenge for architects. Will building a best-of-breed stack fulfill TCO and CXO transformation objectives?


Source: @lunminanetworks

Let us now see the results of applying the story to practice, through the impacts of open source's glory on the TMT industry.

The use of open source in telco definitely has long-term value, but as a community we have still failed to transfer open source frameworks into the MVPs of vendor products, something that could drive and navigate our sourcing, RFx, and vendor-selection processes. I think this was supposed to be done at the outset of this journey, but what really happened? Some smart marketing crooks sold telcos the dream that they would become the future Googles and hyperscalers, which was never the purpose or direction of a service industry like ours.

The long-term repercussion is that, almost a decade down the road, we still find ourselves surrounded by vendors that come with proprietary, engineered solutions they claim are aligned with open source (ONAP, OpenStack, K8s, etc.), but the sad story is that only the concept, or in SDK terms the front end, is anything like open, while the backend and the forked software are proprietary.

So although I lead my company in many of these initiatives, I am still circumspect about the approach we have taken; at the very least we should agree that we now need to think seriously about value quantification, to put business metrics in perspective and to ponder how to really make these initiatives work.

Grow adoption through power of synergy


OpenStack is one of the most successful examples of open source adoption, and summing up our experiences in the cloud may help build the best perspective in other domains, be it technical issues, licensing models, or operators' and TMTs' transformation journeys. The wide adoption of OpenStack in IT, applications, telco, and verticals proved that open source frameworks able to serve a wide set of use cases, not limited to a particular industry, are the best way to proceed. It has genuinely helped adjoin industries which were traditionally at least an isle's distance apart.

However, balancing open source adoption without skewing toward a particular segment is still a big challenge. It becomes a lifeline when we consider that each industry has unique requirements, and that building something generic cannot fulfill every business need. This is what leads the industry to collaboration, or what in business and RFx terms we call partnerships and ecosystems.

Sharing and not swallowing

As Johanne Mayer, Director of MayerConsult and TM Forum Distinguished Fellow, said at the Layer123 Network Transformation Congress in San Jose, telcos in the 5G era cannot find a solution that comes from a single vendor. It simply is not possible, as more and more verticals come into the picture. The success of such initiatives will depend more on working together than on taking community outputs and building end-to-end solutions and products that are simply not aligned with open standards and the best-of-breed approach.

Integrating the Open source solutions


Open source is a journey of software and ideas in which future solutions will come from a hefty number of vendors. However, it is still a reality that, at least for this decade, telcos will run hybrid networks including cloud, PNFs, VNFs, etc. So we need to solve the issues of APIs, standards, and fitness for use at the outset.

We also need to recognize that the use of open source by customer X can be totally different from its use by vendor Y, because the end product and business case will require some solution tailoring. Personalization of each customer solution is very important.

RFX of Open source solutions

One of the bigger pain points in open source adoption has to do with sourcing and the RFx process. It is the easy part for a telco to give requirements for an open solution, but the real question is which vendor will take end-to-end responsibility for third parties in such a multi-vendor offering.

Then, how can an operator be sure a vendor is not pushing its own solution, when the ideal solution must be a pull solution in which each component is selected based on requirements and an integrator then works out the modalities of integrating them all? This is easier said than done; failure to adopt this direction means we end up with an integrator that can only work with certain vendors, or can only take the project if certain vendors sit at certain layers. This is a frightening situation that will impede and slow the adoption and commercialization of open source technology. If we address these points not only at the technology level but also in process and sourcing models, I am sure adoption will be as freely distributed as the Windows of today.

It's the services, not the solutions

Some time ago I was reading a blog from NetApp with a sentence that caught me in the moment:

“You Can’t Change Your DNA – EMC Thinks Mainframe; Dell Thinks PC; NetApp Thinks Open Systems”

That is so true for TMT and specifically the telco industry: we must not be so lured by the story of openness and innovation that we forget our purpose. I think the key to our survival in the new era of digital transformation is to fail fast and to deliver services. Focusing on services, not on solutions, is vital for long-term sustainability.



When will the ROI of Open source solutions be realized

Although painful, the reality is that no CSP, except the giants with big R&D arms, has achieved real ROI, or even fair visibility into it, at least until now. One practical result is the slowdown strategy many CSPs and telcos have adopted; however, a slowdown does not solve the issue, it only delays it.

Lately I have seen many telcos in EMEA try to solve the issue by breaking the stack from the top or north side and then fixing it from the ground (NFVI) upwards; however, it is a child's cry unless we at least agree on and clearly define what automation actually means.

The risk of the industry failing to solve this issue is that all the major telcos are now considering proprietary orchestration and automation solutions, just to have the feel of something practical that can do something realistic on automation.

Then, for real ROI, we must agree on something that will finally scale. As an example, after spending X years building an NFV stack for two data centers, we find that once we scale to, say, ten, the advantages of scale are simply not there. The introduction of open source at the edge will make this discussion even more prevalent, as an operator simply cannot afford to go to the edge with a high setup cost; if open source does not fix this, it will obviously result in the slowdown of the edge as a whole.

Finally, just defining OSM and ONAP architectures, without ensuring that vendors' MVPs align with them and that automation stacks can be built by combining best-of-breed components from different vendors, is and will remain a dream unless definitions, reference architecture, reference implementation, and testing/validation are agreed and vendors are enforced to apply them across their products. Although the latest hackathons (the LFN Developer and Testing Forum) have tried to address this concern, the author believes it is still not production ready. This is how LFN defines it, and I think this is where standardization and the SDOs need to catch vendors to align their MVPs with architecture frameworks and standards:

“Co-hosted with GSMA, this LFN Developer & Testing Forum brings together developers across ONAP, CNTT, and OPNFV, with a special focus on VNF compliance and verification testing. As the principal event for LFN open source projects, the technical gathering provides a platform for community members to collaborate, plan future releases, conduct a plugfest, and perform interoperability, compliance, and integration testing across the full network stack, including NFVI, MANO, and VNFs”


 The future lies at the intersection of our paths


With so many things happening in the industry around open source development, testing, and commercialization, it is not surprising that many carriers, especially those outside the U.S. and EU, follow a more conservative strategy.

However, my takeaway for CXOs is to ask their technical teams to focus on the bigger picture: to know how and when proprietary automation or open source solutions will converge into truly open solutions, opening choices and offerings for service providers. If this is not addressed, the open source journey will remain a glory everybody wants to talk about, but a road no business finally wishes to take.


My final thoughts on this topic are actually best described by a session on O-RAN by Orange at ONS Europe 2019:

“The Price to pay for Open source greater flexibility, innovation and openness is the complexity of test and integration” .

So finally, it all comes down to one line: how to define MVP products, benchmark the solutions, and above all develop a common test and integration model. It is clear that open source brings value, but if we do not know how to deploy it and fix the issues, it becomes a nightmare nobody wishes to keep. This is where the SDOs and the operator community need to focus in 2020. One such direction is CNTT (Common NFVI Task Force) and the CNCF TUG (Telecom User Group), which are expected to solve the very issues highlighted above.

Currently we are working on CNTT R2, which is expected to be GA by June 2020. Similarly, R2 of the CNCF work will combine the testing and validation of open source solutions to address the issues that have been faced by TMT clients.


Edge Services commercial deployment models from PoCs to MDTs ~ A journey from what/when to where and how

A holistic guide to edge services hosting and commercial deployment models for a digital Telco


Market Segment: <Telco , Applications and Enterprise domain>


 Source: ResearchGate

At the latest Edge computing conference in London, the palpable notion was clear that developers see real potential in applications that can exploit Edge services. Considering that major T1 operators have made their initial 5G rollouts, it seems inevitable that CSPs in the new 5G era either have to speed up or simply lose the game to the cloud players, something that already happened in the 4G era when OTTs built their future on Telcos' dollars of investment. This is why I thought to share some insights in this direction, as things now look clearer to me compared to the thoughts I shared earlier.

This paper is part 2 and a successor to the MEC and edge deployments paper I shared before; if you have not read that paper, I suggest reading it first: MEC as enabler of Telco’s Digital Transformation ~ An ETSI ISG Perspective

I think a new paper is needed because a lot of progress has happened in the industry since I shared the first version. Specifically, with early 5G deployments taking off, more and more requirements are emerging that necessitate an Edge deployment. Similarly, as we become clearer about Edge services, it looks like the boundaries between clouds, applications, and end users will become more diluted, until finally the customer expects to develop anything as per requirements, SLAs, regulatory limitations, compute and edge resource needs, etc.


Source: ETSI

Edge deployment is not a new concept, as those familiar with webscalers and the Enterprise will confirm. However, it is also a reality that MEC or Edge in Telcos is still a stanza among all the transformation and cloud-based solutions Telcos use. This is going to change, specifically with startups and hyperscale cloud vendors joining the edge race: Vodafone using Saguna MEC solutions is one example, and the Google and Aarna partnership going to the carrier Edge is another proof of a changing game in the Edge transformation.

Since the core idea of this paper is to show my viewpoint on practice rather than theory, so as to negate the idea that MEC solutions are not production ready or that they depict some sort of elusive solution, let us start with an understanding of production-ready and market-hungry MEC use cases.

Killer Use cases of MEC


The potential list of MEC and Edge solutions may vary, but what we as a Telco or an industry should focus on more is the quick win. As of now, as I understand it, all use cases, considering the current limitations of 4G and 5G NSA, should focus on:

  1. Latency reduction like AR/VR , Gaming
  2. Data Reduction like analytics
  3. Off Load at the Edge like CDN , video streaming

From a technology commercialization point of view, and considering the experience of project deployments, the following seem like the top use cases.

  • Top1: Video and AR
  • Top2: CDN and Video Caching
  • Top3: Video analytics
  • Top4: Gaming solutions using VR
  • Top5: Enterprise connectivity at the Edge
  • Top6: Remote Factory – DT MobileX key use case in EU market

Who should own the Edge solutions?

For this, the best source I found recently is a CNBC interview with the Verizon and IBM CEOs, which explains that the Edge will offer Telcos the opportunity they ceded to the OTTs in the 3G/4G era. Nokia's Chris Jones says that mobile operators in particular have the advantage when “serving mission-critical industrial automation applications requiring high performance wireless connectivity, where tight optimization of the entire data plane is essential.”

Telecom operators also have the motivation to use edge clouds to operate their own networks more efficiently.

However, this is not enough, as the vendor capabilities required for the Edge will be totally different from those of legacy. As an example, relying on open APIs for Edge applications, and on use cases more stitched to the cloud and to data lakes, like AI, ML, and analytics, will mean the entrance of an entirely new breed of vendors, for example Saguna, Google, and IBM, with solutions deployed in 5G edge sites.

This is definitely something we could not imagine in the 3G/4G era, when one vendor dominating the wireless side provided both the access nodes, like eNodeBs, and their related core infrastructure, at least up to the RNC level.

The hyperscale and public cloud giants have already shown muscle in this domain, and we have seen active participation of AWS, Google, and Azure in Telco Edge deployments, for example Azure Edge, AWS Outposts, and Google Anthos.

Enterprise opportunity for Edge solutions?

Last year we did a comprehensive white paper in ETSI on MEC applications in the Enterprise, MEC in an Enterprise Setting: A Solution Outline (September 2018), and since then the requirements and focus of MEC use cases for enterprises have only increased. However, when it comes to the business and enterprise market, we still believe the biggest challenge to MEC and Edge commercialization is the embodiment of use cases that fit enterprise requirements exactly, without looking too difficult and complex.

This is an area that requires a careful analysis of applications and platforms and their packaging to support real-world deployments. For this, in 2019 ETSI MEC did a lot of work together with partners and vendors to define MDTs (MEC Deployment Trials), which focus on commercial use cases and actual deployments of MEC at the Edge. The full list of ETSI MEC MDTs can be found here: MEC Deployment Framework.


A special note on hyperscale providers

In continuation of who owns the Edge, it is more important to build capability using the hyperscalers, primarily because their edge infrastructure offerings are already mature, so that Telcos do not end up building something that cannot be integrated with their networks, or find the hyperscalers reaching their customers to sell their own solutions irrespective of the access media.


There is a huge risk of cannibalization, because the skills, capability, and customer outreach of the cloud CSPs are far more mature than those of the Telcos.


 Running the RFP of Edge Deployments

It is true that everyone recognizes the importance of the Edge; however, dollarizing it is another issue. This is why it is very important for a Telco to evaluate RFPs and use cases that meet certain characteristics, of which scaling is the most important, because at the Edge we will be talking about thousands of nodes compared to the tens of nodes previously in core sites.

Next, and most important, is how to develop an Edge platform that can serve diverse applications in a common manner. Such applications will mostly come from nimble players and 3rd parties.

For RFP solution selection, I thus find the following to be the top requirements to consider.

  1. The Edge solution should seamlessly support local breakout irrespective of access
  2. Zero-touch (ZTM) provisioning of all Edge services
  3. Remote and automatic O&M services for agile operations
  4. Operational efficiency (the only domain that truly exists now: VNF Day 1, FCAPS, RCA, monitoring)
  5. Intelligent automation (predictive analysis, Day 1/Day 2 automation)
  6. Should capitalize on all infrastructure options like public cloud, private cloud, hybrid cloud, etc.
  7. PaaS and TaaS capabilities must be provided to developers, in the same way developers use tools in the SDK environments of public clouds today: a developer-friendly ecosystem in terms of API abstraction and testing using Swagger and OpenAPI 2.0
  8. Total abstraction of the infrastructure
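Requirement 7 above calls for a developer-friendly API ecosystem using Swagger/OpenAPI 2.0. As a minimal sketch, the snippet below shows what an onboarding pipeline might check before accepting a 3rd-party edge application's API description; the service name ("edge-offload") and the path are invented for illustration, not taken from any real product.

```python
# A minimal OpenAPI 2.0 (Swagger) style description of a hypothetical
# edge service API, expressed as a Python dict.
edge_api_spec = {
    "swagger": "2.0",
    "info": {"title": "edge-offload", "version": "1.0.0"},
    "basePath": "/v1",
    "paths": {
        "/sessions": {
            "post": {
                "summary": "Create a local-breakout session at the edge",
                "responses": {"201": {"description": "Session created"}},
            }
        }
    },
}

def is_minimally_valid(spec: dict) -> bool:
    """Check the fields an onboarding pipeline might require before
    accepting a 3rd-party edge application's API description."""
    return (
        spec.get("swagger") == "2.0"
        and "title" in spec.get("info", {})
        and bool(spec.get("paths"))
    )
```

A real gate would of course validate against the full OpenAPI 2.0 schema; the point is that a machine-checkable API contract is what makes automated onboarding possible.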

Finally, I think the Edge SDOs are still fragmented; for example, we can see ETSI MEC, OSF Edge Computing, Akraino, and Airship, and we believe it is in the best interest of the industry as a whole to coalesce efforts and resources into a common way forward.

Airship as an application platform, especially for containers, seems to deliver more promise.

StarlingX offers a fully integrated and tested platform for the Edge, including FM/PM, where deployment is only one of the challenges to address; it still needs alignment with application management.

For long-term success it is very important to align vendor MVPs with SDO efforts, with a clear mapping between MEC applications and platform capabilities, to ensure maximum monetization of MEC use cases.

Automation of Edge Deployments

One thing I specifically want to talk about is automation at the Edge, because it is different from core-site orchestration in a unique way: edge sites need more intelligence and automation, along with scaling measures. This is in fact why most T1 operators are building Edge orchestration independently along the above parameters.

The other important issue of this automation is how these services will be consumed, as the ecosystem will not just be infrastructure and VNFs but also 3rd parties, OTTs, and in fact any developer.

The third and most important issue the Edge needs to solve is workload placement and optimization. Imagine a world where you want to host an analytics application for a specific customer segment and use case: how do you decide where to deploy it? In the 5G era this will not be decided on site and coverage as we believe today; it is something that has to be decided on real network data in real time.
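To make the placement idea concrete, here is a toy sketch of a data-driven placement decision: filter candidate edge sites by a latency SLA and free capacity, then pick the cheapest survivor. All site names and numbers are invented; a real placement engine would consume these metrics from live telemetry.

```python
# Hypothetical edge sites with live measurements; all numbers illustrative.
sites = [
    {"name": "edge-a", "latency_ms": 8,  "free_vcpus": 4,  "cost": 1.0},
    {"name": "edge-b", "latency_ms": 3,  "free_vcpus": 16, "cost": 1.4},
    {"name": "edge-c", "latency_ms": 25, "free_vcpus": 32, "cost": 0.6},
]

def place_workload(sites, max_latency_ms, vcpus_needed):
    """Filter sites by SLA and capacity, then pick the cheapest survivor.
    Returns the chosen site name, or None if no site can honour the SLA."""
    feasible = [
        s for s in sites
        if s["latency_ms"] <= max_latency_ms and s["free_vcpus"] >= vcpus_needed
    ]
    if not feasible:
        return None  # fall back to a core site, or reject the request
    return min(feasible, key=lambda s: s["cost"])["name"]
```

For a latency-sensitive analytics workload needing 8 vCPUs under a 10 ms budget, this sketch would pick "edge-b"; relax the budget to 30 ms and the cheaper "edge-c" wins, which is exactly the real-time trade-off the text describes.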

Then finally, for the Edge we need to solve the issue of workload portability: with cheap infrastructure at the edge, how to migrate workloads across sites with the same experience as in the hyperscale cloud providers. Once achieved, this will certainly be a game changer for Telcos and will make them both famous and rich, like Mark Zuckerberg.


Source: CISCO intelligent Edge

NFVI Models for the Edge

A lot of discussion is happening in many communities to define and standardize the NFVI and cloud offering at the edge. As a CSP will definitely look to reap the advantages of scaling, what we must look for is a pre-built and pre-integrated model that can be deployed at the far site in the most automated manner. The second key selection criterion must be data protection and security, especially considering that at the Edge we will run many mission-critical services like healthcare IoT, smart homes, automated cars, etc. Finally, Edge computing allows data transfer to occur in real time, and the predicted surge in data transfers requires a robust system for data gathering and analysis. All of this means HCI should be the target model for infrastructure investment at the Edge.

The most important use cases a CSP should target in a commercial environment will be around MEC and 5G, for example:

  • Healthcare: AR/VR for pain management, AI/ML for imaging, IoT identification/segregation, remote home diagnostics
  • Manufacturing: AR/VR for training, AI/ML for IoT, IoT management, Private 5G
  • Retail: Smart mirror, AI/ML for consumer behavior, Private 5G, SD-WAN

Recently a lot of work has been done in the Akraino community; the official Akraino website states:

“Launched in 2018, Akraino Edge Stack aims to create an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications. The Akraino Edge Stack is designed to improve the state of edge cloud infrastructure for enterprise edge, OTT edge, and carrier edge networks.”

The best direction for Edge commercialization is to start small and scale fast using one of the following configuration models.

  • Config-0 = single server without H/A (use case: 5G sites)

  • Config-1 = 2 x single servers with H/A (use case: 5G hub sites)

  • Config-3 = Unicycle PoD

  • Config-4 = Tricycle PoD

  • Config-5 = Cruiser PoD


This looks simple; however, it is clear to me that the big three RAN infrastructure vendors lack the ecosystem, and hence we must start to partner with the webscale cloud providers for this part. As an example, a reference architecture based on Google Anthos can look something like the below.


Finally, be it noted that the right ecosystem for the Edge is containers, not virtualization, primarily because of application requirements and their alignment with APIs.

I will wrap up this paper with the following takeaways:

  • Edge platforms must be ready one year ahead of 5G SA deployments if an operator is to fully monetize their advantages.
  • Use case selection for MEC should target quick wins and at the same time support increasing customers' appetite for Edge applications.
  • Intelligence for the Edge is more important than for core sites, and Telcos must select solutions carefully considering this long-term direction.
  • Edge deployments and RFPs should give more weight to building platforms that can support onboarding and LCM of any type of 3rd-party application.
  • Finally, the time for the Edge is now.



Cisco Intelligent edge – public document

Ericsson Edge –public document

Ericsson Show case in #GITEX2019

5G Cloud and Edge Party – Heavy Reading Public Document

PoC Akraino courtesy of Aarna and Google team

Nokia White paper on the Edge – Public white papers

Understanding Road blocks for Telco’s NFV/SDN Commercialization, an Architect’s perspective




(Updated as per Progress/Review in NFV/SDN World Congress 2018)

CommSPs' shift towards virtualization started in 2012, but five years down the road we still do not find large workloads running on NFV/SDN. There are clear indications, given indirectly by vendors, either to move to vendor-siloed NFV solutions or to slow down NFV commercialization. It is also believed that putting more and more traffic on NFV will reduce performance. In other words, the open NFV solutions are not meeting telecom service requirements at scale.

In this paper I shall enlist the top obstructions, from both standardization/technology and business architecture aspects, and the suggested way forward will be our feedback to the SDOs and the community to address them. The key point to summarize is that Telcos cannot become IT companies by mimicking them; they need to find a totally new IT way for telecoms, because we Telcos have two intricate tasks: to evolve existing networks in a seamless fashion, and to open our networks to 3rd parties and application providers. Let us try to reveal the architect's perspective.

#1: Transition from Play Store to ONAP


From pipelines to a distribution facility, but how do we do this?

Being the first mover is fine, but are we really measuring what we are supposed to achieve with NFV, which is service agility and delivery in hours rather than months? Unfortunately, maturity is still low. After years of experience with NFV and virtualization of legacy applications, and with many vendors claiming their VNFs are cloud ready, it seems clear that a VNF should be microservice based, support automatic LCM, and work smoothly with legacy applications in a seamless manner. Similarly, NFV application onboarding must support the same concept as mobile applications: the same model for all markets.

Until now, VNF certification is not automated. Although the ETSI Plugtests have changed the situation, CommSPs still rely totally on vendors for both VNF certification and its integration into the network. Based on our experience with leading vendors, we still believe vendors do not share the same understanding of open standards and APIs, as they still want to treat them like 3GPP. The ETSI NFV ISG has done a lot of work to solve these issues, but this part is still not plug-and-play and lacks automation. Overall, the results of NFV solution component conformance testing are not the same as interoperability. Telecom services usually have high requirements on performance and availability, typically five nines. Thus, when deploying services in software, monitoring and validation are important, especially in the face of failures, errors, and human mistakes.

Similarly, as more and more applications come to the cloud, a unified cloud for both telecom and IT is a necessity, especially considering the management and migration aspects. However, the questions of isolation and security are still not agreed upon; even if all vendors claim compliance with security, it cannot be validated in field trials. Overall, there still seems to be a gap between standardization/academia and industry, which means the technology on the ground is not the same as depicted in the marketing brochure.

#2: NFV Performance at Scale – the problem lies in the VNF

The biggest challenge and question we are still trying to solve is whether NFV will work at scale: if we put 100 Gbps of traffic on a vEPC, will it behave the same at 1 Tbps? This question is further diluted by all service providers' careful plans to virtualize; even AT&T, as a pioneer, has not really sped up its migration until now. Based on lab trials, we get the notion that NFV at scale will not work optimally and will impact customer experience. The cost is paid by service providers in the form of over-sized infrastructure, services, and wrong licensing models, which, as per a recent survey by Red Hat, costs them 36% more. The situation becomes more complex as there is still no mature and open testing framework, and service providers again look to vendors to perform and validate their own solutions. I think, except for Telcos with strong R&D and industry presence, this is not an easy issue to solve in one quick go, and it will continue to hamper all efforts to commercialize NFV networks at scale. Portability mechanisms and management across NFVI realizations are also open: there can be multiple virtualization methods, multiple NFVIs, and multiple MANO systems, and supporting seamless migration across different platforms is challenging.

Another aspect that makes NFV performance at scale difficult is that the VM layout for a specific capacity varies between vendors, so deploying on a single powerful VM or on multiple VMs does not lead to the same capacity. Finally, to reveal the actual performance one will experience in a real network, we need to test with different network traffic: not only plain dummy traffic for throughput, but also application-aware traffic. There are some mature test tools for non-functional requirements, but not for functional requirements, and customers require functional testing as well as performance validation at scale.

The other domain we need to understand is SDN as an enabler for NFV, not in the infrastructure but in the applications, to deliver NFV performance at scale. Offloading stateless VNF information from the application tiers, like the LB and processing tier, to a switching tier like SDN can improve performance a lot, but this approach requires a strong microservice architecture, which still needs some more time to mature for Telcos.

#3: Is the Programmable Network the real Target?

It is not the programmable but the automated network that is the target. However, how can we program a network without modeling it properly? The problem at hand is that there is a gap between programmers and telecom experts. ONAP is doing a good job of reducing the gap by introducing easy-to-use tools that allow service providers to automate the network without knowing too much detailed programming. The ONAP Service Logic Interpreter Directed Graph (DG) Guide is one such initiative; details can be seen here.

While programming the networks, one more dimension we need to consider is Service Function Chaining (SFC). As of today, SFC is applied on inter-VNF links or forwarding graphs; however, with microservice architectures there can be N components/modules within the same VNF, and chaining such flows needs alignment of intra-VNF and inter-VNF flows, whose standardization is lacking as of today. A complete end-to-end design of ordering and parallelism is critical to the performance and correctness of the entire service chain.
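The intra/inter-VNF alignment problem can be pictured with a toy model: the forwarding graph orders the VNFs, while each VNF internally orders its own micro-services, and the end-to-end chain is the flattening of the two. All VNF and component names below are invented for illustration.

```python
# The forwarding graph orders the VNFs; each VNF is modelled as an
# ordered list of its internal components (micro-services).
forwarding_graph = ["firewall", "video-optimizer"]
vnf_internals = {
    "firewall": ["classifier", "filter"],
    "video-optimizer": ["decoder", "transcoder", "packager"],
}

def end_to_end_chain(graph, internals):
    """Flatten inter-VNF and intra-VNF ordering into one service chain,
    the alignment whose standardization the text notes is still lacking."""
    chain = []
    for vnf in graph:
        for component in internals.get(vnf, [vnf]):
            chain.append(f"{vnf}/{component}")
    return chain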

#4: NetOps is still not Agile compared to its IT Counterpart

The networking equivalent of DevOps, so-called NetOps, is still not agile: neither the scrum runs nor new-service TTM is yet able to deliver on its promise. As the latest SDxCentral survey shows, application automation (40%) is far ahead of network automation (20%). In other words, the many manual pieces make it difficult to stitch the end-to-end solution into a CI/CD pipeline. In fact, the success of DevOps is closely linked to the tight integration of toolsets, which is simply not yet normalized in the telecom industry; the same cannot be said for NetOps. With multi-vendor network architectures, teams are faced with trying to force-fit a diverse set of APIs and data formats to work together seamlessly to deploy all the components in the deployment pipeline.

“DevOps are often developers themselves; highly skilled at coding a solution to just about any problem. NetOps, on the other hand, are highly skilled networking professionals. Integration in a network is about protocol interoperability, not plug-ins and APIs. It’s a completely foreign world to most NetOps, and the tools and frameworks available are not well-suited to the kind of integration required to build a continuous deployment pipeline “

It’s clear from the challenges faced by NetOps in automating the network that they can’t catch up to their DevOps counterparts on their own. That means the networking industry at large must do more than just offer APIs and examples of automation. It means stepping up to meet the challenge with communities, support, and training that focuses as much on the basics of coding and continuous deployment as it does on interfacing with specific devices.


#5: MANO and Orchestration seem very complex


If Telcos are ever to reach their ultimate #1 goal of virtualization, it must be automation, and this piece cannot be achieved unless you truly orchestrate your network. It is a reality that it is taking too long to get a good commercial MANO product. In fact, on ONAP/ECOMP, even AT&T started with an orchestrator that was completely closed, because none of the open source options were ready. It is not there yet.

CableLabs believes the answer is to just build an abstraction layer between your orchestrator and your VIMs and VNFs; the workaround is not that bad for this part of the stack. APIs are becoming the de facto standard, and the lower in the stack you go, the better the place for open source.
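The abstraction-layer advice can be sketched as a driver interface that the orchestrator calls, with one concrete driver per VIM. This is a minimal illustration under assumed names (the classes and the `instantiate` method are invented); real drivers would call Heat/Nova or the Kubernetes API instead of returning stubs.

```python
from abc import ABC, abstractmethod

class VimDriver(ABC):
    """Abstraction layer between the orchestrator and a concrete VIM,
    so the MANO stack never calls a VIM API directly."""
    @abstractmethod
    def instantiate(self, vnf_descriptor: dict) -> str:
        ...

class OpenStackDriver(VimDriver):
    def instantiate(self, vnf_descriptor):
        # A real driver would create a Heat stack here; stubbed for the sketch.
        return f"openstack:{vnf_descriptor['name']}"

class KubernetesDriver(VimDriver):
    def instantiate(self, vnf_descriptor):
        # A real driver would create a Deployment via the K8s API; stubbed.
        return f"k8s:{vnf_descriptor['name']}"

def deploy(driver: VimDriver, descriptor: dict) -> str:
    """The orchestrator depends only on the VimDriver interface."""
    return driver.instantiate(descriptor)
```

Swapping the VIM then means swapping the driver, not rewriting the orchestrator, which is exactly why the workaround "isn't that bad" for this layer of the stack.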

Issues of orchestration and network service modeling are still to be solved, and we need to shift from ONAP and OSM demos to live trials. The consensus is that ONAP will be the final destiny for Telcos, but with Telefonica leading OSM and its claims on standardization, especially on the information model, the key element in operationalizing NFV, we are still not clear how the market will evolve.

Can we think that ONAP will grab the NFVO pie, and OSM will become the end-to-end orchestrator owing to its detailed work on network modeling and the information model?

A vendor-agnostic network function and service model is still an open debate, especially considering that many vendors are reluctant to freely share their VNF meta-models or artifacts with each other. Until recently, there was a big debate in the telecom world about whether YANG, from the IT world, should also become the standard modeling language for orchestration. Combining TOSCA and YANG is gaining wider acceptance now, as this approach seems to provide the best of both standards: TOSCA is responsible for the service life cycle, while YANG controls the network configuration of the VNFs.

#6: Hardware acceleration road maps

Shared memory, System-on-Chip, and FPGA are technology trends that can really make the server cost model attractive for investing in NFV. However, Intel's Virtio standardization for smart NICs is a bit late: even as of this writing, it does not support offload, making it harder to use in a consolidated NFV solution based on OVS + DPDK processing. This leaves the industry with no choice but to still rely on SR-IOV for smart NICs, which is not a scalable solution, as it requires investment and customization/support from the VNF, ultimately making it hard to build the light VNF that is our target.

For optimal capacity and CAPEX investment by carriers in NFV, the use of smart NICs for data-plane VNFs is very important. Below is a nice summary of the industry's current understanding.


#7: Architecture Limitations

Many VNFs as of today assume deployment in collocated data centers, meaning different VNFCs cannot be split across data centers, which makes it difficult to support high-content and latency-sensitive applications. The 3GPP CUPS architecture is one direction to address this challenge. Various distributed protocols and consistency mechanisms must be used to support a fully distributed NFV implementation. Some network functions, for example traditional 3GPP and telecom NFs, are not easy to scale since they are not modular; it is important to reconsider their design to fit the new cloud architecture. I think the use of VxLAN and L2 EVPN tunnels in SDN/networking, together with cloud VNFs that support deployment across split data centers, is the way forward.


#8: Issues with Open Source Standardization

It looks lucid to transform CommSPs from an SDO approach to an open source approach, primarily because, except for the top operators, most do not have the relevant teams and support mechanisms, like R&D and community participation, to make the whole chain work. The result is that even the open source communities are being swayed by vendors who have invested large amounts of money in them with the aim of steering them in their favor. I think service providers require persistent interoperability in large-scale, multi-domain, multi-vendor deployments of NFV/SDN, and OpenStack, OPNFV, and ONAP look like the minimum set of communities that should develop easy methods to encourage participation by all operators. Similarly, cross-organization cooperation is essential to build a thriving environment for operator innovation and to reduce the fragmentation posing issues for open NFV/SDN solutions. Network modeling and service orchestration will be the real value operators can achieve from this transformation, but until now API exposure and information model standardization look like the issues hampering mass-scale deployments.


#9: Services are Key to Leading the Open Solution Market


According to ABI Research, spending on NFV infrastructure, including servers, storage devices, switches, SDN, and cloud, will decline over time, while software and services will show higher growth rates of 55% and 50%, respectively. Even today, the standardization and multi-vendor involvement challenges remain stagnant and are not ready for carrier-grade deployments at scale. The result is that NFV has not delivered on its promise to cannibalize the PNF world's products, but rather still acts today as an investment overhead on top of the existing legacy network. The problem statement of immature services and system integrators is: for how long can CommSPs survive planning duplicate investments?

 “It is strongly recommended to separate services, and especially system integration, from the products of NFV/SDN solutions in practice; focus on a CoE and in-house R&D capability with presence/involvement in the ISGs.”

Services combined with an independent testing framework will surely support fast commercialization of NFV at scale.

#10: Why Cost is the first metric

NFV/SDN may happen to be the industry's best technology, but the matter of fact is that if a technology does not support business objectives, it may not be fit for use in commercial networks. The latest analyses predict that using a multi-vendor solution across NFVI, VNF, and MANO can cost an operator 5X compared to procuring similar capacity as PNFs. Currently, mainstream vendors are still trying to sell licenses and services in the same manner as legacy, so cost as a KPI cannot deliver meaningful information; instead, TTM, automation, and new product offerings seem the right metrics to analyze NFV/SDN ROI. But these benefits can be difficult to quantify, making investments in new technology domains a bit difficult. Until now, the major bulk of CSP investment has gone to CAPEX (80-90%), which skews the very business model of cloud, which is primarily OPEX. This is a shift from the traditional way operators have done business, and it can distort the monetary benefits NFV can deliver, making them harder to grasp. Address this sooner rather than later: re-balancing the investment between CAPEX and OPEX is key, especially with the many S&S and subscription-based models in NFV/SDN.


#11: Why NFV/SDN Licensing is not optimal

Five years down the NFV road, VNF applications are still not licensed as per the best practices recommended for applications in the cloud. Licensing is still not standardized, and we must rely solely on procurement negotiations to see if we can move from a proprietary VNF model to something like pay-per-use, pay-per-GByte, pay-per-Gbit, pay-per-active-user (SAU), pay-by-maximum-instances, pay-per-day, pay-per-month, or pay-per-minute. Add licenses for multiple VNFs to the mix, and it could get ugly, fast.
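To see why the choice of scheme matters commercially, here is a toy comparison of three of the licensing models listed above. All unit prices and usage figures are invented for illustration; the point is only that the same workload can cost very different amounts under different models.

```python
def monthly_license_cost(model: str, usage: dict) -> float:
    """Toy cost comparison of some licensing schemes listed above.
    All prices are invented for illustration."""
    if model == "perpetual":  # classic up-front VNF license, amortized monthly
        return usage["capacity_gbps"] * 100.0
    if model == "pay_per_gbit":  # charged on traffic actually carried
        return usage["traffic_gbit"] * 0.02
    if model == "pay_per_active_user":  # charged on SAU
        return usage["active_users"] * 0.05
    raise ValueError(f"unknown model: {model}")

# One month of hypothetical usage for a single VNF.
usage = {"capacity_gbps": 10, "traffic_gbit": 40_000, "active_users": 50_000}
```

Under these invented numbers, a lightly loaded perpetual license is cheapest, while a pay-per-active-user scheme is the most expensive; with different traffic profiles the ranking flips, which is exactly why operators want this negotiable rather than fixed in a proprietary model.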

The issue of a centralized license manager for NFV is also not standardized, leaving the operator with a myriad of licenses, each of which needs to be managed in its own vendor-specific manner. This is causing operators to limit their choice to 2-3 vendors for their NFV/SDN programs, blocking the way for ISVs and small niche players.

Key recommendations for license management in NFV can be summarized as follows:


Epilogue and call for Next Action

As an industry, we still need to support the idea of NFV openness as the key enabler if we ever aim to achieve network automation and a viable ecosystem. There are big vendors who promote siloed solutions that seem easy to adopt; we should re-iterate that these are not NFV solutions, but rather something like ATCA Phase 2 solutions. If we as an industry do not empower our networks through the power of openness and collaboration, even a decade of investment will not enable us to become the software-style company that was our initial target. So we must ensure the overall digital transformation remains the central and focal point as we plan to virtualize the network. Our participation in key open source initiatives like OpenStack, ODL, OPNFV, and ONAP will be key to realizing the future we have been waiting for for a long time.

Ultimately, if NFV does not allow CommSPs to reduce their infrastructure, management, and especially licensing costs, customers will not improve their total cost of ownership (TCO), and adoption will be slow. It will still take time to make a multi-vendor NFV/SDN solution look as easy as a proprietary solution, and its operations/management unified, considering the hybrid cloud trends. We are an industry in a phase of transformation, and indeed an industry still waiting for NFV/SDN economics to look comparable to legacy-world applications. Only then will NFV really take the shape of a mainstream solution. We all know it will finally happen, but how we approach it will ultimately decide how soon.

Finally, I want to share the idea that, just as in IT, the way applications are written and their LCM is managed will determine whether telcos can be agile and follow a NetOps approach. Unfortunately, until now the industry has embarked on this critical transformation journey bottom-up, starting with the things it already knows: vendors re-purposed existing orchestration products, which led to bolt-on architectures. In other words, they used the same old methods and expected new results. This approach needs to change in an automated way. This is an important issue, and I hope to speak at the next OpenStack Summit about our unique understanding of it.








Published by

Saad Sheikh  ☁

Architecting Scalable and Production ready Orchestration frameworks for a Digital Business


(Telco, Applications and Enterprise domains)

“To achieve great things, two things are needed: a plan and not quite enough time.”   

Leonard Bernstein

Subsequent to my talk, followed by a nice demo of ETSI OSM by Telefónica's Francisco Javier Ramón at the U.S. Layer123 Network Transformation Congress, one question always nagged me:

"When exactly will the industry promises around software control and orchestration be production ready, with clear use cases and a distinct framework?"

Almost six months later, I am here at the ONS Open Networking Summit Europe, and I am still thinking about the same question.


All of us will agree that what appears simple on paper is in practice a daunting task. This never dilutes the importance of front runners and of practice; after all, without embarking on this journey we cannot solve the issues.

Therefore, as an active community member in the ETSI NFV NOC, OSM, and the Linux Foundation EUAG, I think it is important to explain my own view of where we stand and to apply a focused approach to solving it.

The purpose of this paper, then, is to summarize the key challenges faced by the telco and IT industries in this domain, and thereby define an architecture based on best-of-breed SDO and community frameworks that will ultimately drive carrier RFPs and vendor roadmaps to supply the missing Lego bricks of the future network.

Part 1 of this paper enlists the situation today that makes it so difficult to automate and orchestrate networks using a standard modeling framework. If you are reading it, make sure you also read Part 2, the ultimate story, to be published in November 2019.

The orchestration market as we know it today is full of fragmentation, and we are still finding the optimum way to address it. However, our years of work in this domain have made clear that for orchestration to be successful it must solve both the abstraction view (hiding the service from the underlying network) and the automation view (a way to model the network as a software program). Obviously the two are related, as automation is deeply stitched to abstraction and modeling: low abstraction means the network is difficult to model and therefore difficult to automate.

I think our approach to selecting the right orchestration vendor has not been fair: we either tilted towards network vendors or towards BSS/OSS vendors, without setting the right criteria ourselves and hoping vendors would set them correctly. This is why I recommend that CSPs selecting an automation and orchestration product consider the following five characteristics as a baseline.


  1. The NFVO must be modular in nature, not a monolithic full product
  2. Simple to design (Day 0, web based, drag and drop)
  3. Truly open source with strong community support (e.g. promote ONAP/OSM rather than plain ETSI MANO)
  4. Intelligent automation (predictive analysis, Day 1/Day 2 automation)
  5. Operational efficiency (the only domain that truly exists today: VNF Day 1, FCAPS, RCA, monitoring)
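As a sketch of how such a baseline could be applied in an RFP, the five characteristics can be turned into a weighted scorecard. The weights and the two vendor ratings below are purely illustrative assumptions, not a recommendation:

```python
# Illustrative weights for the five baseline characteristics above.
WEIGHTS = {
    "modularity": 0.25,
    "design_simplicity": 0.15,       # Day 0, web based, drag and drop
    "open_source_community": 0.20,
    "intelligent_automation": 0.20,  # predictive analysis, Day 1/Day 2
    "operational_efficiency": 0.20,  # VNF Day 1, FCAPS, RCA, monitoring
}

def score_vendor(scores: dict) -> float:
    """Weighted 0-10 score; every criterion must be rated."""
    missing = WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Two hypothetical NFVO bids rated 0-10 per criterion.
bids = {
    "vendor_X": {"modularity": 4, "design_simplicity": 8,
                 "open_source_community": 3, "intelligent_automation": 6,
                 "operational_efficiency": 7},
    "vendor_Y": {"modularity": 8, "design_simplicity": 6,
                 "open_source_community": 9, "intelligent_automation": 5,
                 "operational_efficiency": 6},
}

# Rank bids by weighted score, highest first.
ranking = sorted(bids, key=lambda v: score_vendor(bids[v]), reverse=True)
print(ranking)
```

The value of writing the criteria down this way is that the trade-off (e.g. open-source community vs. design simplicity) is set by the CSP before the vendors arrive, not after.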

Further, these characteristics are fully endorsed by how the industry views things, with TMForum projects like NaaS, MEF LSO, and the ETSI SO frameworks. Having said this, we still find that full abstraction from the business layer down to the underlying layers has not been achieved to date by the E2E SO or the upper BSS layers, leaving the full cycle manual and vendor defined. The situation as it exists in the TMForum NaaS program, and our suggestion, can be depicted below.


NFVO Challenges

Having spoken about these two points, it is also important to look at how we have actually deployed our projects. Based on project experience, we believe the major setback to orchestration comes from a bottom-up approach in which all efforts have been primarily VNF driven, with less focus on software control. For example, a CSP finds that an NFVO from vendor X will only deliver value if the VIM and VNF from vendor Y are selected, and standards, especially unified meta-modeling for VNFs, are not followed well by all vendors.

So, to devise a solution, let us first list the issues we need to solve.

  • Network service vs. VNF: interface and template standardization [get rid of vendor-defined VNFDs]
  • How to separate NFVI FCAPS solutions from the NFVO?
  • Is it possible to combine NFVO components from different vendors into one NFVO?
  • How does the NFVO control MEC and other distributed-cloud architectures?
  • Can one NFVO support multi-vendor VIM, multi-vendor VNF, telco, enterprise, and perhaps mixed PNF/VNF scenarios?
  • Service orchestration: can we offer a VoLTE-style service from a portal in one click?

Apart from the above, there are some challenges coming from the industry itself, which can be summed up as follows.

  • ONAP vs. OSM
  • Rely on a vendor or on the community? The most successful companies in IT cloud and automation were those who built their own solutions through the power of the community
  • Can we promote a production-ready G-VNFM and remove the S-VNFM entirely?
  • Can telco and IT workloads be automated in the same manner (the Cloudify vs. Rift approach)?


NFVO Scaling and Performance Issues

It seems clear that what started as an industry idea and a university R&D project is now a commercial reality, which means the solution must deliver comprehensive TCO savings if we are to deploy and use it. This is why a company CXO wants to visualize orchestration across the whole network, from the core cloud to the edge.


However, although the EUAGs have analyzed orchestration solutions and know that the real value chain exists at the edge, they are still finding the best way to do it at scale. There seem to be some clear challenges.

The real issues of orchestration around scaling and performance can be defined as follows.

  • ETSI MANO was good in the central DC, but a more open, developer-API-based orchestration means it will have limitations scaling to the edge
  • ONAP seems modular, scalable, and programmable, but at the same time it is complex; if you are not a developer type of company, it is really hard to keep pace with it
  • OSM is good for operators, especially its IM model and VNF/PNF support, but will it really be carrier grade at the edge with so much dependency on adaptors?
  • Finally, should we deploy virtualization at the edge VM style (Akraino/StarlingX) or container style (Akraino/Airship)?


NFVO and DevOps 

Telco needs DevOps, and that is why the orchestration and automation business is gaining huge momentum. Let us understand the phases in which we can evolve.


  1. Phase 1: we believe we have reached the point of virtual network automation, especially Day 1
  2. Phase 2: next we focus on Day 0 design and Day 2, especially VNF configuration automation for third-party VNFs
  3. Phase 3: the main idea is to build a complete CI/CD pipeline through artifacts and tools from many vendors, including open-source ones
  4. Phase 4, the pure orchestra: this vision is important to support a Play-store-like model for orchestration
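One way to read these phases is as a gated maturity ladder: a CSP should not claim a later phase until the capabilities of all earlier phases are in place. A minimal sketch of that idea, where the capability names are my own illustration rather than anything from a standard:

```python
# Each phase is gated on a set of capabilities; names are illustrative only.
PHASES = [
    ("Phase1: VNF Day 1 automation", {"vnf_day1_automated"}),
    ("Phase2: Day 0/Day 2 config automation", {"day0_design", "day2_vnf_config"}),
    ("Phase3: full CI/CD pipeline", {"cicd_pipeline", "multivendor_artifacts"}),
    ("Phase4: the pure orchestra", {"orchestration_play_store"}),
]

def current_phase(capabilities: set) -> str:
    """Return the highest phase whose gates (and all earlier gates) are met."""
    reached = "Phase0: manual operations"
    for name, gates in PHASES:
        if gates <= capabilities:
            reached = name
        else:
            break  # a later phase cannot be claimed over a missing earlier one
    return reached

# A CSP that automated Day 1 and Day 0/Day 2 config but has no CI/CD yet.
caps = {"vnf_day1_automated", "day0_design", "day2_vnf_config"}
print(current_phase(caps))  # -> "Phase2: Day 0/Day 2 config automation"
```

The gating rule encodes the point made above: a Play-store vision (Phase 4) built on top of missing Day 1/Day 2 automation is a bolt-on, not an evolution.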

Knowing this vision is as vital as knowing the road to your destination. On a long journey, what matters most is not only that we travel a good distance but that we stay sure of our target.

Defining Business use cases of automation

"ETSI NFV TST 006 is still not mature; primarily, DevOps in a CSP will address digital service management such as ZSM through a portal, automated slice management, and E2E closed-loop control. Service automation equals (===) Telco DevOps."

To ensure the automation and orchestration business gains momentum, it is vital to link it with business use cases. DevOps is still a fancy buzzword, but with ETSI Release 3 (TST 006) we are coming closer to a clear interpretation for CSPs. It is, however, still an early draft and mainly talks about the process and responsibility split between the developer (vendor) and the provider (CSP). This obviously matters because in telco we work with so many vendors, and their responsibilities and management are very important.

Still, we want to introduce DevOps based on use cases that deliver maximum value for the CSP; the key use cases are shown below. They are a good start to ensure the value of automation is indeed delivered to the business.


Based on experience, one can see there are a lot of issues, but there are definitely possible solutions and directions to address them.

I will wrap up this paper here and will come back soon with Part 2, the orchestration solution. It is based on our recent work with the communities and comes to the conclusion that industry success will rely more on coalescence and support for hybrid solutions than on taking only one direction to solve the issue.

This paper sets the stage for Part 2, and I sincerely hope it adds value to what you are doing, irrespective of telco or IT business.







Reshaping the Telco Industry of the Future through Service Innovation and Simplification

Sheikh is a Huawei Middle East Senior Solution and Delivery Architect covering the domains of NFV, SDN, cloud, Internet of Things (IoT), and DevOps-based innovation, especially interested in the disruptive technologies driving industry transformation.


“Change your head (Mind) or wait for its removal”

According to the World Economic Forum report 2016-17, investment in the global telecom industry has underpinned an immense shift in information, cash flows, and societal development, adding value to the global economy and improving the lives of the masses.

In fact, in the next 5-10 years the real value that society can reap from digitization and information will depend on the telecom industry providing the underlying infrastructure and improving the customer experience. Yet telecom industry profit has declined from 58% in 2010 to 44% at the end of 2016 (as per the survey), and it seems clear the health of this industry is a question in itself.

In fact, this issue in the telecom industry is like an iceberg, with many coupled issues beneath the surface, so CEOs need to solve these underlying issues instead of just rehashing old themes of aggressive products and rollouts to improve profits; such an approach can only bleed the industry's woes through to the bottom line. Traditionally, IT and telecom were considered separate industries, but nimble players like Skype, WhatsApp, IMO, and WeChat have silently taken huge profits while using telco infrastructure to reach customers, merely through the digitization of services.

Below are some key actions to focus on: not only ideas but on-the-ground facts that will put the telcos back in command.


1.    Re-Architecting the Future Networks

Network synergy, energy efficiency, infrastructure sharing, high-capacity OTN networks, Single RAN, virtualization, NFV, cloud-based everything: these are all terms you will hear from time to time, but the theme is simple: make the architecture simple and modular.

It will enable you not only to offer a competitive price but also to innovate new services easily.

2.    Operations simplification

This is a very important target for the industry. I remember that the ETSI Phase 2 specifications specifically considered the integration between NFV and SDN to solve enterprise needs for use cases that extend beyond the geography of a typical data center.

Similarly, an internet style of operations, where the main skill is programming and meta-modeling of issues instead of troubleshooting from written guides, is very important.

At Huawei we have developed a customized process for telcos based on DevOps but meeting the more specific needs of carriers, like high-quality testing, fault demarcation, and continuous delivery, building the best product through pilot/production environment synergy and the concept of remote delivery. We call this process IntOps.

The key point is that telcos cannot merely copy IT companies' default DevOps, Chef, and Ansible playbooks and use them directly.

I think the main theme for the CTIO is to think about how to achieve this goal.

3.    Orchestration

Our definition of this metric is how the experience will be delivered to the customer through omni-channel. The standardization of the Os-Ma interface will lead us to a world where the customer can not only purchase but also drive service creation through the online experience.

The service logic and open graphs will be intelligent enough to create services across many infrastructures.
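As an illustration of what "service logic as an open graph" could mean in practice, here is a sketch that models a service as a small dependency graph of components spread across infrastructures and derives a valid instantiation order. The service decomposition, component names, and placements are invented for the example:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A hypothetical VoLTE-style service modeled as component -> dependencies,
# with each component pinned to the infrastructure it should run on.
service_graph = {
    "epc_core":   set(),                     # no dependencies
    "ims_core":   {"epc_core"},
    "sbc_edge":   {"ims_core"},
    "portal_api": {"ims_core", "sbc_edge"},
}

placement = {
    "epc_core": "core-cloud", "ims_core": "core-cloud",
    "sbc_edge": "edge-cloud", "portal_api": "public-cloud",
}

# An orchestrator can walk the graph in dependency order, dispatching each
# component to the VIM/controller of the right infrastructure.
order = list(TopologicalSorter(service_graph).static_order())
for component in order:
    print(f"instantiate {component:11s} on {placement[component]}")
```

Once the service is a graph rather than a script, the same model can be re-targeted to a different mix of core, edge, and public clouds by changing only the placement map.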

4.    IoT

The Internet of Everything is a game changer, and because each IoT use case is customized in terms of QoS and service modeling, operators are the prime drivers to speed up and monetize this opportunity for their transformation.

5.    Security, cybercrime and personal data integrity

As data volumes increase beyond imagination and networks become converged, safe communication is immensely threatened, putting pressure on security guarantees and intrusion detection. Key capabilities include:

·        Security policy interaction with security devices

·        Technology to defend against APT attacks

·        Utilizing industry best practice (the CIS standard) to harden the OS

·        Security abilities beyond those of the original OpenStack

·        Security isolation of virtual resources for VNFs

·        Integrity protection based on TPM

·        Introspection technology for business information encryption

·        Secured deployment, upgrading and PRIST

·        Active participation in the ETSI NFV security specifications

6.    Organization Re-structuring

Who will solve this issue when the decision maker himself fears his own redundancy? In fact this point is complicated by the telecom industry's aging workforce. It seems difficult to tell a person that all the skills he acquired over two decades are no longer relevant.

How do we let people work in cross-functional Scrum teams, and how do we replace each manager with a leader or problem solver?

The idea is simple, but nothing can be achieved overnight. Let us promote the people who accept change and upgrade themselves. Set targets and make policies that discourage lazy teams. Focus more on software skills: Python, Chef, OpenStack, JSON.

7.    Application or Infrastructure make clear decisions

Compared to the IT industry, the telecom industry's reliance on infrastructure is still huge; by this I mean that telco services are not infrastructure independent, whereas in enterprise IT architecture the whole pie lies in the application itself.

NFV microservices are all about separating telco services from the infrastructure, but the fact is that many services still rely on hardware specifics like SR-IOV, smart NICs, FPGAs, and accelerators.

The strategy set at this level will have a huge impact on the future of operators. Industry leaders like AT&T, Verizon, NTT, and DT have all made clear plans to become software-defined operators, so I think actions should follow the same path.

8.    Replace troubleshooting by service automation

This may sound like a crazy idea. SON and software-defined networks can scale and heal automatically, but the programming logic and maps need to be built by the operations team.

It requires more specialized skills in both service modeling and coding, and obviously many operations jobs will be depleted.

9.    Co-deployments and converged clouds

Telcos in recent years have already deployed cloud-based applications like Cloud Core, Cloud Edge, and VAS Cloud, but the customer CTIO must view the cloud as a shared pool independent of any application. The coming year must find ways to join the clouds.

Until now, global resource sharing among tenants is a key point that is not standardized and is difficult to customize for each tenant. I think the community is not taking a business-driven approach to solving such issues. This is also a reason why commercial companies look for production-ready distros like Huawei FusionSphere, Mirantis MOS, and Red Hat OSP.
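To make the tenant-sharing gap concrete, here is a minimal sketch of the kind of per-tenant quota policy that is easy to express in code for a single cloud but, as noted above, has no standardized cross-cloud equivalent. The tenant names and limits are hypothetical:

```python
# Hypothetical per-tenant quotas over one shared vCPU pool.
POOL_VCPUS = 1000

quotas = {          # hard cap per tenant, in vCPUs
    "cloud-core": 500,
    "cloud-edge": 300,
    "vas-cloud":  200,
}
allocated = {tenant: 0 for tenant in quotas}

def allocate(tenant: str, vcpus: int) -> bool:
    """Grant the request only if both the tenant quota and the shared pool allow it."""
    pool_used = sum(allocated.values())
    if allocated[tenant] + vcpus > quotas[tenant]:
        return False               # tenant would exceed its own cap
    if pool_used + vcpus > POOL_VCPUS:
        return False               # shared pool exhausted
    allocated[tenant] += vcpus
    return True

assert allocate("cloud-core", 400)      # fits quota and pool
assert not allocate("cloud-core", 200)  # would exceed the 500 vCPU cap
assert allocate("vas-cloud", 150)
```

The hard part is not this logic; it is agreeing on one such policy model that every VIM and every application cloud honors, which is exactly what is missing today.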

10.    Process mapping

This is the last and most important part. For decades telcos have followed very standardized processes and structures to run the organization; with digitization many of these things are going to change: team structures, processes, and procedures, upgrading the eTOM processes to ITIL-based processes, defining SLAs across the IT and telecom domains, and incorporating orchestration as a key enabler.

My idea is that to map the processes a Scrum leader must be appointed, with the team members below essential to the mapping. It is a good idea to match the processes against the PMI® ACP XP (Extreme Programming) matrix.

Based on cross-industry analysis it seems evident that, with huge challenges ahead, the time to adapt is now. On the operational level it is more about transforming ways of working; on the strategic level it is about delivering a complete digital experience. In addition, we need to break the cross-industry silos and, especially, address legal and regulatory concerns.

According to the World Economic Forum report there are still other factors, like micro-branding, atomization, legal aspects, and digitization of the user-experience side, but since the scope is too wide and I am not an expert on such topics, I am leaving those for other industry experts.

I sincerely hope these 10 points help CXOs drive the digitization of services across industries. In the future we will not hear Cloud, NFV, and SDN as different terms, and industries like banks, telcos, and enterprises will not be separate isles but joint realities driven by the common goal of digitization.


1. ETSI standards

2. ITIL expert boot camp

3. World Economic Forum report, released January 2017





Adoption challenges in open source and opportunities for niche system integrators

In 2019 the IT and telecom industries have really come to a synergy point through the power of open-source solutions. I still remember the days in September 2012 when we first looked at open-source solutions. It was really difficult at that time to even imagine that open source could integrate the so-called separate isles of IT and telecom. The fact is, however, that big companies are already doing production-ready work to consolidate IT and telecom data centers.

What is most exciting for me is that I started 2019 with a CXO workshop asking for a way forward to host banking applications in the same data center. Obviously, at this time the idea is very vague, because banking, as we all know, requires high security and reliability, and many ideas from traditional IT and telecom cannot be directly applied there. However, the message is clear: future IT services will cover the whole ecosystem, and in fact each industry is just like an application appliance.

Cloud-based infrastructure is an idea that is very important to develop in a multi-tenant environment, because each vertical will come with its own set of requirements. It is clear that there is huge market demand for this, and actually no vendor can eat this whole market, though many will try.

This type of cross-industry, multi-tenant solution will strongly depend on the power of open source to couple solutions from many vendors as modules, a concept the industry knows as microservices. However, the integration of all these open-source pieces is very complex, especially when it comes to adaptation and matching CIO requirements: although the solution can be open, customer requirements are customized. No customer wants to live in a house built to someone else's wishes; the personalization of each customer's solution is very important.

Obviously every customer wishes to transform to an agile infrastructure and reduce OPEX, but what should the first steps really be? How should they select the components of an open-source solution: what should the cloud, VIM, MANO, VNFs, repository, and so on be? Many CXOs have already put this type of question to vendors, and each vendor's answer depends on its business model and its ecosystem, without delving into whether it can satisfy the customer's real needs.

I think this strategy is damaging NFV adoption on a wider scale, because vendors want to push a solution, while the best open-source strategy must be a pull solution that matches customer needs. Without this it is difficult to increase the real demand for NFV and open-source projects.

After selecting the software, the next daunting task is how to make it functional. The product view of this scenario is to deliver every application as a VNF appliance through an image file: any application is available, and only usage is licensed. However, the story is not so simple; in fact it stands in stark contrast to the App Store idea prevailing in the IT industry.

Part of such an endeavor must be how to efficiently use these images to develop a service, something the customer really needs, along with a way to auto-test and auto-deploy the service and, obviously, how to apply Puppet, Ansible, and Chef playbook schemes to achieve it. I mean to say that the appliances and their abstraction alone are not enough; how to model the service and make it interwork is key to the success of this realization.
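The playbook idea this paragraph describes is, at its heart, desired-state reconciliation: declare what the deployed service should look like, then converge only the pieces that differ, so re-running is harmless. A minimal Python sketch of that loop, with invented resource names and states:

```python
# Desired state of a small service, as a playbook would declare it.
desired = {
    "vnf_image": "registry/vnf:1.4",
    "flavor":    "4vcpu-8gb",
    "firewall":  "allow-sip",
}

# What the platform currently runs (imagine this queried from the VIM).
actual = {
    "vnf_image": "registry/vnf:1.3",
    "flavor":    "4vcpu-8gb",
}

def reconcile(desired: dict, actual: dict) -> list:
    """Return the minimal set of change actions; unchanged keys are skipped,
    which is what makes repeated runs idempotent."""
    actions = []
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            actions.append(f"set {key}: {have!r} -> {want!r}")
            actual[key] = want
    return actions

print(reconcile(desired, actual))  # two changes on the first run
print(reconcile(desired, actual))  # [] -- the second run is a no-op
```

Real tools like Ansible and Puppet wrap exactly this converge-to-declared-state pattern with modules per resource type; modeling the service is writing the `desired` side well.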

This type of open-solution ecosystem will support wide adoption across the industry; maybe we will see NFV going live in both private and public clouds, with applications as freely distributed as Windows is today.

However, with more and more open solutions, the dilemma of integrating them will become an ever more important question for all CIOs. Not only that, we will come to a situation where customers question the reliability and performance of open-source solutions in commercial deployments. Can an SI use all the open-source tools and offer a single-pane-of-glass operations experience modeled around open APIs and programming abstractions?

This is the real value of the system integrator: the Hercules the IT and telco industries have been waiting on for years.

Can 2019 be the year SIs make an impact? The story is open, and the best team will make it happen. The system integration team has to listen to the industry, partners, and customers alike to solve the issues of the ecosystem.

Collaboration and the sharing of experiences to solve industry issues are key to being the system integrator of choice for the industry.