Application Aware Infrastructure Architecture of Future Enterprise and Telecom Networks


An architect's perspective in the 2020+ era


The recent global situation, and the reliance on critical telecom infrastructure and cloud security solutions, has shown even to critics that once-esoteric terms like hybrid cloud, AI, analytics, and modern applications are vital to moving society and the economy forward.

Having actively followed the community's evolution of both infrastructure and applications, in the Telco as well as the enterprise world, I can safely conclude that the days when infrastructure was engineered and built to serve application requirements are over. On the contrary, with the wide adoption of application containerization and Kubernetes as the platform of choice, the future direction is to design and craft applications that can best take advantage of a standard cloud infrastructure.

Understanding this relationship is the key differentiator between businesses that will fly and those that will crawl in serving the ever-moving parts of the ecosystem: the applications.


Source: Intel public

In this paper, let us investigate some key innovations in infrastructure, both physical and cloud, which are shifting the industry's Pareto share from applications to infrastructure, thereby enabling developers to code and innovate faster.

Industry readiness of containerized solutions

The adoption of microservices and application standardization around the Twelve-Factor App, published by cloud pioneer Heroku in 2012, gave birth to an entirely new industry that has matured far more quickly than virtualization did. A brief description of how it is impacting the market and industry can be found in Scott Kelly's post on the Dynatrace blog. This innovation is based on the standardization of cloud-native infrastructure and CNCF principles around Kubernetes platforms, aimed at the following key points.


Covid-19 has proved that if there is a single capability necessary for a modern-era business to survive, it is scalability. In recent weeks we have seen millions of downloads of video-conferencing applications like Zoom, Webex, and BlueJeans, and a similar surge in demand for services in the cloud. Obviously, it would have been an altogether different story if we were still living in the legacy Telco or IT world.


Immutable but Programmable

On every new deployment across the application LCM, services are deployed on fresh infrastructure components, all managed via an automated framework. Although containers in the Telco space do require stateful and somewhat mutable infrastructure, the beauty of this model is that the infrastructure keeps state out of its core: state is managed at the application and third-party level, ensuring easy management of the overall stack.

Portable with minimum Toil

Portability and ease of migration across infrastructure PoPs are the most important benefits of lifting applications into containers; in fact, the evolution of hybrid clouds is a byproduct that business can reap by ensuring application portability.

Easy Monitoring and Observability of Infra

There is significant innovation happening in chipsets, network chips (ASICs), and NOS programmability (e.g., P4); however, the current state of infrastructure does not allow applications and services to fully capitalize on these advantages. This is why there are so many workarounds and so much complexity around both application assessment and application onboarding in current network and enterprise deployments.

One good example of how container platforms are changing the business experience of observability is Dynatrace, which provides code-level visibility, layer mapping, and digital-experience monitoring across all hybrid clouds.


Source: dynatrace

Composable Infrastructure

There is already a link from platform to infrastructure that will support delivery of all workloads, with their differing requirements, over a shared infrastructure. Kubernetes as a platform is already architected to fulfill this promise, but it requires further enhancements in hardware. The first phase of this enhancement is HCI; our recent study shows that using HCI in a central DC saves 20% CAPEX annually. The further introduction and consolidation of open hardware and open networking, as explained later in this paper, will mean services can be built, managed, and disposed of on the fly.

From automated Infrastructure to Orchestrated delivery

Infrastructure and network automation is no longer a myth, and there are many popular frameworks to support it, like Ansible, Puppet, Jenkins, and Cisco DevNet.

However, all those who work on IT and Telco application design and delivery will agree on the cumbersomeness of both application assessment/onboarding and application management with little infrastructure visibility. This is because the mapping between application and infrastructure is not automated. The global initiatives of both the OSCs and SDOs prevalent in the TMT industry have primarily focused on orchestration solutions that leverage the advantages of the infrastructure, especially AI/ML-driven chipsets, and use this relationship to solve business issues while ensuring true decoupling between application and infrastructure.


Although the reader may say that platforms like Kubernetes have played a vital part in this move, it simply would not be possible without taking advantage of the physical infrastructure. For example, orchestration on the IT side (primarily driven by K8s) and on the Telco side (primarily driven by initiatives like OSM and ONAP) relies on the infrastructure to execute all the pass-through and acceleration required by applications to fulfill business requirements.

In fact, the nirvana state of automated networks is a more cohesive and coordinated interaction between application and infrastructure, under the closed-loop instructions of an orchestrator, to enable delivery of Industry 4.0 targets.

Benefiting from the Advantages of the Silicon

The advantages of silicon were, are, and will remain a source of innovation in the cloud and 5G era. When it comes to the role of hardware infrastructure in the whole ecosystem, we must look to capitalize on the following.


The changing role of Silicon Chips and Architectures (x86 vs. ARM)

The Intel vs. AMD choice is familiar to many data-center teams. In data centers where performance is critical, the Intel Xeon family still outperforms AMD, whose advantages of a smaller process node (7 nm) and a better core/price ratio have not yet built a rationale for selecting them. Another major direction supporting Intel is its supremacy in 5G, edge, and AI chips, for which AMD has so far failed to bring a comparable alternative. The most important drawback, in the author's view, is sourcing and global presence, which makes the big OEMs/ODMs prefer Intel over AMD.

However, the high-tech industry's fight to dominate the market with multiple supply options, especially during the recent US-China trade conflict, has put the TMT industry in the tough position of considering non-x86 architectures, something obviously no one likes, as that ecosystem is not mature. The author believes an irrational selection will mean future businesses may not be able to capture the advantages coming from disruptors and open industry initiatives like ONF, TIP, and O-RAN.

The following points should be considered while evaluating:

  1. Ecosystem support
  2. Use cases (the architecture supporting the most should win)
  3. Business-case analysis to evaluate performance vs. high density: except for Edge and C-RAN, Intel obviously beats ARM
  4. Aggregate throughput per server
  5. NIC support, especially FPGA and SmartNICs: obviously, Intel has the advantage here
  6. Cache and RAM: over the years Intel has focused more on RAM and RDIMM innovation, so on the cache side ARM has an edge and should be evaluated; however, since not all use cases require it, this is a less distinct advantage
  7. Storage and cores: this will be a key differentiator, yet neither vendor is strong in both, and their ready-made configurations mean we have to compromise one for the other; this will be the killer point for future silicon-architecture selection
  8. Finally, ARM's in-built switching modules, which bypass the TOR/spine data-center architecture entirely, may win over proponents of the pre-data-center-architecture era; however, the promise of in-built switching at scale is not well tested. It may suit dense edge deployments, but in my view it is not recommended for large central data centers

However, quantitative judgement alone is not enough. Intel's dominance has meant it has not delivered the design cadence expected by business, which has obviously opened the gates for others. It is my humble belief that in the 5G and cloud era, at least outside the data center, both Intel and ARM will see deployments and will need to prove their success commercially, so you should expect to see both Xeon® and Exynos silicon.

FPGAs, SmartNICs and vGPUs

Software architecture has recently moved from C/C++/JS/Ruby to more disruptive Python/Go/YAML schemes, primarily driven by businesses adopting the cloud. Business is addressing these challenges by demanding more and more x86 compute power, but improving efficiency is equally important. As an example, we tested the Intel SmartNIC family PAC 3000 for a long time to validate power and performance requirements for throughput-heavy workloads.

Similarly, video will be a vital service in 5G, but it will require SPs to implement AI and ML in the cloud. The engineered solutions of Red Hat OSP and OpenShift with NVIDIA vGPU mean that data processing which was previously only possible in offline analytics, using static data sources such as CEM and wire filters, is no longer confined to the offline domain.



Envisaging future networks that combine the power of all hardware options (silicon chips, FPGAs, SmartNICs, GPUs) is vital to solving the most pressing business challenges we have been facing in the cloud and 5G era.

Networking Infrastructure


There is no doubt that networking has been the most important piece of infrastructure, and its importance has only increased with virtualization, and perhaps ten-fold again with containers, as data centers fight to deliver the best solutions for east-west traffic. Although there are a number of SDN and automation solutions, their performance at scale has shifted the balance toward infrastructure: more and more vendors are now investing in the advantages of ASICs and NPUs, not only to improve forwarding-plane performance but also to make the whole stack, including fabric and overlay, automated and intelligent, fulfilling the IDN dream by using the latest Intel chips with inherent AI and ML capabilities.

The story of how hardware innovation brings agility to networks and services does not end here; for example, the use of SmartNICs and FPGAs to deploy SRv6 is a successful business reality today, converging compute and networking around a shared, common infrastructure.

Central Monitoring

Decoupling, pooling, and centralized monitoring are the targets to achieve. With so many solutions that are quite different in nature (for example, fabric versus overlay on the networking side), the aim is to harmonize them through the concept of single-view visibility. This will mean that when an application demands elasticity, hardware does not need to be physically reconfigured: more compute power, for instance, can be pulled from the pool and applied to the application.
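The elasticity described above can be sketched as a simple allocator: compute is drawn from, and returned to, a shared pool instead of reconfiguring hardware. A minimal illustration (class, method, and application names are hypothetical):

```python
class ComputePool:
    """Hypothetical shared pool: cores are logically re-assigned, never physically re-cabled."""

    def __init__(self, total_cores: int):
        self.free = total_cores
        self.allocations = {}          # app name -> cores currently held

    def scale(self, app: str, cores_needed: int) -> bool:
        """Pull extra cores from (or return them to) the pool for an app demanding elasticity."""
        held = self.allocations.get(app, 0)
        delta = cores_needed - held
        if delta > self.free:
            return False               # pool exhausted: admission fails, no hardware change
        self.free -= delta
        self.allocations[app] = cores_needed
        return True

pool = ComputePool(total_cores=64)
pool.scale("vEPC", 16)
pool.scale("analytics", 32)
print(pool.free)        # 16 cores still free
pool.scale("vEPC", 8)   # scaling down returns cores to the pool
print(pool.free)        # 24
```

The point of the sketch is that scaling is a bookkeeping operation on a pooled resource, which is exactly why no physical reconfiguration is needed.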

From Hyperscalers to innovators

The dominance of hyperscalers in the cloud is well known; however, there have recently been further movements disrupting the whole chain. For example, ONF Open EPC can now be deployed on an OCP platform. Similarly, the TIP OpenRAN initiative is changing the whole landscape, enabling things that were not even under discussion a few years ago.

Since ONF is focused on software and the advantages brought by NOS and P4 programming, I think it is important to talk about OCP. The new innovations in rack design and open networking will help define new compute and storage specifications that best meet unique business requirements. Software for Open Networking in the Cloud (SONiC) was built using the SAI (Switch Abstraction Interface) switch-programming API and has been adopted, unsurprisingly, by Microsoft, Alibaba, LinkedIn, Tencent, and more. The speed of adoption is phenomenal, and new features, like integration with Kubernetes and configuration management, are being added to the open-source project all the time.

Summary review

Finally, I am seeing a new wave of innovation, and this time it is coming via the harmonizing of architecture around hardware, thanks to the efforts of the last few years around cloud, OpenStack, and Kubernetes. However, these types of initiatives will need more collaborative effort between OSCs and SDOs, i.e., TIP and the OCP Project, harnessing the best of both worlds.

However, with the proliferation of so many solutions and offerings, standardization and alignment on common definitions of specs for the shared infrastructure is very important.


Source: Adva

Similarly, to ensure innovation delivers on its promise, the involvement of the end-user community will be very important. Directions like LFN CNTT, ONAP, ETSI NFV, CNCF, and GSMA TEC are some of the streams that require operator-community-wide support and involvement, to move away from the clumsy NFV/cloud picture of the last decade and replace it with a truly innovative picture of network and digital transformation. A balanced approach from the enterprise and Telco industries will allow the businesses of today to become the hyperscalers of tomorrow.

I believe this is why, after a break, this is the topic I selected to write about. I look forward to any comments and reviews that can benefit the community at large.


The comments in this paper do not reflect the views of my employer; they are solely my own analysis, based on my individual participation in the industry, with partners, and in business at large. I hope that sharing this information with the larger community is the way to share, improve, and grow. The author can be reached at


How Open Orchestration (OSM Release SEVEN) enhances Enterprise, 5G, Edge and Containerized applications in Production


Source: ETSI



An architect's perspective from ETSI®, the Standards People


As highlighted in Heavy Reading's latest "End-to-End Service Management for SDN & NFV", all the major Tier-1 Telcos are currently refining their transformation journeys to bring standard orchestration and service modeling into their networks. One such standard approach is promised by ETSI OSM, a seed project from ETSI®, the standards people.

Recently, in Q4 2019, ETSI OSM delivered Release SEVEN, which addresses the challenges of bringing CNFs and containerized applications to production: "ETSI OPEN SOURCE MANO UNVEILS RELEASE SEVEN, ENABLES MORE THAN 20,000 CLOUD-NATIVE APPLICATIONS FOR NFV ENVIRONMENTS".

This capability of ETSI® OSM is especially important considering the ushering in of 5G SA architectures and solutions, which have already found their way to market thanks to early work from CNCF and, specifically, the CNTT K8s specs. OSM brings value to the picture because it allows operators to design, model, deploy, and manage CNFs (as ETSI NFV calls containerized VNFs) without any translation or re-modeling. It also lets operators experience an early commercial use case of Helm 2.0 integration in their production environments. On top of that, it allows an NS (Network Service) to combine CNFs with existing VNFs or legacy PNFs to deliver complex services in an easy-to-deploy and manageable manner.

In the following part of this paper, I will share my understanding of OSM Release SEVEN and sum up the results of the ETSI OSM webinar on this subject held on January 16th, 2020. For details, you may need to refer to the webinar content itself, which can be found

Why Kubernetes is so important for Telco and Enterprise

The Telco industry has experienced a lot of pain in the way the NFV journey has been steered, with its focus on migrating existing PNFs to the cloud. K8s offers an opportunity for all platform providers, application vendors, and assurance partners to build on the modern principles of microservices, DevOps, and open APIs. This has already made its way into Telcos' OSS and IT systems; for example, MYCOM OSI UPM, OSM, and in fact ONAP are all already based on Kubernetes. The arrival of 5G SA and uCPE branch deployments has driven almost all operators to adapt their networks to use Kubernetes, and it is generally agreed that as CSPs move to the edge, K8s will be the platform of choice.

Foundation for K8S Clusters

Kubernetes makes it simple for applications and CNFs to use APIs in a standard fashion via K8s clusters, which are deployed either from upstream open source or via distros. Early adoption of CNFs in Telco largely favors the consumption model of vendor distros, with Red Hat OpenShift, VMware PKS, and Ericsson CCD being the most important ones.

Since containers behave like floating VMs, the networking architecture, especially that promised by L3 CNI plugins such as Flannel, is an important direction to be supported in platforms, as it is in OSM.

The reusability of the API makes it simple to craft a unique application as a set of build/configuration files, using the artifacts of Pod, Service, cluster, ConfigMap, and PersistentVolume, all defined in a very standard manner in K8s, by which I mean all artifacts can be deployed through a single file.
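To illustrate the "all artifacts through a single file" idea, the sketch below groups a multi-resource application description by kind, the way a platform indexes a multi-document manifest before applying it (the resource names are hypothetical; in practice this structure would live in a single multi-document YAML file):

```python
# Hypothetical multi-resource application description held in one structure;
# in practice this would be a single multi-document YAML manifest.
manifest = [
    {"kind": "ConfigMap", "metadata": {"name": "cnf-config"}},
    {"kind": "PersistentVolumeClaim", "metadata": {"name": "cnf-data"}},
    {"kind": "Service", "metadata": {"name": "cnf-svc"}},
    {"kind": "Pod", "metadata": {"name": "cnf-pod-0"}},
]

def group_by_kind(resources):
    """Index the declared artifacts by kind, as a platform would before applying them."""
    grouped = {}
    for r in resources:
        grouped.setdefault(r["kind"], []).append(r["metadata"]["name"])
    return grouped

print(group_by_kind(manifest))
# one file, four standard artifact kinds, deployed together
```

The value of the standard model is precisely that tooling can treat every artifact uniformly, whatever the application.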

ETSI® OSM itself can be deployed using both Helm 2.0 and Juju charmed bundles.


Foundation for Helm

Helm gives teams the tools they need to collaborate when creating, installing, and managing applications inside Kubernetes. With Helm, you can: find prepackaged software (charts) to install and use; easily create and host your own packages; install packages into any Kubernetes cluster; query the cluster to see which packages are installed and running; and update, delete, roll back, or view the history of installed packages. Helm makes it easy to run applications inside Kubernetes. For details, please refer to the HELM packages on

In a nutshell, all Day-1 and Day-2 tasks required for CNFs are made possible using Helm and its artifacts, known as Helm charts, including application primitives, network connectivity, and configuration capabilities.
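The Day-1/Day-2 flow maps naturally onto the Helm CLI. The sketch below only assembles the command lines rather than running them (release and chart names are hypothetical; the syntax shown is the Helm 2 style discussed in this paper, where `helm install` takes a `--name` flag):

```python
def helm_cmd(action, release, chart="", revision=0):
    """Build a Helm 2-style command line for common Day-1/Day-2 CNF operations."""
    if action == "install":          # Day-1: instantiate the CNF from its chart
        return ["helm", "install", "--name", release, chart]
    if action == "upgrade":          # Day-2: reconfigure or upgrade in place
        return ["helm", "upgrade", release, chart]
    if action == "rollback":         # Day-2: return to a known-good revision
        return ["helm", "rollback", release, str(revision)]
    raise ValueError(action)

print(helm_cmd("install", "cnf-demo", "stable/demo-chart"))
print(helm_cmd("rollback", "cnf-demo", revision=2))
```

An orchestrator such as OSM drives essentially these same primitives against the cluster on the operator's behalf.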

Key Features of OSM Release7

OSM Release SEVEN is carrier grade; below are its key features as per the wiki:

  • Improved VNF configuration interface (a one-stop shop for all Day-0/1/2 operations)
  • Improved Grafana dashboards
  • VNFD and NSD testing
  • Python 3 support
  • CNF support in both options: OSM creates the cluster itself, or relies on OEM tools to provision it
  • Workload placement and optimization (very important for Edge and remote clouds)
  • Enhancements in both multi-VIM and multi-SDN support
  • Support for public clouds

How OSM handles deployment of CNF’s

For most Telco folks, this is the most important question: how will the VNF package be standardized with the arrival of CNFs? Will it mean a totally new package, or an enhancement of the existing one?

Fortunately, OSM's approach is to model the application in a standard fashion, which means the same package can be enhanced to reflect a containerized deployment. At the NS level it can flexibly interwork with VNFs/PNFs as well. The deployment unit used to model CNF-specific parameters is called a KDU (Kubernetes Deployment Unit); the other major change is the K8s cluster under resources, which is important because it captures the networking and the related CNI interfaces.
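As a sketch of the packaging idea, the fragment below mimics how a descriptor might be extended with a KDU section pointing at a Helm chart, alongside a K8s-cluster requirement. The field names here are illustrative, not the normative OSM information model:

```python
# Illustrative descriptor fragment: the same package concept, extended for containers.
vnfd = {
    "id": "demo-cnf",
    "kdu": [{                          # Kubernetes Deployment Unit: the CNF-specific part
        "name": "demo-kdu",
        "helm-chart": "stable/demo",   # chart that materializes the KDU
    }],
    "k8s-cluster": {                   # cluster requirements, incl. networking/CNI
        "nets": [{"id": "mgmtnet"}],
        "cni": ["flannel"],
    },
}

def deployment_units(descriptor):
    """List the deployment units (containerized KDUs plus any VM-based VDUs)."""
    kdus = [k["name"] for k in descriptor.get("kdu", [])]
    vdus = [v["name"] for v in descriptor.get("vdu", [])]
    return kdus + vdus

print(deployment_units(vnfd))   # a package with no VDUs and one containerized unit
```

The same helper would return a mixed list for a package that combines KDUs with legacy VDUs, which is exactly the interworking the NS level relies on.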

OSM can deploy the K8s cluster itself via API integration, or rely on third-party tools like OpenShift® or PKS to deploy it on OSM's instructions.

Changes to NFVO interfaces

Just as Or-Vi is used for infrastructure integration with the orchestrator, Helm 2.0 (with 3.0 support coming in the near future) is used for integration with K8s applications. Since the NBI supports mapping of KDUs in the same NSD, the only changes from the orchestration point of view are on the southbound side.

Workload Placement

As per the latest industry standing and experience shared at KubeCon and the Cloud Native summit Americas, there is a growing consensus that containers are the platform of choice for the edge, primarily due to their robustness, operational model, and lighter footprint. From our experience with containers here at STC, a 40% reduction in both CAPEX and footprint can be realized in DCs if the edge is deployed using containers.

However, the business definition of the edge raises a number of questions, the most important being workload identification, placement, and migration, especially considering that the edge is a lighter-footprint site that in future will host carrier mission-critical applications.

Optimization of the edge from the CSP perspective has to address the following: the cost of compute in NFVI PoPs; the cost of connectivity and the VNFFG (implemented by SFCs); and constraints on the service such as SLA, KPIs, and slicing.
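Those three inputs (compute cost per PoP, connectivity cost, and service constraints) make placement a small constrained optimization: pick the cheapest feasible PoP. A toy sketch, with made-up PoP names and figures:

```python
# Hypothetical NFVI PoPs: relative compute cost, connectivity cost, achievable latency (ms).
pops = [
    {"name": "central-dc", "compute": 1.0, "connectivity": 4.0, "latency_ms": 30},
    {"name": "regional",   "compute": 2.0, "connectivity": 2.0, "latency_ms": 12},
    {"name": "far-edge",   "compute": 4.0, "connectivity": 0.5, "latency_ms": 3},
]

def place(workload_sla_ms):
    """Pick the cheapest PoP (compute + connectivity) that still honours the latency SLA."""
    feasible = [p for p in pops if p["latency_ms"] <= workload_sla_ms]
    if not feasible:
        return None                       # no PoP can meet the slice SLA
    return min(feasible, key=lambda p: p["compute"] + p["connectivity"])

print(place(50)["name"])   # relaxed SLA: the regional site wins on total cost
print(place(5)["name"])    # a URLLC-style SLA forces the far edge despite its compute cost
```

Real placement engines add capacity, affinity, and migration cost, but the trade-off being arbitrated is the same.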


Upgrade issues and how OSM addresses them

Compared to earlier releases, the OSM ns-action primitives allow a CNF to be upgraded to the latest release, executing both a dry run and Juju tests to ensure the application's performance benchmark remains the same as before. Although this works best for small applications like LDAP, the same is difficult to achieve with more complex CNFs like 5G SA. Through liaison with the LFN OVP program, I am sure the issue will soon be addressed; as an operator, we plan to validate it on 5G SA nodes.


My final thought is that the container journey for CSPs is already a reality for 2020 and beyond, and the OSM ecosystem supports the commercialization of CNFs through the early use cases of 5G SA, enterprise-branch uCPE, and, most importantly, the edge (including MEC), for which OSM seems to be reaching maturity. For details, and to participate, do get involved in the upcoming OSM-MR8 Hackfest in March.

Many thanks to colleagues, mentors, and industry collaborators Jose Miguel Guzman, Francisco Javier Ramón Salguero, Gerardo García, and Andy Reid for OSM's growth in recent years. See you in Madrid!






Delivering 5 9's Security for Mission-Critical 5G Systems


"Can an Open Cloud-Based System be more secure for Mission-Critical Applications?"


So, finally, the frenzy over 5G networks and how they will bridge the gaps between industries and societies seems to have come to materialization. While most Tier-1 operators are working to build the use cases that will support early launch and act as a market-capture catalyst for early movers, the area of 5G security still seems gloomy, with far fewer detailed standards output by ETSI and the other SDOs compared to the 5G technology itself.

There are many questions in the air that need to be addressed, both from an architecture point of view and from an end-to-end working-solution perspective. For example:

1.     Is 5G security the same as, or in conflict with, NFV/SDN security?

2.     How will operators develop a unified solution that can meet requirements from all industries?

3.     If a standard solution exists, will it scale? Or, 2-3 years down the road, will we have to live with a lot of customized solutions that are difficult to assure?

4.     What about solution relevance in open-source networks with many players involved?

5.     Finally, how do we address cyber-security dilemmas in 5G Telco networks?

6.     Will end-user privacy be a killer decision in 5G?

I think this list captures enough of the challenges faced by 5G and its verticals; in this paper I shall try to build a high-level model that addresses them in a unified UML model.

  In a world where computing is ubiquitous, where a mist of data and devices diffuses into our lives, where that mist becomes inseparable—indistinguishable—from reality, trustworthy computing is but axiomatic. (David James Marcos, NSA)

Before digging deep to formulate the security architecture, we should understand at a high level that 5G system security will no longer be like 4G security, because no single domain, such as the traditional Core or UE, can promise a complete security solution. The enigma of 5G security is huge, involving malware-infected and low-cost devices, MitM attacks, air-interface jamming, frequency scanning, backhaul DDoS, packet sniffing, NFV and virtualization vulnerabilities, API issues, network security, and VNF application, platform, and IP vulnerabilities. Hence we should analyze the 5G system in depth, from a whole-system perspective, and look at the following important dimensions.

1.     Decentralized architecture: The biggest problem ahead is that Telco networks are programmed to work, not the other way around; they do not predict, and obviously do not extrapolate, to the scale of issues 5G will face. This is an architectural issue: in 3G/4G the source of security seems to sit in the core network, and in NFV/SDN it seems embedded in the platform, but for 5G, planning a single control unit to handle and process all data seems impossible. Yet if we decentralize, how do we control it? We cannot decentralize without controlling it, and how do we control a device we do not trust? I think 5G must adopt a concept like blockchain in the banking sector: sharing security in a trusted manner, with no single point of failure due to a compromise in one unit or layer.

Understanding the 5G system architecture, how it will influence the migration of present Telco services, and how it can create a thriving ecosystem is a key area of interest for the architect. First, we need to understand that 5G is based on an SBA, which requires the whole network to be separated from the infrastructure; this makes NFV/SDN an almost inevitable enabler. It allows the deployment of a network slice to support each use case separately. How to model one solution, and whether it can be customized for each offering, is currently a key area of discussion in ETSI.



  2.     Resource demarcation: This is a scary topic, because IMT-2020 already divides the network into three domains as per latency and use-case requirements. The dilemma is that different RF resources need to map to different NFV/SDN DC resources in the cloud, and this is the biggest problem ahead; so, in a broad sense, a separate multi-RAT setup for each slice may not be the right approach.

3.     5G network threat-model extension: Hosting VNFs that are the source or sink of user workloads, like DNS, AAA, and IPAM, is the easy use case, but introducing middle-box VNFs like application servers and control-plane and media boxes means we must introduce Telco concepts like multi-homing, active/standby architectures, and CSLB, and, on top of that, complex dependencies on IT network redundancy like bonds and bridges, which makes security a big concern. Obviously, introducing such disparate solutions means the security threat boundary will extend beyond what was originally supposed.


4.     5G security framework for the 5G SA system: I will not go into the details here, because an expert buddy has just done it perfectly; watch the Hitchhiker's guide here

However, I do want to summarize a bit. The 5G Rel-15 specifications consider EN-DC (E-UTRAN New Radio Dual Connectivity) the de facto standard for 5G security, at least in 2018 or, let's say, until H1 2019; the reason is obvious, because the final standalone security specification, TS 33.501, was to freeze in December 2018. The reason EN-DC security is important, yet not very difficult to embrace, is that it is based on the existing LTE security specification, TS 33.401, with EN-DC enhancements as shown below.


The good news about EN-DC is that it works almost the same way as LTE-DC: the concepts of key generation, key management, ciphering, and integrity protection are re-used from LTE-DC (TS 33.401), while the DRB (Data Radio Bearer) security context is added with regard to the 5G core network. For EN-DC security, new X2 information elements, "SgNB Security Key" and "UE Security Capabilities", are newly defined.


The figure shows EN-DC bearers and PDCP termination points from the network side. The MN is the master eNB and the SN is the secondary gNB. If the PDCP/NR-PDCP is terminated in the MN, LTE security applies; on the other hand, if the NR-PDCP is terminated in the SgNB, NR security applies. EEA is redefined as NEA, and EIA is now called NIA; as you can guess, NEA and NIA stand for NR Encryption Algorithm and NR Integrity Algorithm.
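The SgNB key pattern can be sketched in a few lines: the secondary key is derived from the KeNB and an SCG counter via an HMAC-SHA-256 KDF, so a fresh key needs only a counter bump, not a new AKA run. This is a simplified illustration in the spirit of TS 33.401; the exact FC value and parameter encodings are per the spec, so treat this as a sketch, not a conformant implementation:

```python
import hmac, hashlib, os, struct

def derive_s_kgnb(kenb: bytes, scg_counter: int) -> bytes:
    """Simplified SgNB key derivation sketch:
    HMAC-SHA-256 keyed with KeNB over (FC || SCG Counter || length).
    The FC byte and encodings here are illustrative of the TS 33.401 KDF pattern."""
    fc = b"\x1c"
    p0 = struct.pack(">H", scg_counter)   # SCG Counter as 2 bytes
    l0 = struct.pack(">H", len(p0))       # length of P0, 2 bytes
    return hmac.new(kenb, fc + p0 + l0, hashlib.sha256).digest()

kenb = os.urandom(32)
k1 = derive_s_kgnb(kenb, 0)
k2 = derive_s_kgnb(kenb, 1)   # counter bump yields a fresh secondary key
print(len(k1), k1 != k2)
```

The design point the sketch shows is that the SgNB never needs the long-term credentials: it receives only a derived key, scoped by the counter.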

A good analysis of the 5G security protocol can be summarized as below:


•       In 2018, implement the EN-DC architecture, which is almost the same as LTE-DC

•       Use existing USIMs, but programming the USIM/UICC requires USIM-vendor support

•       5G success depends on eSIM trials, especially for IoT

5.     Assuring NFV/SDN security for 5G: 5G is not just a network but a system. It involves a plethora of NFV, SDN, and network automation as enablers for 5G's future SBA-based architecture. These days, the biggest question we have been discussing in the ETSI security ISG and in TM Forum is whether network automation is a blessing or a curse for security assurance.

6.     A scalable security solution: Historically, the Telco companies and 3GPP must be credited with building a robust security architecture, as reflected in 2G/3G/4G, and the same is expected in 5G, with the one problem that the scale of 5G devices is billions, not millions, and a solution that expands only the core network and related authentication servers is not enough. It requires the inclusion of distributed security architectures and, above all, IAM solutions that make best use of network API exposure to guarantee security. This means that in future, Security-as-a-Service will be possible, and an operator will be able to open the network to guarantee whole-system security using the best third-party offerings. In any case, this will not change the 5G security framework for the 5G SA system, as explained in point 4 of this paper.

A scalable solution also means that security can be provisioned for each use case in an orchestrated manner, very similar to VNF LCM management, where the security policy and test criteria can all be customized per the required use case and SLA.

7.     Security assessment and verification: The 5G system is complex and includes a plethora of technologies. The security contexts of IT, cyber, and information security are all added on top of Telco security, but until now even 3GPP SA3 has not finalized the detailed scenarios.

The 5G system is big and complex; 3GPP SA3 is doing remarkable work to get the standard ready and prototyped before the Rel-16 Stage-3 specs are output in June this year. The key SA3 targets this year are: 1. key hierarchy; 2. key derivation; 3. mobility; 4. access-stratum security; 5. non-access-stratum security; 6. security context; 7. visibility and configuration; 8. primary authentication; 9. secondary authentication; 10. interworking; 11. non-3GPP access; 12. network domain security; 13. service-based architecture; 14. privacy. I hope to refresh this material on the whole of 5G security once I have more visibility into SA3's work and more input from vendors on exactly how they will approach this critical point in 5G.



Key Industry Challenges and devising new models for Business enablement using End to End Network Slicing

Network Slicing is a concept within 3GPP, and its history can be traced back to R13/R14 with the introduction of static slicing in LTE networks. It is said that the real business case for Network Slicing will come with the arrival of 5G. The Rel-15 Stage-3 release of Dec 2017 still lacks a complete definition with use-case mapping, but it is expected in the R16 Stage-3 release coming in Q2 2018. A 5G network must address Network Slicing from a dynamic point of view, where slices can be provisioned, managed and optimized through an Orchestrator in real time for different use cases like Massive IoT, Ultra-Reliable low-latency communication and enhanced MBB.
But what is a consistent definition of a Network Slice? The term still means different things to different teams, so in this paper I will take a deep dive to arrive at a common definition and show how it can be implemented consistently. In addition, I want to answer one common question: should we delay slicing and wait for 5G Rel-16 Stage 3, or can we start with a simple slicing scenario that is upgradable to 5G as we move along?
Frankly speaking, slicing is not something needed only in 5G; prior networks need it and somehow support it for one very good reason, the business case, such as 4G EPC deployed for the oil industry, corporate bank connectivity, etc. However, in the 4G era there are limitations, such as only FUP RAN channels or non-shared RAN channels being usable. In a nutshell, only a static slice can be provisioned, defined well ahead of time and mostly based on APN/IMSI. Since 5G is mostly about verticals, 3GPP is doing a fantastic job defining the details of dynamic network slicing; please refer to 3GPP TR 28.801, which widens the options for runtime slice instance selection. In 5G, in addition to the DNN (the 5G equivalent of an APN), we can use IMSI + NSSAI, comprising up to 8 S-NSSAIs, for mapping an access session to a slice instance, and MANO will evolve to support an NSSF manager for handling all SLA/KPI and management aspects.
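The runtime mapping just described can be sketched in a few lines of Python: the network computes an allowed NSSAI as the intersection of the S-NSSAIs the UE requests and those it is subscribed to, capped at eight entries. This is an illustrative helper, not an implementation of the normative selection procedure in 3GPP TS 23.501.

```python
# Illustrative per-UE slice selection: the allowed NSSAI is the intersection
# of the requested and subscribed S-NSSAIs, capped at 8 entries (3GPP limits
# a UE to 8 S-NSSAIs). Function and variable names are hypothetical.

MAX_S_NSSAIS_PER_UE = 8

def allowed_nssai(requested, subscribed):
    """Return the S-NSSAIs the UE may use, preserving request order."""
    allowed = [s for s in requested if s in subscribed]
    return allowed[:MAX_S_NSSAIS_PER_UE]

# S-NSSAI represented as (SST, SD); SST 1=eMBB, 2=URLLC, 3=mIoT per TS 23.501
requested = [(1, 0x000001), (2, 0x000002), (3, 0x000003)]
subscribed = {(1, 0x000001), (3, 0x000003)}
print(allowed_nssai(requested, subscribed))  # [(1, 1), (3, 3)]
```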
Hence we will analyze the evolution of network slicing as networks become software-defined. A detailed understanding of the concept requires covering the following dimensions.
1. Understanding Slice requirements from TR22.891
  • The operator shall be able to create and manage network slices that fulfil required criteria for different market scenarios.
  • The operator shall be able to operate different network slices in parallel with isolation that e.g. prevents data communication in one slice to negatively impact services in other slices.
  • The 3GPP System shall have the capability to conform to service-specific security assurance requirements in a single network slice, rather than the whole network.
  • The 3GPP System shall have the capability to provide a level of isolation between network slices which confines a potential cyber-attack to a single network slice.
  • The operator shall be able to authorize third parties to create, manage a network slice configuration (e.g. scale slices) via suitable APIs, within the limits set by the network operator.
  • The 3GPP system shall support elasticity of network slice in term of capacity with no impact on the services of this slice or other slices.
  • The 3GPP system shall be able to change the slices with minimal impact on the ongoing subscriber’s services served by other slices, i.e. new network slice addition, removal of existing network slice, or update of network slice functions or configuration.
  • The 3GPP System shall be able to support E2E (e.g. RAN, CN) resource management for a network slice.
Figure  1 Understanding Slice Requirements in NFV/SDN enabled Future Networks
It seems clear that even though the Network Slicing standard is not locked, the existing NFV/SDN architecture can be used to enable it, at least as an overlay providing resource isolation at the tenant level.
2. Understanding Network Slicing from an NFV/SDN point of view
NFV/SDN enables agile service delivery using multi-tenant provisioning on a common NFVI. One basic slicing concept is to provision different VNFs for different use cases: for Massive IoT we can have a lightweight C-SGN combining both Control and User Plane, while for uRLLC it can mean distributed VNFs using the CUPS architecture to bring the user plane to the Edge. Even for eMBB it can mean delivering the end service while bypassing vFW or other NEs in order to build a big pipe for video and live broadcast use cases. Some additional functions in Rel-15/Rel-16 can be used to deliver slices from a Telco point of view; Décor, for example, is considered an early enabler for slicing, but it obviously does not come with the end-to-end provisioning and monitoring/management functions that only 5G promises.
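The per-use-case VNF compositions above can be made concrete with a small lookup sketch: one common NFVI, different VNF layouts per slice type. All names here (C-SGN, the CP/UP split entries) are illustrative labels, not real descriptors.

```python
# Hypothetical sketch of per-use-case VNF layouts on a common NFVI, as
# described in the text: combined CP+UP for Massive IoT, a CUPS split with
# the user plane at the edge for uRLLC, and a "big pipe" layout for eMBB.

SLICE_BLUEPRINTS = {
    "miot":  {"vnfs": ["c-sgn"],               # lightweight, combined CP+UP
              "placement": "central"},
    "urllc": {"vnfs": ["cp-core", "up-edge"],  # CUPS split, UP at the edge
              "placement": "distributed"},
    "embb":  {"vnfs": ["cp-core", "up-core"],  # big pipe, bypassing vFW/NEs
              "placement": "central"},
}

def blueprint_for(slice_type):
    """Return the illustrative VNF layout for a slice type."""
    if slice_type not in SLICE_BLUEPRINTS:
        raise ValueError(f"unknown slice type: {slice_type}")
    return SLICE_BLUEPRINTS[slice_type]

print(blueprint_for("urllc")["placement"])  # distributed
```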
Figure  2 Understanding Slicing context in NFV/SDN
The figure above is a good reference for understanding Network Slicing; for those familiar with NFV/SDN, the key change is the addition of a Network Slice layer. This layer may be part of the NSD/VNFFG or exist as an independent layer. Since one service can comprise multiple slices, this layer is required even where a 1:1 relation exists; it is an abstraction layer that offers flexibility.
Services ⇒ a business service, something that will be offered to the end customer
Network Slice Instance (NSI) ⇒ a collection of resources from the layers below that defines a slice
Network Slice Subnet Instance (NSSI) ⇒ consider it a group of related VNFs/PNFs
Both the NSI and NSSI are delivered as part of the NSD. A Network Slice Instance (NSI) may be composed of none, one or more Network Slice Subnet Instances (NSSIs), which may be shared with another NSI. Similarly, an NSSI is formed from a set of Network Functions, which can be either VNFs or PNFs.
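The composition rules above can be captured in a minimal data model: an NSI aggregates zero or more NSSIs (possibly shared between NSIs), and each NSSI groups network functions that may be VNFs or PNFs. Class and instance names below are illustrative.

```python
# Minimal illustrative model of NSI/NSSI composition as described in the text.
from dataclasses import dataclass, field

@dataclass
class NetworkFunction:
    name: str
    virtualized: bool  # True -> VNF, False -> PNF

@dataclass
class NSSI:
    name: str
    functions: list = field(default_factory=list)

@dataclass
class NSI:
    name: str
    subnets: list = field(default_factory=list)  # may be empty or shared

shared_ran = NSSI("ran-subnet", [NetworkFunction("gnb", virtualized=False)])
core = NSSI("core-subnet", [NetworkFunction("amf", True), NetworkFunction("upf", True)])

embb_slice = NSI("embb", [shared_ran, core])
miot_slice = NSI("miot", [shared_ran])  # the same NSSI is shared by two NSIs

print(len(embb_slice.subnets), len(miot_slice.subnets))  # 2 1
```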
Figure  3 Network Slice components
3. Network Slice enablement from 5G 
The real enablement of dynamic Network Slicing will come with 5G, ensuring dedicated services to each customer for their customized use case or segment. The complete advantage of network slicing can only be achieved when the whole Telco network is virtualized, including RAN and possibly the RF modules, because the UE will request a slice based on a specific ID that needs to be recognized by the RF and RAN to ensure the slice is delivered end to end; otherwise the whole concept is just a static overlay based on VPN/VRF as implemented today. This is the main reason all Tier-1 CSPs want to accelerate NFV/SDN use cases before 5G, to assure the promise of Network Slicing can be delivered end to end. 3GPP RAN3 is also investigating the implications of Network Slicing as 5G NR opens up to all access types, including Wi-Fi; it is not yet clear how a slice will be delivered in such convergence cases, but it was agreed in the plenary meeting that this function will be available for slicing. In 5G networks the UE keeps track of all slices associated with it by connecting to a unique AMF, and one UE can be associated with 8 unique slice offerings, which is more than enough per business requirements.
From the RAN perspective, the idea of Network Slicing requires that the Slice ID can be linked or transferred to the correct NF in the Core Network. In the initial phase this slicing can be static, but over the long term the scenario from the UE to the DN must be automatic and dynamic, using NFVO closed-loop control.
4. End to End Slice Management in 5G 
As we are well aware, Network Slicing has two key characteristics: selling a common platform to end users and tenants as NaaS (Network as a Service), and monitoring/managing it. From the end-user perspective, the Slice ID consists of a Slice Type and a Slice Differentiator that further qualifies the use case within a slice type (eMBB, IoT, uRLLC). For some unique enterprise use cases it is also possible to provision non-standard slice values. These values are recognized by the 5G radio network and carried by the UE during service use to assure that dedicated resources are delivered end to end. To manage the slice end to end, each Telco network element starting from the RAN must interact with a new logical entity named the NSSF (Network Slice Selection Function) to assure that requests are mapped to the correct slice and that consistency is delivered end to end. Below you will find how the network view of the slice appears in NFV; as we can see, Os-Ma (SOL005) is the key to integrating slice management functions into NFV.
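The Slice ID (S-NSSAI) described above is an 8-bit Slice/Service Type (SST) optionally followed by a 24-bit Slice Differentiator (SD). The sketch below packs and unpacks that 32-bit form; see 3GPP TS 23.003 for the normative encoding.

```python
# Pack/unpack an S-NSSAI: SST is 8 bits, the optional SD is 24 bits.
# Helper names are illustrative, not a standard API.

def encode_s_nssai(sst, sd=None):
    if not 0 <= sst <= 0xFF:
        raise ValueError("SST is 8 bits")
    if sd is None:
        return sst  # SST-only S-NSSAI
    if not 0 <= sd <= 0xFFFFFF:
        raise ValueError("SD is 24 bits")
    return (sst << 24) | sd

def decode_s_nssai(value):
    if value <= 0xFF:
        return value, None  # SST-only form
    return value >> 24, value & 0xFFFFFF

print(hex(encode_s_nssai(2, 0x00ABCD)))  # 0x200abcd  (SST=2 i.e. URLLC)
print(decode_s_nssai(0x0200ABCD))        # (2, 43981)
```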
Figure 4 E2E Slice management in 5G
From a resource management viewpoint, an NSI can be mapped to an instance of a simple or composite NS, or to a concatenation of such NS instances. Different NSIs can use instances of the same type of NS (i.e. instantiated from the same NSD) with the same or different deployment flavours. 3GPP SA5 is also considering exposing slice management to third parties via REST APIs, which means an operator can become virtual: slices can be provisioned, managed and optimized by a third party. This model is key to enabling industry verticals in this domain.
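The third-party exposure idea can be illustrated as follows: a tenant composes a slice provisioning request and POSTs it to an operator endpoint. The URL, payload shape and field names below are assumptions for illustration, not taken from any 3GPP or ETSI schema.

```python
# Hedged sketch of third-party slice provisioning over a REST API.
# Field names and the endpoint are hypothetical.
import json

def build_slice_request(tenant, slice_type, sla):
    """Compose an illustrative slice provisioning payload for a tenant."""
    return {
        "tenantId": tenant,
        "sliceProfile": {"type": slice_type, **sla},
    }

req = build_slice_request("factory-42", "urllc",
                          {"latencyMs": 5, "reliability": "99.999%"})
body = json.dumps(req)

# In a real integration the payload would be sent with something like:
#   requests.post("https://operator.example/slice-mgmt/v1/slices",
#                 data=body, headers={"Content-Type": "application/json"})
print(json.loads(body)["sliceProfile"]["latencyMs"])  # 5
```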
In addition, the Transport Network, being a multi-service network, must support slicing between 5G and non-5G network services. Since all services are placed in VNs, it is mandatory to isolate traffic between different VNs. Further, it is expected that the Slice Manager will request the configuration of Managed Network Slice Subnet Instances (MNSSIs) to support the different 5G services (e.g. uRLLC, eMBB, etc.); an MNSSI will be supported by a VN. In the fronthaul network only one MNSSI is required, since all services are carried between the RRU and DU in a common eCPRI encapsulation.
Figure 5 Slice Diversity in 5G
5. What is the Optimum way for NW slicing 
Many SDOs have recently been working on network slicing, like NGMN, ETSI NFV, 3GPP, etc., obviously because there is a lot of traction and appeal in creating a new business case to sell new products and solutions to verticals and industry alike. The industry witnessed, in the pre- and post-Y2K eras, the shift from voice/SMS to the MBB/Internet era in which CSPs act as pipes only; the new 5G direction widens the definition of broadband to include all verticals. Obviously, to sell such a dream, network slicing needs to deliver a dedicated slice with E2E management and fulfillment by the tenant itself. This is the true model of SaaS, something Facebook, Dropbox and Google have been doing so successfully.
Figure 6 Network Slicing Logical view
However, this is just one side of the picture; the business case of 5G combined with the business model requirements makes it worthwhile to consider possible definitions of a new standard within the ETSI NFV architecture itself.
Figure 7 Network Slicing Model in Hybrid Networks
However, the PNF would be managed outside of MANO, as would sub-network parts composed of connected PNFs. This would require the Network Slice Lifecycle Management to also interface with a non-virtualized lifecycle management and operations environment, as shown in Figure 4, with an open Nsl-PN interface, where PN stands for "Physical Network".
6. Key Findings from ETSI GR NFV-EVE 012 standard on Network Slicing
Each tenant manages the slices operative in its administrative domain by means of its NFVO, logically placed in the tenant domain. Tenants rely on their NFVOs to perform resource scheduling in the tenant domain; as these resources may be provided by different infrastructure providers, the NFVO may need to orchestrate resources across different administrative domains in the infrastructure. Slicing requires the partitioning and assignment of a set of resources that can be used in an isolated, disjunctive or shared manner; a set of such dedicated resources can be called a slice instance. Once a network is deployed with a given set of resources, defining a new network slice is primarily a matter of configuring a new set of policies: access control, monitoring/SLA rules, usage/charging consolidation rules and possibly a new management/orchestration entity. In addition, the ability to differentiate network slices by availability and reliability, and the ability for the network operator to define a priority for a network slice in case of scarce resources (e.g. disaster recovery), requires strong coordination between the NFV and SDN domains for slice management.
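As the text notes, on an already-deployed resource pool a new slice is mostly a new bundle of policies. A minimal illustrative model (all field names are assumptions) makes the priority-in-scarcity idea concrete:

```python
# Illustrative "slice as policy bundle" model; field names are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SlicePolicy:
    access_control: str  # e.g. which tenants/users may attach
    sla: dict            # monitoring/SLA rules
    charging: str        # usage/charging consolidation rule
    priority: int        # lower number = higher priority under scarcity

disaster_recovery = SlicePolicy("emergency-services", {"availability": "99.999%"},
                                "flat-rate", priority=0)
best_effort = SlicePolicy("public", {"availability": "99.9%"}, "per-GB", priority=9)

# In a scarce-resource situation (e.g. disaster recovery) the operator-defined
# priority decides which slice keeps its resources.
winner = min([best_effort, disaster_recovery], key=lambda p: p.priority)
print(winner.access_control)  # emergency-services
```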
In point 2 I explained what it means to deploy a slice in a pure NFV environment, so let us list below what a network slice means as it applies to SDN:
A network slice in the NFV context may span multiple sites, a compelling idea especially when VNFs are split across different NFVI PoPs.
Within the NFV/SDN context, security is of prime importance for slice management, because a slice requires a graph-like view of interconnected VNFs. Ideally this would mean a separate VNF, and possibly PNF, per slice, which is not optimal; hence it is assumed that slice management and the NFVO are capable of ensuring separation of data flows for each slice. To meet this requirement, VNF redesign may be necessary to incorporate an HMEE (Hardware-Mediated Execution Enclave), as discussed in ETSI NFV SEC 009.
Figure 8 What are the Slice End Points
Apart from the above, NSM (Network Slice Management) is very important, as a slice involves NFV, SDN, Cloud and possibly PNFs too, and the ability of Os-Ma (SOL005) as well as the NFVO to manage a hybrid environment is key.
In a nutshell, network slicing should enable proper network control and logical separation in terms of dedicated resources, operations isolation, feature isolation, reserved radio resources, separate policy control, SLA control, security and service reliability control. This is a unique differentiator of NFV/SDN/5G networks and opens new possibilities for operators.
In this paper the author has tried to explain new possibilities and an evolution path for network slicing. It is well known that before 5G SA standardization is complete, slicing, at least dynamic slicing, will not be available end to end, and the NSM framework is not locked either; however, certain features and functions of slicing can already be exploited in today's NFV/SDN networks. It is very clear that operators need to start now, implementing static slicing use cases with the NFV/SDN and MANO functions available today, and upgrade as 5G becomes mainstream. I hope the audience has enjoyed this content, that it helps them understand this important concept in detail, and that it will allow architects to best plan their 2020 networks.
  • 3GPP TR 22.891
  • 3GPP TS 23.501
  • 3GPP TR 28.801
  • 5G Americas, "Network Slicing" (11.21, Final)
  • NGMN, "Network Slicing" v1.0 (160113)
  • ETSI GR NFV-EVE 012 v3.1.1, Network Slicing report
  • ITU-T GSTR-TN5G, "Transport network support of IMT-2020/5G"

MEC as enabler of Telco’s Digital Transformation ~An ETSI ISG Perspective

During the last decade, Telcos globally have seen an outburst of data traffic, almost quadrupling every year; during the same period, however, Telcos were not able to monetize it, as the value moved further and further up the stack. This left Telcos acting as a pipeline with little visibility of the services and offerings from OTT players. To make the Telco business lucrative, just like the software companies, back in 2012 the telecom industry, led by many Tier-1 operators, started a move toward virtualization/NFV to convert the legacy business into a SaaS-like company. The initial response of the whole industry was to jump on the bandwagon and start NFV by building centralized Telco data centres and evolving Core networks to the Cloud; however, five years on, operators still find it hard to solve the issues, and it seems the idea of central service hosting in the Cloud has not delivered the desired business results to date.

Why MEC is so important:

MEC, or Multi-access Edge Computing (originally Mobile Edge Computing), is a dedicated standardization effort in ETSI. ETSI launched the MEC ISG in Sep 2014 to find another way to transform telecom operators into software companies: by capitalizing on their close proximity to users and services, and by opening their networks to public cloud hosting and third-party application development to meet enterprise and industry-vertical requirements.

Historically, Telcos hosted all applications centrally in their networks, which makes it difficult to launch applications/services targeting latency, high bandwidth, real time and proximity awareness. MEC addresses these challenges by offloading edge services from the network core.

In this paper I want to share our work experience at ETSI on the MEC architecture and how it directly addresses the pain points telecom operators face in taking services to the network edge.

How to Plan Edge Data Centres:

The answer varies, as it should mainly align with the PE-AGG or access transport facilities. My understanding is that every aggregation site becomes an Edge DC as an operator starts to deploy MEC applications; however, as more complex use cases are deployed, like remote factories or video analytics, the Edge must move closer to the users. Our combined understanding in ETSI is that such cases will be required more in enterprise than for normal users, so for complex cases the Edge DC can be an enterprise on-campus facility.

So, to answer the question of how to deploy an Edge DC: focus on the Edge DC (PE-AGG) now, and on the Access CO or enterprise premises later.

How to develop Cost Effective Solutions for MEC

I think this point is complex, primarily because legacy vendors are not open to optimizing cost metrics for an agile MEC site. As an example, currently an operator has two choices. The first is a silo solution where all functions, including a complete OpenStack, are bundled into one rack of about 2~10 servers, but the cost of deploying and later operating this silo approach makes it difficult to adopt widely.

I think if you want to build a scalable solution, it is best to base it on Akraino (www.akraino.org) and StarlingX. I think our direction is clear: if the MEC site is an Edge DC, MEC can be deployed on multiple racks of around 4~100 servers; if the MEC site is an Access CO, it should be 2~10 servers; and for the enterprise case it can be 1~2 servers, a case most likely found in access networks.
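The server-count guidance above can be expressed as a small helper. The tiers and ranges mirror the text (Edge DC ~4-100 servers, Access CO ~2-10, enterprise 1-2); the function name and structure are illustrative.

```python
# Suggested MEC site sizing, per the guidance in the text.
MEC_SITE_SIZING = {
    "edge_dc":    (4, 100),  # aggregation-level Edge DC
    "access_co":  (2, 10),   # Access Central Office
    "enterprise": (1, 2),    # on-premises enterprise site
}

def server_range(site_kind):
    """Return the (min, max) server count suggested for a MEC site type."""
    if site_kind not in MEC_SITE_SIZING:
        raise ValueError(f"unknown MEC site type: {site_kind}")
    return MEC_SITE_SIZING[site_kind]

print(server_range("access_co"))  # (2, 10)
```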

Again, my understanding is that the OpenStack remote-compute approach cannot meet Edge MEC requirements, especially due to limitations of the RabbitMQ bus and agent scaling; hence StarlingX seems like the best solution to evolve with. A typical controller/compute architecture for StarlingX is shown below.

How to solve integration and automation needs for MEC

What we can expect for MEC is huge distribution, around 1000 sites for a typical medium-scale country, requiring integrated management under one umbrella and ZOOM-based operations that are simple, scalable and automated, with fast integration, innovation and optimized cost. All these challenges are addressed, or look likely to be tackled, by the Akraino project. So what is Akraino? It primarily addresses the following points:

  • Developing APIs for Edge VNFs
  • Middleware SDKs to open the cloud to third parties
  • Cross-platform testing
  • A full CI/CD pipeline for MEC

Let us try to understand the Akraino high-level architecture, as seen below.

I think the current phase of MEC must focus on deployment and integration and less on automation, so StarlingX is still important to watch; however, with ONAP-based offerings coming to market in early 2020, Akraino will become an important focus. Akraino, Calico for simple SDN integration, and a testing framework are the three targets for smooth edge deployment and integration.

How to define an Independent Testing Framework

I think this point is complex, primarily because no compromise has yet been reached between the definitions of "standard" and "open". In the NFV era the industry already faced a lot of issues, and in the end it is not possible to use third-party tools for E2E testing in an NFV environment. In other cases we know of situations where a mature commercial E2E testing company fails to offer this unless the VNF or VIM/NFVO layer is from them or their preferred partner. E2E testing for a wide variety of MEC applications is an even more complex topic, because the application diversity and expected ecosystem will be far richer. This is a hot topic and is under discussion in MEC-0025, to be released in March 2019.

In a nutshell, apart from the UE interface, which covers the FR cases of MEC tests, the most relevant test interfaces are Mp1, Mp2, Mm5 and Mm6, as marked below.

The biggest question for MEC E2E testing will be API standardization, especially for interoperability cases (which is almost there) and package standardization (which is still debatable), as most of the Akraino agile-integration work requires exposure of these two platform capabilities for E2E testing. The second main concern is commercial offerings: believe it or not, since testing and its conformance require a thorough understanding of all components, I have not seen a strong desire from vendors to build such agnostic tools, leaving Telcos with open-source methods. This area is still debatable, and we are continuously working with ETSI and our peers to materialize it with the introduction of MEC in our network.

How MEC can capitalize on Power of Virtualization and Orchestration

To answer this common query, I recommend the audience refer to GR MEC 017, which discusses the introduction of MEC in an NFV environment. Since MEC can run on an NFV platform or on an independent platform like a third-party cloud, it is key to develop a solution agnostic of the platform. The right strategy is that MEC apps, MEC platforms and MEC platform managers are most effective when deployed as VNFs on top of a common NFVI. However, it emerged that a plain NFVO does not fulfill all the requirements of MEC, and thus it must work in collaboration with a MEC Application Orchestrator (MEAO) to accomplish the MEC-specific tasks. For the tasks that are the responsibility of the NFVO, we do have a common NFVO for MEC and NFV through the Mv1 interface. I think that, per the ETSI architecture, early adopters will connect Mv1 not to the MEAO but to an E2E Service Orchestrator, primarily because it makes more sense for digital OSS transformation as well as for deploying a single layer of cross-domain orchestration in the network.

Capitalizing on Public Cloud strength to offer MEC  applications

As per our understanding, the MEC platform supports communication between apps in the edge cloud and the remote cloud, which includes the public cloud. It also supports relocating apps from the public cloud to the edge, provided they comply with the descriptors and packaging required and defined by the MEC system. I think this is an exciting era for Telcos, since we can host edge applications in Edge or Access CO DCs while using the now-mature applications on the public cloud: SAP, financials, geo-location and, I would guess, every service can come to Telcos within 3~5 years.

To summarize, in this paper I have tried to address the top 5~6 issues that will help architects smoothly evolve the network to MEC and, finally, expose the network to third-party applications and developers to extract the best possible business benefit through the edge.

Having said that, the author believes the MEC industry is still in its infancy, at least from a commercial-offering point of view. How to build the right use case is a big topic that requires deep analysis of the macro situation and technology alignment.

The issues related to lawful-interception enablement of MEC applications are also not yet mature; we have seen stern LI specifications, and normally all new applications and technologies are expected to abide by them. Obviously this approach needs further study, as robust and agile MEC applications need a unique, simple and nimble way to meet regulatory requirements.

Again, security is a big concern in MEC; in the same way it forced Telcos to slow their evolution to the Cloud, it will try to slow down MEC. On top of that, MEC by its inherent design raises greater security concerns, considering:

a) MEC applications require cooperation between private and public clouds, which requires more detailed analysis of both architectures (in terms of network, VNFs, etc.)

b) API and endpoint security, to develop a secure and robust solution.

c) Since the definition of MEC involves the Access CO or radio site, MEC needs to address security concerns from IoT, sensors and a range of networks. I shall try to cover these missing topics in future papers.





6. ETSI GS NFV-TST 002: "Network Functions Virtualisation (NFV); Testing Methodology; Report on NFV Interoperability Testing Methodology".

7. A special thanks to my colleagues in the ETSI ISG: Alex Reznik (HPE), Sami Kekki (Huawei), Fabio Giust (Athonet) and Michele Carignani (ETSI).

Solving the 5G Core Network System Architecture Challenges for smooth Evolution

Huawei MWC2017 White Paper for SOC Core

Sheikh is a Huawei Middle East Senior Architect for NFV, SDN and Telco Cloud, with a focus on ICT service delivery through Telco DevOps. He is focused on defining the road to the future 5G Core network, and is always interested in the disruptive technologies driving industry transformation.

From the early 90s the telecom industry has seen a steady progression through 1G, 2G, 3G and 4G, and now 5G seems just around the corner. Proponents of 5G claim there is huge potential in the new technology and that it will be the bread and butter of the telecom market for at least 10 years. Considering the market worth of 5G along with the opportunities it will create, it is reasonable to get the new network ready to milk the market; however, the many interfaces and complex use cases mean a lot of complexity in integration and value creation for this next-generation solution.

In fact, the great opportunity also comes with big challenges: the inclusion of many verticals in the 5G network will bring a host of challenges, especially with regard to integration with existing technologies and networks, and smooth migration. There is also the question of how to build the operations model for such a converged Core, which we in Huawei call the SOC (Service-Oriented Core), shared among many verticals.

Actually, the complexity and capacity volumes that 5G networks are about to offer create challenges that require consideration across many domains:

  • Eradicating dependency on any access network or UE while still assuring 100% interworking and backward compatibility
  • NFV to build capacity
  • SDN to program capacity efficiently
  • API exposure to integrate many third parties
  • A unified pane of operations to make all components work as one system
  • An open architecture based on software-industry microservices and SOA

To meet these challenges we need to understand a lot of standards and customize them per customer requirements; this is the key task for the system integrator.

The requirements of 5G Core networks will be heavily customized in two ways. First, they have to offer services to many kinds of end users, like industry verticals, government, healthcare, Telcos and media/entertainment; second, within each vertical they need to offer a customized solution for every customer segment. Hence the traditional fixed use cases cannot apply, as the Core network service catalog has to be customized and optimized for each segment and customer.

With speeds of more than 10 Gbps ready for end users and devices, it is clear the brain of the 5G network must be based on hyper-scaled NFV clouds powered by the agility of SDN. The enterprise space may also converge with the carrier space to deliver cross-geography provisioning capability through SD-WANs.

How CSPs will deliver the value of well-defined, standardized systems in this complex ecosystem will surely require a lot of work on tools, management systems, automation and efficiency to meet the new network requirements.

With the overall system becoming more open, the security risk will be greater. In particular, with billions of devices connecting and authorizing through the DCN and third-party platforms, proactive measures will be needed to find security breaches, such as building in-house security systems around red/blue teams; this will require security integration and testing in ways far exceeding simple LDAP or IAM measures.

To imagine the change in network architecture it is very important to understand IT and enterprise architecture; after all, the two technologies NFV and SDN are inherited from the IT world. As IT consultants we all know that the first tenet of IT applications is that they are totally separated from the hardware: the application itself can decide HA, healing, data availability and restore, with almost no dependency on the underlying layer. In Telcos, however, most of the work depends on close coordination between the two layers. Hence the VNF design itself should be adjusted around microservices, based on VNFC (VNF Component) integration through APIs; this is really important for service scaling.

As is the case in the Telco industry today, all SLA compliance and KPI settlement is based on the NF functional architecture, with less focus on and control of each user. In 5G, however, with so many users from different segments using the service, the network really has to support NaaS (Network as a Service) and provide an accurately tuned slice for the customer, tied to the offered SLA/KPI. The network slice is a concept that extends beyond tenant provisioning and control, because a slice has to span from radio resources to IP to tenants to VNFs. In other words, the slice needs to be well contained and defined, and every node and function must monitor, manage and report slice usage.

What is an API? Recently we have seen huge growth in API integrators and the surrounding ecosystem; the market itself is worth billions, and we really need to see the advantages APIs will bring to 5G: support for any third-party integration, third-party programming and, finally, service modelling and control. This is a perfect new world.

What we know is that the cloud stack is largely written in Python, VNFs support YANG modelling with YAML/XML for onboarding, and above all REST APIs are the standard way to retrieve information from each layer. This model of programming the functions will enable a DevOps model for the 5G network and a software style of improving service and end-user experience. For example, a Python program can drive the GSO, NFVO and SDNO to compose a service and deliver a seamless experience across data centers; it can also monitor and report the SLA for a customer offering and integrate with the OSS.
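The idea of driving GSO/NFVO/SDNO layers from Python can be sketched as follows. The code only builds the ordered call plan; the layer names come from the text, while the endpoints and payloads are hypothetical, and the actual HTTP calls (e.g. with the `requests` library against SOL005-style APIs) are left as comments.

```python
# Illustrative cross-layer service composition plan: GSO decomposes the
# service, the NFVO instantiates it per data center, the SDNO stitches
# inter-DC connectivity, and the result is reported to the OSS.
# All endpoints are hypothetical.

def compose_service_plan(service_name, data_centers):
    plan = []
    # 1. Global service orchestrator decomposes the end-to-end service
    plan.append(("GSO", f"/services/{service_name}/decompose"))
    # 2. NFVO instantiates the network service in each data center
    for dc in data_centers:
        plan.append(("NFVO", f"/ns_instances?dc={dc}"))
    # 3. SDN orchestrator stitches inter-DC connectivity
    plan.append(("SDNO", f"/connectivity/{service_name}"))
    # 4. Report SLA back to the OSS
    plan.append(("OSS", f"/sla/{service_name}"))
    return plan

plan = compose_service_plan("vEPC", ["dc-east", "dc-west"])
for layer, endpoint in plan:
    # requests.post(base_url[layer] + endpoint, json=...)  # real call, omitted
    print(layer, endpoint)
```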

Multi-tenancy is a key concept in the cloud world, whereby many applications like VNFs can be hosted on the same cloud. The key concepts of VPC, Cell, Region, Domain and AZ allow every VNF to be assured of its share and separate control, including a Horizon dashboard and low-level parameter tweaking such as connectivity with Neutron provider networks. It means the future 5G network is all about a number of data centers; co-deployment is a key trend in 5G and the start of the journey toward hyper-scaling of clouds.

A recent report from Light Reading suggested that the industry will need a layered DC architecture to offer 5G, which means a central DC cannot meet the service requirements. This is where the AWS, Google and Facebook data centers have not delivered, opening a big opportunity for CT vendors to design the layers accurately to meet the service requirements. DC design that meets the uRLLC and eMBB latency and delay requirements is the key enabler of a future-proof 5G network.

Security measures need to be emphasized at each layer of the network, including NFVI, cloud, MANO, application, and within the VNFs themselves. The most important point is that authentication and its interfaces are decoupled from the other core-network functions. Various authentication methods will be supported in a plug-and-play, access-agnostic manner, and the authentication capabilities will be opened to applications as a new revenue stream.
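The decoupling point can be illustrated with a token scheme: the authentication function issues a signed token, and any layer can verify it locally without calling back into the core. This is only a sketch with Python's standard `hmac` module; the key, claims, and token format are illustrative assumptions, not a 5G-AKA or OAuth implementation.

```python
# Minimal sketch of decoupled, token-based authentication.
import hmac, hashlib, base64, json

SECRET = b"demo-shared-key"  # in practice: properly managed key material

def issue_token(claims):
    """Auth function: serialize claims and sign them with an HMAC."""
    payload = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Any layer (NFVI, MANO, app) can verify without contacting the core."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token or wrong key
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token({"sub": "app-42", "slice": "embb-default"})
print(verify_token(token)["sub"])       # app-42
print(verify_token(token + "x"))        # None: tampering is detected
```

The same split is what makes the capability sellable to third-party apps: they consume the open verification interface while the issuing function stays inside the operator's trust domain.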

Though the 5G network seems exciting, it must be clear that it will have to coexist with the existing legacy network for at least 3 to 5 years in mature markets. 5G therefore has to support a number of networking modes to ensure smooth integration with 3G/4G networks, including offloading options. The Release 13/14 DECOR/eDECOR interworking will make such solutions possible. It also seems clear that in the future many UE/device functions must move to the network side to ensure a smooth evolution of the whole ecosystem.

It is assumed that in the first phase only the 5G radio, also called the gNB, will be ready, so the first phase must be its integration with the EPC/4G core. This will also help resolve many radio-related issues before evolving to the 5G Core.

Finally, the 5G network will require considerable organizational adjustment, both in skills and in process. The new technology will require accurate modelling of the transformation and automation, and to ride this wave CSPs need to acquire substantial IT skills for the future programmable network. A close friend and DevOps guru once said to me that it is not the code itself but why and how the code is written that will decide the future of organizational roles in CSPs. The Scrum master will surely be key to gluing the transformation from the as-is situation to the to-be situation, because with so many solutions evolving there is as yet no uniform way to apply a DevOps or IaC cycle to the NFV/SDN network.

I hope you have found this paper useful. I have tried to craft it solely to demystify the technologies and complexities that surround the 5G Core network architecture. It is believed that the NFV, SDN, transformation, DevOps, and telco cloud ambitions must be achieved before the 5G Core arrives in the market, and the industry widely expects open source to be a key contributor to achieving that goal. As a technologist, it is an exciting era in which to watch how these complex, interrelated technologies will guide the networks of 2020 and beyond.

As we continue this journey, we can expect to find many new challenges along the road; I will try to keep my audience well informed about what we face and how to find a collaboration model to solve it. Best of luck!


The key 5G Core network components are:

  1. AMF: the Access and Mobility Management Function, which also includes the slice selection functionality
  2. SMF: the Session Management Function
  3. UPF: the User Plane Function, handling user/data plane traffic fully separated from the control plane
  4. UDM: Unified Data Management, a part of SDM
  5. NRF: the NF repository, queried by each NF through an API and a key requirement for building an open and scalable 5G Core network
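The NRF's role in keeping the core open and scalable can be sketched as a registry that NFs register with and query by type. The real interface is REST-based per the 3GPP service-based architecture; the in-memory class, method names, and addresses below are simplified illustrations only.

```python
# A minimal in-memory sketch of the NRF idea: NFs register a profile and
# discover peers by type, instead of being statically configured.

class NRF:
    def __init__(self):
        self.instances = {}  # nf_instance_id -> profile

    def register(self, nf_instance_id, nf_type, address):
        self.instances[nf_instance_id] = {
            "nfType": nf_type, "address": address, "status": "REGISTERED",
        }

    def discover(self, nf_type):
        """Return addresses of all registered NFs of the requested type."""
        return [p["address"] for p in self.instances.values()
                if p["nfType"] == nf_type and p["status"] == "REGISTERED"]

nrf = NRF()
nrf.register("amf-1", "AMF", "http://amf-1.core:8080")
nrf.register("smf-1", "SMF", "http://smf-1.core:8080")
nrf.register("smf-2", "SMF", "http://smf-2.core:8080")

# An AMF setting up a session asks the NRF which SMFs are available.
print(nrf.discover("SMF"))
```

Because new NF instances become discoverable the moment they register, the core can scale out by simply adding instances, which is exactly the "open and scalable" property the NRF exists to provide.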

References

3GPP TR 23.501

Light Reading

KT Telecom white paper

3GPP TS 23.711

ETSI NFV Phase 2 specifications

OPNFV Danube: the Telco DevOps use case

Open-O Mercury: use cases of VoLTE and Core Network system integration

Linux Foundation seminar on 4th May on running a successful open source project