Early Lessons from Open RAN Deployment in Brownfield: A Must-Have Delivery Model to Unlock 5G Scale, Complexity and Economy in 2021+

With most Tier-1 operators getting the ball rolling on early 5G Standalone experiences based on 3GPP Release-16, which brings new and unique 5G capabilities around URLLC and massive IoT along with improved QoS for broadband, the whole industry is looking forward to accelerating 5G adoption in 2021.

This is an ideal time for the industry to find new ways to improve human life and the economy in the post-COVID world. However, the biggest problem with the 5G experience so far has been the lack of a delivery model that can offer both cost and scale advantages. With hyperscalers eyeing the billion-dollar telco market, it is necessary for all vendors, and telcos themselves, to analyze and find ways towards new business models that are based on:

  • Coherence
  • Data-driven operations
  • Service quality

By coherence I mean: when we go to a 5G radio site, what components will be residing there?

  • 5G radio site components
  • Edge Applications
  • Network connectivity

By data I mean that, despite delivering thousands of 5G sites, we still cannot offer DaaS (Data as a Service) to verticals; the only visibility we have is on the horizontal interfaces, in the 3GPP way.

The third and most important piece is the RF and RAN service. Talk to any RAN engineer and he is not interested in either coherence or data unless we can offer him a RAN service that is at least as good as, or better than, legacy.

This makes the story of Open RAN very exciting to analyze, and this is the story of my experience leading such projects over the last years, both in my company and in the industry. It is my humble opinion that Open RAN and other such innovative solutions must not be analyzed only through technology but through an end-to-end view, where some use cases specifically require such solutions to be successful.

Why Open RAN

For me, Open RAN is not about Cloud but about finding a new and disruptive delivery model to offer RaaS (RAN as a Service). Have you ever imagined what would happen if a hyperscaler like AWS or Azure acquired a couple of RAN companies that can build cloud-native RAN applications? The telco could order the RAN online, and it could be spun up in the data center, or on an edge device, in a PnP (plug-and-play) manner.

If you think I am exaggerating, there are already discussions and PoCs happening around this. So Open RAN is not about Cloud but about doing something similar within the telco industry, in an open and carrier-grade fashion.

This is where Open RAN is gaining momentum: bringing the power of open, data-driven and cloud-based disaggregated solutions to the RAN site. The future of telco is a well-designed software stack that extends all the way from the core to the last-mile site of the network. Crucially, it also allows for placing more compute, network and storage closer to the source of the unrelenting volume of data: devices, applications and end-users.

There is another aspect which is often overlooked: transport cost. As per field trial results, use of the Open RAN fronthaul split 7-2 (eCPRI) increased fronthaul capacity utilization by at least 30-40%, primarily because there are a lot of proprietary overheads in the CPRI world.

"For a CXO this means at least 30-40% direct savings on metro and transport costs."
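As a sanity check, the saving can be reproduced with a back-of-envelope calculation: CPRI (split 8) carries time-domain I/Q for every antenna port at all times, while split 7-2 carries frequency-domain I/Q only for the occupied PRBs. The sketch below uses illustrative figures (4 ports, a 100 MHz NR carrier, 9-bit compressed samples), not our trial data:

```python
# Back-of-envelope fronthaul comparison: CPRI (split 8) vs eCPRI split 7-2.
# All figures are illustrative assumptions, not field-trial measurements.

def cpri_rate_gbps(antenna_ports, sample_rate_msps,
                   bits_per_sample=15, coding_overhead=66 / 64):
    """CPRI carries time-domain I/Q per antenna port, regardless of cell load."""
    return (antenna_ports * sample_rate_msps * 1e6
            * 2 * bits_per_sample * coding_overhead / 1e9)

def ecpri_72_rate_gbps(antenna_ports, used_prbs,
                       bits_per_sample=9, symbols_per_sec=28_000):
    """Split 7-2 carries frequency-domain I/Q only for occupied PRBs
    (12 subcarriers each); 28,000 symbols/s assumes 30 kHz SCS."""
    return (antenna_ports * used_prbs * 12 * symbols_per_sec
            * 2 * bits_per_sample / 1e9)

# 4x4, 100 MHz NR carrier: 122.88 Msps time-domain, 273 PRBs frequency-domain.
cpri = cpri_rate_gbps(4, 122.88)
ecpri = ecpri_72_rate_gbps(4, 273)
print(f"CPRI:  {cpri:.1f} Gbps")                              # CPRI:  15.2 Gbps
print(f"eCPRI: {ecpri:.1f} Gbps ({(1 - ecpri / cpri) * 100:.0f}% less)")
```

With these assumed parameters the split 7-2 link needs roughly half the CPRI bandwidth; with higher compression the gap widens further, which is consistent with the 30-40% utilization gain quoted above.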

What is Open RAN

5G RAN has huge capacity requirements and, naturally, to scale this network, disaggregation and a layered network architecture are key. The Open RAN architecture can be depicted as follows.


The Radio Unit (RU) handles the digital front end (DFE) and parts of the PHY layer, as well as the digital beamforming functionality. It is almost the same architecture as the DBS (Distributed Base Station) architecture offered by many legacy vendors. Conceptually, the AAU (Active Antenna Unit) is considered together with the radio unit.


The Distributed Unit (DU) handles the real-time L1 and lower-L2 scheduling functions, mainly the MAC split. It is the real-time part of the BBU (Baseband Unit).


The Centralized Unit (CU) is responsible for the non-real-time, higher-L2 and L3 functions; it covers the non-real-time functions of the BBU, like resource pooling, optimization, etc.


The RAN Intelligent Controller (RIC) is the intelligence component of Open RAN; it collects data and offers insights and innovation through xApps. The near-real-time RIC cooperates with the non-real-time RIC, which is part of ONAP/MANO, to offer end-to-end SMO (Service Management and Orchestration) functions.

Interfaces of Open RAN

When it comes to understanding Open RAN, we need to understand both the interfaces defined by O-RAN and those defined by 3GPP. This is important, as it requires cross-SDO liaison work.

O-RAN interfaces

  • A1: the interface between the non-real-time RIC (in the SMO/orchestrator) and the near-real-time RIC
  • E2: the interface between the near-real-time RIC and the CU/DU
  • Open Fronthaul: the interface between the RU and the DU; the focus is mostly on the eCPRI-based split 7-2 to standardize it
  • O2: the interface between the NFVI/CISM (O-Cloud) and the orchestrator

3GPP interfaces

  • E1: the interface between the CU-CP (control plane) and the CU-UP (user plane)
  • F1: the interface between the CU and the DU
  • NG-c: the interface between the gNB CU-CP and the AMF in the 5G Core Network
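For quick reference, the interfaces above can be condensed into a small lookup table; this is a minimal sketch, with endpoint names following the O-RAN architecture terms used in this article:

```python
# Condensed reference of the O-RAN and 3GPP interfaces listed above.

ORAN_INTERFACES = {
    "A1":             ("Non-RT RIC (SMO)", "Near-RT RIC"),
    "E2":             ("Near-RT RIC", "O-CU / O-DU"),
    "Open Fronthaul": ("O-DU", "O-RU"),        # eCPRI-based split 7-2
    "O2":             ("SMO / Orchestrator", "O-Cloud (NFVI/CISM)"),
}

THREEGPP_INTERFACES = {
    "E1":   ("CU-CP", "CU-UP"),
    "F1":   ("CU", "DU"),
    "NG-c": ("gNB CU-CP", "AMF"),
}

def endpoints(name):
    """Return the two endpoints of a reference point, whichever SDO defines it."""
    table = {**ORAN_INTERFACES, **THREEGPP_INTERFACES}
    return table[name]

print(endpoints("E2"))  # ('Near-RT RIC', 'O-CU / O-DU')
```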

To solve all the interface and use case issues, the O-RAN Alliance is working in a number of streams:

  • WG1: Use Cases and Overall Architecture
  • WG2: The Non-real-time RAN Intelligent Controller and A1 Interface
  • WG3: The Near-real-time RIC and E2 Interface
  • WG4: The Open Fronthaul Interfaces
  • WG5: The Open F1/W1/E1/X2/Xn Interfaces
  • WG6: The Cloudification and Orchestration
  • WG7: The White-box Hardware Workgroup
  • WG8: Stack Reference Design
  • WG9: Open X-haul Transport.
  • Standard Development Focus Group (SDFG): Strategizes standardization effort. Coordinates and liaises with other standard organizations.
  • Test & Integration Focus Group (TIGF): Defines test and integration specifications across workgroups.
  • Open Source Focus Group (OSFG): Successfully established the O-RAN Software Community (O-RAN SC) to bring developers into the Open RAN ecosystem

Early Playground of Open RAN

The change in the world economy, impacted by geopolitical factors like the drive to replace Chinese vendors from networks (as in Australia, for national security reasons), has naturally shifted the momentum towards finding systems that are both less costly and high-performance. Naturally, such networks are among the first where Open RAN will be introduced. It is true that there are still some gaps in Open RAN performance, mainly in baseband processing and fronthaul, but there are use cases in which Open RAN has already proved successful, as shown below. The key point is that, despite the remaining issues, with some use cases ready it is important to introduce Open RAN now and to evolve it in a pragmatic way, ensuring it can coexist with traditional RAN solutions:

  • Private 5G Networks
  • Rural deployments, e.g. 8T8R FDD
  • In-building solutions
  • Macro sites

TIP Open RAN Project Progress

TIP, a.k.a. the Telecom Infra Project, is an open-source project looking after a number of disruptive solutions that can make 5G networks both cost-efficient and innovative. Below are some highlights of our progress in the community up to 2021.

A1: Built the Reference architecture for Multi vendor validations

Through the support of operators, vendors and partners, the community built a complete reference architecture to test and validate the complete stack end to end, including SI integration.

A2: Defined the complete RFX requirements for Open RAN

Worked to define the complete RFX requirements for Open RAN; they can be retrieved below:

TIP OpenRAN_OpenRAN Technical Requirements_FINAL

A3: Use cases of Open RAN success

In 2020 and the better part of 2021, the community worked tirelessly to couple Open RAN with some exciting use cases, to capitalize on the low-hanging fruits of Open RAN, as follows:

  1. “Context Based Dynamic Handover Management for V2X”
  2. “QoE Optimization”
  3. “Flight Path Based Dynamic UAV Resource Allocation”
  4. “Traffic Steering”
  5. “Massive MIMO Optimization”
  6. “Radio Resource Allocation for UAV Applications”
  7. “QoS Based Resource Optimization”


A4: Success through the RIC (RAN Intelligent Controller)

There are two-fold advantages of RIC introduction in Open RAN architectures: first, RAN automation for both management and KPI optimization; second, bringing new and disruptive use cases through xApps and data-driven operations, including:

  1. Smart troubleshooting of RAN
  2. RAN parameter optimization using AI
  3. Capacity prediction
  4. New use cases by exposing APIs to 3rd-party developers

A5: RAN configuration standardizations

Benchmarked a number of RAN configurations to be deployed in the field, including:

  1. Small cells, mainly for SME and in-building
  2. Low-capacity Macro (Rural)
  3. High-capacity Macro (Highways)
  4. RAN parameter optimization using AI

I highly encourage you to join TIP to learn more.


A6: Chip advancements brought prime time for Open RAN, with 64T64R trials in 2021

Customers rely on 5G massive multiple-input multiple-output (MIMO) to increase capacity and throughput. With Intel's newest Xeon processors, Intel Ethernet 800 series adapters and Intel vRAN dedicated accelerators, customers can double massive MIMO throughput in a similar power envelope for a best-in-class 3x100MHz 64T64R vRAN configuration.

Challenges and Solutions

In the last two years we have come far in innovating and experimenting with a lot of Open RAN solutions. The list of issues we have solved is huge, so let's focus only on today's top challenges and how we are solving them. Let's set a 2021 target: by the time the O-RAN Alliance freezes its coming releases, Dawn (June 2021) and the E release (December 2021), we as a community should be able to fix the top issues below. I apologize for leaving out some less complex topics, like how to deploy Open RAN, the focus groups, and the status of the key interface specifications, especially the momentum around 7-2. I fear I would run out of time, pages and energy, and test your patience with a bigger tome.

P#1: Ecosystem issues in Open RAN

Based on our field trial, we found that for 8T8R we can achieve around 40% cost reduction with Open RAN. With future transport architectures like Open ROADM, we can build the whole RAN network in an open manner and achieve great cost and efficiency advantages. However, when components come from different vendors, the non-technical factors needed to make the solution successful are really challenging, e.g.:

  • How to convince all teams, including the RAN guys, that what we are doing is right and that we should get rid of the black boxes
  • How to get partners who are historically competitors to work together as one team
  • How to build software integration teams

P#2: Radio Issues in Open RAN

Site swap scenarios in most brownfield environments require efficient antennas and radios that:

  • Support 2G + 3G + 4G + 5G
  • Support massive MIMO and beamforming
  • Are low cost

It is a fact that until now most of the effort has been on the DU/CU part, but now we need more attention on solving the radio and antenna issues.

The lesson we learned in 2020-2021 is that not everything can be solved by software, as ultimately every piece of software needs to run on hardware. An increased focus on radios, antennas and COTS hardware is a must to accelerate any software innovation.

P#3: Improve RAN KPI

No disruptive solution like Open RAN is acceptable unless it can deliver performance comparable to legacy systems in terms of coverage, speed and KPIs.

To make Open RAN mainstream, drive testing (DT), tools and RAN benchmarking all need to be addressed, not only the cloud and automation part.

P#4: SI and certification process

We already witnessed a number of SI forms and capabilities during NFV PSI; however, a disruptive solution like Open RAN needs a different approach, and the SI should possess the following:

  • Complete vertical stack validation

It is not just the cloud or the hardware but the end-to-end working solution that is required.

  • Stack should include Radios and Hardware

Certification should cover RF/radio and hardware validation.

  • Software capability and automation

To make it successful, it is very important that the SI is rich in both tools and capabilities for automation, data and AI.

Source: mavenir

P#5: Impacts of Telco PaaS and ONAP

To make Open RAN a real success, it is very important to consider it while building the capabilities and specifications of other reference architectures, most notably Telco PaaS and ONAP. If I went on to explain this part, I fear the paper would become too long and might skew towards something that is not necessarily a RAN issue.

However, just to summarize: the ONAP community has been working closely with Open RAN to bring the reference architecture into an upcoming release; some of the agreed WIs are shown below.


Finally, for Telco PaaS, we are also working to include telemetry, packaging and test requirements for the Open RAN stack. Those interested in these details can kindly check my earlier paper below.

Open RAN: a necessity for the 5G era

Early experience with 5G proved that it is about scale and agility, with cost factors driving operators towards an efficient delivery model that is agile, innovative and able to unleash the true potential of the network through data and AI.

In addition, as time passes and more and more use cases require integration of the RAN with 3rd-party xApps, there will be a definite need to evolve to an architecture like Open RAN that will not only support coexistence and integration with legacy systems but also support fast innovation and flexibility over time. With early successful deployments of Open RAN already happening in APAC and the US, it is important for the whole industry to catch the momentum.

Proponents of closed RAN systems often say that an open system can never match the performance of monolithic, ASIC-based systems; similarly, they claim the SI and integration challenges of stitching those systems together far outweigh any other advantage.

The recent advances in silicon, like the Intel FlexRAN architecture with ready libraries like OpenNESS and OpenVINO, have really made it possible to achieve almost the same performance as monolithic RAN systems.

Above all, the latest launch of Intel's 3rd-generation Xeon processors is proving to be a game changer in bringing COTS to last-mile sites.

Moreover, the involvement of SIs in the ecosystem means the industry is approaching a phase where all integration can be done through open APIs, making the true vision of a Level-4 autonomous network possible in no time.

Abbreviations

DU: Distributed Unit
CU: Centralized Unit
CP & UP: Control Plane and User Plane
A&AI: Active and Available Inventory
CLAMP: Closed Loop Automation Management Platform
NFVI: Network Functions Virtualization Infrastructure
SDN: Software-Defined Networking
L2TP: Layer 2 Tunneling Protocol
SBI: Service-Based Interface
NRF: Network Repository Function
NEF: Network Exposure Function
NAT: Network Address Translation
LB: Load Balancing
HA: High Availability
PaaS: Platform as a Service
ENI: Experiential Networked Intelligence
ZSM: Zero-touch Service Management
EFK: Elasticsearch, Fluentd and Kibana
API: Application Programming Interface

Evaluating Gaps and Solutions to build Open 5G Platforms and Capabilities

Since the release of the much-awaited 3GPP Release-16 in June last year, many vendors have proliferated their products and brought 5G SA (a.k.a. Standalone) products to market. With promises like support for slicing, massive IoT, URLLC, improved edge capability, NPN and IAB backhauling, it is only natural that all big telcos in APAC and globally have already started their journey towards a 5G Standalone core. However, most commercial deployments are based on a single vendor's end-to-end stack. That is a good way to start the journey and offer services quickly, but given the type of services and the versatility of solutions expected from both 3GPP Release-16 and the SA core, especially for industry verticals, it is just a matter of time before one vendor cannot fulfill all the solutions, and that is when building a telco-grade cloud platform will become a dire necessity.

During the last two years we have done a lot of work and made progress in better understanding what the cloud platforms for the 5G era will be. It is correct that, as of now, the 5G core container platform is not fully ready from an open cloud perspective, but we are also not far from making it happen. The community's Anuket Kali release, targeted for June, is expected to fill many gaps, and the XGVela release cycle will try to close many more. In a nutshell, 2021 is the year in which we expect production-ready open cloud platforms, avoiding all sorts of vendor lock-in.

Let's try to understand the top issues, enlisted based on 5G SA deployments in the core and edge. Vendors are mostly leveraging the existing NFVI to evolve to CaaS by using a middle layer (shown as CaaS on IaaS). The biggest challenge is that this interface is not open, which means there are many out-of-box enhancements done by each vendor; this is one classic case of "when open became closed".


The most important enhancements done on the adaptors for container workloads are as follows:

  • Provide container orchestration, deployment and scheduling capabilities.
  • Provide container telco enhancement capabilities: hugepage memory, shared memory, DPDK, CPU core binding and isolation.
  • Support container network capabilities: SR-IOV + DPDK and multiple network planes.
  • Support the IP SAN storage capability of VM-based containers.
  1. The migration path from CaaS-on-IaaS towards BM CaaS is not smooth and will involve a complete service redeployment. It is true that, with most operators having invested heavily in recent years to productionize their NFVI, nobody is really considering emptying their pockets again to build a purely new, standalone CaaS platform; however, a smooth migration path must be considered
  2. We are still in the early phase of the 5G SA core, and eMBB is the only use case, so we have not yet tested the scaling of the 5G core on NFVI-based platforms
  3. The ETSI specs for the CISM are not as mature as expected, and again there are a lot of out-of-box customizations done by each vendor's VNFM to cater for this.
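To make the telco enhancements above concrete, here is a hedged sketch of how they typically surface in a Kubernetes pod spec: hugepages, exclusive CPUs via the static CPU manager, and an SR-IOV NIC resource. The resource keys follow upstream Kubernetes conventions, but the SR-IOV resource name is an illustrative assumption, not a standard:

```python
# Sketch: how telco enhancements (hugepages, CPU pinning, SR-IOV) appear in a
# Kubernetes container "resources" block. The SR-IOV resource name
# "intel.com/sriov_net" is invented for illustration; real names are set by
# the SR-IOV device plugin configuration.

def telco_pod_resources(cpus, hugepages_1g, sriov_vfs):
    """Build the resources block for a DPDK-style CNF container."""
    res = {
        "cpu": str(cpus),                       # integer CPUs -> exclusive cores
        "memory": "2Gi",
        "hugepages-1Gi": f"{hugepages_1g}Gi",   # pre-allocated hugepage memory
        "intel.com/sriov_net": str(sriov_vfs),  # VFs advertised by device plugin
    }
    # Guaranteed QoS class (requests == limits) is required for CPU pinning
    # under the static CPU manager policy.
    return {"requests": dict(res), "limits": dict(res)}

spec = telco_pod_resources(cpus=4, hugepages_1g=2, sriov_vfs=1)
print(spec["limits"]["hugepages-1Gi"])  # 2Gi
```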

Now let's come to the points where the open platforms are lacking, and how we intend to fix them.

Experience#1: 5G outgoing traffic from pods

The traditional Kubernetes and CaaS platforms today handle and scale well with an ingress controller; however, the outgoing traffic of 5G pods and containers is not well addressed, as both N-S and E-W traffic follow the same path, and this finally becomes a scaling issue.

We know some vendors, like Ericsson, already bring products like ECFE and LB into their architecture to address these requirements.
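A minimal model of the problem: in vanilla Kubernetes, pod egress is SNAT-ed to the node IP, so external peers see an unstable source address and N-S and E-W traffic share one path. An egress function (of the kind ECFE-like products provide) pins each CNF to a stable egress VIP instead. The names and addresses below are invented for illustration, not a vendor API:

```python
# Minimal sketch of the pod-egress problem and the egress-VIP fix.
# CNF names and IPs are illustrative assumptions.

NODE_IP = "10.0.0.5"                                   # default SNAT source
EGRESS_MAP = {"smf": "198.51.100.10", "upf": "198.51.100.11"}  # CNF -> egress VIP

def source_ip(cnf, egress_enabled):
    """Source address an external peer sees for traffic leaving the cluster."""
    if egress_enabled and cnf in EGRESS_MAP:
        return EGRESS_MAP[cnf]  # deterministic, firewall-friendly per-CNF IP
    return NODE_IP              # node SNAT: changes whenever the pod moves

print(source_ip("smf", egress_enabled=True))   # 198.51.100.10
print(source_ip("smf", egress_enabled=False))  # 10.0.0.5
```

The deterministic per-CNF source address is what makes peering with external networks (firewalls, roaming partners) manageable at scale.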

Experience#2: Support for non-IP protocols

A pod natively comes with an IP, and all external communication is done via cluster IPs; this means the architecture is not designed for non-IP protocols like VLAN, L2TP or VLAN trunking.

Experience#3: High performance workloads

Today all high-throughput data paths are supported by CNI plugins which are natively pass-through, like SR-IOV. An operator framework to enhance real-time processing is required, something like what we have done with DPDK in the OpenStack world.

Experience#4: Integration of 5G SBI interfaces

The newly defined SBI interfaces are more like APIs compared to horizontal call flows; however, today all HTTP/2 and API integration is based on "primary interfaces".

It becomes a clear issue, as secondary interfaces for communication between internal functional modules are not supported.

Experience#5: Multihoming for SCTP and SI is not supported

For hybrid node connectivity, at least towards egress and external networks, we still require SCTP links and/or SIP endpoints, which are not well supported.

Experience#6: Secondary interfaces for CNFs

Secondary interfaces raise concerns for interoperability, monitoring and O&M. At the same time, the secondary interface is a very important concept in K8s for 5G CNFs, as it is needed for:

  • All telecom protocols, e.g. BGP
  • Support for operator frameworks (CRDs)
  • Performance scenarios, like CNIs for SR-IOV

Today the only viable solution is NSM (Network Service Mesh), which solves both the management and the monitoring issues.

Experience#7: Platform Networking Issues in 5G

Today, in commercial networks, most products use Multus+VLAN for internal networking and Multus+VxLAN for external networking. This requires separate planning for both the underlay and the overlay, and that becomes an issue for a large-scale 5G SA core network.
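The dual-planning overhead can be sketched as follows: every network plane consumes an ID from two independent spaces, an underlay 802.1Q VLAN and an overlay VxLAN VNI. The ranges and plane names below are illustrative assumptions:

```python
# Illustrative planner for the dual underlay/overlay problem described above:
# each network plane needs both an 802.1Q VLAN (Multus+VLAN) and a VxLAN VNI
# (Multus+VxLAN), drawn from separate ID spaces. Ranges are assumptions.

VLAN_POOL = iter(range(100, 4095))      # underlay 802.1Q IDs
VNI_POOL = iter(range(10_000, 16_000))  # overlay VxLAN VNIs

def plan_network(plane):
    """Allocate one underlay VLAN and one overlay VNI for a network plane."""
    return {"plane": plane, "vlan": next(VLAN_POOL), "vni": next(VNI_POOL)}

plans = [plan_network(p) for p in ("n2-signalling", "n3-user", "oam")]
for p in plans:
    print(p)
# Two ID spaces to track per plane is exactly the planning overhead
# that hurts at large scale.
```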

Similarly, the top requirements for services in 5G networks are:

  • Network separation on each logical interface (e.g. VRF) and each physical sub-interface
  • Outgoing traffic from pods
  • NAT and reverse proxy

Experience#8: Service Networking Issues in 5G

For the primary network we rely on Calico+IPIP, while for secondary networks we rely on Multus.

Experience#9: ETSI specs, especially for BM CaaS

I still believe the ETSI specs for CNFs are lacking compared to other bodies like 3GPP, and that is enough to make an open solution move to a closed one through adaptors and plugins, something we already experienced during SDN introduction in cloud networks. Today, rigorous updates are expected on:

  • IFA038: container integration in MANO
  • IFA011: VNFD with container support
  • SOL-3 spec updates for CIR (Container Image Registry) support

Experience#10: Duplication of features on NEF/NRF and cloud platforms

In the new 5G API ecosystem, operators look at their network as a platform, opening it to application developers. API exposure is fundamental to 5G, as it is built into the architecture natively: applications can talk back to the network and command it to provide a better experience. However, the NEF and, similarly, the NRF service registry are functions also available on the platforms. It looks like a way is required to share responsibility for such integrations and avoid duplication.

Reference Architectures for the Standard 5G Platform and Capabilities

Cap#1: Solving Data Integration issues   

Real AI is the next most important thing for telcos as they evolve in their automation journey from conditional automation to partial autonomy. However, making any fully functional use case will first require solving the data integration architecture, as any real product that succeeds with AI in telco will require graph databases and process mining, and both of these rest on the assumption that all the valid data is there.

Cap#2: AI profiles for cloud infrastructure hardware

With 5G networks relying more on robust mechanisms to ingest and use data for AI, it is very important to agree on hardware profiles that are powerful enough to deliver complete AI pipelines, all the way from flash storage to TensorFlow, along with analytics.

Cap#3: OSS evolution that support data integration pipeline    

To evolve to the future ENI architecture for the use of AI in telco, and to the ZSM architecture for closed loops, both should be based on a standard data integration pipeline like the one proposed in ENI-0017 (data integration mechanisms).

Cap#4: Network characteristics      

A mature way to handle outgoing traffic and load balancing needs to be included in the Telco PaaS.

Cap#5: Telco PaaS     

Based on the experience with NFV, it is clear that IaaS is not the telco service delivery model; hence use cases like NFV PaaS have been under consideration since the early days of NFV. With CNF introduction, which requires much faster release cycles, it is imperative, not optional, to build a stable Telco PaaS that meets telco requirements. As of today, the direction is to divide the platform between a general PaaS, which will be part of the standard cloud platform over release iterations, and a Telco PaaS for the telco-specific requirements.

The beauty of this architecture is that it ensures multi-vendor component selection between the two. The key characteristics to be addressed are:

PaaS#1: Telco PaaS Tools

Agreement is needed on the PaaS tools over the complete LCM; there is currently a survey running in the community to agree on this, and it is an ongoing study.


PaaS#2: Telco PaaS Lawful Interception

During recent integrations for NFV and CNFs we still rely on application-layer LI characteristics as defined by ETSI. With the open cloud layer ensuring that the necessary LI requirements are available, it is important that the PaaS exposes this part through APIs.

PaaS#3: Telco PaaS Charging Characteristics

The consumption and real-time reporting of resources is very important, as with 5G and edge we will evolve towards the hybrid cloud.

PaaS#4: Telco PaaS Topology Management and Service Discovery

A single API endpoint that exposes both the topology and the services towards the application is the key requirement of the Telco PaaS.

PaaS#5: Telco PaaS Security Hardening

With 5G and critical services, security hardening has become more and more important; the use of tools like Falco and service mesh is important in this platform.

PaaS#6: Telco PaaS Tracing and Logging

Although monitoring is quite mature in Kubernetes and its distros, tracing and logging still need to be addressed. Today, tools like Jaeger and Kafka/EFK need to be included in the Telco PaaS.

PaaS#7: Telco PaaS E2E DevOps

For IT workloads, DevOps capability is already provided by PaaS in a mature manner through both cloud and application tools, but with the enhancements required by telco workloads it is important that the end-to-end DevOps capability is ensured. Today, tools like Argo need to be considered and integrated with both the general PaaS and the Telco PaaS.

PaaS#8: Packaging

Standard packages, like the VNFD, which cover both the application and PaaS layers.

PaaS#9: Standardization of APIs

API standardization in the ETSI fashion is a key requirement of the NFV and telco journey, and it needs to be ensured at the PaaS layer as well. For the Telco PaaS it should cover VES, TM Forum, 3GPP, ETSI MANO, etc. The community has done the following work to standardize this:

  • TMF 641/640
  • 3GPP TS 28.532/28.531/28.541
  • IFA029: containers in NFV
  • ETSI FEAT17: Telco DevOps
  • ETSI TST10/13 for API testing and verification

Based on these features, there is an ongoing effort within the LFN XGVela community, and I hope more users, partners and vendors will join to define the future open 5G platform.


Network Slicing and Automation for 5G (Rel-15+) – A RAN Episode

Figure 1. 5G network slices running on a common underlying multi-vendor and multi-access network. Each slice is independently managed and addresses a particular use case.
Courtesy of IEEE

Network Slicing is a great concept which has always been attractive jargon for vendors who wish to bundle it with products to sell their solutions. However, with the arrival of 3GPP Release-16 and subsequent products arriving on the market, things are starting to change. With so many solutions and requirements, finding a novel slicing architecture that fits all is both technically complex and, business-wise, does not make a lot of ROI sense. Today we will try to analyze the latest progress and the directions to solve this dilemma.

Slicing top challenges

Based on our recent work in GSMA and 3GPP, we believe the below are the top questions for both evolving and proliferating slicing solutions:

  • Can a public slicing solution fulfill vertical industry requirements?
  • How do we satisfy vertical industries that a slicing solution can fulfill their needs, like data sovereignty, SLAs, security and performance?
  • Automation and intelligence: can a public slicing solution be flexible enough to provide all the intelligence needed by each industry?
  • Slicing for cases of 5G infrastructure sharing

Solution baseline principles

When we view slicing, or any tenant provisioning solution, it is very important that, end to end, all layers, including business fulfillment, network abstraction and infrastructure (including wireless), adhere to the same set of principles.


A nice description can be found in 3GPP TS 28.553 on management and orchestration for network slicing, and in 3GPP TS 28.554 on KPIs for 5G solutions and slicing. In summary, once we take a systems view of network slicing, the principles can be summarized as follows:

  • Slice demarcation: a way to isolate each tenant and a possibility to offer different slicing bundles to different tenants; for example, a large enterprise may get 10 features and 20 SLAs, while for small businesses 5 features and 5 SLAs will do
  • Performance: a way to build a highly performant system; the postulate is that once we engineer and orchestrate a slice, it will work end to end
  • Observability: with 4B+ devices added every year and the industry setting a futuristic target of a million private networks by 2025, how to observe and handle such networks in real time is a pressing issue
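The demarcation principle can be sketched as a tiny catalog model, echoing the 10-features/20-SLAs versus 5/5 example above; the tenant class names are illustrative:

```python
# Toy model of "slice demarcation": different tenant classes get different
# feature/SLA bundles, and each tenant only ever sees its own bundle.
# Counts echo the example in the text; class names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class SliceBundle:
    tenant_class: str
    features: int
    slas: int

CATALOG = {
    "large_enterprise": SliceBundle("large_enterprise", features=10, slas=20),
    "small_business":   SliceBundle("small_business", features=5, slas=5),
}

def provision(tenant_class):
    """Return the demarcated bundle for one tenant class."""
    return CATALOG[tenant_class]

print(provision("small_business").slas)  # 5
```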

I think when we talk about slicing we mostly speak about key technology enablers like NFV, cloud, MEC and SDN, which is obviously great, since the softwarization of network and infrastructure is vital. However, not speaking about the RAN, wireless, and WNV (wireless network virtualization) is not fair. In this paper I just want to shed some light from the RAN perspective: considering that still today around 65% of operators' CAPEX/OPEX is pumped into RAN and transport, it is vital to take this view for a solution that is both conformant and realistic. If NFV/SDN/cloud demands sharing among a few hundred tenants, the RAN demands sharing among millions, so resource sharing, utilization and optimization are vital.

RAN Architecture

From an E2E perspective, the RAN part of a slice is selected based on the GST and the NSSAI, which is done by the UE or the core network. However, it is easier said than done; to build a scalable E2E slicing solution, the following should be considered.

RAN#1: Spectrum and Power resources

The massive requirements of businesses for services and slices call for highly efficient radio resources; luckily, low, mid and high bands combined with massive MIMO handle this part. However, it is not just the spectrum: how to utilize it efficiently, in terms of form factor and power, is vital.

When we take the RAN view of slicing, it is not just the spectrum itself or the RF signal, but also spectrum management across macro, femto and het-nets, including open cellular. In summary, we are still not able to understand this part well, as it requires novel algorithms, like MINLP (mixed-integer non-linear programming), which aim to optimize cost while increasing resource usage at the same time. As per the latest trends, a tiered RAN architecture combined with new algorithms, like game-theoretic matching through ML/AI, is the answer to standardize this.
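To give a flavor of the matching idea (without the full MINLP machinery), here is a hedged sketch of a greedy one-to-one assignment of tenants to RAN tiers by utility score; real proposals use game-theoretic stable matching, and the scores below are invented:

```python
# Hedged sketch of resource matching: greedily assign each tenant to at most
# one RAN tier (macro/femto/...), taking the highest-utility pairs first.
# This only illustrates the shape of the problem; the data is invented.

def greedy_match(scores):
    """scores[(tenant, tier)] -> utility; each tenant and tier used at most once."""
    assignment, used_tiers = {}, set()
    for (tenant, tier), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if tenant not in assignment and tier not in used_tiers:
            assignment[tenant] = tier
            used_tiers.add(tier)
    return assignment

scores = {
    ("v2x", "macro"): 0.9, ("v2x", "femto"): 0.2,
    ("iot", "macro"): 0.6, ("iot", "femto"): 0.5,
}
print(greedy_match(scores))  # {'v2x': 'macro', 'iot': 'femto'}
```

Note the trade-off the greedy pass makes visible: "iot" prefers the macro tier, but it is already taken by the higher-utility "v2x" pair; a stable-matching or MINLP formulation refines exactly this kind of conflict.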

RAN#2: RAN Dis-aggregation

Just as NFV/SDN and orchestration did for the core, Open RAN and the RIC (RAN Intelligent Controller) will do for the RAN. If you want to know more, you may want to check the author's write-up about RAN evolution.

RAN#3 RAN resource optimization

Based on our field trials, we find that the use of edge and MEC with the RAN, especially for CDN, will save around 25% of resources. RAN caching, combined with LBO (local breakout), will help telcos fulfill very pressing requirements from verticals. Again, this is not just a cloudlet and software issue, as different RAN architectures require different approaches, like D2D RAN solutions, het-nets, macro, etc.

RAN#4 Midhaul optimization

Midhaul and backhaul capacity optimization is vital for slicing delivery, and today this domain is still in the R&D funnel. The TIP project CANDI (Converged Architectures for Network Disaggregation & Integration) is evolving to understand this requirement.

RAN#5 Edge Cost model

The edge solution for slicing, in the context of the RAN, is a cost-model problem: how many MEC servers, at which locations, and how much RAN and RF layer processing they can relieve, is the key. Our latest work on telco edge cloud, with different models for different site configurations, is the answer.

RAN#6: Isolation, Elasticity and Resource Limitation

This is the most important issue for RAN slicing, primarily because the dimensions conflict: extra resource isolation can make it impossible to share resources and will limit services during peak and critical times, while too much elasticity makes isolation and separation practically impossible. Matching algorithms are the answer, as they help build a RAN system that is not only less complex but also highly conformant. This is make-or-break for a RAN slicing architecture.
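
The tension can be sketched with a toy allocator in which each slice holds an isolated guarantee and any burst competes for a shared elastic pool; enlarging the guarantees shrinks the pool, which is exactly the trade-off described above (all names and numbers hypothetical):

```python
def admit(slice_demand, guaranteed, shared_pool):
    """Toy admission control: each slice keeps an isolated, guaranteed
    quota; demand above the guarantee competes for a shared elastic pool
    (first come, first served). Larger guarantees mean a smaller pool,
    i.e. more isolation buys less elasticity."""
    allocations = {}
    pool = shared_pool
    for name, demand in slice_demand.items():
        base = min(demand, guaranteed.get(name, 0))   # isolated share
        burst = min(demand - base, pool)              # elastic share
        pool -= burst
        allocations[name] = base + burst
    return allocations

# urllc bursts 5 above its guarantee; embb wants 30 more but only 20 remain
print(admit({"urllc": 25, "embb": 60}, {"urllc": 20, "embb": 30}, shared_pool=25))
```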

RAN#7: RAN Infrastructure Sharing for 5G

Infrastructure sharing has already started between big players in Europe. One question that arises is what happens when a user purchases a slice and service from a tenant. Consider a wholesale view where the infrastructure is provided by sharing and bundling resources from all national carriers, because the 5G infrastructure of a single operator is obviously not sufficient from either a coverage or a capacity perspective.

RAN#8: RAN Resource RAGF Problem

In the case of service mobility or congestion, how can the UE quickly access resources, possibly in another sector or site?

RAN#9: Slice SLA

Real-time SLA monitoring of slices is a key business requirement; however, imagine a situation where a shortage in the shared resource pool makes it impossible to deliver the SLA.
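
A trivial sketch of such a real-time check, flagging slices whose guaranteed rate no longer fits a shrinking shared pool (slice names and rates are hypothetical):

```python
def sla_violations(guaranteed_gbps, available_pool_gbps):
    """Flag slices whose guaranteed rate can no longer be honoured when
    the shared resource pool shrinks; guarantees are served in priority
    order (dict insertion order here)."""
    violations = []
    remaining = available_pool_gbps
    for slice_name, rate in guaranteed_gbps.items():
        if rate <= remaining:
            remaining -= rate
        else:
            violations.append(slice_name)
    return violations

# the pool has shrunk to 42 Gbps; embb's 40 Gbps guarantee no longer fits
print(sla_violations({"urllc": 5, "embb": 40, "miot": 10}, available_pool_gbps=42))
```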

RAN#10: Slice Operations

Slice operations is not just a BSS and operations view; real-time RAN resource usage and optimization is necessary. Have you ever thought about how a perfectly managed slice can coexist with normal telco service, especially during a key event when many users are on the network? I think this dimension is still not well addressed, and I have no hesitation in saying that when enterprise CXOs convince themselves to build their own private 5G network, this is exactly the problem they fear.


In today's write-up I have tried to explain the current progress, the challenges, and the steps to build a successful slicing solution while wearing the hat of a RAN architect. I believe it is very important to see the radio viewpoint, which I firmly believe has not received its due attention in standards bodies or from vendors. In my coming blog I shall summarize some key gaps and how we can approach them, as slicing products and solutions are still not carrier grade and need further tuning to ensure E2E slicing and service fulfillment.

Why Cloud and 5G CNF architects must analyze the Docker deprecation after Kubernetes 1.20

Kubernetes is deprecating Docker as a container runtime after v1.20, which opens the way for all applications to converge on a single image standard, OCI. Consider the specific requirements of 5G telco CNFs around secondary networking and running:

  • Service-aware protocols like SIP
  • Connection-aware protocols like SCTP multi-homing
  • Regulatory requirements, especially traffic separation
  • Load balancing
  • Network isolation
  • Network acceleration

#CNF suppliers today already prefer OCI over #docker images. In the long run this will obviously support portability of all applications across cloud platforms. On the negative side, it will impact our tool chains, especially where builds rely on Docker-in-Docker; daemonless build tools such as #kaniko, #img and, most importantly, #buildah are the usual way out.
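
To find out which nodes still run Docker via dockershim before upgrading, you can inspect each node's reported runtime, e.g. via `kubectl get nodes -o json`. The sketch below parses a trimmed, hypothetical sample of that output (the node names are made up):

```python
import json

# Example output shape of:  kubectl get nodes -o json
# (trimmed to the field that matters for the dockershim deprecation)
sample = json.loads("""
{"items": [
  {"metadata": {"name": "edge-1"},
   "status": {"nodeInfo": {"containerRuntimeVersion": "docker://19.3.13"}}},
  {"metadata": {"name": "edge-2"},
   "status": {"nodeInfo": {"containerRuntimeVersion": "containerd://1.4.3"}}}
]}
""")

def nodes_on_dockershim(node_list):
    """Return the names of nodes still running Docker via dockershim."""
    return [n["metadata"]["name"]
            for n in node_list["items"]
            if n["status"]["nodeInfo"]["containerRuntimeVersion"].startswith("docker://")]

print(nodes_on_dockershim(sample))
```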

If you are an architect who wants to solve this challenge, or a developer who is a little nervous about application #LCM, kindly refer to the community blog post below.


Here is the detailed write-up from the community for quick reference.

Kubernetes 1.20: The Raddest Release

Tuesday, December 08, 2020

Authors: Kubernetes 1.20 Release Team

We’re pleased to announce the release of Kubernetes 1.20, our third and final release of 2020! This release consists of 42 enhancements: 11 enhancements have graduated to stable, 15 enhancements are moving to beta, and 16 enhancements are entering alpha.

The 1.20 release cycle returned to its normal cadence of 11 weeks following the previous extended release cycle. This is one of the most feature dense releases in a while: the Kubernetes innovation cycle is still trending upward. This release has more alpha than stable enhancements, showing that there is still much to explore in the cloud native ecosystem.

Major Themes

Volume Snapshot Operations Goes Stable

This feature provides a standard way to trigger volume snapshot operations and allows users to incorporate snapshot operations in a portable manner on any Kubernetes environment and supported storage providers.

Additionally, these Kubernetes snapshot primitives act as basic building blocks that unlock the ability to develop advanced, enterprise-grade, storage administration features for Kubernetes, including application or cluster level backup solutions.

Note that snapshot support requires Kubernetes distributors to bundle the Snapshot controller, Snapshot CRDs, and validation webhook. A CSI driver supporting the snapshot functionality must also be deployed on the cluster.

Kubectl Debug Graduates to Beta

The kubectl alpha debug feature graduates to beta in 1.20, becoming kubectl debug. The feature provides support for common debugging workflows directly from kubectl. Troubleshooting scenarios supported in this release of kubectl include:

  • Troubleshoot workloads that crash on startup by creating a copy of the pod that uses a different container image or command.
  • Troubleshoot distroless containers by adding a new container with debugging tools, either in a new copy of the pod or using an ephemeral container. (Ephemeral containers are an alpha feature that are not enabled by default.)
  • Troubleshoot on a node by creating a container running in the host namespaces and with access to the host’s filesystem.

Note that as a new built-in command, kubectl debug takes priority over any kubectl plugin named “debug”. You must rename the affected plugin.

Invocations using kubectl alpha debug are now deprecated and will be removed in a subsequent release. Update your scripts to use kubectl debug. For more information about kubectl debug, see Debugging Running Pods.

Beta: API Priority and Fairness

Introduced in 1.18, Kubernetes 1.20 now enables API Priority and Fairness (APF) by default. This allows kube-apiserver to categorize incoming requests by priority levels.

Alpha with updates: IPV4/IPV6

The IPv4/IPv6 dual stack has been reimplemented to support dual stack services based on user and community feedback. This allows both IPv4 and IPv6 service cluster IP addresses to be assigned to a single service, and also enables a service to be transitioned from single to dual IP stack and vice versa.
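
A minimal (hypothetical) dual-stack Service manifest using the reimplemented API fields looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-svc        # hypothetical service name
spec:
  ipFamilyPolicy: PreferDualStack  # request both families when the cluster supports them
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app                  # hypothetical selector
  ports:
    - port: 80
```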

GA: Process PID Limiting for Stability

Process IDs (pids) are a fundamental resource on Linux hosts. It is trivial to hit the task limit without hitting any other resource limits and cause instability to a host machine.

Administrators require mechanisms to ensure that user pods cannot induce pid exhaustion that prevents host daemons (runtime, kubelet, etc) from running. In addition, it is important to ensure that pids are limited among pods in order to ensure they have limited impact to other workloads on the node. After being enabled-by-default for a year, SIG Node graduates PID Limits to GA on both SupportNodePidsLimit (node-to-pod PID isolation) and SupportPodPidsLimit (ability to limit PIDs per pod).

Alpha: Graceful node shutdown

Users and cluster administrators expect that pods will adhere to expected pod lifecycle including pod termination. Currently, when a node shuts down, pods do not follow the expected pod termination lifecycle and are not terminated gracefully which can cause issues for some workloads. The GracefulNodeShutdown feature is now in Alpha. GracefulNodeShutdown makes the kubelet aware of node system shutdowns, enabling graceful termination of pods during a system shutdown.

Major Changes

Dockershim Deprecation

Dockershim, the container runtime interface (CRI) shim for Docker is being deprecated. Support for Docker is deprecated and will be removed in a future release. Docker-produced images will continue to work in your cluster with all CRI compliant runtimes as Docker images follow the Open Container Initiative (OCI) image specification. The Kubernetes community has written a detailed blog post about deprecation with a dedicated FAQ page for it.

Exec Probe Timeout Handling

A longstanding bug regarding exec probe timeouts that may impact existing pod definitions has been fixed. Prior to this fix, the field timeoutSeconds was not respected for exec probes. Instead, probes would run indefinitely, even past their configured deadline, until a result was returned. With this change, the default value of 1 second will be applied if a value is not specified and existing pod definitions may no longer be sufficient if a probe takes longer than one second. A feature gate, called ExecProbeTimeout, has been added with this fix that enables cluster operators to revert to the previous behavior, but this will be locked and removed in subsequent releases. In order to revert to the previous behavior, cluster operators should set this feature gate to false.
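
If any of your exec probes can legitimately take longer than a second, set `timeoutSeconds` explicitly rather than relying on the old unbounded behavior; a sketch (the check script is hypothetical):

```yaml
livenessProbe:
  exec:
    command: ["sh", "-c", "/opt/health/check.sh"]  # hypothetical check script
  timeoutSeconds: 5    # now enforced; if omitted, defaults to 1s and slow probes fail
  periodSeconds: 10
```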

Please review the updated documentation regarding configuring probes for more details.

Other Updates

Graduated to Stable

Notable Feature Updates

Release notes

You can check out the full details of the 1.20 release in the release notes.

Availability of release

Kubernetes 1.20 is available for download on GitHub. There are some great resources out there for getting started with Kubernetes. You can check out some interactive tutorials on the main Kubernetes site, or run a local cluster on your machine using Docker containers with kind. If you’d like to try building a cluster from scratch, check out the Kubernetes the Hard Way tutorial by Kelsey Hightower.

Release Team

This release was made possible by a very dedicated group of individuals, who came together as a team in the midst of a lot of things happening out in the world. A huge thank you to the release lead Jeremy Rickard, and to everyone else on the release team for supporting each other, and working so hard to deliver the 1.20 release for the community.

Release Logo

Kubernetes 1.20 Release Logo

raddest (adjective, slang): excellent; wonderful; cool:

The Kubernetes 1.20 Release has been the raddest release yet.

2020 has been a challenging year for many of us, but Kubernetes contributors have delivered a record-breaking number of enhancements in this release. That is a great accomplishment, so the release lead wanted to end the year with a little bit of levity and pay homage to Kubernetes 1.14 – Caturnetes with a “rad” cat named Humphrey.

Humphrey is the release lead’s cat and has a permanent blep. Rad was pretty common slang in the 1990s in the United States, and so were laser backgrounds. Humphrey in a 1990s-style school picture felt like a fun way to end the year. Hopefully, Humphrey and his blep bring you a little joy at the end of 2020!

The release logo was created by Henry Hsu – @robotdancebattle.

User Highlights

Project Velocity

The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing, and is a neat illustration of the depth and breadth of effort that goes into evolving this ecosystem.

In the v1.20 release cycle, which ran for 11 weeks (September 25 to December 9), we saw contributions from 967 companies and 1335 individuals (44 of whom made their first Kubernetes contribution) from 26 countries.

Ecosystem Updates

  • KubeCon North America just wrapped up three weeks ago, the second such event to be virtual! All talks are now available to all on-demand for anyone still needing to catch up!
  • In June, the Kubernetes community formed a new working group as a direct response to the Black Lives Matter protests occurring across America. WG Naming’s goal is to remove harmful and unclear language in the Kubernetes project as completely as possible and to do so in a way that is portable to other CNCF projects. A great introductory talk on this important work and how it is conducted was given at KubeCon 2020 North America, and the initial impact of this labor can actually be seen in the v1.20 release.
  • Previously announced this summer, The Certified Kubernetes Security Specialist (CKS) Certification was released during Kubecon NA for immediate scheduling! Following the model of CKA and CKAD, the CKS is a performance-based exam, focused on security-themed competencies and domains. This exam is targeted at current CKA holders, particularly those who want to round out their baseline knowledge in securing cloud workloads (which is all of us, right?).

Event Updates

KubeCon + CloudNativeCon Europe 2021 will take place May 4 – 7, 2021! Registration will open on January 11. You can find more information about the conference here. Remember that the CFP closes on Sunday, December 13, 11:59pm PST!

Upcoming release webinar

Stay tuned for the upcoming release webinar happening this January.

Get Involved

If you’re interested in contributing to the Kubernetes community, Special Interest Groups (SIGs) are a great starting point. Many of them may align with your interests! If there are things you’d like to share with the community, you can join the weekly community meeting, or use any of the following channels:

Developing Edge Solutions for Telcos and Enterprise

According to the latest market research, most 5G edge use cases will be realized in the next 12 to 24 months; however, telcos must act now to keep their chance. The reason is very clear: this is just enough time for hyperscalers to cannibalize the market, something we already witnessed with OTTs in 3G and with VoD and content streaming in 4G.

Below are my thoughts on

  • What is Edge definition
  • What is Edge Differentiation
  • Why Telco should care about it
  • Why Software architecture so vital for Telco Edge Success

5G Site Solutions Disaggregation using Open RAN

According to the latest market insights, RAN innovation in telecom lags other initiatives by seven years, which calls for more innovative and disruptive delivery models for site solutions, especially for the next wave of 5G deployments.

However, to reach the goal of a fully distributed and open RAN, we need to build a pragmatic view of brownfields and find the sweet spot for its introduction and wide adoption.

Here are my latest thoughts on this and on how telecom operators should adopt it. There is still some time before industry-wide adoption of Open RAN, but as you will find, the time to act is now.

What you will learn

  • Building Delivery Models for Open RAN in a brownfield
  • Understand what,when and how of Open RAN
  • What is Open RAN and its relation with 5G
  • Current Industry solutions
  • Define phases of Open RAN delivery
  • Present and Next Steps
  • Architecture and State of Play

Security Framework to Secure Online Education and Remote Workers

A recent report by Cisco reveals a 250% increase in security attacks since COVID-19 set in. This calls for a new paradigm for securing our online presence.

Apart from the increasing attacks, factors such as governments discouraging VPN use and the prevalence of public Wi-Fi make the whole story worse.

Cisco's latest Umbrella offering answers these challenges with a subscription SaaS model and a choice of delivery, on-premises or in the cloud, making it well suited to both home and enterprise customers.

Logically, Umbrella only requires DNS re-routing.

Finally, the policy manager for managing and subscribing to policies, such as parental control and URL filtering for business and personal use, is important both for security and for improving efficiency online.

SaaS offerings for security are the answer, and Cisco Umbrella is certainly a good solution in that space. For details, please refer to umbrella.cisco.com

Cyber Security for 5G and Cloud World

New Cybersecurity Companies Have Their Heads In The Cloud

Cisco's latest report counts 1,272 breaches in 2019 that exposed 163M customer records. To address security concerns in a 5G- and cloud-connected world, 3GPP security work (SA3) and the community defined some key principles that we must adhere to when building disaggregated networks.

1. Use of SUCI (Subscription Concealed Identifier) to ensure that even during the first attach the subscriber ID is not sent in plain text

2. 5G Authentication and Key Agreement (5G-AKA) uses private/public keys, an approach very familiar to cloud hyperscalers for granting resource access

3. Before a device joins the network, the core validates the device and only then does device authentication start (this architecture makes use of the AMF, UDM, AUSF and SEAF)

4. Use of network slicing in NPN and public networks to ensure users can reach only their own service slice

5. The issues that limited operators' use of encryption on the radio interface are addressed in 5G with data integrity validation, so that even protected streams can have an integrity check

6. The new SecGW (security endpoint gateway) tunnels the radio gNB traffic directly at the access/metro

7. API- and digest-level protection for MEC and developer systems, combined with DDoS and malware protection

8. IdM and HSM for infrastructure security

For details, refer to the latest infographics from Samsung.

#Cyber #Security #Cloud #Infrastructure

Using Cloud and AI to Differentiate your 5G Investment


In a recent webinar about how to build successful 5G networks, one question stuck in my mind:

“How successful can we be if we address a fundamentally new problem with a new technology, while still using old principles to build our telecom networks and without disrupting the supply chains?”

I think the answer to this type of question, in the context of 5G, fundamentally depends on the following two key initiatives.

  1. How to use Radio spectrum to gain strategic advantage over competitors
  2. How to use Cloud to gain advantage for 5G

Radio spectrum is a complex topic, driven primarily by factors such as regulation and the existing use of spectrum, which makes real 5G slightly different from what is possible with today's spectrum alone. On top of that, small cells versus Wi-Fi 6 will again depend on how 5G uses spectrum. I will leave these details for a future discussion and focus here on the cloud and how it will really make your 5G successful.

During our recent work within the ETSI NFV Release 4 SOL working group, GSMA and LFN CNTT, we have discussed and agreed on a number of ways the cloud can help you differentiate your 5G network. Knowing them can be a real game changer for opcos investing in 5G and future networks.


A homogeneous infrastructure platform for 5G that can be used by all applications: traditional 5G CNFs, MEC, developer applications and any legacy IT/OTT applications that need to be offered to users. Examples include OpenShift, or VMware edge and last-mile solutions using technologies like CNV or VCF 7.0/NSX-T 3.0, which build the edge clouds in an automated manner and enable day-2 operations through standard tools, whether VMs, containers or bare metal form the baseline architecture.

A uniform IPI that can be deployed using standard Redfish solutions, such as the one from HPE, will really make it possible to build 5G using cloning, the technique used across most of the automotive industry today, which has enabled it to produce with minimal toil.


Scalability in the last mile is the most important criterion for 5G success. For example, a compute solution that can scale and process all sorts of workloads at the edge is certainly make-or-break for 5G. On the data side, one example is storage: Red Hat Ceph 3.0, which supports compression (from Q3 2020) via its BlueStore backend and can integrate CephFS with NFS support, makes real convergence possible.

Convergence vs Automation

IT SRE and DevOps have gained a lot of traction recently, and not without reason: they have certainly reduced CFO bills, which is why telcos want to achieve the same. However, telco workload requirements are truly unique, which makes it clear that real automation is never possible without standard modeling.

On the cloud side, we can use TOSCA models together with solutions like Automation Hub, plus a secure catalog and registry, so that we can both model the varying workload requirements and automate them in the same fashion. Furthermore, we can do advanced testing like the work we have been doing with pyATS.

Registries and Repositories

The concept of a 5G factory, which we have been rigorously pursuing in Middle East telco projects, is really made possible by secure registries such as Quay for containers and Docker Hub, and their integration with Jenkins and CI/CD tools for telco.

It is no surprise if I tell you that these are the most important differentiators as we introduce public clouds for 5G.


The programmability of immutable infrastructure is the foundational principle for 5G networks. Service mesh, NSM and serverless are all deployed as operators, programs that make your infrastructure follow declarative software (YAML) instead of tight, coupled instructions. Beyond that, operators support full automation of both day-0 and day-2 infrastructure tasks.

For K8s this is supported today, while for VMs it will be fully available in Dec 2020.

OpenShift Service Mesh for 5G CP CNFs is possible today with:

  • Istio
  • Grafana
  • Prometheus
  • Kiali
  • Jaeger

Further to that, we have so far faced a number of issues bringing Docker to telco, and the use of CRI-O and Podman will certainly help advance 5G.

CRI-O is a lightweight CRI runtime built specifically for Kubernetes, while Podman is a daemonless tool for running and building containers; both avoid the Docker daemon's overhead, which is why you should expect them to perform better on the 5G edge.

5G Integration

Red Hat Fuse Online is one of the solutions that abstracts the infrastructure and makes it possible to bring developers, integrators and testers together in one tool. Beyond containers, it also standardizes your VMs: for example, a VM in OpenShift running an FTP service can be made to interoperate with native containers. Fuse Online provides a data mapper to help you do this: in a flow, at each point where you need to map data fields, you add a data mapper step.

Red Hat® Fuse is a distributed integration platform with standalone, cloud, and iPaaS deployment options. Using Fuse, integration experts, application developers, and business users can independently develop connected solutions in the environment of their choice. This unified platform lets users collaborate, access self-service capabilities, and enforce governance.

An SDK is definitely helpful for a 5G platform, especially when you open your network to developers who need .NET or Java. Quarkus from Red Hat is a Kubernetes-native, full-stack Java framework aimed at optimizing Java for this environment.

Quarkus provides tools that help developers reduce the size of Java applications and container image footprints, eliminate programming baggage, and reduce the amount of memory required.

Advanced Cluster Management

With the huge number of 5G sites and the future scenario of site sharing between operators, there will be a real need to deploy and manage apps across hybrid clouds, and nothing explains this better than Burr Sutter's demo at the Red Hat Summit. A cool video from the Red Hat team is available if you want to learn more.

In summary, you can manage:

  • 5K+ pods
  • Create clusters in hybrid cloud like AWS,GCP,Azure, Bare metal and On prem
  • Policy management
  • Secure deployment by validating YAML and images using Quay/clair sorted by Labels
  • Possibility for developer to create and deploy policy using GUI

Above all, RHACM makes it possible to measure cluster SLAs and optimize workloads, e.g. shifting them to other clusters in an automated manner. Certainly a cool thing for 5G when serving heavy-lift and content-driven applications.

Heavy Lifting of Workloads

The proponents of siloed vendor solutions often tell us that 5G baseband processing and eCPRI heavy lifting, with their parallel processing demands, make x86 an impractical choice for the classical cloud approach.

However, the latest Intel Atom series with FPGAs, together with NVIDIA GPUs, means we can not only solve the radio issues, such as the ones we are tackling in Open RAN, but also introduce technologies like AI and ML into 5G-era networks. Those more interested in this domain can refer to the latest work in the ITU here.

Many ML/AI use cases in 5G, in both telco and vertical industries such as automobiles and warehouse monitoring, are made possible today using the GPU operator and the topology manager, which provide visibility into GPU, NIC, bandwidth, performance, etc.

An open policy pipeline can optimize the ML model itself using the analytics functions of the cloud.

When it comes to the cloud's value for data scientists in 5G, platforms like OCP or HPE BlueData offer:

  • Anaconda tool sets for programming
  • Jupyter notebooks
  • CUDA and other similar libraries
  • Report on both Log and Policy compliance
  • Tekton Pipeline in OCP for CI/CD of ML/AI use cases
  • Models are built in Jupyter by scientists and triggered in the Tekton pipeline

Finally, using the OCP Open Model Manager, we can register, deploy and monitor open-source models in one central environment, uniting data scientists and IT/DevOps.


The most important takeaway is that to take full advantage of 5G we must follow not only 3GPP and traditional telecom SQIs but also embrace the advantages offered by the cloud. This is the only way not just to manage a TCO-attractive 5G, but also to enable the niche players and new services that will be required to build and drive the post-COVID-19 world economy.