How Open Orchestration (OSM Release SEVEN) Enhances Enterprise, 5G, Edge and Containerized Applications in Production


Source: ETSI <www.etsi.org>

 


An architect’s perspective from ETSI® the Standards People

 

As highlighted in Heavy Reading's latest End-to-End Service Management for SDN & NFV report, all the major Tier 1 telcos are currently refining their transformation journeys to bring standards-based orchestration and service modeling into their networks. One such standard approach is ETSI OSM, a project hosted by ETSI®, the Standards People.

In Q4 2019, ETSI OSM delivered Release SEVEN, which addresses the challenges of bringing CNFs and containerized applications into production: ETSI Open Source MANO unveils Release SEVEN, enabling more than 20,000 cloud-native applications for NFV environments.

This capability of ETSI® OSM is particularly important given the arrival of 5G SA architectures and solutions, which are already finding their way to market thanks to early work from the CNCF and, specifically, the CNTT Kubernetes specifications. OSM brings value because it allows operators to design, model, deploy and manage CNFs (what ETSI NFV calls containerized VNFs) without any translation or re-modeling. It also lets operators gain early commercial experience of integrating Helm 2.0 into their production environments. On top of that, it allows a Network Service (NS) to combine CNFs with existing VNFs or legacy PNFs to deliver complex services in an easy-to-deploy and manageable way.

In the rest of this paper I will share my understanding of OSM Release SEVEN and summarize the ETSI OSM webinar on this subject held on January 16th, 2020. For details, please refer to the webinar content itself, which can be found at https://www.brighttalk.com/webcast/12761/380670

Why Kubernetes is so important for Telco and Enterprise

The telco industry has experienced many pain points in the way the NFV journey was steered, with its focus on migrating existing PNFs to the cloud. Kubernetes gives platform providers, application vendors and assurance partners the opportunity to build on modern principles: microservices, DevOps and open APIs. This has already made its way into telcos' OSS and IT systems; MYCOM OSI UPM, OSM and in fact ONAP are all already based on Kubernetes. The arrival of 5G SA and uCPE branch deployments has driven almost all operators to adopt Kubernetes in their networks, and it is broadly agreed that as CSPs move to the edge, Kubernetes will be the platform of choice.

Foundation for K8S Clusters

Kubernetes makes it simple for applications and CNFs to consume APIs in a standard fashion through K8s clusters, which are deployed either from upstream open source or via distributions. Early CNF adoption in telco largely follows the vendor-distribution consumption model, with Red Hat OpenShift, VMware PKS and Ericsson CCD among the most important examples.

Since containers behave like floating VMs, the networking architecture, in particular L3 CNI plugins such as Flannel, is an important direction for platforms to support, and it is supported in OSM.

The reusability of the API makes it simple to describe an application as declarative configuration files, using artifacts such as Pods, Services, ConfigMaps and PersistentVolumes that are defined in a very standard manner in K8s; in practice this means all artifacts can be deployed through a single file.
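
To make that concrete, here is a minimal sketch using the official Kubernetes Python client, assuming a local kubeconfig and a hypothetical `app-bundle.yaml` that holds a Deployment, a Service, a ConfigMap and a PersistentVolumeClaim as separate YAML documents:

```python
# Sketch: apply a single multi-document manifest (Deployment, Service,
# ConfigMap, PVC) with the official Kubernetes Python client.
# "app-bundle.yaml" is a hypothetical file name used for illustration.
from kubernetes import client, config, utils

def deploy_bundle(manifest_path: str = "app-bundle.yaml",
                  namespace: str = "default") -> None:
    config.load_kube_config()          # reads ~/.kube/config
    k8s_client = client.ApiClient()
    # create_from_yaml iterates over every YAML document in the file
    # and creates the corresponding Kubernetes object.
    utils.create_from_yaml(k8s_client, manifest_path, namespace=namespace)

if __name__ == "__main__":
    deploy_bundle()
```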

ETSI® OSM itself can be deployed using both Helm 2.0 and Juju charmed bundles.


Foundation for Helm

Helm gives teams the tools they need to collaborate when creating, installing and managing applications inside Kubernetes. With Helm, you can:

  • Find prepackaged software (charts) to install and use
  • Easily create and host your own packages
  • Install packages into any Kubernetes cluster
  • Query the cluster to see which packages are installed and running
  • Update, delete, roll back or view the history of installed packages

In short, Helm makes it easy to run applications inside Kubernetes. For details, please refer to the Helm packages documentation at https://helm.sh/blog/helm-3-released/

In a nutshell, all Day-1 and Day-2 tasks required for CNFs are made possible using Helm and its packaging artifacts, known as Helm charts, covering application primitives, network connectivity and configuration capabilities.
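
As a minimal sketch of those Day-1/Day-2 flows (assuming the Helm CLI is installed and on the PATH; the release name `my-cnf` and the `stable/openldap` chart are only illustrative, and the syntax shown is Helm 2 style, which OSM Release SEVEN integrates):

```python
# Sketch: driving basic Helm Day-1/Day-2 operations from Python by
# wrapping the Helm CLI. Release and chart names are illustrative.
import subprocess

def helm(*args: str) -> str:
    """Run a helm command and return its stdout."""
    result = subprocess.run(["helm", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Day-1: install a packaged application (chart) as a named release.
# Helm 2 syntax; Helm 3 drops the --name flag.
print(helm("install", "--name", "my-cnf", "stable/openldap"))

# Day-2: see what is installed, then inspect the release history.
print(helm("list"))
print(helm("history", "my-cnf"))
```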

Key Features of OSM Release SEVEN

OSM Release SEVEN is carrier grade, and below are its key features as per the OSM wiki:

  • Improved VNF Configuration interface (One stop shop) for all Day0/1/2 operations
  • Improved Grafana dashboard
  • VNFD and NSD testing
  • Python3 support
  • CNF support, with two options: OSM creates the K8s cluster itself, or relies on OEM tools to provision it
  • Workload placement and optimization (something very important for edge and remote clouds)
  • Enhancement in both Multi VIM and Multi SDN support
  • Support for Public Clouds

How OSM handles deployment of CNFs

For most telco engineers this is the most important question: how will the VNF package be standardized with the arrival of CNFs? Will it mean a totally new package, or an enhancement of the existing one?

Fortunately, OSM's approach is to model the application in a standard fashion, which means the same package can be enhanced to reflect a containerized deployment. At the NS level it can flexibly interwork with VNFs and PNFs as well. The deployment unit used to model CNF-specific parameters is called a KDU (Kubernetes Deployment Unit); the other major change is the K8s cluster under resources, which is important because it captures the most critical piece: networking and the related CNI interfaces.
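
As an illustrative sketch only (the field names below approximate my reading of the OSM Release SEVEN information model, and the chart and network names are invented), a VNF descriptor fragment with a KDU pointing at a Helm chart could be generated like this:

```python
# Sketch: building a KDU-style VNFD fragment as a Python dict and
# dumping it to YAML. Field names approximate the OSM Release SEVEN
# information model; chart and network names are illustrative.
import yaml  # pip install pyyaml

vnfd_fragment = {
    "vnfd": {
        "id": "openldap_knf",
        "name": "openldap_knf",
        # KDU: the deployment unit that models the containerized part.
        "kdu": [
            {"name": "ldap", "helm-chart": "stable/openldap"}
        ],
        # K8s cluster requirements, including the networks the CNI
        # plugin must attach the workload to.
        "k8s-cluster": {
            "nets": [{"id": "mgmtnet"}]
        },
    }
}

print(yaml.safe_dump(vnfd_fragment, sort_keys=False))
```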

OSM can deploy the K8s cluster itself through API integration, or rely on third-party tools such as OpenShift® or PKS to deploy it on OSM's instructions.
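
For the second option, a pre-provisioned cluster has to be registered with OSM before KDUs can be deployed onto it. A minimal sketch wrapping the OSM client CLI, with illustrative names and credentials (check the Release SEVEN documentation for the exact `osm k8scluster-add` options):

```python
# Sketch: registering an existing K8s cluster with OSM by wrapping the
# OSM client CLI. Cluster name, VIM account and kubeconfig path are
# illustrative; verify the exact options against the OSM docs.
import subprocess

subprocess.run([
    "osm", "k8scluster-add",
    "--creds", "kubeconfig.yaml",        # credentials of the external cluster
    "--version", "1.15",                  # K8s version of the cluster
    "--vim", "openstack-site-1",          # VIM account the cluster is bound to
    "--k8s-nets", '{"net1": "vim-net"}',  # mapping of cluster nets to VIM nets
    "--description", "Pre-provisioned cluster (e.g. OpenShift/PKS)",
    "cluster-1",
], check=True)
```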

Changes to NFVO interfaces

Just as Or-Vi is used for infrastructure integration with the orchestrator, Helm 2.0 (with 3.0 support coming in the near future) is used to integrate K8s applications. Since the NBI supports mapping KDUs within the same NSD, the only changes from an orchestration point of view are on the southbound side.

Workload Placement

Based on the latest industry positions and the experience shared at KubeCon and the Cloud Native summit Americas, there is a growing consensus that containers are the platform of choice for the edge, primarily due to their robustness, operational model and lighter footprint. In our experience with containers here at STC, a roughly 40% reduction in both CAPEX and footprint can be realized in data centers when the edge is deployed using containers.

However, the business definition of the edge raises a number of questions, the most important of which are workload identification, placement and migration, especially considering that the edge is a lighter-footprint environment that will in future host carriers' mission-critical applications.

Edge optimization from a CSP perspective has to address the following: the cost of compute in NFVI PoPs; the cost of connectivity and of the VNF forwarding graph (VNFFG), typically implemented by SFCs; and constraints on the service such as SLAs, KPIs and slicing.
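
As a toy sketch of that trade-off (all PoP names, costs and latency figures are invented for illustration; OSM's placement feature uses its own model and solver), a brute-force placement decision could look like this:

```python
# Sketch: toy workload placement across edge PoPs, trading off compute
# and connectivity cost under a latency (SLA) constraint. All numbers
# and PoP names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class PoP:
    name: str
    compute_cost: float       # relative cost of compute in this NFVI PoP
    connectivity_cost: float  # relative cost of the VNFFG/SFC links
    latency_ms: float         # expected latency to the served users

def place(pops: list[PoP], max_latency_ms: float) -> PoP:
    """Pick the cheapest PoP that still satisfies the SLA constraint."""
    feasible = [p for p in pops if p.latency_ms <= max_latency_ms]
    if not feasible:
        raise ValueError("No PoP satisfies the latency SLA")
    return min(feasible, key=lambda p: p.compute_cost + p.connectivity_cost)

pops = [
    PoP("central-dc", compute_cost=1.0, connectivity_cost=3.0, latency_ms=25),
    PoP("edge-pop-1", compute_cost=2.0, connectivity_cost=0.5, latency_ms=5),
    PoP("edge-pop-2", compute_cost=1.8, connectivity_cost=0.8, latency_ms=8),
]
print(place(pops, max_latency_ms=10).name)  # -> edge-pop-1
```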


The issues with upgrades and how OSM addresses them

Compared to earlier releases, the OSM NS action primitives allow a CNF to be upgraded to the latest release and to execute both dry runs and Juju tests to ensure that the application's performance benchmark stays the same as before. Although this works best for small applications like LDAP, the same is difficult to achieve with more complex CNFs such as 5G SA. Through liaison with the LFN OVP program I am sure the issue will soon be addressed; as an operator, we plan to validate it on 5G SA nodes.
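
At the Helm level, the upgrade flow described above can be sketched roughly as follows (release and chart names are illustrative, and OSM drives equivalent steps through its NS action primitives rather than the raw CLI shown here):

```python
# Sketch: a Helm-level upgrade flow with a dry run, a post-upgrade test
# and a rollback on failure. Release/chart names are illustrative.
import subprocess

def helm(*args: str) -> None:
    subprocess.run(["helm", *args], check=True)

release, chart = "my-cnf", "stable/openldap"

# 1. Validate the rendered manifests without touching the cluster.
helm("upgrade", release, chart, "--dry-run")

try:
    # 2. Perform the real upgrade and run the chart's test hooks.
    helm("upgrade", release, chart)
    helm("test", release)
except subprocess.CalledProcessError:
    # 3. Roll back to the previous revision if the upgrade or tests fail
    #    (Helm 2 additionally requires the target revision number).
    helm("rollback", release)
```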


My final thought is that the container journey for CSPs is already a reality arriving in 2020 and beyond, and the OSM ecosystem supports the commercialization of CNFs through early use cases of 5G SA, enterprise branch uCPE and, most importantly, the edge including MEC, for which OSM seems to be reaching maturity. For details, and to participate, do get involved in the upcoming OSM-MR8 Hackfest in March.

Many thanks to my colleagues, mentors and industry collaborators Jose Miguel Guzman, Francisco Javier Ramón Salguero, Gerardo García and Andy Reid for OSM's growth in recent years… See you in Madrid.


References:

ETSI

Linux Foundation

OVP

How to Plan the Best Server Solution for a Converged Telco Cloud Data Center

During a recent workshop, one question kept popping up: which server make is best to deploy in a telco data center?

As a solution advisor I know that such a question has no fixed answer; the answer depends on the solution, its basis and the customer's long-term business and network strategy. Still, customers expect an explanation in terms of experience from similar projects: how do you actually make such a comparison, especially for converged IT and telco data centers?

To start with, for this scenario in the carrier business we have two candidate solutions: rack servers and blade servers. The first thing to look at is what the NFV and enterprise industry has done to date. In NFV we normally use 42U racks populated with blade servers such as the HP BladeSystem c7000 with Gen8/Gen9 blades and the Huawei E9000; the Dell M630 is also a blade server.

Technically, we all know both server types are COTS x86 machines, so either should be usable; why, then, has the industry leaned toward adopting blade servers? After detailed research and comparison of the benchmarks, I can summarize the selection based on the following items.

1. Performance: in a telco cloud, performance is the killer criterion, because the customer will only adopt the solution if it is the same as or better than the legacy one. We all know that blade server processing, in terms of v3 CPU cores, disk and obviously DRAM, is much faster than rack. Although advocates of rack servers argue that new technology makes expansion and upgrades easy, this is balanced by the space, power and cable-maintenance issues of rack server deployments.

2. Cost: there is no doubt that rack servers were mainly intended to support mass deployment in data centers that require less per-node performance and quality, for example Google Cloud, Amazon web hosting and Facebook. For these data centers it is more pragmatic to expand bay by bay or in complete racks to gain a cost advantage; this is one area where rack servers certainly do better than blade servers.

3. Scaling: a telco cloud must be hyper-scalable, meaning we can grow by server or by module without complexity. Compute node expansion is possible in a scalable way for both server types, but the O&M and installation complexity of the two is not the same. Rack servers need more cabling and more artifacts to get the job done, so for me they fall short on the promise of scaling.

4. Energy: in a big data center, saving power matters most. Comparing solutions from different vendors, we found that blades are more compact. Data center engineers track this with PUE (Power Usage Effectiveness), the ratio of total facility power to the power delivered to the IT equipment itself, and in the telco case it is far better with blades (a rough calculation is sketched after this list).

5. Infrastructure cost: blades are best from a form-factor point of view, with shared power, switching and disk, which is one reason they need less cabling than rack servers.

6. Solution unification: all solution architects know that in a complex ICT world the big carriers are trying to consolidate the IT and telco domains into a unified solution. A high initial CAPEX is acceptable if it promises large OPEX reductions over the TCO cycle. Since blade servers offer certain optional features and functions that racks do not, it is reasonable to deploy blade servers rather than selecting multiple types of rack servers. The OpenStack EPA (Enhanced Platform Awareness) case illustrates this: the idea of solution unification fits well for NFV when we use blade servers.
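
To make the energy point concrete, here is a toy calculation (all wattages, node counts and the resulting ratios are invented placeholders rather than measured vendor figures):

```python
# Sketch: comparing two hypothetical configurations on PUE and power per
# compute node. All numbers are invented placeholders, not vendor data.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical blade chassis: shared PSUs and fans reduce facility overhead.
blade = {"it_kw": 10.0, "facility_kw": 13.0, "nodes": 16}
# Hypothetical rack of 1U servers: more discrete PSUs, cabling and cooling.
rack = {"it_kw": 10.0, "facility_kw": 15.0, "nodes": 10}

for name, cfg in [("blade", blade), ("rack", rack)]:
    print(f"{name}: PUE={pue(cfg['facility_kw'], cfg['it_kw']):.2f}, "
          f"facility kW per node={cfg['facility_kw'] / cfg['nodes']:.2f}")
```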

I hope these benchmarks will help you plan your data centers, but please note that I am an enthusiastic writer and blogger. The comments and analysis in this paper are Sheikh's own, based on industry research and personal understanding; they do not represent any official position of my employer, Huawei.

Secondly, there are many white-box vendors and the markets for both rack and blade servers are huge. You may encounter situations where the blade under analysis is worse than its rack counterpart, so this review is most relevant if you are planning a telco cloud, because that is my main focus area…