Australia continues to innovate in 5G across Applications and Industry


How 5G will change the geopolitical map and economy in the post-Covid-19 world: some key initiatives we can learn from Australia's 5G ramp-up

1. The Australian Federal Government is to invest more than $21M to boost industrial use of 5G across Australia


2. The key industries the Government wants to harness through this grant are agriculture, mining, logistics and manufacturing.

3. Attracting top development talent is a key target as 5G comes to life, with a third of Australians already covered by 5G


4. More than $8.1M will go to spectrum improvements, including adoption of DSS (Dynamic Spectrum Sharing) and the launch of the 26 GHz mmWave band

5. The Government plans at least 1 GHz of spectrum availability for uRLLC (above the 279 MHz in the US and 300 MHz in Western Europe)


6. Government smart-rollout initiatives plan to increase 5G throughput from 300 Mbps to above 1 Gbps by 2021+

With the 5G iPhone launching this month, I am optimistic in every sense that Australia will remain one of the leading markets for 5G.



#MyAustralia



Application Aware Infrastructure Architecture of Future Enterprise and Telecom Networks


An architect’s perspective in the 2020+ era


The recent global situation, and the use of critical telecom infrastructure and security solutions in the cloud, has shown even the critics how vital once-esoteric terms like Hybrid Cloud, AI, Analytics and modern applications are to moving society and the economy forward.

Watching the latest developments, and having actively joined the community in both infrastructure and application evolution in the Telco and Enterprise worlds, I can safely conclude that the days when infrastructure was engineered or built to serve application requirements are over. On the contrary, with the wide adoption of application containerization and Kubernetes as the platform of choice, the future direction is to design or craft applications that can best take advantage of a standard cloud infrastructure.

Understanding this relationship is the key impetus separating the businesses that will fly from those that will crawl to serve the ever-moving parts of the ecosystem, namely the applications.


Source: Intel public

In this paper let us investigate some key innovations in infrastructure, both physical and cloud, which are shifting the industry's Pareto share from applications to infrastructure and thereby enabling developers to code and innovate faster.

Industry readiness of containerized solutions

The adoption of microservices and application standardization around the 12-factor app, introduced by cloud pioneer Heroku in 2012, gave birth to an entirely new industry that has matured far more quickly than virtualization did. A brief description of how it is impacting the market and industry can be found in Scott Kelly's paper on the Dynatrace blog. This innovation is based on standardization of cloud-native infrastructure and CNCF principles around Kubernetes platforms, aimed at the following key points.

Scalability

Covid-19 has proved that if there is a single capability necessary for a modern-era business to survive, it is scalability. In recent weeks we have seen millions of downloads of video-conferencing applications like Zoom, Webex and BlueJeans, and similarly a surge in demand for services in the cloud. Obviously, it would have been an altogether different story if we were still living in the legacy Telco or IT world.

Source: https://www.linkedin.com/pulse/effect-covid-19-work-from-home-enterprise-traffic-your-amit-sinha/

Immutable but Programmable

On every new deployment across the LCM of an application, the services are deployed on new infrastructure components, with all of this managed via an automated framework. Although containers in the Telco space do require stateful and somewhat mutable infrastructure, the beauty is that the infrastructure keeps state out of its core, managed at the application and third-party level, ensuring easy management of the overall stack.
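As a minimal sketch of this immutable-but-programmable pattern (using the official Python kubernetes client; the deployment name, namespace and image below are hypothetical), a new image version is rolled out by patching the Deployment, and Kubernetes replaces the running pods with fresh instances rather than mutating them in place:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when run in-cluster
    apps = client.AppsV1Api()

    # Declare the new desired state; the rollout replaces pods with new,
    # immutable instances instead of changing running ones in place.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "demo-cnf", "image": "registry.example.com/demo-cnf:2.0"}]}}}}
    apps.patch_namespaced_deployment(name="demo-cnf", namespace="default", body=patch)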

Portable with minimum Toil

Portability and ease of migration across infrastructure PoPs is the most important benefit of lifting applications into containers; in fact, the evolution of hybrid clouds is the byproduct businesses can reap by ensuring application portability.

Easy Monitoring and Observability of Infra

There is major innovation happening at the chipset level, in network chips (ASICs) and in NOS technologies such as P4; however, the current state of infrastructure does not allow applications and services to fully capitalize on these advantages. This is why there are many workarounds and much complexity around both application assessment and application onboarding in current network and enterprise deployments.

One good example of how container platforms are changing the business experience of observability is Dynatrace, which allows code-level visibility, layer mapping and digital-experience monitoring across all hybrid clouds.


Source: dynatrace

Composable Infrastructure

There is already a link from platform to infrastructure that will support delivery of workloads with different requirements over a shared infrastructure. Kubernetes as a platform is already architected to fulfill this promise, but it requires further enhancements in hardware. The first phase of this enhancement is HCI: our recent study shows that in a central DC the use of HCI will save CAPEX by 20% annually. The further introduction of open hardware, and the consolidation of open hardware and open networking explained later in this paper, will mean services can be built, managed and disposed of on the fly.

From automated Infrastructure to Orchestrated delivery

Infrastructure and network automation is no longer a myth, and there are many popular frameworks to support it, like Ansible, Puppet, Jenkins and Cisco DevNet.

However, all those who work on IT and Telco application design and delivery will agree on the cumbersomeness of both application assessment/onboarding and application management with little infrastructure visibility. This is because the mapping between application and infrastructure is not automated. The global initiatives of both the OSC and SDOs prevalent in the TMT industry have primarily focused on orchestration solutions that leverage the advantages of the infrastructure, especially AI/ML-driven chipsets, and enable this relationship to solve business issues by ensuring true decoupling between application and infrastructure.


Although the reader may say that platforms like Kubernetes have played a vital part in this move, it simply could not happen without taking advantage of the physical infrastructure. For example, orchestration on the IT side (primarily driven by K8s) and on the Telco side (primarily driven by initiatives like OSM and ONAP) relies on the infrastructure to execute all the pass-through and acceleration required by applications to fulfill business requirements.

In fact, the Nirvana state of automated networks is a more cohesive and coordinated interaction between application and infrastructure, under the closed-loop instructions of an orchestrator, to enable delivery of Industry 4.0 targets.

Benefiting from the Advantages of the Silicon

The advantages of silicon were, are and will be the source of innovation in the Cloud and 5G era. When it comes to the role of hardware infrastructure in the whole ecosystem, we must look to capitalize on the following.


The changing role of Silicon Chips and Architectures (x86 vs ARM)

The Intel and AMD choices are familiar to many data-center teams. In data centers where performance is critical, the Intel Xeon family still outperforms AMD, whose advantages of a smaller footprint (7nm process) and better core/price ratio have not yet built a rationale to select them. Another major factor supporting Intel is their strength in 5G, Edge and AI chips, for which AMD has so far failed to bring a comparable alternative. The most important drawback, as the author views it, is basically the sourcing issues and global presence that make big OEMs/ODMs prefer Intel over AMD.

However, the hi-tech industry's fight to dominate the market with multiple supply options, especially during the recent US-China trade conflict, has put the TMT industry in the tough position of considering non-x86 architectures, something obviously no one likes while the ecosystem is immature. The author believes an irrational selection will mean future businesses may not be able to catch the advantages coming from disruptors and open industry initiatives like ONF, TIP, O-RAN etc.

The following points should be considered while evaluating:

  1. Ecosystem support
  2. Use cases (the architecture that supports the most should win)
  3. Business-case analysis weighing performance against high density (except for Edge and C-RAN, Intel obviously beats ARM)
  4. Aggregate throughput per server
  5. NIC support, especially FPGA and SmartNIC (obviously, Intel has the preference here)
  6. Cache and RAM: over the years Intel has focused more on RAM and RDIMM innovation, so on the cache side ARM arguably has an edge and should be evaluated; however, the fact that not all use cases require it makes this a less distinct advantage
  7. Storage and cores: this will be a key differentiator, yet we find neither vendor is strong in both, and their fixed ready-made configurations mean we have to compromise one over the other. This will be the decisive point for future silicon architecture selection
  8. Finally, the use of in-built switching modules in ARM, bypassing the TOR/spine architecture in data centers entirely, may win over proponents of the pre-data-center-architecture era, but the promise of in-built switching at scale is not well tested. It may be a good architecture for dense edge deployments but, in my view, is not recommended for large central data centers

However, quantitative judgement alone is not enough: Intel's excessive dominance has meant they do not deliver the design cadence expected by business, which has obviously opened the gates for others. It is my humble belief that in the 5G and Cloud era, at least outside the data centers, both Intel and ARM will see deployments and will need to prove their success in commercial deployments, so you should expect to see both Xeon® and ARM-based silicon such as Exynos.

FPGAs, SmartNICs and vGPUs

Software architecture has recently moved from C/C++/JS/Ruby to more disruptive Python/Go/YAML schemes, primarily driven by the business push to adopt the cloud. Business is addressing these challenges by demanding more and more x86 compute power, yet improving efficiency is equally important. As an example, we tested the Intel PAC N3000 SmartNIC family for a long time to validate power and performance requirements for throughput-heavy workloads.

Similarly, video will be a vital service in 5G, but it will require SPs to implement AI and ML in the cloud. Engineered solutions of Red Hat OSP and OpenShift with NVIDIA vGPU mean that data processing which was previously only possible in offline analytics, using static data sources such as CEM and wire filters, can now move toward real-time pipelines.


Source: https://developer.nvidia.com/gtc/2020/video/s22106

Envisaging future networks that combine the power of all hardware options (silicon chips, FPGAs, SmartNICs, GPUs) is vital to solving the most pressing business challenges we have been facing in the Cloud and 5G era.

Networking Infrastructure


There is no doubt networking has been the most important piece of infrastructure, and its importance has only increased with virtualization, with a further ten-fold increase with containers, primarily as data centers fight to deliver the best solutions for east-west traffic. Although there are a number of SDN and automation solutions, their performance at scale has really shifted the balance toward infrastructure: more and more vendors are now betting on the advantages of ASICs and NPUs not only to improve forwarding-plane performance but also to make the whole stack, including fabric and overlay, automated and intelligent, fulfilling the IDN dream by using the latest Intel chips that come with inherent AI and ML capabilities.

The story of how hardware innovation is bringing agility to networks and services does not end here; for example, the use of SmartNICs and FPGAs to deploy SRv6 is a successful business reality today, converging compute and networking around a shared, common infrastructure.

Central Monitoring

Decoupling, pooling and centralized monitoring are the targets to achieve. With so many solutions that are quite different in nature (for example, fabric versus overlay on the networking side), the aim is to harmonize them through the concept of single-view visibility. This will mean that when an application demands elasticity, hardware does not need to be physically reconfigured; more compute power, for instance, can be pulled from the pool and applied to the application.
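As a toy illustration of that pooling idea (all names and numbers are invented), elasticity becomes a scheduling decision over a shared pool rather than a hardware change:

    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_vcpus: int

    def scale_up(pool, app, needed_vcpus):
        """Pull compute from the shared pool instead of reconfiguring hardware."""
        for host in sorted(pool, key=lambda h: h.free_vcpus, reverse=True):
            if host.free_vcpus >= needed_vcpus:
                host.free_vcpus -= needed_vcpus
                return f"{app}: +{needed_vcpus} vCPUs on {host.name}"
        raise RuntimeError("pool exhausted - raise a capacity alarm")

    pool = [Host("compute-01", 12), Host("compute-02", 4)]
    print(scale_up(pool, "video-cdn", 8))  # -> video-cdn: +8 vCPUs on compute-01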

From Hyperscalers to Innovators

The dominance of hyperscalers in the cloud is well known, but recently there have been further movements disrupting the whole chain. For example, ONF Open EPC can now be deployed on an OCP platform. Similarly, the TIP OpenRAN initiative is changing the whole landscape to imagine something that was not even in discussion a few years ago.

Since the ONF is focused primarily on software and the advantages brought by NOS and P4 programming, I think it is important to talk about OCP. The new innovations in rack design and open networking will define new compute and storage specifications that best meet unique business requirements. Software for Open Networking in the Cloud (SONiC) was built using the SAI (Switch Abstraction Interface) switch-programming API and has been adopted, unsurprisingly, by Microsoft, Alibaba, LinkedIn, Tencent and more. The speed at which adoption is taking place is phenomenal, and new features are being added to the open-source project all the time, like integration with Kubernetes and configuration management.

Summary review

Finally, I am seeing a new wave of innovation, and this time it is coming via the harmonizing of architecture around hardware, thanks to the efforts of the last few years around Cloud, OpenStack and Kubernetes. However, these types of initiatives will need more collaborative effort between OSCs and SDOs, e.g. the TIP and OCP projects, harnessing the best of both worlds.

However, with the proliferation of so many solutions and offerings, standardization and alignment on common definitions of specs for the shared infrastructure is very important.


Source: Adva

Similarly, to ensure innovation delivers on its promise, the involvement of the end-user community will be very important. Streams like LFN CNTT, ONAP, ETSI NFV, CNCF and GSMA TEC require operator-community-wide support and involvement, to move from the clumsy NFV/Cloud picture of the last decade to a truly innovative picture of network and digital transformation. A balanced approach from the Enterprise and Telco industries will allow the businesses of today to become the hyperscalers of tomorrow.

I believe this is why, after a break, this is the topic I selected to write about. I am looking forward to any comments and reviews that can benefit the community at large.

Annex

The comments in this paper do not reflect any views of my employer; they are solely my analysis based on my individual participation in industry, with partners and business at large. I hope sharing this information with the larger community is the way to share, improve and grow. The author can be reached at snasrullah@swedtel.com

 

How Open Orchestration (OSM Release 7) enhances Enterprise, 5G, Edge and Containerized applications in Production


Source: ETSI <www.etsi.org>

 


An architect’s perspective from ETSI® the Standards People

 

As highlighted in Heavy Reading's latest End-to-End Service Management for SDN & NFV, all the major Tier-1 Telcos are currently refining their transformation journeys to bring standard orchestration and service modeling into their networks. One such standard approach is promised by ETSI OSM, a seed project from ETSI®, the standards people.

In Q4 2019 ETSI OSM delivered Release SEVEN, which addresses the surmounting challenges of bringing CNFs and containerized applications to production: ETSI OPEN SOURCE MANO UNVEILS RELEASE SEVEN, ENABLES MORE THAN 20,000 CLOUD-NATIVE APPLICATIONS FOR NFV ENVIRONMENTS.

This capability of ETSI® OSM is especially important considering the ushering in of 5G SA architectures and solutions, which are already finding their way to market thanks to early work from CNCF and specifically the CNTT K8s specs. OSM brings value to the picture as it allows operators to design, model, deploy and manage CNFs (what ETSI NFV calls a containerized VNF) without any translation or re-modeling. It also lets operators experience an early commercial use case of integrating Helm 2.0 in their production environments. On top of that, it allows an NS (Network Service) to combine CNFs with existing VNFs or legacy PNFs to deliver complex services in an easy-to-deploy and manageable manner.

In the remainder of this paper I will share my understanding of OSM Release 7 and sum up results from the ETSI OSM webinar on this subject held on Jan 16th, 2020. For details you may refer to the webinar content itself, which can be found at https://www.brighttalk.com/webcast/12761/380670

Why Kubernetes is so important for Telco and Enterprise

The Telco industry has experienced many pain points in the way the NFV journey was steered, with its focus on migrating existing PNFs to the cloud. K8s offers an opportunity for all platform providers, application vendors and assurance partners to build something on the modern principles of microservices, DevOps and open APIs. This has already made its way into Telcos' OSS and IT systems; as examples, MYCOM OSI UPM, OSM and in fact ONAP are all already based on Kubernetes. The arrival of 5G SA and uCPE branch use cases has driven almost all operators to adapt their networks to use Kubernetes, and it is principally agreed that as CSPs move to the Edge, K8s will be the platform of choice.

Foundation for K8S Clusters

Kubernetes makes it simple for applications and CNFs to use APIs in a standard fashion through K8s clusters, which are deployed either in an open-source manner or via distros. The early adoption of CNFs in Telco largely favors the consumption model of vendor distros like Red Hat OpenShift, VMware PKS and Ericsson CCD, to mention the most important ones.

Since containers are like floating VMs, the networking architecture, especially that promised by L3 CNI plugins such as Flannel, is an important direction to be supported in platforms, as it is in OSM.

The reusability of the API makes it simple to craft a unique application in the form of build/configuration files, using the artifacts of Pod, Service, cluster, ConfigMap and PersistentVolume, all defined in a very standard manner in K8s; by this I mean you can deploy all artifacts through a single file.
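A minimal sketch of that single-file deployment, using the official Python kubernetes client (the file name is hypothetical; it would hold the Deployment, Service and ConfigMap manifests of one application):

    from kubernetes import client, config, utils

    config.load_kube_config()
    k8s = client.ApiClient()

    # One call creates every artifact defined in the (hypothetical) file:
    # Deployment, Service, ConfigMap, PersistentVolumeClaim and so on.
    utils.create_from_yaml(k8s, "all-in-one.yaml", namespace="default")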

ETSI® OSM itself can be deployed using both Helm 2.0 and Juju charmed bundles.


Foundation for Helm

Helm gives teams the tools they need to collaborate when creating, installing, and managing applications inside Kubernetes. With Helm, you can:

  • Find prepackaged software (charts) to install and use
  • Easily create and host your own packages
  • Install packages into any Kubernetes cluster
  • Query the cluster to see what packages are installed and running
  • Update, delete, rollback, or view the history of installed packages

Helm makes it easy to run applications inside Kubernetes. For details please refer to the Helm package documentation at https://helm.sh/blog/helm-3-released/

In a nutshell, all the Day-1 and Day-2 tasks required for CNFs are made possible using Helm and its artifacts, known as Helm charts, including application primitives, network connectivity and configuration capabilities.
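To make the Day-1/Day-2 split concrete, here is a small sketch that drives the real helm CLI from Python (Helm 3 command syntax; the release and chart names are hypothetical):

    import subprocess

    def helm(*args):
        """Run a helm CLI command and return its stdout (Helm 3 syntax)."""
        return subprocess.run(["helm", *args], check=True,
                              capture_output=True, text=True).stdout

    helm("install", "demo-cnf", "repo/demo-chart")   # Day-1: initial deployment
    helm("upgrade", "demo-cnf", "repo/demo-chart")   # Day-2: config change/upgrade
    print(helm("history", "demo-cnf"))               # audit the release history
    helm("rollback", "demo-cnf", "1")                # Day-2: roll back to revision 1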

Key Features of OSM Release7

OSM Release 7 is carrier-grade; below are its key features as per the project wiki.

  • Improved VNF Configuration interface (One stop shop) for all Day0/1/2 operations
  • Improved Grafana dashboard
  • VNFD and NSD testing
  • Python3 support
  • CNF support with both options: OSM creates the K8s cluster itself, or relies on OEM tools to provision it
  • Workload placement and optimization (Something very important for Edge and Remote clouds)
  • Enhancement in both Multi VIM and Multi SDN support
  • Support for Public Clouds

How OSM handles deployment of CNF’s

For most Telco folks, this is the most important question: how will the VNF package be standardized with the arrival of CNFs? Will it mean a totally new package, or an enhancement of the existing one?

Fortunately, OSM's approach here is to model the application in a standard fashion, which means the same package can be enhanced to reflect a containerized deployment. At the NS level it can flexibly interwork with VNFs/PNFs as well. The deployment unit used to model CNF-specific parameters is called a KDU (Kubernetes Deployment Unit); the other major change is the K8s cluster under resources, which is important because it describes the networking and the related CNI interfaces.
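A simplified sketch of what this looks like in the descriptor (shown here as a Python dict; field names follow the public OSM examples, but this is illustrative, not the full Release 7 schema):

    # A containerized VNF descriptor, reduced to the CNF-specific parts:
    # the KDU points at a Helm chart, and the descriptor declares the
    # K8s cluster network it attaches to.
    vnfd = {
        "id": "ldap_knf",                       # hypothetical package id
        "kdu": [
            {"name": "ldap", "helm-chart": "stable/openldap"},
        ],
        "k8s-cluster": {"nets": [{"id": "mgmtnet"}]},
    }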

OSM can deploy the K8s cluster itself via API integration, or rely on 3rd-party tools like OpenShift® or PKS to deploy it on OSM's instructions.

Changes to NFVO interfaces

Just as Or-Vi is used for infrastructure integration with orchestration, Helm 2.0 (with 3.0 support coming in the near future) is used for integration with K8s applications. Since the NBI supports mapping of KDUs in the same NSD, the only changes from an orchestration point of view are on the southbound side.

Workload Placement

As per the latest industry standing and experience sharing at KubeCon and Cloud Native Summit Americas, there is growing consensus that the container is the platform of choice for the Edge, primarily due to its robustness, operational model and lighter footprint. As per our experience with containers here at STC, a 40% reduction in both CAPEX and footprint can be realized in DCs if the Edge is deployed using containers.

However, the business definition of the Edge raises a number of questions, the most important being workload identification, placement and migration, especially considering that the Edge has a lighter footprint yet in future will host carrier mission-critical applications.

Optimization of the Edge from a CSP perspective has to address the following: the cost of compute in NFVI PoPs; the cost of connectivity and VNFFGs (implemented by SFCs); and constraints on the service such as SLA, KPI and slicing. A toy placement sketch follows.
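As a toy placement model of those three factors (all costs, latencies and SLAs below are invented), the orchestrator's decision reduces to picking the cheapest feasible PoP:

    # Pick the cheapest NFVI PoP that still meets the slice latency SLA.
    pops = [
        {"name": "central-dc", "compute_cost": 1.0, "latency_ms": 25},
        {"name": "edge-01", "compute_cost": 2.5, "latency_ms": 4},
        {"name": "edge-02", "compute_cost": 2.2, "latency_ms": 6},
    ]

    def place(sla_ms):
        feasible = [p for p in pops if p["latency_ms"] <= sla_ms]
        if not feasible:
            raise ValueError("no PoP satisfies the SLA")
        return min(feasible, key=lambda p: p["compute_cost"])

    print(place(10)["name"])    # uRLLC-style SLA -> edge-02
    print(place(50)["name"])    # relaxed eMBB SLA -> central-dc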


The issues with upgrades and how OSM addresses them

Compared to early releases, the OSM ns-action primitives allow a CNF to be upgraded to the latest release and allow both dry-run and Juju tests to be executed, ensuring the application's performance benchmark stays the same as before. Although this works best for small applications like LDAP, the same is difficult to achieve with more complex CNFs like 5G SA. Through liaison with the LFN OVP program I am sure the issue will be addressed soon; as an operator we have a plan to validate it on 5G SA nodes.


My final thought is that the container journey for CSPs is already a reality arriving in 2020+, and the OSM ecosystem supports the commercialization of CNFs through early use cases of 5G SA, enterprise branch uCPE and, most importantly, the Edge including MEC, for which OSM seems to be reaching maturity. For details, and to participate, do get involved in the upcoming OSM-MR8 Hackfest in March.

Many thanks to my colleagues, mentors and industry collaborators Jose Miguel Guzman, Francisco Javier Ramón Salguero, Gerardo García and Andy Reid for OSM's growth in recent years... See you in Madrid.


References:

ETSI

Linux Foundation

OVP

Evolving your Network in the Cloud Era: the Introduction of SDN in your Network


The Introduction of SDN in your Network

The design of Underlay and the route convergence is key whenever you plan to evolve a legacy NFVI network to SDN.

With Juniper Contrail you should consult the community reference architecture to see how OVS VxLAN defines the baseline and avoids all east-west traffic definitions in the underlay. The two figures below give the principles of the underlay and the way forward; a packet-level sketch of the VXLAN encapsulation follows the walkthrough.

  1.  VM1 sends an ARP Request packet to request VM3’s MAC address.
  2. After receiving the ARP Request packet, VTEP1 searches the ARP table for VM3’s MAC address, and sends an ARP Reply packet for VM3 to VM1.
  3. After VM1 sends data packets to VM3, VTEP1 searches the local MAC forwarding table. After the packets match the VXLAN tunnel table, VTEP1 encapsulates the packets into VXLAN packets and then finds the mapping Layer 2 VNIs based on the BDs of the packets. VTEP1 then uses the Layer 2 VNIs as the VXLAN VNIs and forwards the packets to VTEP2.
  4.  After receiving the data packets, VTEP2 decapsulates them, searches for the destination MAC address in the local MAC forwarding table, and then forwards the packets to VM3.
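A packet-level sketch of step 3's encapsulation, built with scapy (all MAC/IP addresses and the VNI are invented for illustration):

    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner frame: VM1 -> VM3, as the VMs see it
    inner = Ether(src="00:00:00:aa:00:01", dst="00:00:00:aa:00:03") / \
            IP(src="10.0.0.1", dst="10.0.0.3")

    # VTEP1 wraps it: outer IP between the two VTEPs, UDP port 4789,
    # and the VXLAN header carrying the Layer-2 VNI of the bridge domain
    vxlan = Ether() / IP(src="192.168.1.1", dst="192.168.1.2") / \
            UDP(dport=4789) / VXLAN(vni=5000) / inner
    vxlan.show()   # VTEP2 strips the outer headers and forwards to VM3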

Firewalls manage and control traffic generated in communication within a VPC, between VPCs and external networks such as the Internet, MPLS VPNs, and private lines, and between public clouds and tenants’ private clouds through IPsec VPN.

During lab validation we used BGP as the routing protocol for both the underlay and the overlay.

For a production environment, however, it is very important to consider the scaling of VTEPs, although having the VTEP inside OVS has no impact on scale.

It is better to have the VTEP as close to the source/destination of traffic as possible, as you minimize the number of intermediate forwarding elements. You can see that most SDN solutions opt for a VTEP inside the hypervisor/OVS even when they could do it on a TOR (Contrail, ACI, Nuage). And there is no impact on O&M, since software VTEPs are not supposed to be managed by cloud operators but are instead programmed automatically by the VIM's networking driver (e.g. Neutron, NSX). The functionality performed by software VTEPs is also quite simple (L2-L4 forwarding), so they are usually thoroughly tested and "just work". Where I work we have done a couple of relatively big Telco SDN DC networks (~500 compute nodes) with software VTEPs and didn't have any problems with that approach. When they do break, however, troubleshooting software VTEPs is quite complicated and is usually done by the SDN vendor's TAC. The only serious disadvantage of a software VTEP is performance.

There are several solutions that we’ve implemented to boost the performance:

1) SRIOV

2) DPDK

3) VXLAN hardware offload

4) hw VTEP on TOR

...and there is plenty more that we haven't tried.

Delivering Five-Nines Security for Mission-Critical 5G Systems


“Can an Open Cloud Based System be more secure for Mission Critical Applications”


So finally, the frenzy of 5G networks and how they will bridge the gaps between different industries and societies seems to be materializing. While most Tier-1 operators are working to build the use cases that will support early launch and act as market-capture catalysts for early movers, the area of 5G security still seems gloomy, with far fewer detailed standards output by ETSI and other SDOs compared to 5G technology itself.

There are many questions in the air that need to be addressed, both from an architecture point of view and from an end-to-end working-solution perspective. For example:

1. Is 5G security the same as, or in conflict with, NFV/SDN security?

2. How will operators develop a unified solution that can meet requirements from all industries?

3. If a standard solution exists, will it scale? Or, 2-3 years down the road, will we need to live with a lot of customized solutions that are difficult to assure?

4. What about solution relevance in open-source networks with many players around?

5. Finally, how do we address cyber-security dilemmas in 5G Telco networks?

6. Will end-user privacy be a make-or-break decision in 5G?

I think this list gives enough of the challenges faced by 5G and the verticals; in this paper I shall try to build a high-level model to address them in a unified UML model.

  In a world where computing is ubiquitous, where a mist of data and devices diffuses into our lives, where that mist becomes inseparable— indistinguishable—from reality, trustworthy computing is but axiomatic. ( David James Marcos /NSA)

Before digging deep to formulate the security architecture, we should understand at a high level that 5G system security will no longer be like 4G security, because no single domain (traditionally the Core or the UE) can promise a complete security solution. The enigma of 5G security is huge, involving device malware, MitM, low-cost devices, air-interface jamming, frequency scanning, backhaul DDoS, packet sniffing, NFV and virtualization vulnerabilities, API issues, network security, and VNF application, platform and IP vulnerabilities. Hence we should analyze the 5G system in depth from a whole-system perspective, looking at the following important dimensions.

1. Decentralized Architecture: The biggest problem ahead is that Telco networks are programmed to work, not the other way around; they do not predict, and obviously do not extrapolate to, the scale of issues 5G will face. This is an architecture issue because, while in 3G/4G the source of security sits in the Core network, and in NFV/SDN it seems to live in the platform, for 5G planning a single control unit to handle and process all data seems impossible. But if we decentralize, how do we control it? We cannot decentralize without controlling it, and how do we control a device we do not trust? I think 5G must adopt a model like blockchain in the banking sector, sharing security in a trusted manner while avoiding a single point of failure due to the compromise of one unit or layer.

Understanding the 5G system architecture, how it will influence the migration of present Telco services, and how it can create a thriving ecosystem is a key area of interest for the architect. There are different dimensions: first, we need to understand that 5G is based on an SBA architecture, which requires the whole network to be separated from the infrastructure, making NFV/SDN an almost inevitable enabler. It will allow the deployment of a network slice to support each use case separately. How to model one solution, and whether it can be customized for each offering, is currently a key area of discussion in ETSI.


 

2. Resource demarcation: This is a scary topic because IMT-2020 already divides the network into three domains as per latency and use-case requirements. The dilemma is that different RF resources need to map to different NFV/SDN DC resources in the cloud, and planning this mapping at the scale 5G will face is the real problem; in a broad sense, a separate multi-RAT allocation for each slice may not be the right approach.

3. 5G Network Threat Model extension: Hosting VNFs which are the source or sink of user workloads, like DNS, AAA and IPAM, is the easy use case; but introducing middle-box VNFs like AS, control-plane and media boxes means we need to introduce Telco concepts like multi-homing, active/standby architectures and CSLB, on top of complex dependencies on IT network redundancy like bonds and bridges, which makes security a big concern. Obviously, introducing such a disparate solution means the security threat boundary will extend beyond what was originally supposed.


4. 5G Security Framework for the 5G SA System: I will not go into the details here because an expert buddy has just done it perfectly; watch the Hitchhiker's guide here: https://www.linkedin.com/pulse/hitchhikers-guide-5g-security-special-edition-junny-song/

However, I do want to summarize a bit. The 5G Rel-15 specifications consider EN-DC (E-UTRAN New Radio Dual Connectivity) as the de facto standard for 5G security, at least in 2018 or, let's say, until H1 2019. The reason is obvious: the final standalone security specification TS 33.501 will freeze in Dec 2018 (http://www.tech-invite.com/3m33/tinv-3gpp-33-501.html#toc). EN-DC security is important, yet not very difficult to embrace, because it is based on the existing LTE security specification, TS 33.401, with EN-DC enhancements as shown below.


http://www.3gpp.org

The good news about EN-DC is that it works almost the same way as LTE-DC: the concepts of key generation, key management, ciphering and integrity protection are re-used from the LTE-DC concept, while the DRB (Data Radio Bearer) security context is added with regard to the 5G Core network. For EN-DC security, new X2 Information Elements, "SgNB Security Key" and "UE Security Capabilities", are newly defined.
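To make the key-generation idea concrete, below is an illustrative sketch of the generic 3GPP KDF (HMAC-SHA-256, TS 33.220 style) deriving a secondary-gNB key from KeNB and the SCG counter; the FC value and inputs here are placeholders, so consult TS 33.401 for the normative derivation:

    import hmac, hashlib, struct

    def kdf(key: bytes, fc: int, *params: bytes) -> bytes:
        """Generic 3GPP KDF: HMAC-SHA-256 over FC || P0 || L0 || P1 || L1 ..."""
        s = bytes([fc])
        for p in params:
            s += p + struct.pack(">H", len(p))  # each Pi followed by 2-byte length
        return hmac.new(key, s, hashlib.sha256).digest()

    k_enb = bytes(32)                    # dummy 256-bit KeNB for illustration
    scg_counter = struct.pack(">H", 1)   # 16-bit SCG counter
    s_kgnb = kdf(k_enb, 0x1C, scg_counter)   # 0x1C is a placeholder FC value
    print(s_kgnb.hex())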


The figure shows EN-DC bearers and PDCP termination points from the network side: MN is the master eNB and SN is the secondary gNB. If the PDCP/NR-PDCP is terminated in the MN, LTE security applies; on the other hand, if the NR-PDCP is terminated in the SgNB, NR security applies. EEA is redefined as NEA and EIA is now called NIA; as you can guess, NEA and NIA stand for NR Encryption Algorithm and NR Integrity Algorithm.

A good analysis of the 5G security protocol can be found at https://www.ethz.ch/content/dam/ethz/special-interest/infk/inst-infsec/information-security-group-dam/research/software/5G_lanzenberger.pdf


•       In 2018, implement the EN-DC architecture, which is almost the same as LTE DC

•       Use the existing USIM, but programming the USIM/UICC needs USIM vendor support

•       5G success depends on eSIM trials, especially for IoT

5. Assuring NFV/SDN security for 5G: 5G is not just a network but a system. It involves a plethora of NFV, SDN and network automation as enablers for 5G to support the future SBA-based architecture. These days, the biggest question we have been discussing in the ETSI ISG security group and in TM Forum is whether network automation is a blessing or a curse for security assurance.

6. Scalable security solution: Historically, the Telco companies and 3GPP must be credited with building a robust security architecture, as reflected in 2G/3G/4G, and the same is expected in 5G, with the one problem that the scale of 5G devices is billions, not millions, and a solution that only expands the Core network and related authentication servers is not enough. It requires the inclusion of distributed security architectures and, above all, IAM solutions which make best use of network API exposure to guarantee security. This means that in future, security-as-a-service will be possible, and an operator can open the network to guarantee whole-system security using the best offerings from third parties. In any case, it will not change the 5G security framework for the 5G SA system explained earlier in this paper.

A scalable solution also means that security can be provisioned per use case in an orchestrated manner, very much like VNF LCM management, where the security policy and test criteria can all be customized per use case and SLA.

7. Security assessment and verification: The 5G system is complex and includes a plethora of technologies. The security contexts of IT, cyber and information security are all added on top of Telco security, but even 3GPP SA3 has not yet finalized the detailed scenarios.

The 5G system is big and complex, and 3GPP SA3 is doing remarkable work to get the standard ready and prototyped before the Rel-16 Stage-3 specs are output in June this year. The main focus areas of this year's SA3 key targets are: 1. Key hierarchy 2. Key derivation 3. Mobility 4. Access Stratum security 5. Non-Access Stratum security 6. Security context 7. Visibility and configuration 8. Primary authentication 9. Secondary authentication 10. Interworking 11. Non-3GPP access 12. Network domain security 13. Service-based architecture 14. Privacy. I hope to refresh this material on the whole of 5G security once I have more visibility based on SA3's work, and once I have more inputs from vendors on exactly how they will approach this critical point in 5G.

References

National Security Agency review of Emerging Technologies

3GPP TR.501

3GPP TS 28.891

3GPP TS 23.799

3GPP TS 28.531

3GPP TS 38.300

ETSI NFV EVE 011

ETSI NFV SOL 003, 004

 

Key Industry Challenges and devising new models for Business enablement using End to End Network Slicing

Network slicing is a concept within 3GPP, and its history can be traced back to R13/R14 with the introduction of static slicing in LTE networks. It is said that the real business case for network slicing will come with the arrival of 5G; although the Rel-15 Stage-3 release of Dec 2017 still misses the complete definition with use-case mapping, it is said that it will be available with Rel-16 Stage 3 coming in Q2 2018. A 5G network must address slicing from a dynamic point of view, where slices can be provisioned, managed and optimized through the orchestrator in real time for different use cases like massive IoT, ultra-reliable communications and enhanced MBB.
But what is the consistent definition of a network slice? The term still means different things to different teams, hence in this paper I will take a deep dive to arrive at a common definition and show how it can be implemented in a consistent manner. In addition, I want to answer one common question: should we delay slicing and wait for 5G Rel-16 Stage 3, or can we start with the enablement of simple slicing scenarios that can be upgraded to 5G as we move?
Frankly speaking, slicing is not something only needed in 5G; prior networks needed and somehow supported it, for one very good reason, the business case, e.g. 4G EPC deployed for the oil industry, corporate bank connectivity, etc. However, in the 4G era there are limitations: only FUP RAN channels or non-shared RAN channels can be used. In a nutshell, only a static slice can be provisioned, defined well ahead of time and mostly based on APN/IMSI. Since 5G is mostly about verticals, 3GPP CT3 is doing a fantastic job defining the details of dynamic network slicing; please refer to 3GPP TR 28.801, which widens the options for runtime slice-instance selection. In 5G, in addition to the DNN (the 5G equivalent of an APN), we can use IMSI + NSSAI, comprising up to 8 S-NSSAI values, for mapping the access session to a slice instance; MANO will also evolve to support an NSSF manager for handling all the SLA/KPI and management parts.
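A toy view of these selection inputs (the SST values 1/2/3 for eMBB/uRLLC/mIoT follow TS 23.501; the SD values and instance names are invented):

    # A UE subscription can carry up to 8 S-NSSAI values; a session request
    # maps its requested S-NSSAI (and DNN) to a network slice instance.
    SST = {"eMBB": 1, "uRLLC": 2, "mIoT": 3}

    subscribed_nssai = [(SST["eMBB"], "0x0000A1"), (SST["uRLLC"], "0x0000B2")]
    slice_instances = {
        (SST["eMBB"], "0x0000A1"): "nsi-embb-01",
        (SST["uRLLC"], "0x0000B2"): "nsi-urllc-01",
    }

    def select_slice(requested):
        if requested not in subscribed_nssai:
            raise PermissionError("S-NSSAI not in subscription")
        return slice_instances[requested]

    print(select_slice((SST["uRLLC"], "0x0000B2")))   # -> nsi-urllc-01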
Hence we will try to analyze the evolution of network slicing as we evolve toward future, software-defined networks. A detailed understanding requires looking at the whole concept along the following dimensions.
1. Understanding Slice requirements from TR22.891
  • The operator shall be able to create and manage network slices that fulfil required criteria for different market scenarios.
  • The operator shall be able to operate different network slices in parallel with isolation that e.g. prevents data communication in one slice from negatively impacting services in other slices.
  • The 3GPP System shall have the capability to conform to service-specific security assurance requirements in a single network slice, rather than the whole network.
  • The 3GPP System shall have the capability to provide a level of isolation between network slices which confines a potential cyber-attack to a single network slice.
  • The operator shall be able to authorize third parties to create, manage a network slice configuration (e.g. scale slices) via suitable APIs, within the limits set by the network operator.
  • The 3GPP system shall support elasticity of network slice in term of capacity with no impact on the services of this slice or other slices.
  • The 3GPP system shall be able to change the slices with minimal impact on the ongoing subscriber’s services served by other slices, i.e. new network slice addition, removal of existing network slice, or update of network slice functions or configuration.
  • The 3GPP System shall be able to support E2E (e.g. RAN, CN) resource management for a network slice.
Figure  1 Understanding Slice Requirements in NFV/SDN enabled Future Networks
It seems clear that even if the network slicing standard is not locked down, the existing NFV/SDN architecture can be used to enable it, at least as an overlay providing resource isolation at the tenant level.
2. Understand Network Slicing from NFV/SDN Point of view
NFV/SDN enables agile delivery of services using multi-tenant provisioning on a common NFVI. One basic slicing concept is to provision multiple VNFs for different use cases: for massive IoT we can have a lightweight C-SGN combining both control and user plane, while for uRLLC it can mean distributed VNFs using a CUPS architecture to deliver the real experience at the Edge. Even for eMBB it can mean the end service is delivered by avoiding vFW or other NEs in order to build a big pipe for video and live-broadcast use cases. Some new functions in Rel-15/Rel-16 can be used to deliver slices from a Telco point of view; DECOR, for instance, is considered an early enabler for slicing, but it obviously does not come with the end-to-end provisioning and monitoring/management functions that can only be promised in 5G.
Figure  2 Understanding Slicing context in NFV/SDN
The figure above is a good reference for understanding network slicing. For those familiar with NFV/SDN, the key change is the addition of the network slice layer. This layer may be part of the NSD/VNFFG or exist as an independent layer. Since one service can comprise multiple slices as needed, this layer is required even where a 1:1 relation exists; it is an abstraction layer offering flexibility.
Services ⇒ a business service, something which will be offered to the end customer
Network Slice Instance (NSI) ⇒ a collection of resources from the layers below that defines a slice
Network Slice Subnet Instance (NSSI) ⇒ consider it a group of related VNFs/PNFs
Both the NSI and NSSI are delivered as part of the NSD. Hence a Network Slice Instance (NSI) may be composed of none, one or more Network Slice Subnet Instances (NSSIs), which may be shared by another NSI. Similarly, the NSSI is formed of a set of network functions, which can be either VNFs or PNFs. A data-structure sketch of this composition follows Figure 3.
Figure  3 Network Slice components
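The composition rules above can be captured in a few lines (a data-structure sketch only; the names are invented), including an NSSI shared between two NSIs:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NetworkFunction:
        name: str
        kind: str                # "VNF" or "PNF"

    @dataclass
    class NSSI:                  # Network Slice Subnet Instance
        name: str
        functions: List[NetworkFunction] = field(default_factory=list)

    @dataclass
    class NSI:                   # Network Slice Instance
        name: str
        subnets: List[NSSI] = field(default_factory=list)

    ran = NSSI("ran-subnet", [NetworkFunction("gNB", "PNF")])
    core = NSSI("core-subnet", [NetworkFunction("AMF", "VNF"),
                                NetworkFunction("UPF", "VNF")])
    embb = NSI("embb-slice", [ran, core])
    urllc = NSI("urllc-slice", [ran])    # the RAN NSSI is shared by both NSIs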
3. Network Slice enablement from 5G 
The real enablement of dynamic network slicing will come with 5G, ensuring dedicated services to each customer for their customized use case or segment. The complete advantage of network slicing can only be achieved when the whole Telco network is virtualized, including RAN and possibly the RF modules, because the UE will request a slice based on a specific ID that needs to be identified by the RF and RAN to ensure it can be delivered end to end; otherwise the whole concept of a network slice is just a static overlay based on VPN/VRF as implemented today. This is the main reason all Tier-1 CSPs want to accelerate NFV/SDN use cases before 5G, to assure that the promise of network slicing can be delivered end to end. 3GPP RAN3 is also investigating the implications of network slicing as 5G NR opens access to all access types, including 5G Wi-Fi; it is not well known how the network slice will be delivered in such cases, especially with Wi-Fi convergence, but it was agreed in the plenary meeting that this function will be available for slicing. In 5G networks the UE will keep track of all slices associated with it by connecting to a unique AMF, and one UE can be associated with 8 unique slice offerings, which is more than enough per business requirements.
From a RAN perspective, the idea of network slicing requires that the slice ID can be linked or transferred to the correct NF in the Core network. In the initial phase this slicing can be static, but over the long term this scenario, from the UE to the DN, must become automatic and dynamic using NFVO closed-loop control.
4. End to End Slice Management in 5G 
As we are well aware, network slicing has two key characteristics: selling a common platform to end users and tenants using NaaS (Network as a Service), and monitoring/managing it. From the end-user perspective, the slice ID consists of a Slice Type and a Slice Differentiator to further qualify the use case within a slice type (eMBB, IoT, uRLLC). For some unique enterprise use cases it is also possible to provision unique non-standard slice values. These values will be recognized by the radio network in 5G, and the UE will carry them during service use to assure that dedicated resources are delivered end to end. In order to manage the slice end to end, each Telco VNF starting from the RAN must have a new logical entity, the NSSF (Network Slice Selection Function), to assure that requests are mapped to the correct slice and that consistency is delivered end to end. Below you will find how the network view of the slice appears in NFV; as we can see, Os-Ma SOL5 is the key to integrating slice-management functions in NFV.
Figure 4 E2E Slice management in 5G
From a resource-management viewpoint, an NSI can be mapped to an instance of a simple or composite NS, or to a concatenation of such NS instances. From the same viewpoint, different NSIs can use instances of the same type of NS (i.e. they are instantiated from the same NSD) with the same or different deployment flavors. 3GPP SA5 is also considering exposure of slice management to third parties using REST APIs, which means an operator can become virtual and slices can be provisioned, managed and optimized by a 3rd party. This model is key to enabling industry verticals in this domain.
In addition, the transport network, being a multi-service network, must support slicing between 5G and non-5G network services. Since all the services are placed in VNs, it is mandatory to isolate traffic between different VNs. Further, it is expected that the slice manager will request the configuration of Managed Network Slice Subnet Instances (MNSSIs) to support the different 5G services (e.g. uRLLC, eMBB, etc.). An MNSSI will be supported by a VN. In the fronthaul network only one MNSSI is required, since all services are carried between the RRU and DU in a common eCPRI encapsulation.
Figure 5 Slice Diversity in 5G
5. What is the Optimum way for NW slicing 
Many SDOs have recently been working on network slicing, like NGMN, ETSI NFV, 3GPP, etc., obviously because there is a lot of traction and appeal in creating a new business case to sell new products and solutions to the verticals and industry alike. As the industry witnessed in both the pre- and post-Y2K eras, with the shift from voice/SMS to the MBB/Internet era where CSPs worked as pipes only, the new 5G direction sets a new umbrella definition of broadband to include all verticals. Obviously, to sell such a dream, network slicing needs to cater for a dedicated slice plus its E2E management and fulfillment by the tenant itself. This is the true SaaS model, something Facebook, Dropbox and Google have done so successfully in the past.
Figure 6 Network Slicing Logical view
However, this is just one side of the picture; the business case of 5G, combined with the business-model requirements, makes it worthwhile to consider possible definitions of a new standard in the ETSI NFV architecture itself.
Figure 7 Network Slicing Model in Hybrid Networks
However, the PNF would be managed outside of MANO, as would sub-network parts composed of connections of PNFs. This would require the network slice lifecycle management to also interface with a non-virtualized lifecycle management and operations environment, as shown on Figure 4, with an open Nsl-PN interface, PN standing for "Physical Network."
6. Key Findings from ETSI GR NFV-EVE 012 standard on Network Slicing
Each tenant manages the slices operative in its administrative domain by means of its NFVO, logically placed in the tenant domain. Tenants rely on their NFVOs to perform resource-scheduling functions in the tenant domain. As these resources may be provided by different infrastructure providers, the NFVO needs to orchestrate resources across different administrative domains in the infrastructure. Slicing requires the partitioning and assignment of a set of resources that can be used in an isolated, disjoint or shared manner; a set of such dedicated resources can be called a slice instance. Defining a new network slice is primarily configuring a new set of policies (access control, monitoring/SLA rules, usage/charging consolidation rules and maybe a new management/orchestration entity) when the network is deployed with a given set of resources. In addition, the ability to differentiate network slices by availability and reliability, and the ability for the network operator to define a priority for a network slice in case of scarce resources (e.g. disaster recovery), requires strong coordination between the NFV and SDN domains for slice management.
In point 2 I already explained what it means to deploy a slice in a pure NFV environment, so let us list below what a network slice means as applied to SDN.
A network slice in the NFV context may span multiple sites, a compelling idea especially when VNFs are split across different NFVI PoPs.
Within the NFV/SDN context, security is of prime importance for slice management, the reason being that a slice requires a graph-like view of interconnected VNFs. Ideally this means a separate VNF, and possibly PNF, for each slice, which is not optimal; hence it is assumed that slice management and the NFVO are capable of ensuring separation of data flows for each slice. To meet this requirement, VNF redesign may be necessary to incorporate the HMEE (Hardware-Mediated Execution Enclave) discussed in ETSI NFV SEC 009.
Figure 8 What are the Slice End Points
Apart from the above, NSM (Network Slice Management) is very important, as a slice involves NFV, SDN, Cloud and possibly PNFs as well, and the ability of Os-Ma SOL5 and the NFVO to manage a hybrid environment is key.
In a nutshell, network slicing should enable proper network control and logical separation in terms of dedicated resources, operations isolation, feature isolation, reserved radio resources, separate policy control, SLA control, security and service-reliability control. This is a unique differentiator of NFV/SDN/5G networks and opens new possibilities for operators.
In this paper the author tried to explain new possibilities and an evolution path for network slicing. It is well known that before complete 5G SA standardization, end-to-end slicing, at least dynamic slicing, will not be available, and the NSM framework is not locked down either; however, there are certain slicing features and functions that can be exploited using NFV/SDN in today's networks. It is very clear that operators need to start now, implementing static-slicing use cases with the NFV/SDN and MANO functions available today, and upgrade as 5G becomes mainstream. I hope the audience has liked this content, that it helps them understand the details of this important concept, and that it allows architects to best plan their 2020 networks.
References
  • 3GPP TR 22.891
  • 3GPP TS 23.501
  • 3GPP TR 28.801
  • 5G Americas: Network Slicing (11.21, Final)
  • NGMN: Network Slicing v1.0 (160113)
  • ETSI GR NFV-EVE 012 v3.1.1: Network Slicing report
  • ITU-T GSTR-TN5G: Transport Network support of IMT-2020/5G

NFV/SDN Platform or Application Driven? Set Right Focus to reach the final Goal


As industry enters Industry 4.0, and many world leaders at the WEF talk about their countries' commitments and support for transformation, it is evident we are entering an era of disruption at scale. However, the true benefits of CSP transformation are still not quantified, and it is not certain that what is told in theory has the same significance in practice.

So where is the real problem? We still remember that NFV was initially set up to reduce the CAPEX of CSPs facing declining revenues, but later we found that not every white box can fulfil the requirements, and NFV COTS cost is tenfold the cost of IT servers, so the CAPEX dream was never conceived correctly. Recently ETSI even formed new ISGs using OPEX as the main driver for NFV (http://www.kddi-research.jp/english/newsrelease/2016/022201.html), but is this the real issue? It actually depends on how NFV is conceived.

 

Across the industry there are two approaches to taking this journey:

 

Build the platform first, or build the application first. The proponents of the former are mainly cloud companies who want to see the application as an IT service, which does not fit well for Telco services; after all, in CSPs the revenue comes from applications, not from the platform.

The latter camp claims the application is the key and the platform must be built for it. In the short term this looks more appealing, as every CSP wants to virtualize a PNF, so it seems more logical that a heavyweight vendor leading with the application makes more sense. But what lies beneath the iceberg is the question of whether we are building a platform that can meet future requirements for at least 5-10 years. Unfortunately, many CSPs are not long-sighted on this, primarily for one reason: they have never faced such an issue. The right example to look at is Facebook. If Mark finally decided to take FB into the communication industry (and the signs are quite strong), how would they build? Obviously, make a platform irrespective of the service. I am not saying the service is not important; I am saying the platform must serve every service.

 

An example will make things easier to grasp. Vendor X has its VNF (VDU design); if you ask how to develop the platform for it, it will give requirements on NUMA placement, pass-through, bonds and forwarding-plane design that can limit the future hosting of another vendor's VNF. Similarly, for DC L0 the COTS/SAN dimensioning will be inaccurate: based on m1.tiny and m1.large flavor sizes and VM placement, in a multi-vendor scenario I have observed that you may not be able to use more than 65% of your infrastructure. This is a big compromise; a rough packing illustration follows.
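A rough first-fit packing sketch shows how fixed flavor sizes strand capacity (all vCPU numbers are invented; real placement adds NUMA and affinity constraints that usually make this worse):

    def first_fit(vms, host_capacity):
        """First-fit packing of VM vCPU demands onto identical hosts."""
        hosts = []
        for vm in vms:
            for h in hosts:
                if h["free"] >= vm:
                    h["free"] -= vm
                    break
            else:
                hosts.append({"free": host_capacity - vm})
        used = sum(host_capacity - h["free"] for h in hosts)
        return used / (len(hosts) * host_capacity)

    vms = [20, 20, 20]   # three large multi-vendor VNF VMs on 32-vCPU hosts
    print(f"aggregate utilization: {first_fit(vms, 32):.1%}")   # 62.5%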

 

So it looks promising to build the platform first, irrespective of the service. I do understand some limitations will come based on service type, like media and control plane requiring different HA/AZ designs, but at scale, NEs of the same service class must be supported by a common reference architecture, especially if, along the journey, the same platform will be opened to IT and to programmers; the key is to control the platform, not the application.

 

In a later blog I will explain how the platform can be planned, especially for the ICT-converged case, and how it will help you size the DC and reduce CAPEX; for now let us just look at the key functions of an ICT platform.

 

  • Unified O&M

Rally, Ansible, 3rd-party tools, DriveTrain, vROps, Functest, Dovetail: how to unify them around a simple one-click architecture is key, as is realizing configuration automation. Similarly, the platform must support the same cluster for both Telco and IT DCs as one platform.

 

  • Supports all Performance (not high performance)

Instead of only high performance, the platform must support all applications with differing performance requirements.

 

  • Multi-tenancy (with minimum HA/HGs)

vApps from both IT and Telco can be onboarded using the same process and standardized APIs. This looks difficult until microservices architecture falls into place around 2020.

 

 

  • Auto scaling

The use of AI in the later years of NFV/SDN is only possible if auto-scaling works well in a scaled network, by which I mean the many DCs where resources sit in a pool. I do not see a quick solution soon, because the super-VIM architecture still needs to fit well with the concept of hyper-scaling. Auto-scaling as defined in Open-O standardization looks like an issue, and the auto-scaling parameters in the VNFD of VNF1 can be very different from those of VNF2, at least in our experience. I think the ONAP Beijing release will address this issue somewhere along the road, as confirmed to me by the Confluence team last week.
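As a minimal sketch of the closed-loop check such a scaling policy encodes (the metric, thresholds and cooldown are invented; real VNFDs express these declaratively):

    policy = {"scale_out_above": 0.80, "scale_in_below": 0.30, "cooldown_s": 300}

    def decide(cpu_samples, seconds_since_last_action):
        """Evaluate an auto-scaling policy against recent CPU utilization."""
        if seconds_since_last_action < policy["cooldown_s"]:
            return "hold (cooldown)"        # avoid flapping between actions
        avg = sum(cpu_samples) / len(cpu_samples)
        if avg > policy["scale_out_above"]:
            return "scale out: +1 VDU instance"
        if avg < policy["scale_in_below"]:
            return "scale in: -1 VDU instance"
        return "hold"

    print(decide([0.91, 0.87, 0.84], seconds_since_last_action=600))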

 

  • Distribute DC

The main theme is how NFV will work if, for the same VNF, the VMs are placed in different DCs: how will the service scale, and how will the NFVO optimize NFVI resources in real time? This idea needs more refinement, because one VIM only controls one DC's infrastructure at this time.

 

Sincerely, the list is quite long, but the above five are the key points for a platform if it is to meet at least the next five years' requirements. My next blog will try to work out how to actually plan the NFVI, and whether we can say that the VNF requirements for server/storage are enough.

 

Sheikh is the Chief Architect Consultant for NFV, SDN and Telco Cloud at Saudi Telecom Company, the biggest ICT operator in the Middle East. Always interested in the disruptive technologies driving industry transformation, the author hails from a Telco CSP background and has worked in the Telco Cloud domain since 2013, including with Amazon, Huawei, Mirantis, VMware, RedHat etc. The comments in my writings are my own and shall not be considered as any relation/binding with my employer.

Understanding Asymmetrical NFVI Port design requirements in OPENSTACK OPNFV


In all open NFV solution deployments we come to a situation of asymmetrical port/bandwidth planning in the NFVI. As an example, in Huawei solutions based on the CX310, CX910 or CX912 switch modules, or in HP FlexNetwork designs, the downlink ports are normally twice (2x) the uplink (1x) ports. But since traffic moves from server to vNIC to switch module and outward in both the UL and DL cases, why is this so?

The answer lies in NFV's horizontal and vertical traffic scenarios: south-north traffic takes the uplink, and east-west traffic takes the downlink. The uplink and downlink are asymmetric for the following reasons (a back-of-envelope sketch follows the list).

1)   East-west traffic is usually greater than north-south because transactions are multiplied during the computation process. Once an NFV-based system receives a request from the north-south direction, it needs to communicate with the computing modules to compute the final result. During this process many internal transfers take place, and the execution thread spans multiple computing modules before the final response is delivered in the uplink direction.

2)   The downlink handles not only the computational tasks assigned to the server but also the auxiliary processes relevant to chassis and computing-module health. These include heartbeat and management transactions which run in the background but consume switch bandwidth in parallel with computing transactions, so additional throughput needs to be factored in.

3)   Computing-node redundancy requires that two ports are used per node; as a result there must be twice as many ports on the downlink.

4)   If we enable micro-segmentation, e.g. in VMware, then ACL and security analysis need to be performed on each compute host before traffic leaves the server. All of this overhead is carried east-west and hence on the downlink.

5)   Finally, in OpenStack HCI or high-performance computing, many functions are split from the controller to the host, like DVR and DLR (VMware), or Neutron agents on hosts versus control nodes. This architecture delegates more processing-related tasks to the compute nodes; more processing also requires a lot of message exchange across the direct pipe between compute nodes and hence leads to asymmetric traffic between uplink and downlink.
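A back-of-envelope sketch of the resulting port ratio (the fan-out and overhead factors below are invented for illustration):

    import math

    ns_gbps = 10          # external north-south demand on the uplink
    ew_fanout = 1.6       # internal east-west bytes per external byte
    mgmt_overhead = 0.15  # heartbeat/management share riding the downlink

    ew_gbps = ns_gbps * ew_fanout * (1 + mgmt_overhead)      # ~18.4 Gbps
    uplink_ports = math.ceil(ns_gbps / 10)                   # 10G ports
    downlink_ports = 2 * math.ceil(ew_gbps / 10)             # x2 for NIC redundancy
    print(uplink_ports, downlink_ports)                      # 1 uplink vs 4 downlink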

These are some of the key reasons why the two directions are asymmetric, and why both the traffic and the ports need to be planned this way to satisfy the Fast Data Stack requirements stimulated by NFV use cases, especially for VNFs involving the data plane.

About the author: Sheikh is a Huawei Middle East Senior Architect for NFV, Telco Cloud and SDN, with a focus on ICT service delivery through Telco DevOps and on defining the roads to the future 5G Core network. Always interested in the disruptive technologies driving industry transformation.