Optimizing APAC Cloud Strategy to Accelerate the Region's Growth

[Image: Telsyte Australian Hyperscale Cloud Market Study 2020, prepared for the AFR]

The APAC market appears, hopefully, to have passed its COVID-19-driven shallow economic dip caused by lockdowns, and with markets reopening there is a strong desire at both the Australian government level and among private equity to accelerate growth through increased digitization. As recent media remarks by the government and the country's Chief Scientist, Dr Cathy Foley, make clear, cloud adoption is vital to better-informed public sector decisions that improve quality and experience for Australians.

Although cloud definitely delivers on its promise of quick ROI by virtue of open APIs and developer experience, hidden license and consumption-model fees put pressure on business TCO and KPIs once the platform stabilizes.

It is always the last 10% that is most difficult: as a company reaches scale, it needs to optimize its existing apps and infrastructure to generate value.

Do we really have a cost model that can optimize the cloud and rid us of a consumption model that only performs at scale?

Cloud Optimization

In my last role I was appointed to conduct a huge exercise to lift applications from on-premises to the cloud. Our infrastructure had 300+ applications, all the way from simple admin web apps to complex large-tier Exadata database applications. The exercise led to the following results:

  • Most applications are not cloud ready and cannot be lifted to the cloud without refactoring
  • App owners each own their silo of hardware, and there are no uniform tools for infrastructure monitoring and capacity optimization
  • It helped us reach the conclusion that just lifting everything to the cloud is never going to work, and that cloud optimization is the right approach, including:
    • An architecture for central monitoring
    • Using inventory and real-time data to optimize existing resource usage (see the sketch below)
    • Improved architectures that use hardware acceleration where it outperforms commodity hardware
Consider Dropbox: when the company embarked on its infrastructure optimization initiative in 2016, it saved nearly $75M over two years by shifting the majority of its workloads from public cloud to "lower cost, custom-built infrastructure in co-location facilities" (source: Dropbox public information).
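To make the "use inventory and real-time data" point concrete, here is a minimal sketch, assuming a Prometheus server that scrapes node_exporter (the hostname is illustrative), of pulling average node CPU utilization, the kind of signal that drives right-sizing instead of new purchases:

# Ask Prometheus (hostname illustrative) for the average CPU busy % across all nodes
curl -s 'http://prometheus.example.internal:9090/api/v1/query' \
  --data-urlencode 'query=100 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100'

A sustained reading far below purchased capacity is exactly the evidence needed to consolidate rather than buy more.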

Making the Right Cost Equation from a Hybrid Architecture

It is clear that if we can optimize the cloud by finding a model that rids us of the hidden costs of public clouds, it can mean an equity increase of at least 5% of market cap YoY, which is why every CXO must analyze this.

To understand this equation: for a mid-to-large company with a cloud spend of ~$100M, the cost-saving target from a hybrid or optimized cloud would be 50%, i.e. ~$50M, and a 5% cap means ~$2.5M YoY market-cap improvement on company commitments through increased performance from cloud alone. So it is quite evident that cloud optimization is necessary; there is no reason we must move to the cloud and stay there out of religious dogma.
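As a back-of-envelope check of that arithmetic (the figures are illustrative, taken from the paragraph above, not from any specific company):

# Illustrative numbers only, in dollars
cloud_spend=100000000                  # ~$100M annual cloud spend
savings=$((cloud_spend * 50 / 100))    # 50% optimization target => ~$50M
cap_gain=$((savings * 5 / 100))        # 5% of the savings => ~$2.5M YoY
echo "Target savings: $savings; YoY market-cap effect: $cap_gain"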

Cooperation, Not Competition

Today we live in a world that values innovation more than tradition, and that should continue with cloud. For many initiatives, including edge, I was often asked whether we should rely on on-prem vendors or hyperscalers, and my answer has always been:

The market is too big for one company to fulfil all its needs; there is a due share for all.

It is clear that just in the last year the hyperscalers Amazon, Google, and Microsoft, whose combined market cap exceeds $5 trillion, have invested heavily in telco and cloud offerings. How can they do it? Obviously because they make huge money by running their own infrastructure all the way from hardware to the cloud and tools.

Where Is the Customer Value?

There is hardly any way a customer can read a real public cloud bill, as the GPL (global price list) and pricing are locked. The only way is to find the right model for hybrid clouds, which should evolve as follows:

  • A SaaS model for the whole stack, all the way from hardware to software
  • Inventory and tools that help optimize existing infrastructure rather than buying new capacity every time
  • An architecture that can live across all models: private, public, on-prem, etc.
  • A migration mechanism to make workloads portable as and when required


How to Perform an In-Place Upgrade to Windows 10 20H2

Since I faced issues upgrading my PC to the latest Windows 10 version and found no useful community guide, I thought it would be useful to share my experience and steps, which avoid deleting any of the "VMware" folders on your machine.

Here is how to fix it:

  1. Open this link: https://www.microsoft.com/en-us/software-download/windows10ISO
  2. Under "Create Windows 10 Installation Media", click "Download Tool Now".
  3. Click "Run" or "Open file" in the lower bar of the screen.
  4. If you see the User Account Control window, click Yes.
  5. Click Accept on the terms and conditions.
  6. Select "Update this PC Now".
  7. The download will start. Just click OK, Yes, or I Accept for each subsequent message.
  8. Wait until it is complete; your device will restart automatically.

If you still get the same error, you need to do a fresh installation. Please follow these steps:

1. Access this link: https://www.microsoft.com/en-ca/software-download/windows10

2. Click on "Download Tool Now"

3. Run the tool

4. Accept the license terms page

5. Choose Create installation media for another PC, and then select Next

6. Select the language, edition, and architecture

7. Select USB. The downloading process may take about an hour or two to complete

Eventually your device will force you to update to the latest version: you will see a prompt saying you need to update your device as soon as possible. Also, this is the process for a custom installation: https://answers.microsoft.com/en-us/windows/forum/windows_10-windows_install-winpc/how-to-perform-a-custom-installation-of-windows/38adfa8c-32f8-4354-8c53-13b5f2cf7e44

If you want to proceed with a custom installation, back up the installers of your apps, because the apps will be deleted in this process.

You can contact Microsoft via this link if you need further assistance: https://support.microsoft.com/en-us/contact/chat/4

Delivering Edge Architecture Standardization in the Edge User Group

Edge deployments are gaining momentum in Australia, APAC, and the rest of the world. Due to the sheer size of the edge, there are new challenges and opportunities for all players in the ecosystem, including:

  • Hardware infrastructure suppliers, e.g. Dell, HPE, Lenovo
  • On-prem cloud vendors like Red Hat and VMware
  • Hybrid cloud companies like IBM and Mirantis
  • Public cloud providers and hyperscalers like AWS, Azure, and Google
  • SIs like Southtel and Tech Mahindra

However, one thing the telco community needs to do is define a standard architecture and specifications for the edge, which will not only help build a thriving ecosystem but also achieve the promises of global scale and developer experience. Within the Open Infrastructure community, we have been working in the OpenInfra Edge Computing Group to achieve exactly this.

Focus Areas

The following is the scope and the areas we are enabling today:

  • Defining the edge delivery model in the form of reference architectures, a reference model, and a certification process, working together with #GSMA and #Anuket in the Linux Foundation
  • Defining use cases based on real RFX and telco customer requirements
  • Prioritizing requirements for each half year
  • Enabling the edge ecosystem
  • Publishing white papers, especially on implementation and testing frameworks

Edge Architectures

Alongside the Linux Foundation Akraino blueprints, we are enabling blueprints and best practices in the Edge user group. However, we emphasize that the architecture should remain as vendor agnostic as possible, with different flavors and vendors solving the following challenges (see Edge Computing Group – OpenStack):

  • Life-cycle Management. A virtual-machine/container/bare-metal manager in charge of managing machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown). (Current Projects: TK)
  • Image Management. An image manager in charge of template files (a.k.a. virtual-machine/container images). (Current Projects: TK)
  • Network Management. A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users. (Current Projects: TK)
  • Storage Management. A storage manager, providing storage services to edge applications. (Current Projects: TK)
  • Administrative. Administrative tools, providing user interfaces to operate and use the dispersed infrastructure. (Current Projects: TK)
  • Storage latency. Addressing storage latency over WAN connections.
  • Reinforced security at the edge. Monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
  • Resource utilization monitoring. Monitor resource utilization across all nodes simultaneously.
  • Orchestration tools. Manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or "self-organizing edge."
  • Federation of edge platforms orchestration (or cloud-of-clouds). Must be explored and introduced to the IaaS core services.
  • Automated edge commission/decommission operations. Includes initial software deployment and upgrades of the resource management system’s components.
  • Automated data and workload relocations. Load balancing across geographically distributed hardware.
  • Synchronization of abstract state propagation. Needed at the "core" of the infrastructure to cope with discontinuous network links.
  • Network partitioning with limited connectivity. New ways to deal with network partitioning issues due to limited connectivity, coping with short and long disconnections alike.
  • Manage application latency requirements. The definition of advanced placement constraints in order to cope with latency requirements of application components.
  • Application provisioning and scheduling. In order to satisfy placement requirements (initial placement).
  • Data and workload relocations. According to internal/external events (mobility use-cases, failures, performance considerations, and so forth).
  • Integration location awareness. Not all edge deployments will require the same application at the same moment. Location and demand awareness are a likely need.
  • Dynamic rebalancing of resources from remote sites. Discrete hardware with limited resources and limited ability to expand at the remote site needs to be taken into consideration when designing both the overall architecture at the macro level and the administrative tools. The concept of being able to grab remote resources on demand from other sites, either neighbors over a mesh network or from core elements in a hierarchical network, means that fluctuations in local demand can be met without inefficiency in hardware deployments.

Edge Standards under Review

Owing to carrier-grade telco service requirements at the edge, the preference has always been StarlingX, and this is what we are maturing to GA; however, there are many other standards we are working on at the edge, as follows.

StarlingX

  • Complete cloud infrastructure solution for edge and IoT
  • Fusion between Kubernetes and OpenStack
  • Integrated stack
  • Installation package for the whole stack
  • Distributed cloud support

K3s and Minimal Kubernetes

  • Lightweight Kubernetes distribution
  • Single binary
  • Basic features added, like local storage provider, service load balancer, Traefik ingress controller
  • Tunnel Proxy
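As a quick illustration of how lightweight K3s is, a single-node install is a one-liner (assuming a Linux host with curl and internet access; always review a script before piping it into a shell):

# Install a single-node K3s server; the script registers k3s as a systemd service
curl -sfL https://get.k3s.io | sh -
# Verify the node came up
sudo k3s kubectl get nodes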

KubeEdge, Especially for IoT

  • Kubernetes distribution tailored for IoT
  • Has orchestration and device management features
  • Basic features added, like a storage provider, service load balancer, and ingress controller
  • CloudCore and EdgeCore components
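A minimal sketch of standing up KubeEdge with its keadm tool (the address and token are placeholders; keadm init runs on the cloud side and keadm join on the edge node):

# Cloud side: start CloudCore (advertise address is a placeholder)
keadm init --advertise-address=192.0.2.10
# Cloud side: print the join token
keadm gettoken
# Edge node: join it to CloudCore using that token
keadm join --cloudcore-ipport=192.0.2.10:10000 --token=<token-from-gettoken>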

Submariner

  • Cross Kubernetes cluster L3 connectivity over VPN tunnels
  • Service discovery across clusters
  • Connects clusters with overlapping CIDRs
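A minimal sketch of connecting two clusters with Submariner's subctl CLI (the kubeconfig paths and cluster ID are placeholders):

# Deploy the broker into one cluster
subctl deploy-broker --kubeconfig broker-cluster.kubeconfig
# Join each participating cluster to the broker
subctl join broker-info.subm --kubeconfig west-cluster.kubeconfig --clusterid west
# Verify the tunnels
subctl show connections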

Call to Action

  • Weekly meeting on Mondays at 6am PDT / 1300 UTC: https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings
  • Join our mailing list for more edge discussions: http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
  • Join the #edge-computing-group IRC channel on Freenode

Procedure to Join the IRC Channel with mIRC

The following are the steps to join. Since many people reported issues with mIRC versions later than 7.5, I wanted to give a summary here.

Step 1: Registration and Nickname Settings

You may see a notice from NickServ that the nick you use is already taken by someone else. In that case you need to choose another nickname. You can do this easily by typing:

/nick nick_of_your_choice

/nick john_doe

NickServ will keep showing you this notice until you find a nick that is not registered by someone else. If you want to use the same nick every time you connect, you may register it. The service called NickServ handles the nicks of all registered users of the network. Nick registration is free, and you just need an email address to confirm that you are a real person. To register the nick you currently use, type:

/nickserv register password email

/nickserv register supersecret myemail@address.net

Note: your email address will be kept confidential. The network will never send you spam or mails requesting private data (like passwords, bank accounts, etc.). After this you will see a notice from NickServ telling you this:

– NickServ – A passcode has been sent to myemail@address.net, please type /msg NickServ confirm <passcode> to complete registration

Check your email account for new mail. Some email providers, like Hotmail, may drop the mail sent by the network services into your spam folder. Open the mail and you will find text like this:

Hi, You have requested to register the following nickname some_nickname. Please type "/msg NickServ confirm JpayrtZSx" to complete registration. If you don't know why this mail is sent to you, please ignore it silently. PLEASE DON'T ANSWER TO THIS MAIL! irchighway.net administrators.

Just copy and paste the part /msg NickServ confirm JpayrtZSx into the status window of your mIRC, then press the Enter key. Text like the following:

– *NickServ* confirm JpayrtZSx – 
– NickServ – Nickname some_nickname registered under your account: *q@*.1413884c.some.isp.net –
– NickServ – Your password is supersecret – remember this for later use.
– * some_nickname sets mode: +r

should appear after this. This means you have finished your registration and the nick can only be used by you; you can even force someone else who uses your nick to give it back to you. If you disconnect, you then need to tell NickServ that the nick is yours. You can do that with:

/nickserv identify password (e.g. /nickserv identify supersecret)

If the password is correct, it should look like this:

* some_nickname sets mode: +r – 
– NickServ – Password accepted – you are now recognized.

In mIRC you can perform the identification process automatically so you don't have to worry about it anymore. Open the mIRC Options by pressing the key combination Alt + O, then select the category Options and click on Perform; you will see the Perform dialog.

Check "Enable perform on connect" and add if ($network == irchighway) { /nickserv identify password } in the edit box called "Perform commands". Close the options by clicking OK. Now your mIRC will automatically identify you every time you connect to IRCHighway.

Step 2: Setting Up SASL (/CAP) Authentication

mIRC added built-in SASL support in version 7.48, released April 2017. The below instructions were written for version 7.51, released September 2017. Earlier versions of mIRC have unofficial third-party support for SASL, which is not documented here. freenode strongly recommends using the latest available version of your IRC client so that you are up-to-date with security fixes.

  1. In the File menu, click Select Server…
  2. In the Connect -> Servers section of the mIRC Options window, select the correct server inside the Freenode folder, then click Edit
  3. In the Login Method dropdown, select SASL (/CAP)
  4. In the second Password box at the bottom of the window, enter your NickServ username, then a colon, then your NickServ password. For example, dax:hunter2
  5. Click the OK button

Step 3: Joining the Channel

Use the following commands to join the channel. Best of luck!

/connect chat.freenode.net 6667 SID_SAAD:XYZPASSWORD
/join #edge-computing-group

References

  1. https://gist.github.com/xero/2d6e4b061b4ecbeb9f99
  2. https://irchighway.net/14-blog/gaming/14-i-m-new-to-irc
  3. https://freenode.net/kb/answer/mirc
  4. https://www.delltechnologies.com/en-au/solutions/edge-computing/index.htm
  5. https://www.redhat.com/en/topics/edge-computing/approach
  6. https://aws.amazon.com/edge/
  7. KubeCon Europe April 2021 session by Ildikó Váncsa (Open Infrastructure Foundation) – ildiko@openinfra.dev and colleague Gergely Csatári (Nokia) – gergely.csatari@nokia.com

Optimizing Cloud Features for Edge Deployments

When it comes to deploying cloud and networking infrastructure at the edge, the requirements are quite different, steered primarily by ruggedness and the many different form factors expected at the edge. For example, claiming a cloud solution is good is easy, but how it fits into a wall mount out in the #Australia Blue Mountains, and in only 4U of space, is the key architecture question. This requires a holistic analysis of cloud features mapped to infrastructure, for example with #Dell and #RedHat, as follows:


1. A 3-node OCP cluster with shared management and workload nodes
2. Remote compute node support based on the RT kernel
3. Support for hybrid workloads with CNV (Container-native Virtualization)
4. DPDK and SR-IOV for data-plane apps at the edge, including 5G NPN (a quick sanity check for these pieces follows the list)
5. Local Storage Operator support
6. Keptn support for DevOps at edge sites, including workload placement and optimization
7. Tekton Pipelines for CI/CD
8. Above all, use of SDN through the compute infrastructure, avoiding standalone switches in edge sites, e.g.:
8a. UPI using OVS
8b. IPI using OVN
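As a quick sanity check on such a cluster, a sketch assuming the operators were installed through OperatorHub into their default Red Hat namespaces:

# Confirm the CNV operator is installed and healthy
oc get csv -n openshift-cnv
# Inspect the SR-IOV capable NICs discovered on each node
oc get sriovnetworknodestates -n openshift-sriov-network-operator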


Solving Infrastructure Delivery Models for the Cloud Era

Delivering #Infrastructure like a #Cloud with a value-driven #SaaS model is something most telcos have wanted to achieve for many years. Based on years of telco cloud deployments, we can summarize the biggest challenges for cloud, 5G, and edge transformations as follows:
1. Traditional IaaS or PaaS models require a long lead time
2. Telcos' existing processes, especially sourcing, RFX, implementation, and operations, are not well aligned with cloud practices
3. Different delivery models must be tested, e.g. one for NFVI readiness and another for cloud and workloads
4. Infrastructure capacity analytics are not readily available, forcing a telco to procure new infrastructure when optimization of existing resources could be sufficient
5. Above all, delivery does not match client business KPIs and targets

#delltechnologies #delltechworld is the right answer to these challenges and to winning the customer battles on #5G, #Cloud, and #Edge; have a try:

https://www.delltechnologies.com/en-us/blog/reimagine-it-service-delivery-with-the-new-apex-console/

Infrastructure and Cloud Standardization: Key Targets for 2021

If you are a #Telco accelerating #5G, #Cloud, and #Edge towards last-mile sites, then as a #CXO, #Architect, or #Engineer, what are the top pressing issues you expect #infrastructure to solve now, and most importantly, #how? Somehow, #Digitaltransformation talk has historically made many of us focus too much on #innovation and #latestTech versus #how to #deliver such #technology and bring real results for customers.

Here is my understanding of the top issues from #RedHatSummit Day 1, and how to move the lever of #infrastructure from the warm-up #phase to the #production phase:

1. An ecosystem, and an integrated ecosystem, is very important; this discussion will only become more pressing as we move to more #challenging and mostly non-standard #Edge sites
2. #Software acceleration takes priority over hardware acceleration; we must accelerate initiatives like the #NVIDIA #DPU, where #DPU = #CPU + #GPU
3. With hundreds of CNFs carried over thousands of services, creating hundreds of thousands of node/pod/route relationships, the #manageability of clusters is critical; the two areas we must focus on in #2021 are #Resourcetopology and #telemetry

#Cloud #Digitaltransformation #hardware #DPU #security #aws #azure #madeinaustralia #redhatopenshift #redhatapac


Building a Realistic Automation Roadmap for Telco Customers

To bring business value from automation, IP network automation should be considered together with cloud automation. The following are realistic targets for such E2E automation, considering both domains with an end-to-end view through service and E2E service orchestration.

Targets for Real Automation

Below are some real targets for achieving the automation aspirations of a carrier:

  1. Use-case automation (especially targeting new revenue streams)
  2. Process automation
  3. CI/CD in a telco environment, all the way from lab, PoC, staging, and pre-prod to production
  4. API exposure to third parties

Based on experience, below are some key steps to achieve this goal.

Steps for Real Automation

To convert targets into reality, the following are some real steps, applied in sequence and iteratively:

  1. Align automation tools and approaches with the ETSI ZSM architecture, and enhance ML/AI use cases through ENI (the Experiential Networked Intelligence framework)
  2. Every real telco is a brownfield, so integration with existing OSS/BSS is key
  3. Business processes must be automated, e.g. with Jira, ServiceNow, #RedHat Fuse, and AMQ
  4. Each client needs to define its standard architecture and tool framework; a level of standardization on open-source tools is necessary

Project Phases to Achieve True Automation

Overall there are three phases to deliver such projects:

Phase 1: Warm-up
Deliver the SBI information models (IMs), including tools for automation like #Ansible and #Python; a minimal sketch follows below.

Phase 2: Build
Orchestration and policy enforcement are necessary to bring a holistic view across both #Cloud and #Networking and build E2E automation.

Phase 3: Extend
Build the control loops.
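As a minimal sketch of the warm-up phase tooling (the inventory and playbook names are illustrative, not from any specific project):

# Dry-run a fact-gathering playbook against the edge-site inventory
ansible-playbook -i edge-sites.ini collect-facts.yml --check
# Ad-hoc check that every inventory host is reachable
ansible -i edge-sites.ini all -m ping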

Understanding OpenShift 4 Installation for Developer and Lab Environments

Just as Linux is the de facto OS for innovation in the datacenter, OpenShift is proving to be a catalyst for both enterprise and telco cloud transformation. In this blog I would like to share my experience with two environments: one is Minishift, a home-brew environment for developers, and the other is based on pre-existing infrastructure.

As you know, OpenShift is a cool platform; besides these two modes, it supports a wide variety of deployment options, including hosted platforms on:

  • AWS
  • Google
  • Azure
  • IBM

However, for hosted platforms we use the full installer without any customization, so this is simply not complex, provided you follow only the Red Hat guide for deployment.

Avoid Common Mistakes

  • As a prerequisite, you must have a bastion host to be used as the bootstrap node
  • Linux manifests, NTP, the registry, and keys should be available, while for a full installation the DNS must be prepared before the cloud installer kicks in
  • Avoid making Ignition files on your own (always generate manifests from the installer)
  • For pre-existing infrastructure the control plane is based on CoreOS while workers can be RHEL or CoreOS; for full-stack installations everything, including workers, must be CoreOS
  • Once installation has started, the whole cluster must be spun up within 24 hours; otherwise you need to generate new keys before proceeding, as the controller will stop responding once the keys pass their 24-hour validity
  • In my experience, most manifests for a full-stack installation are created by the installer, viz. cluster node instances, cluster networks, and bootstrap nodes

Pain Points in OpenShift 3 Installation

Most OpenShift 3 installation work revolved around complex Ansible playbooks, roles, and detailed Linux file configuration, all the way from DNS to CSRs, so there was a dire need to make it simple and easy for customers. That is exactly what Red Hat has done by moving to an opinionated installation, which makes installing simple with only high-level information; later, each enterprise can scale per its environment's Day-2 requirements. Such a mode solves three fundamental issues:

  • Installer customization needs (At least this was my experience in OCP3)
  • Full automation of environment
  • Implement CI/CD

Components of installation

There are two pieces you should know about for an OCP4 installation:

Installer

The installer is a single Linux artifact shipped directly by Red Hat, and it needs very little tuning or customization.

Ignition Files

Ignition files are the first-boot configs needed to configure the bootstrap, control-plane, and compute nodes. If you have managed an OpenStack platform before, you know we needed separate Kickstart and cloud-init files; with the Ignition process, Red Hat simplifies both steps. For details on the Ignition process and cluster installation, refer to the material below.
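For orientation, the typical flow for generating these assets uses standard openshift-install subcommands (the asset directory name is just an example):

# Generate Kubernetes manifests from install-config.yaml, then the Ignition configs
openshift-install create manifests --dir=./ocp4upi
openshift-install create ignition-configs --dir=./ocp4upi
# bootstrap.ign, master.ign and worker.ign are what the nodes boot with
ls ./ocp4upi/*.ign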

Minishift Installation

Prerequisites

Download the CDK (Red Hat Container Development Kit) from:
https://developers.redhat.com/products/cdk/hello-world/#fndtn-windows

  1. Copy the CDK into the directory C:/users/Saad.Sheikh/minishift and, in CMD, go into that directory
  2. minishift setup-cdk
  3. It will create .minishift in your path C:/users/Saad.Sheikh
  4. set MINISHIFT_USERNAME=snasrullah.c
  5. minishift start --vm-driver virtualbox
  6. Add the directory containing oc.exe to your PATH:
    1. FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
  7. minishift stop
  8. minishift start
  9. The message below may appear; just ignore it and carry on
    error: dial tcp 192.168.99.100:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. – verify you have provided the correct host and port and that the server is currently running.
    Could not set oc CLI context for 'minishift' profile: Error during setting 'minishift' as active profile: Unable to login to cluster
  10. oc login -u system:admin

The server is accessible via web console at:
https://192.168.99.100:8443/console

You are logged in as:
User: developer
Password:

To log in as administrator:
oc login -u system:admin
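Once logged in, a quick smoke test could look like this (the project and app names are arbitrary; httpd is one of the sample image streams usually available out of the box):

oc new-project demo
oc new-app httpd --name=hello-web
oc get pods -w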

OpenShift Installation Based on On-Prem Hosting

This mode is also known as UPI (user-provisioned infrastructure), and it involves the following key steps for a full OCP installation:

Step 1: Run the Red Hat installer

Step 2: Based on the manifests, build the Ignition files for the bootstrap nodes

Step 3: The control nodes boot and fetch their configuration from the bootstrap server

Step 4: The etcd instances provisioned on the control nodes scale to 3 members, forming a 3-node HA control plane

Finally, the bootstrap node is decommissioned and removed.

The following is the script I used to spin up my OCP cluster:

# 1. Reboot the bootstrap machine; during reboot it PXE-boots and installs CoreOS

# 2. Wait for the bootstrap phase to finish (standard openshift-install subcommand)
openshift-install wait-for bootstrap-complete --dir=./ocp4upi

# 3. Remove the bootstrap IP entries from /etc/haproxy/haproxy.cfg, then reload haproxy
systemctl reload haproxy

# 4. Set the KUBECONFIG environment variable
export KUBECONFIG=~/ocp4upi/auth/kubeconfig

# 5. Verify the installation
oc get pv
oc get nodes
oc get clusteroperators

# 6. Approve any pending CSRs and certificates
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

# 7. Log in to the OCP cluster GUI at
# https://localhost:8080

Do try it out and share your experience and what you think about the OCP 4.6 installation.

Disclaimer: I validated all commands and processes in my home lab environment; you will need to tune and check your own environment before applying them, as some tuning may be needed.

5G Network on AWS: What's in It for Me?

The latest deal by Dish in the US to build 5G Open RAN and a 5G core on the edge and leverage OSS/BSS on AWS gives certain advantages to any telco business, but what in AWS and the hyperscalers is really attracting telcos? Here is what I understand:

  1. AWS's new EC2 offering based on Graviton2 instances (64-bit Arm Neoverse cores) provides up to 40% better price-performance than comparable current-generation x86-based instances, plus Amazon EKS to run CNFs during periods of peak network use; a quick illustration follows below the list. Similarly, AWS ML capabilities at the network edge help improve service by predicting network congestion at specific locations and then automatically taking corrective action to optimize performance.
  2. AWS Local Zones and AWS Outposts. AWS Local Zones are infrastructure deployments that place AWS compute, storage, and database services close to applications requiring single-digit-millisecond latency, while AWS Outposts extends AWS infrastructure, services, APIs, and tools to virtually any on-premises facility.
  3. Inherent ML and automation. Leveraging existing AWS APIs and frameworks like SageMaker will give great advantages for both automation and scale.
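As a small illustration of point 1 (assuming a configured AWS CLI; the filter and query below are standard AWS CLI syntax), you can list the arm64 (Graviton) instance types available in your region:

aws ec2 describe-instance-types \
  --filters "Name=processor-info.supported-architecture,Values=arm64" \
  --query "InstanceTypes[].InstanceType" --output table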

To know more:

https://www.fiercewireless.com/operators/dish-makes-a-splash-picks-aws-to-host-5g-ran-and-core-industry-voices-chua