Azure Quick Start

Installation & Setup

When working with a platform like Azure it is very important to get familiar with its official documentation.
For the local Kubernetes setup used later in this guide we will need two main components (plus one optional):

  • kubectl
  • minikube
  • VirtualBox (optional)

If you hit any problem or error during the setup, a quick web search will usually point you to a fix.

Setup guide for Azure

Installation

Step One

First we need to install kubectl, the command-line tool we will use to manage our Kubernetes cluster.
If you are using Linux, just copy and paste these commands one by one.

Commands:

sudo apt-get install curl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

To verify the installation, run the command below:

kubectl version --client

Step Two

Now we will install minikube; at least 8 GB of RAM is recommended when using minikube.
If you are using Linux, just copy and paste these commands one by one.

Commands:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

chmod +x minikube

sudo mkdir -p /usr/local/bin/

sudo install minikube /usr/local/bin/

Step Three

In this step we will install VirtualBox, which is optional. If you want to use VirtualBox, make sure your machine supports virtualization.
To check whether your system supports virtualization, run the command below:

grep -E --color 'vmx|svm' /proc/cpuinfo

If this command returns some output you are good to go; otherwise continue without installing VirtualBox.

Commands:

sudo apt-get update

sudo apt-get upgrade

wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -

wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -

sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib"

sudo apt-get update

sudo apt-get install virtualbox

Step Four

Let's check whether we installed the two (or three) main tools correctly.
Run the command that matches your setup.
If one fails with an error, run minikube delete to clean up, then try the next command.

Commands:

minikube start

The above command will start minikube using the VirtualBox driver.

minikube start --driver=none

If you have not installed VirtualBox, use the above command; if that does not work either, try the command below.

minikube start --driver=docker

The above command will start minikube using the Docker driver.

If none of the above commands worked for you, try adding the --force flag, for example:

minikube start --driver=docker --force

If minikube Starts

If minikube starts, you can run the command below to check its status:

minikube status

This command should return output similar to:

host: Running
kubelet: Running
apiserver: Running
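
As an optional extra sanity check, you can also ask kubectl for the cluster's nodes; a single minikube node in the Ready state means the client and the cluster can talk to each other.

kubectl get nodes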

Now everything is set up and we are ready to go :-)

Resource Group (RG)

Link to official docs
A resource group is like a container that holds related Azure resources; we use an RG to group and deploy resources according to some logical context.

Important notes about RGs
  • If you delete a resource group all the resources in that RG will be deleted
RG using Azure CLI
az group create \
    --name "<name of rg>" \
    --location "<location or region>"

Virtual Machine (VM)

Link to official docs
A virtual machine is an Azure resource similar to a Windows or Linux computer. It is mostly used when an application requires a custom configuration or you want full control over the environment where your application is going to be deployed.
Below is the minimum set of configuration needed to deploy a VM.

Important notes about VMs
  • A VM is a resource
  • Deployed in a resource group
  • VM size can be changed after deployment
  • Steps: RG -> VM -> open VM ports -> get public IP
VM using Azure cli

You can mainly create two types of VMs:

  • Windows VM
  • Linux VM
For a Windows VM
az vm create \
    --resource-group "<name of rg>" \
    --name "<name of vm>" \
    --image "<Windows image, e.g. Win2019Datacenter>" \
    --admin-username "<windows admin user-name>" \
    --admin-password "<windows admin password>"
For a Linux VM
az vm create \
    --resource-group "<name of rg>" \
    --name "<name of vm>" \
    --image "<Linux image, e.g. UbuntuLTS>" \
    --admin-username "<linux admin user-name>" \
    --authentication-type "ssh" \
    --ssh-key-value "<path to .pub file>"

After deploying a VM you may want to access it using RDP or SSH; for that you have to open the corresponding port on the newly created VM.

Opening a port
az vm open-port \
    --resource-group "<name of rg>" \
    --name "<name of vm>" \
    --port "<3389 for RDP or 22 for SSH>"

To log in to the VM you will also need its public IP; you can get it with the command below.

Accessing the public IP
az vm list-ip-addresses \
    --resource-group "<name of rg>" \
    --name "<name of vm>" \
    --output table
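
Once you have the public IP you can connect to the VM. A minimal sketch for a Linux VM over SSH (the username and IP are placeholders); for a Windows VM, point an RDP client at the public IP on port 3389 instead:

ssh <linux admin user-name>@<public ip of vm>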

Azure Container Registry (ACR)

Link to official docs
It is an Azure resource used as a private container image registry.

Important notes about ACR
  • ACR is a resource
  • Deployed in a resource group
  • Build, store & manage images
  • Steps: RG -> ACR -> ACR login -> push image, or push code to build the image
  • Use headless authentication when integrating with other applications for automation

ACR using Azure cli

az acr create \
      --resource-group "<name of rg>" \
      --name "<name of ACR globally unique>" \
      --sku "Basic, Standard or Premium"
Login to ACR
az acr login --name "<name of ACR>"

After logging in you may want to push an image, or push your code and have ACR Tasks build the image for you.

Push image to ACR
Get login server (URL or domain)
az acr show \
      --name "<name of ACR>" \
      --query loginServer \
      --output table
Tag the image

You need to tag the image with the login server so Docker knows where to upload it.

docker tag <image id> <login server>/<image name>:<tag>
Push the image
docker push <login server>/<image name>:<tag>
Push Code to ACR

Push the code and let ACR Tasks build the image in ACR.

az acr build \
      --image "<name of image>:<tag>" \
      --registry "<ACR name>" \
      "<path to the build context (directory containing the Dockerfile)>"
List images in ACR
az acr repository list --name "<name of ACR>" --output table
az acr repository show-tags --name "<name of ACR>" --repository "<name of repository or image>" --output table
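
To pull one of those images back on any machine that has run az acr login (a minimal sketch; the names are the same placeholders used above):

docker pull <login server>/<image name>:<tag>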

Azure Container Instances (ACI)

Link to official docs
It is a resource that lets you run Docker containers serverlessly (without managing the underlying OS). Azure Container Instances is a solution for any scenario that can operate in isolated containers, without orchestration.

Important notes about ACI
  • ACI is a resource
  • Serverless container platform
  • Allows access to applications via the internet or a vNet
  • Supports both Windows and Linux containers
  • Configurable resource size
  • Can persist data if needed using Azure Files
  • Can be deployed in a container group
  • Has a restart policy
  • Better suited for event-driven architectures
  • Steps: create container (see the example below)
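
A minimal sketch of that step using the Azure CLI; the image, DNS label and port here are assumptions you would adapt to your own container:

az container create \
      --resource-group "<name of rg>" \
      --name "<name of container>" \
      --image "<image, e.g. mcr.microsoft.com/azuredocs/aci-helloworld>" \
      --dns-name-label "<globally unique dns label>" \
      --ports 80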

Azure App Service

Link to official docs
It is a resource that enables you to build and host web apps, mobile back ends, and RESTful APIs in the programming language of your choice without managing infrastructure. The code to deploy can come from Git or other source control / CI systems.

Important notes about App Service
  • App Service is a resource
  • Web-based application hosting
  • Supports both Windows and Linux apps
  • Built-in security, load balancing & automation
  • Cost depends on the App Service Plan
  • Steps: App Service plan -> create web app

There are two types of App Service plans:

  • Non-isolated App Service plans
  • Isolated App Service plans
Non-isolated App Service plans
  • Free and Shared (F1, D1)
  • Basic (B1, B2, B3): manual scale
  • Standard (S1, S2, S3): can auto scale; recommended for production
  • Premium v2 (P1v2, P2v2, P3v2)
  • Premium v3 (P1v3, P2v3, P3v3)
Isolated App Service plans (ASE)

Consider these when you need:

  • A fully isolated environment for your web app
  • High scalability; recommended for apps with high memory utilization
  • Isolated & secure network access
  • Fine-grained control over network traffic
  • Connectivity over VPN

plans available :

  • I (I1,I2, I3)
  • Iv2 (I1v2, I2v2, I3v2)
Create App Service Plan
az appservice plan create --name "<plan name>" \
                            --resource-group "<name of rg>" \
                            --sku S1 \
                            --is-linux
Create Web App
az webapp create -g "<name of rg>" \
                            -p "<name of app service plan>" \
                            -n "<name of webapp>" \
                            --runtime "node|10.14"

Lastly, you can deploy from various sources such as local Git, GitHub, Azure DevOps, etc.
Link to deploy from a local git repo
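
A sketch of the local Git route (assuming you have already set deployment credentials with az webapp deployment user set; the remote name "azure" and the branch are assumptions):

az webapp deployment source config-local-git \
    --name "<name of webapp>" \
    --resource-group "<name of rg>"

git remote add azure "<git url returned by the previous command>"
git push azure master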

Azure Functions

Link to official docs
It is a serverless platform for running small pieces of code. It can auto scale, and you pay only for the time your code runs (on the Consumption plan).
One thing to note is that Azure Functions run on top of an App Service plan, and the following hosting plans are available:

  • Consumption plan (default 5-minute execution timeout)
  • App Service plan (traditional pricing)
  • Premium plan (higher performance, security, reserved instances)
Important notes about Azure Functions
  • Azure Functions is a resource
  • Runs functions serverlessly
  • Auto scales
  • Cost depends on the hosting (App Service) plan
  • Steps: RG -> hosting plan -> function app (func init) -> func new (see the CLI sketch below)
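
A minimal sketch of creating the Function App resource in Azure before publishing to it; the storage account, runtime and region are assumptions you would adapt:

az storage account create \
    --name "<storage account name>" \
    --resource-group "<name of rg>" \
    --sku Standard_LRS

az functionapp create \
    --name "<name of function app>" \
    --resource-group "<name of rg>" \
    --storage-account "<storage account name>" \
    --consumption-plan-location "<region, e.g. eastus>" \
    --runtime node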

Durable Functions

Durable Functions are one or more Azure Functions chained together, mostly used for serverless workflows (orchestrations).

Orchestration patterns

  • Function chaining
  • Fan-out / fan-in: executes activity functions in parallel and waits for all of them to finish
  • Async HTTP APIs
  • Monitor
  • Human interaction

Azure Function App

A Function App is a resource used to group functions in a single context.

Development & workflow

You can write all the code manually, but that is not a good workflow, so there are mainly two ways to develop locally and then deploy:

  • Visual Studio
  • Visual Studio Code with Azure Functions Core Tools

Working with Azure Functions Core Tools

Azure Functions Core Tools is a CLI application that helps you develop Azure Functions locally and deploy them to the cloud, providing templates & simple commands for automation.

Create function project locally
  • create functions project
    func init <projectName>
  • create new function
    func new
  • start functions to test locally
    func start
  • deploy or publish to azure
    func azure functionapp publish <FunctionAppName>

Azure Cosmos DB

Link to official docs
It is a fast NoSQL database with open APIs, built for any scale.

Important notes about Azure Cosmos DB
  • Azure Cosmos DB is a resource
  • Provides low-latency (< 10 ms) reads and writes
  • Elastic scalability
  • Pricing is based on provisioned throughput
  • Built-in indexing
  • Steps: RG -> Cosmos DB account -> database -> container (table, graph or collection) -> items (see the CLI sketch below)
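
A minimal sketch of those steps with the Azure CLI, assuming the default SQL (Core) API; the names and partition key path are placeholders:

az cosmosdb create \
    --name "<globally unique account name>" \
    --resource-group "<name of rg>"

az cosmosdb sql database create \
    --account-name "<account name>" \
    --resource-group "<name of rg>" \
    --name "<database name>"

az cosmosdb sql container create \
    --account-name "<account name>" \
    --resource-group "<name of rg>" \
    --database-name "<database name>" \
    --name "<container name>" \
    --partition-key-path "/<partition key>"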

APIs that Cosmos DB provides

Cosmos DB provides a wide range of APIs to interact with the database. You can choose an API based on what you are already familiar with, or on other trade-offs.
Here are the APIs:

  • SQL (Core, the default)
  • Cassandra
  • MongoDB
  • Gremlin
  • Azure Table

Let's have a look at each and see why you might prefer one API over the others.

SQL
  • if you already know SQL
  • you want to store data as JSON documents
  • if no other case fits, choose SQL
Cassandra
  • if you already know CQL
  • easy to migrate existing Cassandra-based databases to the cloud
  • you want to store data in a wide-column format
MongoDB
  • if you already know MongoDB
  • easy to migrate existing MongoDB databases to the cloud
  • you want a JSON document store
Gremlin
  • you need to store graph relations between data
Azure Table storage
  • if you already know Azure Table storage
  • part of Azure Storage
  • easy to migrate Azure Table storage data to Cosmos DB

Selecting an SDK
  • If using the SQL API, use the latest Cosmos DB SDK for your platform
  • If using the Cassandra, MongoDB or Gremlin API, keep using that database's current SDK/driver
  • If using the Azure Table API, use the current Azure Table SDKs

Consistency Levels
  • Strong: guarantees that reads get the most recent version of an item
  • Bounded staleness: guarantees that reads lag behind writes by at most a configured bound (either a number of versions or a time window)
  • Session: guarantees that a client's session will read its own writes
  • Consistent prefix: guarantees that updates are returned in order
  • Eventual: provides no guarantee for ordering (see below for how to set an account's default level)
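
The default consistency level is chosen per account; a sketch of setting it at creation time (Session is just an example value, any of the five levels above can be used):

az cosmosdb create \
    --name "<globally unique account name>" \
    --resource-group "<name of rg>" \
    --default-consistency-level "Session"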
Kubernetes Service

A Service exposes a set of Pods behind a single, stable endpoint.

Service basic template

apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    <label-key>: <label-value>
  ports:
    - port: <Service-port>
      targetPort: <container-exposed-port>
  type: LoadBalancer
Service using command line

For this you should have a ReplicaSet managing one or more pods.

kubectl expose rs <ReplicaSet-name> --name=<Service-name> --selector=<label-key>=<label-value> --port=<Service-port> --target-port=<container-exposed-port> --type=LoadBalancer

Azure Blob Storage

Link to official docs
Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data.

Important notes about Azure Blob Storage
  • It is object storage for the cloud
  • You can store unstructured data such as text files (HTML, JSON, logs) and binary files (images, videos, PDFs, virtual disks of VMs)
  • Accessible via a REST API over HTTP/HTTPS
  • Steps: RG -> create storage account (globally unique name) -> create blob containers -> upload blobs (files); see the CLI sketch below
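
A minimal sketch of those steps with the Azure CLI; the --auth-mode login flag assumes your signed-in account has data-plane access to the storage account, otherwise use an account key or SAS token:

az storage account create \
    --name "<globally unique storage account name>" \
    --resource-group "<name of rg>" \
    --sku Standard_LRS

az storage container create \
    --account-name "<storage account name>" \
    --name "<container name>" \
    --auth-mode login

az storage blob upload \
    --account-name "<storage account name>" \
    --container-name "<container name>" \
    --name "<blob name>" \
    --file "<path to local file>" \
    --auth-mode login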
Authorize access to blob storage

Azure Blob Storage provides several ways to authorize access to objects, along with different public access levels.

Access levels
  • Private (no read access without authorization)
  • Blob (anonymous read access to blobs, but the blobs in a container can't be listed)
  • Container (anonymous read access, including listing the blobs in the container)
Authorization Types
  • Shared Key (storage account key)
  • Shared access signature (SAS token)
  • Azure Active Directory
  • Anonymous (public read access)
Blob Types
  • Block blobs (images, videos)
  • Append blobs (log files)
  • Page blobs (virtual disks)
Replication and Redundancy
  • LRS: creates 3 copies in the same zone (data center).
  • ZRS (not supported in all regions): creates 3 copies across 3 different zones (data centers) in the same region.
  • GRS: creates 3 copies in the same zone (data center) & 3 copies in a zone (data center) of a secondary region.
  • GZRS: creates 3 copies across 3 different zones (data centers) in the primary region & 3 copies in a zone (data center) of a secondary region.
  • RA-GRS (default for general-purpose v2 storage): same as GRS but you can also read from the secondary region.
  • RA-GZRS: same as GZRS but you can also read from the secondary region.
  1. Liveness probe

    A liveness probe checks our application repeatedly, at a specified interval, to verify it is still working correctly.
    If the probe fails, the container is restarted.
    There are 3 types of liveness probe:

    1. httpGet => sends an HTTP request and checks the response
    2. tcpSocket => tries to make a connection on the container port
    3. exec => executes a command inside the application container to check whether it has hung
    HTTP GET basic template
    
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        <key>: <value>
      name: <Pod-name>
    spec:
      containers:
      - name: <container-name>
        image: <container-image>
        livenessProbe:
          httpGet:
            path: /   #root
            port: <Pod-port or container-port>
          initialDelaySeconds: <delay before first check>
          periodSeconds: <time to wait after each check>
                                                    
    TCP socket basic template
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: <pod-name>
    spec:
      containers:
      - name: <container-name>
        image: <container-image>
        ports:
        - containerPort: <container-port>
        livenessProbe:
          tcpSocket:
            port: <container-port>
          initialDelaySeconds: <delay before first check>
          periodSeconds: <time to wait after each check>
                                                    
    EXEC basic template
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: <pod-name>
    spec:
      containers:
      - name: <container-name>
        image: <container-image>
        livenessProbe:
          exec:
            command:
            - <command eg: ls>
          initialDelaySeconds: <delay before first check>
          periodSeconds: <time to wait after each check>
                                                    
  2. Readiness probe

    A readiness probe checks our application repeatedly, at a specified interval, to verify it is ready to serve traffic.
    If the probe fails, traffic to the container is blocked (the Pod is removed from Service endpoints).
    There are 3 types of readiness probe:

    1. httpGet => sends an HTTP request and checks the response
    2. tcpSocket => tries to make a connection on the container port
    3. exec => executes a command inside the application container to check whether it has hung

    Note:

    All 3 templates are the same as for the liveness probe, except:
    use readinessProbe: instead of livenessProbe:

Volumes

A volume is not a standalone Kubernetes (or Azure) resource; it is defined in the spec: of a Pod.
It helps us share data between containers (or across the whole cluster), and also lets us retain data if the Pod or a container is deleted. You can say that it stores & shares data at the Pod level.
There are many types of volumes:

  • emptyDir
  • configMap, secret, downward API
  • persistent volumes
  • gitRepo
  • gcePersistentDisk
  • awsElasticBlockStore
  • azureDisk
emptyDir volume basic template

It is usually used to share data between two or more containers in a single Pod, as it creates a directory that is shared between the containers.


apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  volumes:
  - name: <volume-name>
    emptyDir: {}
  containers:
  - image: <container-image>
    name: <container-name>
    volumeMounts:
    - mountPath: <path>   #path of dir to share
      name: <volume-name>    #that is mentioned above
  - image: <container-image>
    name: <container-name>
    volumeMounts:
    - mountPath: <path>   #path of dir to share
      name: <volume-name>    #that is mentioned above

Persistent Volumes

Persistent volumes are a little different from other volumes, as they store & share data at the cluster level.
They also work a little differently. Below are the steps to use a persistent volume (PV):

  1. Make a persistent volume (pv)
  2. Make a persistent volume claim (pvc)
  3. Mount it in a pod as volume using (pvc)
Persistent Volume basic template

You can access the host (node) file system with the minikube ssh command.


apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv-name>
spec:
  capacity:
    storage: <storage size eg: 5Gi>
  hostPath:
    path: <path> # path on the node where the data is stored
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: <Recycle or Retain or Delete>
Persistent Volume Claim basic template

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <storage size eg: 100M>
  storageClassName: ""
Pod mounting a Persistent Volume basic template

apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  volumes:
    - name: <volume-name>
      persistentVolumeClaim:
        claimName: <pvc-name>    #use pvc name that you have created
  containers:
    - name: <container-name>
      image: <container-image>
      volumeMounts:
      - mountPath: <path>    #path of dir to share
        name: <volume-name>    #that is mentioned above
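
To actually create these objects, save each template to a file and apply them in order; a sketch of the typical flow (the file names are placeholders):

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml

kubectl get pv,pvc

The last command should show the claim as Bound to the volume before the Pod can use it.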

Configuration

In this section we will look at the configuration and deployment tools that Kubernetes provides.

ConfigMap

It is a resource that can hold configuration for your application, so that you don't have to touch the application code to make configuration changes.
There are two common ways to use a ConfigMap:

  1. Create it from a text file and mount it on a Pod as a volume
  2. Create it from an env file and attach it to a Pod as environment variables
Config map using cmd
kubectl create configmap <resource-name> --from-literal=<key1>=<val1> --from-literal=<key2>=<val2>
Config map using txt file
  1. create a text file abc.txt
  2. write :
    key1=value1
    key2=value2
  3. save the file.
  4. use command
    kubectl create cm <resource-name> --from-file=<file-name.txt>
  5. now mount it on pod as volume
    Template
    
    kind: Pod
    apiVersion: v1
    metadata:
      name: <pod-name>
    spec:
      volumes:
      - name: <volume-name>
        configMap:
          name: <configMap-name>
      containers:
      - name: <container-name>
        image: <image>
        ports:
        - containerPort: 80
        volumeMounts:
        - name: <volume-name>
          mountPath: /path/to/mount
  6. Create this Pod and you are done.
Config map using env file
  1. create a text file abc.env
  2. write :
    key1=value1
    key2=value2
  3. save the file.
  4. use command
    kubectl create cm <resource-name> --from-env-file=<file-name.env>
  5. link a pod with configMap
    Template
    
    kind: Pod
    apiVersion: v1
    metadata:
      name: <pod-name>
    spec:
      containers:
      - name: <container-name>
        image: <image>
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: <configMap-name>
  6. Create this Pod and you are done.

Secret

It is a resource that can hold secret configuration that your app needs to run, such as API keys, passwords, access tokens, etc.
There are two common ways to use a Secret:

  1. Create it from a text file and mount it on a Pod as a volume
  2. Create it from an env file and attach it to a Pod as environment variables
Secret using cmd
kubectl create secret generic <resource-name> --from-literal=<key1>=<val1> --from-literal=<key2>=<val2>

Note:

The method for writing and creating the Secret env or txt file is the same as for a ConfigMap,
except:

when using an env file:
  • use secretRef: instead of configMapRef: when creating the Pod from an env file.
  • use the kubectl create secret generic <resource-name> --from-env-file=<file-name.env> command when creating from an env file.

when using a txt file:
  • use secret: instead of configMap: & secretName: instead of name: when creating the Pod from a txt file.
  • use the kubectl create secret generic <resource-name> --from-file=<file-name.txt> command when creating from a txt file.

Deployment

It is a Kubernetes resource that helps us deploy and update applications very easily by providing deployment strategies.
Basically, it creates a ReplicaSet and manages that; it does not interact with any Pod directly. link to documentation

Deployment basic template

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment-name>
  labels: #optional
    <key>: <value>
spec:
  replicas: <number-of-pods>
  selector: #optional
    matchLabels:
      <key>: <value>
  template:
    metadata:
      labels: #optional
        <key>: <value>
    spec:
      containers:
      - name: <container-name>
        image: <image>
        ports:
        - containerPort: 80
  strategy:
    type: <Recreate or RollingUpdate>
Strategies
  • RollingUpdate

    It is the default strategy of a Kubernetes Deployment: when we update anything, one old Pod is deleted and one updated Pod is created, and this is repeated until all Pods are replaced by the newer version.
    We need to define some properties when using it. Those properties are listed below.

    • maxSurge ==> how many extra Pods can be added at a time
    • maxUnavailable ==> how many Pods can be unavailable during the rolling update

    In the end it should look like:

    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: <any-number>
        maxUnavailable: <any-number>
  • Recreate

    In this strategy all Pods are deleted at once and then the newer Pods are created.
    In short, your application will face some downtime.
    In the end it should look like:

    strategy:
      type: Recreate
Other info about Deployments can be found at the official documentation link.
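
Once a Deployment is running, a few kubectl commands are handy for driving updates and rollbacks (a sketch; the deployment, container and image names are placeholders):

kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag>

kubectl rollout status deployment/<deployment-name>

kubectl rollout history deployment/<deployment-name>

kubectl rollout undo deployment/<deployment-name>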

Others

Properties of Kubernetes resources that can be added to any resource.

Labels

Labels are key:value pairs that help us group different resources; we can perform different operations on those resources using labels (see the selector example below).
Labels are defined under the metadata of a resource. A resource can have one or more labels.
e.g. type: frontend

Add label to running resource
kubectl label <resource-type> <resource-name> <key>=<value>
Remove label from a resource
kubectl label <resource-type> <resource-name> <key>-
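
Once resources carry a label you can select them by it, for example (a sketch; the key and value are placeholders):

kubectl get pods -l <key>=<value>

kubectl delete pods -l <key>=<value>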

Annotations

Annotations are also key:value pairs, but they are used to describe a resource or attach extra information to it.
Annotations are also defined under the metadata of a resource. A resource can have one or more annotations.
e.g. purpose: "this resource can do _____ work"

Add annotation to running resource
kubectl annotate <resource-type> <resource-name> <key>="<value>"
Remove annotation from a resource
kubectl annotate <resource-type> <resource-name> <key>-

Output | Info | Description about a resource

These commands help you get information about any Kubernetes resource in JSON or YAML format.
They are mostly used to debug a resource.

output of a resource
kubectl get <resource-type> <resource-name> -o <format: json or yaml>
short useful description
kubectl describe <resource-type> <resource-name>

Edit a resource during runtime

This command helps to change or edit configuration or properties of a resource while it is running.

Edit a resource
kubectl edit <resource-type> <resource-name>

Delete a resource

This command deletes a resource.

Delete a resource
kubectl delete <resource-type> <resource-name>