Step 6 - Application Configuration
In this step we look at deploying different application images:
- doritoes/k8s-php-demo-app:blue
- doritoes/k8s-php-demo-app:green
- doritoes/k8s-php-demo-app:orange
Strategies supported by Kubernetes out of the box:
- rolling deployment (default) - replaces pods running the older version with the new version one-by-one
- recreate deployment - terminates all pods and replaces them with the new version
- ramped slow rollout - very safe and slow rollout
- best-effort controlled rollout - fast and lower overhead, but best run during a maintenance window
Strategies that require customization or specialized tools:
- blue/green deployment
- canary deployment
- shadow deployment
- A/B testing
Rolling Deployment
Open a Second Session to Monitor the Process
Open a second terminal or ssh session and run the watch command
watch -d -n 1 'kubectl get pods,deploy'
You will watch the status of your deployment here.
Update the Image in k8s-deployment-web.yml
Update k8s-deployment-web.yml
to use the new image tag
image: doritoes/k8s-php-demo-app:green
Here is the entire modified file:
- k8s-deployment-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker  # restrict scheduling to the nodes with the label my-role: worker
      containers:
        - name: web
          image: doritoes/k8s-php-demo-app:green
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /liveness.php
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:  # Add the readinessProbe section
            httpGet:
              path: /readiness.php
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
Kick Off the Update
This process will perform a one-by-one replacement. Health checks (liveness and readiness) ensure new pods are healthy before taking old ones offline.
Start the update
kubectl apply -f k8s-deployment-web.yml
You can also:
ansible-playbook deploy-web.yml
Observe the rollout
kubectl rollout status deployment/web-deployment
Watch the status in the other session. What can you see, and what useful information is missing?
Examine the deployment and see which image is running:
kubectl describe deployment/web-deployment
Here is a way to get the images running on all pods:
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
Testing and Troubleshooting
Load the web app now (either by nodePort or by updating the HAProxy config).
You may need to press F5 or Ctrl-F5 to refresh the style sheet. The color of the app should now be green.
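You can also check the app from the command line instead of a browser. This is a convenience check not in the original steps; substitute your own node IP and NodePort:

```shell
# Fetch the page and look for the color in the served HTML
# (replace <node-ip> and <node-port> with your values)
curl -s http://<node-ip>:<node-port>/ | grep -i green
```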
If the rollout is stalled, use these commands to investigate potential issues:
kubectl describe deployment web-deployment
kubectl get pods
kubectl logs <pod-name>
Rollback
Roll back the update using
kubectl rollout undo deployment/web-deployment
Observe the rollback
kubectl rollout status deployment/web-deployment
Watch the status in the other session.
Examine the deployment and see which image is running:
kubectl describe deployment/web-deployment
Load the web app again and Control-F5/reload to refresh the CSS formatting.
The color should be back to blue.
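To confirm which image the Deployment now runs without reading through the full describe output, you can query the pod template directly (a convenience one-liner, using the deployment name from this lab):

```shell
# Print only the image of the first container in the pod template
kubectl get deployment web-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'
```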
Improvements
You can customize the rolling deployment behavior within your Deployment spec with the maxSurge
and maxUnavailable
properties.
For example, the following allows adding up to 25% above the desired number of pods while ensuring the pod count never drops below the desired number.
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%      # Can exceed the desired number of pods
      maxUnavailable: 0  # No downtime during the update
  # ... rest of your deployment specification ...
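If you prefer not to edit the manifest, the same strategy settings can be applied in place with a strategic merge patch; this is a sketch using the deployment name from this lab:

```shell
# Patch the rolling-update strategy on the live Deployment
kubectl patch deployment web-deployment \
  -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":"25%","maxUnavailable":0}}}}'
```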
Replace Deployment
This is the ungraceful method: all pods are terminated and re-created, with no attempt at a gradual, zero-downtime update.
Open a Second Session to Monitor the Process
Open a second terminal or ssh session and run the watch command
watch -d -n 1 'kubectl get pods,deploy'
You will watch the status of your deployment here.
Create the Update
We will make a new version of k8s-deployment-web.yml
with a new section
strategy:
  type: Recreate
We will deploy the “orange” image this time.
Here is the complete file:
- k8s-replace-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker  # restrict scheduling to the nodes with the label my-role: worker
      containers:
        - name: web
          image: doritoes/k8s-php-demo-app:orange
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /liveness.php
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:  # Add the readinessProbe section
            httpGet:
              path: /readiness.php
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
Kick Off the Update
Start the update
kubectl apply -f k8s-replace-web.yml
Observe the rollout
kubectl rollout status deployment/web-deployment
Watch the status in the other session.
After it's done, examine the image that is running.
kubectl describe deployment/web-deployment
Roll back this update
kubectl rollout undo deployment/web-deployment
Observe the rollback
kubectl rollout status deployment/web-deployment
What method is used for the rollback?
View the rollout history
kubectl rollout history deployment/web-deployment
Is anything missing? Each time you rolled back, the restored revision was removed from the history and re-added under a new, higher revision number.
Modify the deployment file k8s-deployment-web.yml to update the image, then apply it a couple of times, cycling through tags such as:
ansible-playbook deploy-web.yml
- doritoes/k8s-php-demo-app:blue
- doritoes/k8s-php-demo-app:green
- doritoes/k8s-php-demo-app
- doritoes/k8s-php-demo-app:orange
- doritoes/k8s-php-demo-app:latest
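By default the CHANGE-CAUSE column in the rollout history is empty. One way to populate it is to set the kubernetes.io/change-cause annotation after each apply; for example (the message text here is just illustrative):

```shell
# Record why this revision was created; shows up in `kubectl rollout history`
kubectl annotate deployment/web-deployment \
  kubernetes.io/change-cause="switch to green image" --overwrite
```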
View the rollout history
kubectl rollout history deployment/web-deployment
Pick a revision number and view the details
kubectl rollout history deployment/web-deployment --revision=<revision-number>
NOTE on Revision Numbering
The reason your revision numbers seem out of order could be due to a few factors:
- Rollbacks: If you performed rollbacks using kubectl rollout undo, the rolled-back version gets assigned a new, higher revision number.
- Manual Manifest Change: Manually editing the deployment's pod template using kubectl edit also generates a new revision
- Failed Deployments: Sometimes, if the deployment process fails, a new revision could be created even if no new pods were successfully launched
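Each revision corresponds to a ReplicaSet, which carries the revision number as an annotation. You can map ReplicaSets to revisions like this (the app=web label matches the deployment in this lab):

```shell
# List each ReplicaSet with its Deployment revision number
kubectl get rs -l app=web -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.deployment\.kubernetes\.io/revision}{"\n"}{end}'
```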
Kubernetes Dashboard
The Kubernetes Dashboard is a web-based graphical user interface (GUI) built into Kubernetes. It gives a comprehensive overview of your cluster and lets you perform basic tasks.
We will be running these commands on your host system.
Install
Deploy the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Create Service Account
Create a service account and bind cluster-admin role to it
kubectl create serviceaccount dashboard -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin -n kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard
Create a Token
kubectl -n kubernetes-dashboard create token dashboard
You will use this token in a moment.
Launch the Dashboard
The UI can only be accessed from the machine where the command is executed.
kubectl proxy
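With the proxy running, the v2.x dashboard is reachable at its well-known proxy path:

```shell
# Open this URL in a browser on the same machine as `kubectl proxy`
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```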
Authenticate with “Token” and paste in the token from the step above.
This is not as full-featured as using kubectl, but it simplifies many tasks.
Try deleting pods and watch the Deployment re-create them.
Next Step
Continue to Review
Or, back to Step 5 - Application Pods or Start
Optional
Ramped Slow Rollout
This strategy gradually updates pods, ensuring consistent availability, and offers granular control over rollout speed.
How it works: New replicas are created while old ones are removed. You directly control the number of pods updated simultaneously.
Key difference: Compared to a standard rolling deployment, you precisely manage the update pace, minimizing risks by updating only a few pods at a time (e.g., 1 or 2).
Configuration:
- maxSurge: 1 Allows only one pod to be added beyond the desired count during the update
- maxUnavailable: 0 Ensures zero downtime; no pods are taken offline before new ones are ready
Example: For a 10-pod deployment, this setup guarantees at least 10 pods are always available throughout the update process.
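A ramped slow rollout is just a rolling update with conservative settings; roughly:

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # add at most one extra pod at a time
      maxUnavailable: 0  # never drop below the desired count
```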
Best Effort Controlled Rollout
This strategy prioritizes update speed over the zero-downtime guarantee of a ramped rollout. It introduces some risk by allowing a configurable percentage of pods to be temporarily unavailable.
How it works: Rapidly replaces pods as quickly as possible, while ensuring that the downtime stays within a specified limit.
Tradeoff: Offers faster rollout in exchange for some potential downtime. Choose this if time-to-new-features is paramount and your app can handle the defined downtime tolerance
Configuration:
- maxSurge: 0 Maintains a constant number of pods, optimizing resource usage during the update
- maxUnavailable: 20% A percentage defining the acceptable number of unavailable pods during the update
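In manifest form, a best-effort controlled rollout looks roughly like this:

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0          # no extra pods; constant resource usage
      maxUnavailable: 20%  # up to 2 of 10 pods may be down during the update
```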
Blue-Green Deployments
The purposes of Blue-Green deployments are:
- No Downtime: aims to eliminate downtime for updates
- Testing in Production: the 'green' environment enables real-world testing before exposing it to users
- Beyond Simple Rollouts: introduces the idea of traffic management strategies, contrasting it with basic rolling updates
To do a proper blue-green deployment you need to account for
- separate clusters
- database mirroring
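Within a single cluster, a minimal blue-green mechanism keeps two Deployments side by side (labeled, say, version: blue and version: green) and flips a Service selector between them. This is a sketch with hypothetical label names, not part of the lab's manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
    version: blue   # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

Cutover is then a one-line patch, e.g. `kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'`, with instant rollback by patching the selector back to blue.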