Step 6 - Application Configuration
In this step we look at deploying different application images:
- doritoes/k8s-php-demo-app:blue
- doritoes/k8s-php-demo-app:green
- doritoes/k8s-php-demo-app:orange
Strategies supported by Kubernetes out of the box:
- rolling deployment (default) - replaces pods running the older version with the new version one-by-one
- recreate deployment - terminates all pods and replaces them with the new version
Strategies that require customization or specialized tools:
- ramped slow rollout
- best-effort controlled rollout
- blue/green deployment (see the sketch after this list)
- canary deployment
- shadow deployment
- A/B testing
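Blue/green, for example, is typically implemented by running two Deployments side by side and switching a Service's selector between them. Below is a minimal sketch of the Service half of that pattern; the Service name (web-service) and the version label are illustrative assumptions, not part of this lab's manifests:
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical Service name, for illustration only
spec:
  selector:
    app: web
    version: blue          # flip to "green" to switch all traffic to the green Deployment
  ports:
    - port: 80
      targetPort: 8080     # the demo app listens on 8080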
Rolling Deployment
Open a Second Session to Monitor the Process
Open a second terminal or ssh session and run the watch command
watch -d -n 1 'kubectl get pods,deploy'
You will watch the status of your deployment here.
Update the Image in k8s-deployment-web.yml
Update k8s-deployment-web.yml to use the new image tag:
image: doritoes/k8s-php-demo-app:green
Here is the entire modified file:
- k8s-deployment-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker # restrict scheduling to the nodes with the label my-role: worker
      containers:
        - name: web
          image: doritoes/k8s-php-demo-app:green
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /liveness.php
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe: # Add the readinessProbe section
            httpGet:
              path: /readiness.php
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
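Before kicking off the update, you can optionally preview how the live Deployment will change (not part of the original lab steps):
kubectl diff -f k8s-deployment-web.yml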
Kick Off the Update
This process will perform a one-by-one replacement. Health checks (liveness and readiness) ensure new pods are healthy before taking old ones offline.
Start the update
kubectl apply -f k8s-deployment-web.yml
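As an alternative to editing the manifest, the same rolling update can be triggered by setting the image directly; the container name web comes from the Deployment above:
kubectl set image deployment/web-deployment web=doritoes/k8s-php-demo-app:green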
Observe the rollout
kubectl rollout status deployment/web-deployment
Watch the status in the other session. What can you see, and what useful information is missing?
Examine the deployment and see which image is running:
kubectl describe deployment/web-deployment
Here is a way to get the images running on all pods:
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' |\
sort
Testing and Troubleshooting
Load the web app now (either by nodePort or by updating the HAProxy config).
You may need to press F5 or Ctrl+F5 to refresh the style sheet. The color of the app should now be green.
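If you do not recall the NodePort, you can query it from the Service. The Service name web-service below is an assumption based on the earlier steps; substitute your own if it differs:
kubectl get service web-service -o jsonpath='{.spec.ports[0].nodePort}'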
If the rollout is stalled, use these commands to investigate potential issues:
kubectl describe deployment web-deployment
kubectl get pods
kubectl logs <pod-name>
Rollback
Roll back the update using
kubectl rollout undo deployment/web-deployment
Observe the rollback
kubectl rollout status deployment/web-deployment
Watch the status in the other session.
Examine the deployment and see which image is running:
kubectl describe deployment/web-deployment
Load the web app again and press Ctrl+F5/reload to refresh the CSS formatting.
The color should be back to blue.
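Note that kubectl rollout undo with no arguments returns to the previous revision. To go back to a specific point in the history, you can target a revision number (revision numbers are covered under the rollout history commands later in this page):
kubectl rollout undo deployment/web-deployment --to-revision=<revision-number>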
Improvements
You can customize the rolling deployment behavior within your Deployment spec with the maxSurge and maxUnavailable properties.
For example, the following allows the rollout to add up to 25% more than the desired number of pods while ensuring the pod count never drops below the desired number.
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%       # Can exceed the desired number of pods
      maxUnavailable: 0   # No downtime during the update
  # ... rest of your deployment specification ...
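Another option, loosely approximating the ramped/controlled strategies listed at the top of this page, is to pause a rollout partway through, verify the new pods, and then resume it (optional, not part of the lab steps):
kubectl rollout pause deployment/web-deployment
# inspect the new pods, test the app, check logs, etc.
kubectl rollout resume deployment/web-deployment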
Replace Deployment
This is the ungraceful method: all existing pods are terminated and then re-created with the new version, with no concern for keeping the app available during the update.
Open a Second Session to Monitor the Process
Open a second terminal or ssh session and run the watch command
watch -d -n 1 'kubectl get pods,deploy'
You will watch the status of your deployment here.
Create the Update
We will make a new version of k8s-deployment-web.yml with a new section:
strategy:
  type: Recreate
We will deploy the “orange” image this time.
Here is the complete file:
- k8s-replace-web-web.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        my-role: worker # restrict scheduling to the nodes with the label my-role: worker
      containers:
        - name: web
          image: doritoes/k8s-php-demo-app:orange
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /liveness.php
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe: # Add the readinessProbe section
            httpGet:
              path: /readiness.php
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
Kick Off the Update
Start the update
kubectl apply -f k8s-replace-web-web.yml
Observe the rollout
kubectl rollout status deployment/web-deployment
Watch the process in the other session. Notice that all of the old pods are terminated before any new ones are created.
After it's done, examine the image that is running.
kubectl describe deployment/web-deployment
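For a quicker check than reading the full describe output, a jsonpath query can pull just the image from the Deployment spec (a convenience, not part of the original steps):
kubectl get deployment web-deployment -o jsonpath='{.spec.template.spec.containers[0].image}'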
Roll back this update
kubectl rollout undo deployment/web-deployment
Observe the rollback
kubectl rollout status deployment/web-deployment
What method is used for the rollback?
View the rollout history
kubectl rollout history deployment/web-deployment
Pick a revision number and view the details
kubectl rollout history deployment/web-deployment --revision=<revision-number>
NOTE on Revision Numbering
If your revision numbers seem out of order, it could be due to a few factors:
- Rollbacks: If you performed rollbacks using kubectl rollout undo, the rolled-back version gets assigned a new, higher revision number.
- Manual Manifest Change: Manually editing the deployment's pod template using kubectl edit also generates a new revision.
- Failed Deployments: Sometimes, if the deployment process fails, a new revision could be created even if no new pods were successfully launched.
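To make the history easier to interpret, you can record why each revision was created by setting the kubernetes.io/change-cause annotation after applying a change; it appears in the CHANGE-CAUSE column of kubectl rollout history (optional; add --overwrite if the annotation already exists):
kubectl annotate deployment/web-deployment kubernetes.io/change-cause="switch web image to green"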
Next Step
Continue to Step 7 - Load Balancing
Or, back to Step 5 - Application Pods or Start