Kubernetes CrashLoopBackOff

A recently deployed microservice is stuck in CrashLoopBackOff on Kubernetes. The release passed the CI/CD pipeline and the container image builds without errors, yet the pods keep restarting. The team is blocked on shipping the new release.

System Overview

A Node.js API service is deployed on Kubernetes via a CI/CD pipeline. The Deployment runs 3 replicas and defines liveness and readiness probes; the service connects to a PostgreSQL database and a Redis cache, and a ConfigMap supplies its environment variables.
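
For concreteness, the sketch below shows one common way a service like this wires its health endpoints, assuming an Express app, the pg and ioredis clients, and ConfigMap-supplied variables named DATABASE_URL, REDIS_URL, and PORT. The endpoint paths and all of these names are illustrative assumptions, not details taken from the actual service.

import express from "express";
import { Pool } from "pg";
import Redis from "ioredis";

const app = express();

// Connection settings come from environment variables, which in this setup
// would be injected from the ConfigMap (the variable names are assumptions).
const db = new Pool({ connectionString: process.env.DATABASE_URL });
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

// Readiness: answer 200 only once both backing stores respond, otherwise 503.
app.get("/readyz", async (_req, res) => {
  try {
    await db.query("SELECT 1");
    if ((await redis.ping()) !== "PONG") throw new Error("redis not ready");
    res.status(200).send("ok");
  } catch {
    res.status(503).send("dependencies unavailable");
  }
});

// Liveness: report on the process itself, independent of external dependencies.
app.get("/healthz", (_req, res) => {
  res.status(200).send("ok");
});

app.listen(Number(process.env.PORT ?? 3000));

In this sketch the liveness endpoint deliberately ignores external dependencies, so a slow or unreachable database affects readiness without, by itself, triggering restarts.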

Events

LAST SEEN   TYPE      REASON              OBJECT                          MESSAGE
2m          Normal    Scheduled           pod/api-server-7f8b9c6d4-x2k9p  Successfully assigned default/api-server-7f8b9c6d4-x2k9p to node-3
2m          Normal    Pulled              pod/api-server-7f8b9c6d4-x2k9p  Container image "gcr.io/myproject/api-server:v2.3.1" already present on machine
2m          Normal    Created             pod/api-server-7f8b9c6d4-x2k9p  Created container api-server
2m          Normal    Started             pod/api-server-7f8b9c6d4-x2k9p  Started container api-server
90s         Warning   Unhealthy           pod/api-server-7f8b9c6d4-x2k9p  Readiness probe failed: HTTP probe failed with statuscode: 503
60s         Warning   Unhealthy           pod/api-server-7f8b9c6d4-x2k9p  Liveness probe failed: HTTP probe failed with statuscode: 503
55s         Normal    Killing             pod/api-server-7f8b9c6d4-x2k9p  Container api-server failed liveness probe, will be restarted
50s         Normal    Pulled              pod/api-server-7f8b9c6d4-x2k9p  Container image "gcr.io/myproject/api-server:v2.3.1" already present on machine
45s         Warning   BackOff             pod/api-server-7f8b9c6d4-x2k9p  Back-off restarting failed container

Diagnosis & Plan

Based on the events above, what do you think the root cause is, and what are your proposed next steps?