If you’ve read my introductory article on Helm, you already understand the fundamental concepts of charts, values, and releases. Now, it’s time to transform that theoretical knowledge into practical skills you can confidently use in any environment.
This guide takes a hands-on approach, using examples primarily focused on PostgreSQL deployment with the popular Bitnami PostgreSQL chart. You’ll learn not just the commands, but more importantly, the ‘why’ behind each decision and the best practices that separate beginners from confident practitioners.
By the end of this guide, you’ll be comfortable installing applications, managing configurations, troubleshooting deployment issues, and handling the complete lifecycle of Helm-managed applications. Each section builds progressively, starting with basic installations and advancing to complex debugging scenarios you’ll encounter in any environment.
Installing Applications
PostgreSQL installation
Let’s deploy a PostgreSQL database using the Bitnami chart. This example demonstrates the core Helm workflow while giving you a real application to work with.
# Add the Bitnami Helm chart repository to your local Helm client.
helm repo add bitnami https://charts.bitnami.com/bitnami
# Update your local chart repositories to ensure you have the latest versions.
# This is crucial before running 'helm install' or 'helm search'.
helm repo update
- bitnami: A name (alias) for the repository, which you’ll use when referencing it later.
- https://charts.bitnami.com/bitnami: The URL of the Bitnami Helm chart repository.
To explore charts under the ‘bitnami’ repository, use the search command below.
helm search repo bitnami
#NAME CHART VERSION APP VERSION DESCRIPTION
#bitnami/airflow 24.2.0 3.0.2 Apache Airflow is a tool to express and execute...
#bitnami/apache 11.3.19 2.4.63 Apache HTTP Server is an open-source HTTP serve...
#bitnami/apisix 5.0.4 3.13.0 Apache APISIX is high-performance, real-time AP...
#bitnami/appsmith 6.0.14 1.79.0 Appsmith is an open source platform for buildin...
#bitnami/argo-cd 9.0.25 3.0.9 Argo CD is a continuous delivery tool for Kuber...
#bitnami/argo-workflows 12.0.6 3.6.10 Argo Workflows is meant to orchestrate Kubernet...
#bitnami/aspnet-core 7.0.10 9.0.6 ASP.NET Core is an open-source framework for we...
#bitnami/cadvisor 0.1.10 0.53.0 cAdvisor (Container Advisor) is an open-source ...
#keeps going...
Let’s explore which PostgreSQL charts and versions are cached by the local Helm client.
helm search repo bitnami/postgresql
#NAME CHART VERSION APP VERSION DESCRIPTION
#bitnami/postgresql 16.7.15 17.5.0 PostgreSQL (Postgres) is an open source object-...
#bitnami/postgresql-ha 16.0.16 17.5.0 This PostgreSQL cluster solution includes the P...
Extra: ~/.cache/helm/repository is Helm’s local chart cache directory.
The Chart Version refers to the version of the Helm chart itself. This includes:
- Kubernetes manifests
- Templates
- Default configuration (values.yaml)
- Chart structure
If the chart maintainers fix a Helm template bug or update documentation, the chart version changes (even if the app version doesn’t). For example, if Bitnami improves the Helm chart’s storage configuration logic, they might bump the chart version from 16.7.14 to 16.7.15.
The App Version is the version of the underlying software, like PostgreSQL.
- That’s what gets deployed inside the container.
- Maintained by the app developers (e.g., the PostgreSQL team).
- It’s defined in the chart’s Chart.yaml under appVersion.
PostgreSQL 17.5.0 means: the actual version of the PostgreSQL software inside the container.
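You can confirm both numbers straight from the chart metadata with helm show chart (output abbreviated here; the versions match the search results above):

```shell
# Print the chart's metadata (its Chart.yaml), which contains both versions
helm show chart bitnami/postgresql

# ...
# version: 16.7.15       <- chart version (templates, defaults, structure)
# appVersion: 17.5.0     <- PostgreSQL version shipped inside the container
# ...
```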
Release names must be unique within a namespace. However, you can have releases with the same name in different namespaces, which is a major advantage of Helm 3 over its predecessors.
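For example, the following sketch installs two releases with the same name into different namespaces; both succeed, while reusing the name inside one namespace fails (the namespaces here are illustrative):

```shell
# Same release name in two different namespaces: both are valid in Helm 3
helm install my-database bitnami/postgresql --namespace staging --create-namespace
helm install my-database bitnami/postgresql --namespace production --create-namespace

# Reusing the name inside the SAME namespace is rejected:
helm install my-database bitnami/postgresql --namespace staging
# Error: INSTALLATION FAILED: cannot re-use a name that is still in use
```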
Now let’s install PostgreSQL with a basic configuration:
helm install my-database bitnami/postgresql --create-namespace --namespace databases
#NAME: my-database
#LAST DEPLOYED: Sat Jul 5 16:42:30 2025
#NAMESPACE: databases
#STATUS: deployed
#REVISION: 1
#TEST SUITE: None
#NOTES:
#CHART NAME: postgresql
#CHART VERSION: 16.7.15
#APP VERSION: 17.5.0
#PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:
#my-database-postgresql.databases.svc.cluster.local - Read/Write connection
#To get the password for "postgres" run:
#export POSTGRES_PASSWORD=$(kubectl get secret --namespace databases my-database-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
# And bunch of information about deployed chart...
# You can view this output (the NOTES section) again at any time by running the status command.
helm status my-database --namespace databases
# This command reveals the deployment status, revision number, and importantly, the NOTES section
# that provides connection instructions specific to your deployment
The helm install command tells Helm to install a new release of a chart into the Kubernetes cluster defined by your current context.
- my-database: The name you are assigning to this Helm release (a unique ID for this deployment).
- bitnami/postgresql: Specifies the chart to install. In this case, the PostgreSQL chart from the Bitnami repository.
- --create-namespace: Automatically creates the databases namespace if it doesn’t exist.
- --namespace databases: Installs the chart into the Kubernetes namespace databases (instead of the default one).
For a broader view of all your releases, use the commands below:
# List releases in the current namespace
helm list
# List releases across all namespaces
helm list --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-database databases 1 2025-07-05 16:42:30.034773 +0300 +03 deployed postgresql-16.7.15 17.5.0
# Filter by status
helm list --failed --all-namespaces
Let’s verify our database is actually functional. First, retrieve the password:
# Export the PostgreSQL database password.
export POSTGRES_PASSWORD=$(kubectl get secret --namespace databases my-database-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
# Verify the password is set in your current shell.
echo $POSTGRES_PASSWORD
# VzUHrVHlnt
kubectl run postgresql-client --rm --tty -i --restart='Never' \
--namespace databases \
--image bitnami/postgresql:latest \
--env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host my-database-postgresql -U postgres -d postgres -p 5432
# If you don't see a command prompt, try pressing enter.
# postgres=#
# postgres=# \l
#                               List of databases
# Name | Owner | Encoding | Locale Provider | Collate | Ctype | Locale | ICU Rules | Access privileges
#-----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------
# postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
# template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
# | | | | | | | | postgres=CTc/postgres
# template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
# | | | | | | | | postgres=CTc/postgres
#(3 rows)
If you see a PostgreSQL prompt, you’ve successfully deployed your first application with Helm.
Working with Values and Configuration
Before customizing any chart, you need to understand what’s configurable. The helm show values
command reveals all available options. You can also pipe the output to a file for easier browsing.
helm show values bitnami/postgresql
# This command outputs the complete values.yaml file, which can
# be overwhelming at first. Here's a sample of the key sections you'll see:
## PostgreSQL Authentication parameters
#auth:
# postgresPassword: ""
# username: ""
# password: ""
# database: ""
## PostgreSQL Primary configuration
#primary:
# persistence:
# enabled: true
# storageClass: ""
# accessModes:
# - ReadWriteOnce
# size: 8Gi
# resources:
# limits: {}
# requests: {}
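As mentioned above, redirecting the output to a file makes the full list much easier to browse and search:

```shell
# Save the chart's default values for offline inspection
helm show values bitnami/postgresql > postgresql-default-values.yaml

# Jump straight to the sections you care about
grep -n "persistence" postgresql-default-values.yaml
```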
CLI Overrides with set flags
For more practical setups, especially in CI/CD pipelines or one-off custom deployments, you can override Helm chart values directly from the command line using the --set
flag. This approach is ideal for quick modifications and automation where maintaining a full values.yaml file isn’t necessary.
helm install my-database bitnami/postgresql \
--set auth.postgresPassword=mySecretPassword \
--set auth.database=myapp \
--namespace databases
For more complex use cases, Helm provides several variations of the set flag to pass structured or specialized values directly through the CLI.
helm install my-database bitnami/postgresql \
--set auth.postgresPassword=mySecretPassword \
--set primary.persistence.size=20Gi \
--set primary.resources.requests.memory=512Mi \
--namespace databases
helm install my-database bitnami/postgresql \
--set-json 'primary.nodeSelector={"kubernetes.io/arch":"amd64"}' \
--namespace databases
Custom Values File and Precedence
Rather than passing many values through --set
, it’s best to define them in a dedicated file for clarity and maintainability. For production environments, managing chart configuration with values files is essential.
- It allows you to track configuration changes in a version control system (Git).
- Provides clear visibility into what’s being deployed.
- Ensures consistent deployments across environments.
Example production-values.yaml
auth:
postgresPassword: "mySecretPassword"
username: "appuser"
password: "appUserPassword"
database: "myapp"
primary:
# Resource limits for production workloads
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "1Gi"
cpu: "1000m"
# Persistence configuration
persistence:
enabled: true
storageClass: "ssd"
size: 100Gi
accessModes:
- ReadWriteOnce
# PostgreSQL configuration tuning
extendedConfiguration: |
max_connections = 200
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 4MB
# Security context
podSecurityContext:
fsGroup: 1001
runAsUser: 1001
runAsNonRoot: true
# Enable monitoring
metrics:
enabled: true
serviceMonitor:
enabled: true
interval: 30s
Deploy using this values file:
helm install my-database bitnami/postgresql \
--values production-values.yaml \
--namespace databases \
--create-namespace
Combining set flag and custom values file
In more advanced scenarios, you may need to use a hybrid approach — combining a custom values.yaml
file with additional CLI --set
flags. This is especially useful when some values come from version-controlled configuration files, while others (like secrets or environment-specific tweaks) are injected at runtime. To use this combination properly, it’s important to understand Helm’s value precedence hierarchy:
- Chart defaults — values defined inside the chart’s values.yaml file.
- Parent chart values (for subcharts) — values passed from a parent chart to a dependency/subchart.
- Custom values files (-f / --values) — files you provide during installation or upgrade (e.g., production-values.yaml).
- CLI overrides (--set, --set-string, --set-json) — highest priority. These always override values from files or chart defaults.
Example:
helm install my-database bitnami/postgresql \
-f production-values.yaml \
--set auth.postgresPassword=runtimeSecret \
--namespace databases
In this case, most settings come from production-values.yaml
. But auth.postgresPassword
is overridden by the CLI flag, ensuring secrets can be passed securely at runtime.
Environment Specific Configurations
A common and highly recommended pattern is maintaining separate values files for different environments.
# Development environment
helm install my-database bitnami/postgresql \
--values values-dev.yaml \
--namespace dev
# Staging environment
helm install my-database bitnami/postgresql \
--values values-staging.yaml \
--namespace staging
# Production environment
helm install my-database bitnami/postgresql \
--values values-prod.yaml \
--namespace production
Each values file may specify environment-specific differences such as:
- Resource limits
- Storage class and size
- PostgreSQL tuning parameters
- Security and access controls
- Monitoring or logging options
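For instance, a values-dev.yaml might scale everything down compared to the production file shown earlier (an illustrative sketch; the keys follow the Bitnami PostgreSQL chart):

```yaml
# values-dev.yaml: smaller footprint for development
auth:
  database: "myapp"
primary:
  persistence:
    enabled: false        # ephemeral storage is acceptable in dev
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
metrics:
  enabled: false          # skip monitoring overhead in dev
```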
Release Management and History
Helm comes with a built-in release tracking and history system, which makes upgrades and rollbacks safe and predictable.
Upgrades
The Helm upgrade command is the primary tool for upgrading a running application. Unlike kubectl apply, Helm performs upgrades in an atomic and tracked manner. If an upgrade fails, it can be rolled back automatically or manually to the previous working state.
Let’s say you want to increase the memory limits for a PostgreSQL instance:
helm upgrade my-database bitnami/postgresql \
--set primary.resources.limits.memory=2Gi \
--namespace databases
# Release "my-database" has been upgraded. Happy Helming!
# NAME: my-database
# LAST DEPLOYED: Sun Jul 6 15:51:07 2025
# NAMESPACE: databases
# STATUS: deployed
# REVISION: 2
# TEST SUITE: None
# NOTES:
# CHART NAME: postgresql
# CHART VERSION: 16.7.15
# APP VERSION: 17.5.0
Notice that the revision number has incremented to 2. Helm automatically keeps track of every deployment or upgrade with a unique revision number.
To view release history:
helm history my-database -n databases
# REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
# 1 Sun Jul 6 15:50:49 2025 superseded postgresql-16.7.15 17.5.0 Install complete
# 2 Sun Jul 6 15:51:07 2025 deployed postgresql-16.7.15 17.5.0 Upgrade complete
- superseded: This revision was previously the active one, but it has since been replaced by a newer revision (an upgrade or rollback).
- deployed: This is the currently active and deployed revision of the release.
To roll back to a previous version:
helm rollback my-database 1 -n databases
# Rollback was a success! Happy Helming!
helm history my-database -n databases
# REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
# 1 Sun Jul 6 15:50:49 2025 superseded postgresql-16.7.15 17.5.0 Install complete
# 2 Sun Jul 6 15:51:07 2025 superseded postgresql-16.7.15 17.5.0 Upgrade complete
# 3 Sun Jul 6 15:56:30 2025 deployed postgresql-16.7.15 17.5.0 Rollback to 1
This restores the PostgreSQL release to revision 1, undoing the latest upgrade.
Atomic Flag
When deploying or upgrading critical applications, you want to avoid partial failures, where some resources are updated while others fail, leaving the system in an inconsistent or broken state. That’s where Helm’s --atomic flag comes in: it ensures that the entire Helm upgrade (or install) either succeeds completely or fails and rolls back automatically.
Behavior with --atomic
- If the upgrade succeeds → Helm deploys everything as expected.
- If the upgrade fails (due to resource limits, timeouts, misconfigurations, etc.) → Helm automatically rolls back to the previous working version.
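In practice this is a single extra flag on the commands you already know (the --timeout value below is illustrative; it bounds how long Helm waits before declaring failure):

```shell
# Upgrade atomically: on any failure, Helm rolls back to the previous revision
helm upgrade my-database bitnami/postgresql \
  --set primary.resources.limits.memory=2Gi \
  --atomic \
  --timeout 5m \
  --namespace databases
```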
Rollback Procedures for Failed Deployments
When deployments go wrong, Helm’s rollback capability is crucial for maintaining system stability. However, real-world scenarios often involve multiple interconnected services and cascading failures that require strategic thinking.
Rollback Commands
# Roll back to the previous revision
helm rollback my-database --namespace databases
# Roll back to a specific revision
helm rollback my-database 1 --namespace databases
# Roll back with additional safety options
helm rollback my-database 1 \
--cleanup-on-fail \
--wait \
--timeout 10m \
--namespace databases
Example Complex Multi-Service Rollback Scenario
Your e-commerce platform consists of multiple interconnected services:
- my-database (PostgreSQL): Core customer and order data
- payment-service: Handles transactions, depends on database
- inventory-service: Stock management, also depends on database
- api-gateway: Routes traffic to all services

During a routine maintenance window, you upgraded all services to improve performance. However, the deployment introduced several cascading issues:
- Database schema migration failed (revision 3 → 4)
- Payment service can’t connect due to connection pool changes
- Inventory service experiencing memory leaks from new caching layer
- API gateway returning 502 errors for 30% of requests
Example Solution Workflow
Step 1: Assess the Damage
# Check status of all affected services
helm list --namespace production --filter "my-database|payment-service|inventory-service|api-gateway"
# Get detailed status of each service
helm status my-database --namespace production
helm status payment-service --namespace production
helm status inventory-service --namespace production
helm status api-gateway --namespace production
# Check pod health across all services
kubectl get pods -n production -l "app in (database,payment,inventory,gateway)" -o wide
Step 2: Identify the Root Cause
# Review the problematic database revision
helm get values my-database --revision 4 --namespace production
# Compare with the last working revision
helm get values my-database --revision 3 --namespace production
# Check if schema migration completed
kubectl logs -n production deployment/my-database-migration --tail=50
# Examine database connectivity from dependent services
kubectl exec -n production deployment/payment-service -- pg_isready -h my-database-service
Step 3: Strategic Rollback Decision
Since the database is the foundation service, you need to determine if rolling it back will break dependent services that expect the new schema:
# Check which services were upgraded and their dependencies
helm history my-database --namespace production
helm history payment-service --namespace production
helm history inventory-service --namespace production
# Verify schema compatibility
kubectl exec -n production deployment/my-database -- psql -d app_db -c "\d+ orders"
Step 4: Coordinated Rollback Execution
# First, roll back dependent services to avoid orphaned connections
helm rollback payment-service 2 --namespace production --wait --timeout 5m
helm rollback inventory-service 3 --namespace production --wait --timeout 5m
# Then roll back the database (most critical)
helm rollback my-database 3 \
--namespace production \
--cleanup-on-fail \
--wait \
--timeout 15m \
--force
# Finally, roll back the gateway to handle routing properly
helm rollback api-gateway 1 --namespace production --wait --timeout 3m
Uninstalling and Cleanup
Proper Application Removal
Uninstalling Helm releases requires careful consideration, especially for stateful applications like PostgreSQL. The helm uninstall
command removes the release and most associated resources, but persistent data requires special handling.
Basic Uninstall
# Basic uninstall
helm uninstall my-database --namespace databases
# release "my-database" uninstalled
# Important: This basic uninstall intentionally leaves behind Persistent Volume Claims (PVCs) and Secrets to prevent accidental data loss.
Comprehensive PostgreSQL Cleanup
Step 1: Create a Backup (you can skip this for this lab)
# Export database before removal
kubectl exec my-database-postgresql-0 --namespace databases -- \
pg_dumpall -U postgres > postgresql-backup-$(date +%Y%m%d).sql
Step 2: Uninstall with Options
# Uninstall while keeping history for potential rollback
helm uninstall my-database --keep-history --namespace databases
# Or complete uninstall without history
helm uninstall my-database --namespace databases
Step 3: Check Remaining Resources
# List what's left behind
kubectl get pvc,secrets -l app.kubernetes.io/instance=my-database --namespace databases
Step 4: Clean Up Persistent Resources
# ⚠️ WARNING: This destroys data permanently
kubectl delete pvc data-my-database-postgresql-0 --namespace databases
kubectl delete secret my-database-postgresql --namespace databases
# Clean up persistent volumes (if manually provisioned)
kubectl get pv | grep my-database
kubectl delete pv my-database-pv
Automated Cleanup Script
For production environments, use this comprehensive cleanup script:
#!/bin/bash
RELEASE_NAME="my-database"
NAMESPACE="databases"
echo "Starting cleanup for release: $RELEASE_NAME in namespace: $NAMESPACE"
# Check if release exists
if helm status $RELEASE_NAME --namespace $NAMESPACE &> /dev/null; then
echo "Found release $RELEASE_NAME"
# Optional backup
read -p "Create backup before cleanup? (y/n): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
kubectl exec $RELEASE_NAME-postgresql-0 --namespace $NAMESPACE -- \
pg_dumpall -U postgres > "${RELEASE_NAME}-backup-$(date +%Y%m%d-%H%M%S).sql"
echo "Backup created"
fi
# Uninstall release
helm uninstall $RELEASE_NAME --namespace $NAMESPACE
echo "Release uninstalled"
else
echo "Release $RELEASE_NAME not found"
fi
# Clean up persistent resources
echo "Cleaning up persistent resources..."
kubectl delete pvc -l app.kubernetes.io/instance=$RELEASE_NAME --namespace $NAMESPACE --ignore-not-found
kubectl delete secrets -l app.kubernetes.io/instance=$RELEASE_NAME --namespace $NAMESPACE --ignore-not-found
# Verify cleanup
echo "Cleanup verification:"
kubectl get all,pvc,secrets -l app.kubernetes.io/instance=$RELEASE_NAME --namespace $NAMESPACE
Verification Commands
After cleanup, verify complete removal:
# Check for remaining resources
kubectl get all,pvc,secrets --namespace databases
# Verify Helm release history
helm list --all --namespace databases
helm list --uninstalled --namespace databases
Production Safety Tips
- Always backup before uninstalling stateful applications
- Use the --keep-history flag for potential rollback scenarios
- Test cleanup procedures in staging environments first
- Double-check resource labels before bulk deletions
- Have a recovery plan ready before destroying persistent data
⚠️ Critical Warning: Deleting Persistent Volume Claims destroys your data permanently. Always verify you have backups and confirm the correct resources before deletion. In production, consider using *kubectl describe*
to inspect resources before deletion.
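For example, before removing the PVC from the earlier cleanup steps, a quick inspection shows its capacity, status, and which pod (if any) still mounts it:

```shell
# Inspect the PVC before destroying it
kubectl describe pvc data-my-database-postgresql-0 --namespace databases

# Double-check that no pod in the namespace still references the claim
kubectl get pods --namespace databases \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}'
```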