
Deploying Your First Application on Kubernetes: A Practical Guide
Kubernetes has become the industry standard for container orchestration, enabling developers and operations teams to build scalable, resilient, and self-healing systems. While its power is undeniable, taking the first step of deploying an application can seem daunting. This guide demystifies the process, providing a clear, step-by-step walkthrough to get your application running on a Kubernetes cluster.
Whether you’re a developer, a DevOps engineer, or just starting your cloud-native journey, understanding this fundamental workflow is essential. We will cover everything from the prerequisites to verifying your live application.
Before You Begin: The Prerequisites
To successfully deploy an application, you need a few key components in place. Ensure you have the following ready:
- A Running Kubernetes Cluster: This could be a local cluster for development like Minikube or kind, or a managed cluster from a cloud provider like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS).
- kubectl Installed and Configured: The `kubectl` command-line tool is your primary interface for interacting with your cluster. Make sure it’s installed and configured to communicate with your cluster.
- A Containerized Application: Your application must be packaged into a container image, most commonly a Docker image. This image should be pushed to a container registry that your Kubernetes cluster can access, such as Docker Hub, Google Container Registry (GCR), or Amazon Elastic Container Registry (ECR).
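Before moving on, it helps to confirm that `kubectl` can actually reach your cluster. Here is a quick sanity check using standard `kubectl` commands (the exact output will depend on your cluster):

```bash
# Print the address of the cluster's control plane
kubectl cluster-info

# List the nodes; each should report a STATUS of "Ready"
kubectl get nodes
```

If either command fails, fix your kubeconfig (for example, by re-running your cloud provider's credentials command) before continuing.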
Step 1: The Core Concept – Declarative Configuration with YAML
Kubernetes operates on a declarative model. This means you don’t tell Kubernetes how to do something; you tell it the desired state you want to achieve. You define this state in YAML manifest files. Kubernetes then works tirelessly in the background to make the current state of the cluster match your declared desired state.
For a basic application deployment, we need to define two primary objects: a Deployment and a Service.
Step 2: Creating the Deployment Manifest
A Deployment is a Kubernetes resource that manages a set of identical Pods. A Pod is the smallest deployable unit in Kubernetes and typically contains a single application container (though it can contain more). The Deployment ensures that a specified number of Pods are always running and handles updates and rollbacks gracefully.
Let’s create a file named `deployment.yaml`. This file defines a Deployment that will run three instances (replicas) of our application using a specified container image.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # A unique name for your Deployment
  name: my-first-app-deployment
spec:
  # The desired number of Pods to run
  replicas: 3
  selector:
    matchLabels:
      app: my-first-app
  template:
    metadata:
      labels:
        # This label connects the Pods to the Deployment's selector
        app: my-first-app
    spec:
      containers:
      - name: my-app-container
        # Replace this with your actual container image from your registry
        image: your-registry/your-app-image:latest
        ports:
        - containerPort: 8080 # The port your application listens on inside the container
```
Key takeaways from this file:
- `kind: Deployment`: Tells Kubernetes we are defining a Deployment resource.
- `replicas: 3`: Instructs Kubernetes to maintain three running Pods for this application. If one fails, Kubernetes will automatically start a new one.
- `selector` and `template.metadata.labels`: This is how the Deployment knows which Pods to manage. The `selector` looks for Pods with the label `app: my-first-app`.
- `image`: This is the most critical part to change. You must replace `your-registry/your-app-image:latest` with the path to your container image in your registry.
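Before creating anything in the cluster, you can ask Kubernetes to validate the manifest without persisting it, using `kubectl apply`'s built-in `--dry-run` flag:

```bash
# Client-side check: parse and print the manifest without contacting the cluster
kubectl apply -f deployment.yaml --dry-run=client

# Server-side check: the API server validates the object but does not create it
kubectl apply -f deployment.yaml --dry-run=server
```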
Step 3: Exposing Your Application with a Service
Our Deployment is now defined, but the Pods are only accessible within the cluster’s internal network. To expose them to the outside world, we need a Service. A Kubernetes Service provides a stable network endpoint (a single IP address and DNS name) to access a group of Pods. It also acts as a load balancer, distributing traffic across all the Pods managed by the Deployment.
Let’s create a second file, `service.yaml`. We will use a `LoadBalancer` type, which is the easiest way to get external traffic into your cluster on most cloud platforms.
```yaml
apiVersion: v1
kind: Service
metadata:
  # A unique name for your Service
  name: my-first-app-service
spec:
  # This tells the Service to look for Pods with the label 'app: my-first-app'
  selector:
    app: my-first-app
  ports:
  - protocol: TCP
    # The port the Service will be exposed on
    port: 80
    # The port on the Pod that traffic should be forwarded to
    targetPort: 8080
  # This type creates an external load balancer in supported cloud environments
  type: LoadBalancer
```
Key takeaways from this file:
- `kind: Service`: Defines this resource as a Service.
- `selector`: This must match the labels of the Pods (`app: my-first-app`) so the Service knows where to send traffic.
- `ports`: We define that traffic coming into the load balancer on port 80 should be forwarded to `targetPort` 8080 on the containers.
- `type: LoadBalancer`: When deployed in a cloud environment, the provider will automatically provision an external load balancer with a public IP address.
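One practical caveat: on a local cluster such as Minikube or kind, there is no cloud provider to provision the load balancer, so the Service’s external IP will stay `<pending>` indefinitely. Two common workarounds, using the Service name from the manifest above:

```bash
# Minikube only: run in a separate terminal to give LoadBalancer Services an IP
minikube tunnel

# Any cluster: forward a local port straight to the Service
kubectl port-forward service/my-first-app-service 8080:80
# The app is then reachable at http://localhost:8080
```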
Step 4: Applying the Manifests and Verifying the Deployment
With our two manifest files ready, it’s time to tell Kubernetes to create these resources. We do this using the `kubectl apply` command.

1. Apply the configuration. Open your terminal and run the following commands in the directory containing your YAML files:

   ```bash
   kubectl apply -f deployment.yaml
   kubectl apply -f service.yaml
   ```

   You should see a confirmation that the deployment and service were created.

2. Check the status of your Deployment. To see if your Pods are being created, run:

   ```bash
   kubectl get deployments
   ```

   You should see your `my-first-app-deployment` with the READY column showing `3/3`.

3. Check the status of your Pods. For more detail on the individual Pods, run:

   ```bash
   kubectl get pods
   ```

   You should see three Pods with a status of `Running`.

4. Get the public IP address of your Service. This is the final step to access your application. Run:

   ```bash
   kubectl get services
   ```

   Look for `my-first-app-service`. In the `EXTERNAL-IP` column, you will see an IP address. It may show as `<pending>` for a minute or two while the cloud provider provisions the load balancer. Once the IP is visible, you can access your application by navigating to that IP address in your web browser.
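You can also verify from the command line that the rollout finished and that the app responds. In this sketch, `<EXTERNAL-IP>` is a placeholder for the address reported by `kubectl get services`:

```bash
# Block until all three replicas are updated and available
kubectl rollout status deployment/my-first-app-deployment

# Fetch the app's landing page through the load balancer
curl http://<EXTERNAL-IP>/
```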
Essential Security and Reliability Tips
Simply deploying an application is just the beginning. To run a production-ready workload, consider these best practices:
- Use Liveness and Readiness Probes: Add these probes to your Deployment specification. A readiness probe tells Kubernetes when your app is ready to accept traffic, and a liveness probe helps Kubernetes know if your app has crashed and needs to be restarted. (A combined example covering the first three tips appears after this list.)
- Set Resource Requests and Limits: Define CPU and memory `requests` (what the container needs to run) and `limits` (the maximum it can consume). This prevents a single container from starving other applications on the same node.
- Run as a Non-Root User: Enhance security by specifying a `securityContext` in your container spec to ensure your application doesn’t run with root privileges inside the container.
- Organize with Namespaces: For larger projects, use Kubernetes Namespaces to create virtual clusters within your physical cluster. This helps isolate resources, manage access control, and prevent naming conflicts.
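As a rough illustration of the first three tips, here is how the container section of `deployment.yaml` might look with probes, resources, and a security context added. This is a sketch, not a drop-in config: the `/healthz` path, the probe timings, the resource figures, and the UID are assumptions you would tune for your own application.

```yaml
# Excerpt of the container spec from deployment.yaml, extended with the
# reliability and security settings described above. Paths, timings,
# resource figures, and the UID are illustrative assumptions.
containers:
- name: my-app-container
  image: your-registry/your-app-image:latest
  ports:
  - containerPort: 8080
  # Restart the container if this endpoint stops responding
  livenessProbe:
    httpGet:
      path: /healthz   # assumed health endpoint; use your app's own
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10
  # Only send traffic once the app reports it is ready
  readinessProbe:
    httpGet:
      path: /healthz   # assumed readiness endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 5
  # Reserve a baseline of CPU/memory and cap the maximum usage
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi
  # Run the process as an unprivileged user inside the container
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000   # assumed non-root UID that exists in your image
```

For the fourth tip, the standard workflow is `kubectl create namespace <name>` followed by `kubectl apply -f deployment.yaml -n <name>` to deploy into that namespace.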
By following these steps, you have successfully moved from a container image to a globally accessible, scalable application running on Kubernetes. This foundational skill opens the door to more advanced concepts like automated scaling, configuration management, and robust CI/CD pipelines.
Source: https://kifarunix.com/step-by-step-guide-on-deploying-an-application-on-kubernetes-cluster/