
Guestbook Example

This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front end, a Redis master for storage, and a replicated set of Redis slaves, for all of which we will create Kubernetes replication controllers, pods, and services.

If you are running a cluster in Google Container Engine (GKE), instead see the Guestbook Example for Google Container Engine.

Step Zero: Prerequisites

This example assumes that you have a working cluster. See the Getting Started Guides for details about creating a cluster.

Tip: View all the kubectl commands, including their options and descriptions, in the kubectl CLI reference.

Step One: Create the Redis master pod

Use the examples/guestbook-go/redis-master-controller.json file to create a replication controller and Redis master pod. The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for a single replica, so that the pod benefits from the self-healing mechanism in Kubernetes (it keeps the pods alive).
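
For orientation, such a replication controller manifest looks roughly like the sketch below. The field values (image, labels, port) are illustrative assumptions based on the kubectl output shown later in this step; the redis-master-controller.json file that ships with the example is the authoritative version.

    {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": {
        "name": "redis-master",
        "labels": {"app": "redis", "role": "master"}
      },
      "spec": {
        "replicas": 1,
        "selector": {"app": "redis", "role": "master"},
        "template": {
          "metadata": {"labels": {"app": "redis", "role": "master"}},
          "spec": {
            "containers": [
              {"name": "redis-master", "image": "redis", "ports": [{"containerPort": 6379}]}
            ]
          }
        }
      }
    }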

  1. Use the redis-master-controller.json file to create the Redis master replication controller in your Kubernetes cluster by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/redis-master-controller.json
    
  2. To verify that the redis-master controller is up, list the replication controllers you created in the cluster with the kubectl get rc command (if you don't specify a --namespace, the default namespace is used; the same applies below):

    $ kubectl get rc
    CONTROLLER             CONTAINER(S)            IMAGE(S)                    SELECTOR                         REPLICAS
    redis-master           redis-master            gurpartap/redis             app=redis,role=master            1
    ...

    Result: The replication controller then creates the single Redis master pod.

  3. To verify that the redis-master pod is running, list the pods you created in the cluster with the kubectl get pods command:

    $ kubectl get pods
    NAME                        READY     STATUS    RESTARTS   AGE
    redis-master-xx4uv          1/1       Running   0          1m
    ...

    Result: You'll see a single Redis master pod, along with the machine it is running on once the pod has been placed (placement may take up to thirty seconds).

  4. To verify what containers are running in the redis-master pod, you can SSH to the machine it was scheduled on (on Google Compute Engine, with gcloud compute ssh --zone zone_name host_name) and then run sudo docker ps:

    me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-bz1p
    
    me@kubernetes-node-3:~$ sudo docker ps
    CONTAINER ID        IMAGE     COMMAND                  CREATED             STATUS
    d5c458dabe50        redis     "/entrypoint.sh redis"   5 minutes ago       Up 5 minutes

    Note: The initial docker pull can take a few minutes, depending on network conditions.

Step Two: Create the Redis master service

A Kubernetes service is a named load balancer that proxies traffic to one or more pods. The services in a Kubernetes cluster are discoverable inside other pods via environment variables or DNS.

Services find the pods to load balance based on pod labels. The pod that you created in Step One has the labels app=redis and role=master. The selector field of the service determines which pods will receive the traffic sent to the service.
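
Roughly, such a service manifest might look like the following sketch. The port and selector values here are assumptions chosen to match the pod labels above; the redis-master-service.json file in the example is the authoritative version.

    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "name": "redis-master",
        "labels": {"app": "redis", "role": "master"}
      },
      "spec": {
        "ports": [{"port": 6379, "targetPort": 6379}],
        "selector": {"app": "redis", "role": "master"}
      }
    }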

  1. Use the redis-master-service.json file to create the service in your Kubernetes cluster by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/redis-master-service.json
    
  2. To verify that the redis-master service is up, list the services you created in the cluster with the kubectl get services command:

    $ kubectl get services
    NAME              CLUSTER_IP       EXTERNAL_IP       PORT(S)       SELECTOR               AGE
    redis-master      10.0.136.3       <none>            6379/TCP      app=redis,role=master  1h
    ...

    Result: All new pods will see the redis-master service running on the host given by the $REDIS_MASTER_SERVICE_HOST environment variable at port 6379, or reachable by name as redis-master:6379. After the service is created, the service proxy on each node is configured to forward traffic to it on the specified port (in our example, that's port 6379).

Step Three: Create the Redis slave pods

The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod.
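
A slave controller manifest might look roughly like the sketch below. The replica count, image, and labels are assumptions taken from the kubectl output later in this step; redis-slave-controller.json in the example is the authoritative version.

    {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": {
        "name": "redis-slave",
        "labels": {"app": "redis", "role": "slave"}
      },
      "spec": {
        "replicas": 2,
        "selector": {"app": "redis", "role": "slave"},
        "template": {
          "metadata": {"labels": {"app": "redis", "role": "slave"}},
          "spec": {
            "containers": [
              {"name": "redis-slave", "image": "k8s.gcr.io/redis-slave:v2", "ports": [{"containerPort": 6379}]}
            ]
          }
        }
      }
    }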

  1. Use the file redis-slave-controller.json to create the replication controller by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/redis-slave-controller.json
    
  2. To verify that the redis-slave controller is running, run the kubectl get rc command:

    $ kubectl get rc
    CONTROLLER              CONTAINER(S)            IMAGE(S)                         SELECTOR                    REPLICAS
    redis-master            redis-master            redis                            app=redis,role=master       1
    redis-slave             redis-slave             k8s.gcr.io/redis-slave:v2        app=redis,role=slave        2
    ...

    Result: The replication controller creates the Redis slave pods and configures them to connect to the Redis master through the redis-master service (a name:port pair; in our example, redis-master:6379).

    Example: The Redis slaves get started by the replication controller with the following command:

    redis-server --slaveof redis-master 6379

  3. To verify that the Redis master and slave pods are running, run the kubectl get pods command:

    $ kubectl get pods
    NAME                          READY     STATUS    RESTARTS   AGE
    redis-master-xx4uv            1/1       Running   0          18m
    redis-slave-b6wj4             1/1       Running   0          1m
    redis-slave-iai40             1/1       Running   0          1m
    ...

    Result: You see the single Redis master and two Redis slave pods.

Step Four: Create the Redis slave service

Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the Redis slave service provides transparent load balancing to clients.

  1. Use the redis-slave-service.json file to create the Redis slave service by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/redis-slave-service.json
    
  2. To verify that the redis-slave service is up, list the services you created in the cluster with the kubectl get services command:

    $ kubectl get services
    NAME              CLUSTER_IP       EXTERNAL_IP       PORT(S)       SELECTOR               AGE
    redis-master      10.0.136.3       <none>            6379/TCP      app=redis,role=master  1h
    redis-slave       10.0.21.92       <none>            6379/TCP      app=redis,role=slave   1h
    ...

    Result: The service is created with labels app=redis and role=slave to identify that the pods are running the Redis slaves.

Tip: It is helpful to set labels on your services themselves (as we've done here) to make it easy to locate them later.

Step Five: Create the guestbook pods

This is a simple Go net/http (negroni based) server that is configured to talk to either the slave or master service, depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serve a jQuery-Ajax based UI. Like the Redis slave pods, these pods are also managed by a replication controller.
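
The guestbook controller manifest might look roughly like the sketch below. The replica count, image, and port are assumptions based on the kubectl output shown later in this step and the port used in Step Seven; guestbook-controller.json in the example is the authoritative version.

    {
      "apiVersion": "v1",
      "kind": "ReplicationController",
      "metadata": {
        "name": "guestbook",
        "labels": {"app": "guestbook"}
      },
      "spec": {
        "replicas": 3,
        "selector": {"app": "guestbook"},
        "template": {
          "metadata": {"labels": {"app": "guestbook"}},
          "spec": {
            "containers": [
              {"name": "guestbook", "image": "k8s.gcr.io/guestbook:v3", "ports": [{"containerPort": 3000}]}
            ]
          }
        }
      }
    }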

  1. Use the guestbook-controller.json file to create the guestbook replication controller by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/guestbook-controller.json
    

Tip: If you want to modify the guestbook code, open the _src directory of this example and read the README.md and the Makefile. If you have pushed your custom image, be sure to update the image accordingly in guestbook-controller.json.

  2. To verify that the guestbook replication controller is running, run the kubectl get rc command:

    $ kubectl get rc
    CONTROLLER            CONTAINER(S)         IMAGE(S)                               SELECTOR                  REPLICAS
    guestbook             guestbook            k8s.gcr.io/guestbook:v3                app=guestbook             3
    redis-master          redis-master         redis                                  app=redis,role=master     1
    redis-slave           redis-slave          k8s.gcr.io/redis-slave:v2              app=redis,role=slave      2
    ...

  3. To verify that the guestbook pods are running (it might take up to thirty seconds to create the pods), list the pods you created in the cluster with the kubectl get pods command:

    $ kubectl get pods
    NAME                           READY     STATUS    RESTARTS   AGE
    guestbook-3crgn                1/1       Running   0          2m
    guestbook-gv7i6                1/1       Running   0          2m
    guestbook-x405a                1/1       Running   0          2m
    redis-master-xx4uv             1/1       Running   0          23m
    redis-slave-b6wj4              1/1       Running   0          6m
    redis-slave-iai40              1/1       Running   0          6m
    ...

    Result: You see a single Redis master, two Redis slaves, and three guestbook pods.

Step Six: Create the guestbook service

Just like the others, we create a service to group the guestbook pods, but this time, to make the guestbook front end externally visible, we specify "type": "LoadBalancer".
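
An externally visible service of this kind might look roughly like the following sketch. Port 3000 and the selector are assumptions matching the guestbook pods above; guestbook-service.json in the example is the authoritative version.

    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": {
        "name": "guestbook",
        "labels": {"app": "guestbook"}
      },
      "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 3000, "targetPort": 3000}],
        "selector": {"app": "guestbook"}
      }
    }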

  1. Use the guestbook-service.json file to create the guestbook service by running the kubectl create -f filename command:

    $ kubectl create -f examples/guestbook-go/guestbook-service.json

  2. To verify that the guestbook service is up, list the services you created in the cluster with the kubectl get services command:

    $ kubectl get services
    NAME              CLUSTER_IP       EXTERNAL_IP       PORT(S)       SELECTOR               AGE
    guestbook         10.0.217.218     146.148.81.8      3000/TCP      app=guestbook          1h
    redis-master      10.0.136.3       <none>            6379/TCP      app=redis,role=master  1h
    redis-slave       10.0.21.92       <none>            6379/TCP      app=redis,role=slave   1h
    ...

    Result: The service is created with label app=guestbook.

Step Seven: View the guestbook

You can now play with the guestbook that you just created by opening it in a browser (it might take a few moments for the guestbook to come up).

  • Local Host: If you are running Kubernetes locally, to view the guestbook, navigate to http://localhost:3000 in your browser.

  • Remote Host:

    1. To view the guestbook on a remote host, locate the external IP of the load balancer in the EXTERNAL_IP column of the kubectl get services output. In our example, the internal IP address is 10.0.217.218 and the external IP address is 146.148.81.8 (Note: you might need to scroll to see the EXTERNAL_IP column).

    2. Append port 3000 to the IP address (for example http://146.148.81.8:3000), and then navigate to that address in your browser.

    Result: The guestbook displays in your browser.

    Further Reading: If you're using Google Compute Engine, see the details about limiting traffic to specific sources at Google Compute Engine firewall documentation.

Step Eight: Cleanup

After you're done playing with the guestbook, you can clean up by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.

Delete all the resources by running the following kubectl delete -f filename command:

$ kubectl delete -f examples/guestbook-go
guestbook-controller
guestbook
redis-master-controller
redis-master
redis-slave-controller
redis-slave

Tip: To turn down your Kubernetes cluster, follow the corresponding instructions in the version of the Getting Started Guides that you previously used to create your cluster.
