An Engineer's Introduction to Pivotal Container Service

Keun Lee

3.07.2018

This introduction is a guide to getting started with Pivotal Container Service (PKS) on Google Cloud Platform (GCP). Before you begin, review the disclaimers and pre-requisites below.

Disclaimer: This guide was created using PKS v0.8.0

Assumptions: You are running all your development from a Mac or other Unix flavored OS

Pre-requisites:

  • A GCP project with Pivotal Ops Manager and the PKS tile installed
  • The PKS CLI installed (see the PKS section below)
  • The kubectl CLI installed (see the Kubectl CLI section below)

Overview

  • Notes and suggestions for developers
  • Instructions for provisioning additional clusters in PKS
  • Instructions for managing and working with clusters in PKS via kubectl CLI

Developer Notes

Minikube: Local K8S Cluster on your Machine

It is highly recommended that you become familiar with navigating and managing a k8s cluster before doing the real thing on a deployed cluster.

You can do this locally by installing minikube, which runs a local k8s cluster for you to manage.

See more about minikube here: https://github.com/kubernetes/minikube
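
For example, once minikube is installed, a minimal local session might look like the following (a sketch; flags and defaults vary by minikube version):

# start a local single-node k8s cluster
minikube start
# verify the node is up
kubectl get nodes
# tear the local cluster down when you are done experimenting
minikube delete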

Watch Command

This CLI app is not required; however, it is highly useful for monitoring your k8s resources while you provision and configure them.

example:

watch -n 1 kubectl get storageclass,deployments,services,pods,statefulset,configmap

resulting output:

Every 1.0s: kubectl get storageclass,deployments,services,pods,statefulse...  alphamind48105.local: Sat Feb 10 21:56:34 2018

NAME                        PROVISIONER
storageclasses/es-storage   kubernetes.io/gce-pd
storageclasses/standard     kubernetes.io/gce-pd

NAME                TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                         AGE
svc/elasticsearch   LoadBalancer   10.100.200.71   xxx.xxx.xxx.xxx   9200:30271/TCP,9300:32355/TCP   16d
svc/kubernetes      ClusterIP      10.100.200.1    <none>           443/TCP                         17d

NAME          READY     STATUS    RESTARTS   AGE
po/es-7666f   1/1       Running   0          16d

To install, follow the instructions as described in the link below:

http://osxdaily.com/2010/08/22/install-watch-command-on-os-x/
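
If you use Homebrew, for example, the install is a one-liner:

# install the watch command via Homebrew (macOS)
brew install watch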

More information on what this command does: https://en.wikipedia.org/wiki/Watch_(Unix)

Resources

GCP

Log in to GCP: https://console.cloud.google.com

In the project dropdown, select the project where you have Ops Manager installed.

If all is well, the GCP dashboard should look similar to the one shown below:

Pivotal Ops Manager

Within your GCP project you should have already installed Pivotal's Ops Manager, and you should be able to locate a URL to log in to Ops Manager.

More info on Ops Manager found here: http://docs.pivotal.io/pivotalcf/2-0/customizing/ops-man.html

Log in to Ops Manager and verify that at least the following are installed:

Here we are able to install additional packages to the platform:

PKS - Pivotal Container Service

PKS by itself does NOT have a web UI dashboard that you can use to administer k8s clusters. Cluster creation and deletion are handled via a CLI application. The PKS CLI application can be obtained by following the instructions here: https://docs.pivotal.io/runtimes/pks/1-0/installing-pks-cli.html

Kubectl CLI Application:

Follow installation instructions here: https://kubernetes.io/docs/tasks/tools/install-kubectl/
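
Once installed, you can verify the kubectl client is on your PATH:

# print the client version to confirm the install
kubectl version --client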

PKS allows for the creation and management of k8s clusters. You interface with these clusters just like you would any k8s cluster, with the kubectl CLI Application.

For more information on k8s, see: https://kubernetes.io/docs

Official PKS Documentation: https://docs.pivotal.io/runtimes/pks/1-0/index.html


Logging into PKS

Log in to PKS with the following commands.

export PKS_API=https://PKS_API_URL
pks login -a $PKS_API -u "pks_username" -p "pks_password" -k

For more specifics on where and how to obtain login credentials and the PKS API URL, see PKS Prerequisites: https://docs.pivotal.io/runtimes/pks/1-0/using-prerequisites.html


Navigating PKS Clusters

Listing Clusters

The following command will list the available clusters in your PKS instance:

pks list-clusters

resulting output:

Name           Plan Name  UUID                                  Status     Action
my-cluster                9f484bd4-9eab-41ef-a0d1-a81f4fd3e061  succeeded  CREATE
kafka-cluster             332e6784-577b-4250-ae55-7ee44516b975  succeeded  CREATE

Selecting a Cluster to Manage

The following command will select a cluster for management by kubectl:

pks get-credentials my-cluster

resulting output:

Fetching credentials for cluster my-cluster.
Context set for cluster my-cluster.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

When you have multiple clusters, you can switch between them either with the pks get-credentials command or with kubectl, as shown in the output above.
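
For example, to see which contexts kubectl knows about and to switch between them:

# list all contexts known to kubectl; the current one is marked with *
kubectl config get-contexts
# switch kubectl to the kafka-cluster context
kubectl config use-context kafka-cluster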

At this point, once you've selected a cluster to manage, you can run kubectl commands against it as you would with any k8s cluster.
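
For example, a few common read-only commands to get oriented in the cluster you just selected:

# list the worker nodes in the cluster
kubectl get nodes
# list all pods across all namespaces
kubectl get pods --all-namespaces
# list services in the default namespace
kubectl get services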

For more documentation on how to orchestrate containers in k8s, see: https://kubernetes.io/docs

Accessing a PKS Cluster's K8S Dashboard via Proxy

The following command will create a proxy through which you can view the cluster's k8s dashboard:

kubectl proxy

resulting output:

Starting to serve on 127.0.0.1:8001

Navigate to the following URL: http://localhost:8001/ui

You'll get the following dashboard

Creating a new Cluster with PKS

NOTE: This process will consume additional GCP resources, which remain allocated after you are done.

The following are step-by-step instructions for creating a new cluster for use with PKS.

When creating a new cluster with PKS, you must specify an external hostname, which PKS will use for the new cluster. Before creating a new PKS cluster, you will need to provision the following in GCP:

  • Load Balancer
  • Firewall Rule

0 - Login

Log in to GCP and make sure to select the project: pks20-project

1 - Create a Load Balancer in GCP

From the main GCP menu, select: Network Services --> Load Balancing

You will be taken to a screen that looks like the following:

  • Click the "Create Load Balancer" button
  • Click the "Start configuration" button under "TCP Loading Balancing"

  • Click the "continue" button on the next screen

  • Fill in the name of the new Load Balancer (e.g., pks-cluster-3)
  • Click on Backend Configuration. Fill in the values so that they look like the following:

  • Click on Frontend Configuration. Fill in the values so that they look like the following:

  • Click on the IP Dropdown and select the Create IP Address option. You will get the following screen. Please fill in as illustrated and click the Reserve button, then the Done button.

  • Click the create button to create the new Load Balancer. You'll get the following screen.

  • Make a note of the IP address created. This will be the external hostname IP that you will pass to the PKS CLI when creating a new cluster. See illustration below:

At this point you will have created a new Load Balancer which will act as the PKS Cluster External Host. You will provide the IP you reserved in this Load Balancer to the PKS create cluster command.
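
If you prefer the CLI, the same Load Balancer can be sketched with gcloud. A TCP load balancer in GCP is a regional forwarding rule pointed at a target pool; the names and region below are assumptions for illustration, and older gcloud versions may require the literal reserved IP for --address:

# reserve a static regional IP to use as the cluster's external hostname
gcloud compute addresses create pks-cluster-3-ip --region us-central1
# create an empty target pool; the master VM is attached in step 5
gcloud compute target-pools create pks-cluster-3 --region us-central1
# forward TCP 8443 (the k8s master port) from the reserved IP to the target pool
gcloud compute forwarding-rules create pks-cluster-3 \
    --region us-central1 \
    --address pks-cluster-3-ip \
    --target-pool pks-cluster-3 \
    --ports 8443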

2 - Create a Firewall Rule in GCP

From the main GCP menu, select: VPC Network --> Firewall rules

Once selected, you will be taken to the following screen.

  • Click on the Create Firewall Rule button, and you will be brought to a form.
  • Fill out the form so that it looks like the following (note: you'll want to use a different name than illustrated) and then click the 'create' button

At this point you will have created a new Firewall Rule which will allow you to connect to the cluster remotely via kubectl in later steps.
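
As a sketch, the equivalent firewall rule via gcloud might look like the following. The rule name, network name, and tag are assumptions; the tag must match the network tag you add to the master VM in step 4:

# allow inbound TCP 8443 (the k8s master port) to VMs tagged pks-cluster-3
gcloud compute firewall-rules create pks-cluster-3-master \
    --network pks-virt-net \
    --allow tcp:8443 \
    --target-tags pks-cluster-3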

3 - Create a new Cluster with PKS CLI

At the command line terminal, enter the following:

# set PKS_API url
export PKS_API=https://PKS_API_URL
# login to PKS
pks login -a $PKS_API -u "pks_username" -p "pks_password" -k
# create a cluster named "sandbox-cluster", passing in an external hostname ip, and specifying a "default" plan
pks create-cluster sandbox-cluster --external-hostname=<hostname_ip_from_step_1> --plan default

note: The external hostname is the IP Address we took note of earlier when creating the Load Balancer.

Once the above has executed, you can watch the status of the cluster being created by running the following:

watch -n 1 pks show-cluster sandbox-cluster

with the resulting output (which updates every 1 second):

Every 1.0s: pks show-cluster sandbox-cluster  alphamind48105.local: Tue Feb 13 03:11:22 2018


Name:                     sandbox-cluster
Plan Name:
UUID:                     0c1d58e9-527e-41c3-aa65-56eaa26c42ef
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Instance provisioning in progress
Kubernetes Master Host:   xxx.xxx.xxx.xxx
Kubernetes Master Port:   8443
Worker Instances:         set via plan default

After some time, the above status will let you know whether the operation was successful. A successful operation will look like the following:

Name:                     sandbox-cluster
Plan Name:
UUID:                     0c1d58e9-527e-41c3-aa65-56eaa26c42ef
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   xxx.xxx.xxx.xxx
Kubernetes Master Port:   8443
Worker Instances:         set via plan default

note: If the above operation fails, check to see that you have not exceeded your GCP quotas. If you have exceeded the quotas, you will need to request an increase in quota or delete a cluster you are not using.
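
You can check your current usage against quota limits from the CLI as well; for example (the region is an assumption):

# show quota usage vs. limits for the region hosting the cluster
gcloud compute regions describe us-central1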

4 - Update Master VM network tags in GCP

You are not out of the woods yet :)

Go to Compute Engine --> VM Instances

Here you will want to look for a VM Instance with the following attributes:

  • Is NOT in use by a Load Balancer
  • Has the label job: master

If you click on the "correct" VM Instance with the above criteria, it should look like the following:

Once you've found this VM, make note of the VM Instance name. You will need this for the next step.

We'll need to update this VM's network tags.

  • Click on the 'edit' button.
  • Scroll down to the Network Tags box and add a network tag, which will be the name of the load balancer created earlier (in this case, pks-cluster-3). It should look like the following (you can also add the tag with gcloud; see the sketch after these steps):

  • Scroll down, and click on the 'save' button
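
Alternatively, the tag can be added from the CLI. A sketch, where the VM instance name and zone are placeholders for the values in your project:

# add the load balancer's name as a network tag on the master VM
gcloud compute instances add-tags <master_vm_instance_name> \
    --tags pks-cluster-3 \
    --zone <master_vm_zone>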

Go on to the next step

5 - Associate Newly Created Master VM Instance to Load Balancer in GCP

  • On the left navigation, Click on: Network services --> Load balancing
  • Click the edit button of the load balancer you created earlier. In this case pks-cluster-3
  • Click Backend Configuration and choose 'Select existing instances'. Select the VM instance you made a note of earlier. When done, click the update button. See illustration. (A gcloud alternative is sketched after this list.)

  • On the left navigation, Click on: Compute Engine --> VM Instances. Select the VM you just attached to the load balancer and verify the VM is in use by the Load Balancer you attached it to. See Screen below.
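
If you prefer the CLI, attaching the instance to the load balancer's target pool might look like the following sketch (instance name and zone are placeholders):

# add the master VM to the target pool backing the load balancer
gcloud compute target-pools add-instances pks-cluster-3 \
    --instances <master_vm_instance_name> \
    --instances-zone <master_vm_zone>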

6 - Validate PKS Cluster

At the command line terminal, enter the following:

pks list-clusters

resulting output:

Name             Plan Name  UUID                                  Status     Action
my-cluster                  9f484bd4-9eab-41ef-a0d1-a81f4fd3e061  succeeded  CREATE
kafka-cluster               332e6784-577b-4250-ae55-7ee44516b975  succeeded  CREATE
sandbox-cluster             0c1d58e9-527e-41c3-aa65-56eaa26c42ef  succeeded  CREATE

You should see the cluster you just provisioned. Now let's try to connect to it. Enter the following:

pks get-credentials sandbox-cluster

resulting output:

Fetching credentials for cluster sandbox-cluster.
Context set for cluster sandbox-cluster.

You can now switch between clusters by using:
$kubectl config use-context <cluster-name>

At this point, we have set the kubectl credentials for the cluster. Now let's verify that we can proxy to the cluster's k8s dashboard. At the terminal, start the proxy by entering the following:

kubectl proxy

resulting output:

Starting to serve on 127.0.0.1:8001

Navigate to the following URL: http://localhost:8001/ui

You should get the following screen: