Nodes
A Node is a worker machine provisioned to run Kubernetes. Each Node is managed by the Kubernetes master.
Pods
A Pod is a logical, tightly coupled group of application containers that runs on a Node. Containers in a Pod are deployed together and share resources (like data volumes and network addresses). Multiple Pods can run on a single Node.
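For example, a minimal Pod spec might look like the following (a hypothetical sketch; in practice you'll usually create Pods indirectly via a Deployment, as shown later):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node                # hypothetical Pod name
  labels:
    app: node
spec:
  containers:
    - name: node
      image: node-kubernetes:v0.0.1   # the image built later in this deck
```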
---
### Kubernetes Concepts (continued...)
--
**Services**
A [Service](https://kubernetes.io/docs/concepts/services-networking/service/) is a logical set of Pods that perform a similar function. It enables load balancing and service discovery. It's an abstraction layer over the Pods; Pods are meant to be ephemeral while services are much more persistent.
--
**Deployments**
[Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) are used to describe the desired state of your Pods and ReplicaSets. They dictate how Pods are created, deployed, and replicated.
--
---
### Kubernetes Concepts (continued...)
--
**Labels**
[Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) are key/value pairs attached to resources (like Pods) and are used to organize related resources. You can think of them like CSS selectors (example selector queries follow the list). For example:
1. *Environment* - `dev`, `test`, `prod`
1. *App version* - `beta`, `1.2.1`
1. *Type* - `client`, `server`, `db`
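Once labels are attached, you can use them to select resources from the command line, e.g. (hypothetical label values):
```sh
# only Pods labeled for the prod environment
$ kubectl get pods -l environment=prod

# combine multiple labels
$ kubectl get pods -l environment=prod,type=client
```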
--
**Volumes**
[Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) are used to persist data beyond the life of a container. They are especially important for stateful applications like Redis and Postgres.
1. **[PersistentVolume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** defines a storage volume independent of the normal Pod lifecycle. It's managed at the cluster level, outside of any particular Pod that uses it.
1. **[PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)** is a user's request for storage, which gets bound to a PersistentVolume.
---
### Creating Objects
To create a new [object](https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/) in Kubernetes, you must provide a "spec" that describes its desired state. We'll be using YAML files for this. Example:
--
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
        - name: node
          image: node-kubernetes:v0.0.1
```
**Required Fields**
1. `apiVersion` - [Kubernetes API](https://kubernetes.io/docs/reference/#api-reference) version
1. `kind` - the type of object you want to create
1. `metadata` - info about the object so that it can be uniquely identified
1. `spec` - desired state of the object
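To create (or update) the object from a spec, feed the file to `kubectl` (assuming the spec above is saved as *kubernetes/node-deployment.yaml*, a hypothetical path):
```sh
$ kubectl apply -f ./kubernetes/node-deployment.yaml
$ kubectl get deployments
$ kubectl get pods
```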
---
class: center, middle
## Practice
---
### App Overview
--
Node/Express + Postgres Todo CRUD App
http://github.com/testdrivenio/node-kubernetes
--
**Routes**
| URL | HTTP Verb | Action |
|-------------|-----------|---------------------|
| / | GET | Sanity Check |
| /todos | GET | Get all todos |
| /todos/:id | GET | Get a single todo |
| /todos | POST | Add a todo |
| /todos/:id | PUT | Update a todo |
| /todos/:id | DELETE | Delete a todo |
---
### Google Cloud Platform
--
#### Steps
--
- Configure the [Google Cloud SDK](https://cloud.google.com/sdk).
(install, configure your account and access credentials, set up a project)
--
- Install [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/):
```sh
$ gcloud components install kubectl
```
--
- Create a cluster on [GKE](https://cloud.google.com/kubernetes-engine/):
```sh
# create
$ gcloud container clusters create node-kubernetes \
--num-nodes=3 --zone us-central1-a --machine-type f1-micro
# check status
$ kubectl get nodes
# point kubectl at the cluster
$ gcloud container clusters get-credentials node-kubernetes \
--zone us-central1-a
```
---
### Volume - Persistent Volume
---
### Volume - Persistent Volume
--
*kubernetes/volume.yaml*:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    name: postgres-pv
spec:
  capacity:
    storage: 50Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: pg-data-disk
    fsType: ext4
```
--
Create the volume:
```sh
$ gcloud compute disks create pg-data-disk --size 50GB --zone us-central1-a
$ kubectl apply -f ./kubernetes/volume.yaml
$ kubectl get pv # view details
```
---
### Volume - Persistent Volume Claim
---
### Volume - Persistent Volume Claim
--
*kubernetes/volume-claim.yaml*:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: postgres-pv
```
--
Create the volume claim:
```sh
$ kubectl apply -f ./kubernetes/volume-claim.yaml
```
--
View details:
```sh
$ kubectl get pvc
```
---
### Secrets
---
### Secrets
--
[Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are used to hold sensitive data such as passwords, API tokens, or SSH keys.
--
*kubernetes/secret.yaml*:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
data:
  user: c2FtcGxl
  password: cGxlYXNlY2hhbmdlbWU=
```
--
The `user` and `password` values are base64-encoded strings (encoded, not encrypted: [security via obscurity](https://en.wikipedia.org/wiki/Security_through_obscurity)). They decode to `sample` and `pleasechangeme`:
```sh
$ echo -n "sample" | base64
c2FtcGxl
$ echo -n "pleasechangeme" | base64
cGxlYXNlY2hhbmdlbWU=
```
--
Add the secrets:
```sh
$ kubectl apply -f ./kubernetes/secret.yaml
```
---
### Postgres Deployment
---
### Postgres Deployment
--
*kubernetes/postgres-deployment.yaml*:
[https://github.com/testdrivenio/node-kubernetes/blob/master/kubernetes/postgres-deployment.yaml](https://github.com/testdrivenio/node-kubernetes/blob/master/kubernetes/postgres-deployment.yaml)
1. Spin up a Pod using the `postgres:10.5-alpine` image
1. Use the Secret to define the database credentials
1. Mount "/var/lib/postgresql/data" on the persistent disk (a rough sketch follows)
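What that file roughly contains (an abridged, hedged sketch of the Pod template; see the link above for the real manifest, and note the volume name here is just a placeholder). `POSTGRES_USER`/`POSTGRES_PASSWORD` are the standard env vars read by the official Postgres image:
```yaml
# abridged sketch only, not the exact file
spec:
  containers:
    - name: postgres
      image: postgres:10.5-alpine
      env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
      volumeMounts:
        - name: postgres-volume        # placeholder volume name
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-volume
      persistentVolumeClaim:
        claimName: postgres-pvc
```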
--
Create the Deployment:
```sh
$ kubectl create -f ./kubernetes/postgres-deployment.yaml
```
---
### Postgres Service
---
### Postgres Service
--
*kubernetes/postgres-service.yaml*:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    service: postgres
spec:
  selector:
    service: postgres
  type: ClusterIP
  ports:
    - port: 5432
```
--
Create the service:
```sh
$ kubectl create -f ./kubernetes/postgres-service.yaml
```
--
Create the database (use the Postgres Pod name from `kubectl get pods`):
```sh
$ kubectl get pods
$ kubectl exec <postgres-pod-name> \
    --stdin --tty -- createdb -U sample todos
```
---
### Node Deployment
---
### Node Deployment
--
*kubernetes/node-deployment-updated.yaml*:
[https://github.com/testdrivenio/node-kubernetes/blob/master/kubernetes/node-deployment-updated.yaml](https://github.com/testdrivenio/node-kubernetes/blob/master/kubernetes/node-deployment-updated.yaml)
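The "updated" manifest differs from the earlier example mainly in that the container image now points at the Container Registry, and the app reaches Postgres through the `postgres` Service, whose name resolves via cluster DNS. A hedged, abridged sketch of the Pod template, with a hypothetical env var name:
```yaml
# abridged sketch only; see the linked file for the real manifest
spec:
  containers:
    - name: node
      image: gcr.io/node-kubernetes-1337/node-kubernetes:v0.0.1
      ports:
        - containerPort: 3000
      env:
        # hypothetical variable name; the app's DB host is the "postgres" Service
        - name: POSTGRES_HOST
          value: postgres
```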
--
Build and push the image to the [Container Registry](https://cloud.google.com/container-registry/):
```sh
$ gcloud auth configure-docker
$ docker build -t gcr.io/node-kubernetes-1337/node-kubernetes:v0.0.1 .
$ docker push gcr.io/node-kubernetes-1337/node-kubernetes:v0.0.1
```
--
Create the Deployment:
```sh
$ kubectl create -f ./kubernetes/node-deployment-updated.yaml
```
Apply the migration and seed the database (use the Node Pod name from `kubectl get pods`):
```sh
$ kubectl get pods
$ kubectl exec <node-pod-name> -- knex migrate:latest
$ kubectl exec <node-pod-name> -- knex seed:run
```
---
### Node Service
---
### Node Service
--
*kubernetes/node-service.yaml*:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    service: node
spec:
  selector:
    app: node
  type: LoadBalancer
  ports:
    - port: 3000
```
--
Create the service:
```sh
$ kubectl create -f ./kubernetes/node-service.yaml
```
--
Grab the external IP and try it out!
```sh
$ kubectl get service node
```
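Then hit the API using the `EXTERNAL-IP` reported for the Service (placeholder below):
```sh
$ curl http://<EXTERNAL-IP>:3000/todos
```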
---
### Sanity Check the Volume
--
How can we tell that the volume is *actually* working?
--
1. Make a change to the todos data
1. Delete the Postgres pod (take note that a new pod immediately spins up)
1. Wait a few moments for the new pod to spin up and the old pod to spin down
1. The todos data should have the same state
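For example, deleting the Pod (name taken from `kubectl get pods`, placeholder below) triggers the Deployment to spin up a replacement:
```sh
$ kubectl delete pod <postgres-pod-name>
```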
```sh
kubectl get pods
NAME READY STATUS RESTARTS AGE
node-54fc49774c-l6qvn 1/1 Running 0 19m
postgres-798c7ccc96-jqlcm 1/1 Terminating 0 1h
postgres-798c7ccc96-w6jq7 0/1 ContainerCreating 0 25s
```
---
### Remove Resources
--
Remove the resources once done:
```sh
$ kubectl delete -f ./kubernetes/node-service.yaml
$ kubectl delete -f ./kubernetes/node-deployment-updated.yaml
$ kubectl delete -f ./kubernetes/secret.yaml
$ kubectl delete -f ./kubernetes/volume-claim.yaml
$ kubectl delete -f ./kubernetes/volume.yaml
$ kubectl delete -f ./kubernetes/postgres-deployment.yaml
$ kubectl delete -f ./kubernetes/postgres-service.yaml
$ gcloud container clusters delete node-kubernetes
$ gcloud compute disks delete pg-data-disk
$ gcloud container images delete \
gcr.io/node-kubernetes-1337/node-kubernetes:v0.0.1
```
---
### That's it!
What's next?
--
##### Check your understanding
1. [Encrypt the secret data](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)
1. [Configure Logging](https://cloud.google.com/kubernetes-engine/docs/how-to/logging)
1. [Dive deeper into health checks with liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
--
##### Resources
1. Slides - https://mherman.org/presentations/node-kubernetes
1. Repo - https://github.com/testdrivenio/node-kubernetes
1. Blog post - https://testdriven.io/deploying-a-node-app-to-google-cloud-with-kubernetes
--
##### New to Kubernetes?
1. Learn the Docker basics
1. Dockerize a number of apps
1. Learn about container orchestration and the Kubernetes basics
1. Deploy some apps with GCP