This project uses Terraform to provision Kubernetes on DigitalOcean, with Container Linux.
Note that the droplets and volumes created as part of this project aren't free.
To get started, clone this repository:
$ git clone git@github.com:ihcsim/kubernetes-terraform-secured.git
Initialize the project:
$ cd kubernetes-terraform-secured
$ terraform init
The above command will fail with errors about a missing Config Transpiler provider. At the time of this writing, the Container Linux Config Transpiler provider isn't included in the official Terraform Providers repository, so you must manually copy it into your local kubernetes-terraform-secured/.terraform folder.
Use the following commands to build and install the Config Transpiler provider, replacing <os_arch> with your platform (e.g. linux_amd64):
$ go get -u github.com/coreos/terraform-provider-ct
$ cp $GOPATH/bin/terraform-provider-ct .terraform/plugins/<os_arch>/
Re-initialize the project:
$ terraform init
Create a copy of the terraform.tfvars file from the provided terraform.tfvars.sample file and provide all the required values. The description of all these variables can be found in the vars.tf file.
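As a rough illustration only, a filled-in terraform.tfvars might look like the following. The first two variable names are hypothetical stand-ins for the DigitalOcean and SSH settings; etcd_count and k8s_workers_count are variables described later in this README. vars.tf remains the authoritative list of names and defaults.

do_token          = "<your DigitalOcean API token>"   # hypothetical variable name
ssh_public_key    = "~/.ssh/id_rsa.pub"               # hypothetical variable name
etcd_count        = 3
k8s_workers_count = 3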
Provision the Kubernetes cluster:
$ terraform apply
Once Terraform completes provisioning the cluster, the kubeconfig data of the new Kubernetes cluster will be output to a local git-ignored .kubeconfig file. Use it to interact with the cluster:
$ kubectl --kubeconfig=.kubeconfig get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
$ kubectl --kubeconfig=.kubeconfig get no
NAME            STATUS    AGE       VERSION
k8s-worker-00   Ready     44s       v1.7.0
k8s-worker-01   Ready     46s       v1.7.0
k8s-worker-02   Ready     48s       v1.7.0
By default, this project provisions a cluster comprising:
- 3 etcd3 droplets,
- 1 Kubernetes Master droplet, and
- 3 Kubernetes Worker droplets
All droplets are initialized using CoreOS' Container Linux Configs. These configurations are defined in the config.yaml files found in the etcd/ and k8s/ folders, and are interpolated using Terraform's Config Transpiler provider.
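The transpilation follows this general pattern; the data source and resource names below are illustrative, not necessarily the ones used in this repository:

data "ct_config" "etcd" {
  # Transpile the rendered Container Linux Config into Ignition JSON
  content = "${data.template_file.etcd_config.rendered}"
}

resource "digitalocean_droplet" "etcd" {
  # ... other droplet arguments ...
  user_data = "${data.ct_config.etcd.rendered}"
}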
CoreDNS is used to provide droplet-level hostname resolution.
Note that this setup uses the Terraform TLS Provider to generate RSA private keys, CSRs and certificates for development purposes only. The generated resources are saved in the Terraform state file as plain text, so make sure the state file is stored securely.
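The TLS provider resources involved follow this general shape; the resource names and subject values are illustrative, and ca.tf holds the actual definitions:

resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "ca" {
  key_algorithm         = "${tls_private_key.ca.algorithm}"
  private_key_pem       = "${tls_private_key.ca.private_key_pem}"
  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["cert_signing"]

  subject {
    common_name = "kube-ca"   # illustrative
  }
}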
By default, a 3-node static cluster of etcd instances is provisioned. The number of instances in the cluster can be altered using the etcd_count Terraform variable. The etcd_initial_cluster variable in the etcd.tf file must also be updated to reflect the initial cluster membership.
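For example, growing the cluster to five members would involve changes along these lines; the member names and peer URLs are illustrative and must match whatever etcd.tf actually generates:

etcd_count           = 5
etcd_initial_cluster = "etcd-00=https://<peer-ip-00>:2380,etcd-01=https://<peer-ip-01>:2380,..."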
All peer-to-peer and client-to-server communications are encrypted and authenticated using the self-signed CA, private keys and TLS certificates.
Every etcd instance's data directory (which defaults to /var/lib/etcd) is backed by a DigitalOcean block storage volume mounted on the droplet.
For testing purposes, the etcdctl v3 client on every etcd droplet is configured to target the etcd cluster. For example,
$ doctl compute ssh etcd-00
Last login: <redacted>
Container Linux by CoreOS stable (1465.7.0)
$ etcdctl cluster-health
member 1ac8697e1ee7cb22 is healthy: got healthy result from https://xx.xxx.xxx.xxx:xxxx
member 8e62221f3a6bf84d is healthy: got healthy result from https://xx.xxx.xxx.xxx:xxxx
member 9350aa2a45b92d34 is healthy: got healthy result from https://xx.xxx.xxx.xxx:xxxx
cluster is healthy
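Note that cluster-health is an etcd v2 subcommand; with the v3 API, the equivalent check would be along these lines, assuming the endpoints and TLS flags are already exported in the droplet's environment:

$ ETCDCTL_API=3 etcdctl endpoint health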
The following components are deployed in the Kubernetes cluster:
- Kubernetes Master: kube-apiserver, kube-controller-manager, kube-scheduler
- Kubernetes Workers: kubelet, kube-proxy
The number of Kubernetes workers can be altered using the k8s_workers_count Terraform variable.
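For instance, the worker pool can be resized by overriding the variable at apply time (or by editing terraform.tfvars):

$ terraform apply -var k8s_workers_count=5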
The API Server is started with the following admission controllers:
- NamespaceLifecycle
- LimitRanger
- ServiceAccount
- PersistentVolumeLabel
- DefaultStorageClass
- ResourceQuota
- DefaultTolerationSeconds
- NodeRestriction
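On a v1.7 API Server these are enabled through a single flag, roughly as follows (a sketch of the flag only; the actual unit definition in this repository is authoritative):

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,NodeRestriction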
All API requests to the API Server are authenticated using X.509 TLS certificates. The API Server is started with the --anonymous-auth=false flag in order to disable anonymous requests. Refer to the Kubernetes Authentication documentation for more information on how the authentication scheme works. All the TLS artifacts are declared in the ca.tf, k8s-master.tf and k8s-workers.tf files. The Controller Manager and Scheduler communicate with the API Server via its insecure network interface, since they reside on the same host as the API Server. The Controller Manager uses the CA cert and key declared in ca.tf to serve cluster-scoped certificate-issuing requests.
The Kubelets are also started with the --anonymous-auth=false option. They use the --authentication-token-webhook and --authorization-mode=Webhook options to enable authentication and authorization. For more information on Kubelet authentication and authorization, refer to this documentation.
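Put together, the kubelet's authentication-related flags look roughly like this; the flag names are standard kubelet flags, while the client CA flag and its path are assumptions added for illustration:

--anonymous-auth=false \
--authentication-token-webhook \
--authorization-mode=Webhook \
--client-ca-file=/path/to/ca.pem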
Inter-pod communication is supported by an overlay network using flannel. The k8s/network/flannel.yaml manifest defines the relevant DaemonSet, ConfigMap and RBAC resources. These resources are deployed to the kube-system namespace. CoreDNS is used to provide in-cluster DNS resolution. Its manifest can be found in k8s/network/coredns/yaml.
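Should the network add-ons ever need to be re-applied by hand, something like the following should work, using the kubeconfig generated earlier:

$ kubectl --kubeconfig=.kubeconfig apply -f k8s/network/flannel.yaml
$ kubectl --kubeconfig=.kubeconfig apply -f k8s/network/coredns/yaml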
All add-ons are deployed using Helm charts.
CoreOS locksmith is enabled to perform updates on Container Linux. By default, locksmithd is configured to use the etcd-lock reboot strategy during updates. The reboot window is 2 hours long, starting at 1 AM on Sundays.
The following Terraform variables can be used to configure the reboot strategy and maintenance window:
- droplet_maintenance_window_start
- droplet_maintenance_window_length
- droplet_update_channel
The default update group is stable.
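In Container Linux Config terms, these settings end up looking roughly like this; the values shown mirror the defaults described above, are interpolated in practice from the Terraform variables, and omit the etcd endpoint/TLS settings that the etcd-lock strategy also needs:

locksmith:
  reboot_strategy: etcd-lock
  window_start: "Sun 1:00"
  window_length: "2h"
update:
  group: stable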
For more information, refer to the Container Linux documentation on Update Strategies.
See the LICENSE file for the full license text.
- Kubernetes The Hard Way
- Running etcd on Container Linux
- Container Linux Config Spec
- CoreDNS
- How To Install And Configure Kubernetes On Top Of A CoreOS Cluster
- CoreOS + Kubernetes Step By Step
- Kubernetes API Authentication
- Kubernetes API Authorization
- Kubelet Authentication & Authorization
- Kubernetes Master-Node Communication
- Using Flannel with Kubernetes
- Etcd Clustering