Cloud Native Buildpacks are a new way of building and maintaining Docker images using buildpack technology.
Let's recap how a developer would use Cloud Foundry's cf push
experience to publish an application to the cloud.
With the source code ready, the only thing a developer in Cloud Foundry land needs to do is a cf push.
The buildpack technology and CAPI take care of all the necessary steps, from selecting the runtime and creating a container to running the app and creating the required routes/endpoints:
just one simple command to push your app to the cloud.
We will now use Cloud Native Buildpacks to bring parts of that approach to creating OCI-compliant images and running them on Docker and Kubernetes.
install Docker Desktop if not already done:
brew cask install docker
open /Applications/Docker.app
on your computer, install pack:
brew tap buildpack/tap
brew install pack
log in to PKS and get your kube credentials:
PKS_CLUSTER=k8s1 # put your cluster name here
pks login -a api.${PKS_SUBDOMAIN_NAME}.${PKS_DOMAIN_NAME} -u k8sadmin -p ${PIVNET_UAA_TOKEN} -k
pks get-credentials ${PKS_CLUSTER}
we will use the CF Node.js demo application, but you can of course use your own.
clone cf-sample-app-nodejs:
git clone https://github.com/cloudfoundry-samples/cf-sample-app-nodejs
cd cf-sample-app-nodejs
see all your current Docker images:
docker images -a
in my example, only the microsoft/azure-cli image is available locally
now we set the 'default builder' (the builder image to be used, which bundles the buildpacks together with the build and run images; compare the run image with the 'stemcell' OS layer in Cloud Foundry)
pack set-default-builder cloudfoundry/cnb:cflinuxfs3
this creates an entry in ~/.pack/config.toml
for the default builder to use
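For reference, the resulting entry looks roughly like this (a sketch; the exact key name varies between pack versions):

```toml
# ~/.pack/config.toml (sketch; key name is pack-version dependent)
default-builder-image = "cloudfoundry/cnb:cflinuxfs3"
```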
build your first OCI image:
pack build node-demo:v1
this will do several things:
- download all new layers for the default builder stack
- download the run image for the runtime
then the build process starts. first, the 'detector' is executed to identify the runtime(s) to be used
- restoring and analyzing restores cached packages from the downloaded image and checks our local Docker filesystem for previous versions of the v1 app
- the application is built using npm and Node
- finally, the image is exported to the local Docker daemon
verify the image has been created locally in Docker:
docker images --no-trunc
run the image. the Node.js demo listens on port 4000, so we map that port to 4000 on the host:
docker run --rm -p 4000:4000 node-demo:v1
browse to localhost:4000 to view the Node.js demo app
(I assume that you have installed and configured Harbor correctly, or used pks-jump-azure)
# this is for pks-jump-azure users that have a valid .env
docker login "https://harbor.${PKS_SUBDOMAIN_NAME}.${PKS_DOMAIN_NAME}" --username admin --password ${PIVNET_UAA_TOKEN}
publish a version 2 of the app to Harbor, this time with the bionic builder:
pack build harbor.${PKS_SUBDOMAIN_NAME}.${PKS_DOMAIN_NAME}/library/node-demo:v2 --publish --builder cloudfoundry/cnb:bionic
the newer bionic stack layers need to be downloaded,
but the run image will be re-used.
now we can run the image locally from your Harbor registry:
docker run --rm -p 4001:4000 harbor.pksazure.labbuildr.com/library/node-demo:v2
only the diff layers (the app itself) are downloaded now.
this is one of the strengths of the layered approach of Cloud Native Buildpacks, where we split the stack from the middleware and the app.
browse to localhost:4001 to view the Node.js demo app
deploying to PKS also requires the correct PSPs (pod security policies) to be in place.
we are using the PSPs from the nginx demo:
cat <<EOF | kubectl apply --namespace ingress-ns -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  selector:
    matchLabels:
      app: nodejs-app
  replicas: 3
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      serviceAccountName: nginx-sa
      containers:
      - name: nodejs-v2
        image: harbor.${PKS_SUBDOMAIN_NAME}.${PKS_DOMAIN_NAME}/library/node-demo:v2
        ports:
        - containerPort: 4000
EOF
view the deployment:
kubectl describe deployments/nodejs-app --namespace ingress-ns
in order to reach the app, we need to create a load-balancer endpoint.
create a service of type LoadBalancer
that maps port 4000 to port 80:
kubectl create service loadbalancer nodejs-app --tcp=80:4000 -n ingress-ns
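For reference, the imperative command above corresponds to roughly this declarative manifest (a sketch, using the same names as the deployment above; you could apply it with kubectl apply -n ingress-ns -f - instead):

```yaml
# equivalent of: kubectl create service loadbalancer nodejs-app --tcp=80:4000
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app
spec:
  type: LoadBalancer
  selector:
    app: nodejs-app   # matches the pod label from the deployment
  ports:
  - port: 80          # external load-balancer port
    targetPort: 4000  # container port the Node.js demo listens on
    protocol: TCP
```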
wait a few moments until the load balancer has been provisioned. you can view the progress with:
kubectl get service nodejs-app -n ingress-ns
wait for the EXTERNAL-IP
to switch from pending
to an external IP,
then browse to the external ip:80 to view the Node.js app.