- Rajesh Singh
Cloud Native Buildpacks transform application source code into images that can run on any cloud. It is a sandbox project in the Cloud Native Computing Foundation (CNCF) that provides a high-level, performant abstraction for building container images. Being Kubernetes-native, buildpacks provide an intuitive and robust image-build solution. Buildpacks were first conceived by Heroku in 2011. Later, Pivotal and Heroku teamed up to create Cloud Native Buildpacks for Kubernetes. Pivotal then enhanced the capabilities and shared the product with the open-source community as “kpack”. kpack started as an experimental set of Kubernetes resource controllers for a build service, but it is fast becoming key to the app build and modernization process.
A buildpack’s job is to gather everything the app requires to build and run the containerized application. That said, while buildpacks are often a behind-the-scenes detail, they are at the heart of the transformation. When the image-build process is performed using Cloud Native Buildpacks, various scripts are executed in the build environment.
Conceptually, Cloud Native Buildpacks consists of the following components, which work together to perform the transformation.
The Builder component is an image bundle holding all the bits and information on how to build the apps, such as the buildpacks and the build-time image.
In a nutshell, the builder consists of:
- Buildpacks
- Lifecycle
- Stack’s build image
Note: All the above components are specified, configured, and structured within the builder config file (builder.toml).
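As an illustration, a minimal builder.toml might look like the following sketch (the buildpack IDs, versions, and image names are examples, not part of this walkthrough):

```toml
# Sketch of a builder.toml (IDs, versions, and image names are examples)

# Buildpacks bundled into the builder
[[buildpacks]]
uri = "docker://example/node-buildpack:1.0"

# Detection order: groups are tried in sequence during the detect phase
[[order]]
  [[order.group]]
  id = "example/node-buildpack"
  version = "1.0"

# Stack: build-time and run-time base images
[stack]
id = "example.stacks.base"
build-image = "example/build:base"
run-image = "example/run:base"
```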
A buildpack consists of scripts that inspect the app source code and lay out the plan to build and run the application. Inside the buildpack, we have the following items:
- buildpack.toml – It provides the metadata about the current buildpack
- bin/detect – During the build process, these scripts sequentially test the group of buildpacks against the provisioned source code. If the required conditions pass, the buildpack is selected and used during the next app-build phase.
- bin/build – These scripts are run to build the final application image. They may set environment variables within the image, create a layer containing a binary (e.g. node, python, or ruby), or add app dependencies.
Note: A set of buildpacks can be packaged as OCI images, which are then referenced within the builder image.
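For reference, a buildpack’s on-disk layout and a minimal buildpack.toml could look like this sketch (the API version, ID, and stack are illustrative):

```toml
# Layout of a buildpack directory (illustrative):
#   buildpack.toml
#   bin/detect
#   bin/build

# Minimal buildpack.toml (ID, version, name, and stack are examples)
api = "0.8"

[buildpack]
id = "example/node-buildpack"
version = "1.0.0"
name = "Example Node.js Buildpack"

[[stacks]]
id = "example.stacks.base"
```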
The lifecycle orchestrates buildpack execution, then assembles the resulting artifacts into a final app image. The lifecycle comprises the following vital phases:
- Detection – Finds an ordered group of buildpacks to use during the build phase.
- Analysis – Restores files that buildpacks may use to optimize the build and export phases.
- Build – Transforms application source code into runnable artifacts that can be packaged into a container.
- Export – Creates the final OCI image containing the runnable application.
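Outside of Kubernetes, the same phases can be observed by running a local CNB build with the pack CLI (the app name, source path, and builder name below are examples):

```shell
# Run a full CNB build locally; the build log prints each lifecycle
# phase (detect, analyze, restore, build, export) as it runs.
pack build my-app --path ./my-app-src --builder paketobuildpacks/builder-jammy-base

# Inspect the resulting image's run image, buildpacks, and processes
pack inspect my-app
```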
The builder image hosts the stack component, which provides the buildpack lifecycle with build-time and run-time environments in the form of images.
A stack designates two sub-components:
- build image – During the build process, this becomes the environment in which buildpacks are executed
- run image – This becomes the base for the final app image
Build is the process of executing one or more buildpacks against the app’s source code to produce a runnable OCI image. Each buildpack inspects the source code and provides relevant dependencies. An image is then generated from the app’s source code and these dependencies.
With buildpacks, developers and operators can create differentiating software while automating the repetitive building, patching and repackaging tasks more suited to a machine than to a human.
kpack is an implementation of Cloud Native Buildpacks that provides a declarative image resource, which builds an image and schedules rebuilds on relevant buildpack and source changes. kpack’s Custom Resource Definitions (CRDs) are coordinated by custom controllers that automate the entire image build by running the process in a containerized environment, keeping images up to date based on user-provided configuration and source code. kpack also provides a build resource to execute a single Cloud Native Buildpacks image build.
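As a sketch, kpack’s declarative image resource looks like the following (all names, the registry, and the repository URL are placeholders):

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: sample-app-image
  namespace: default
spec:
  tag: registry.example.com/apps/sample-app   # destination image tag (example)
  serviceAccountName: kpack-service-account   # holds registry/git credentials
  builder:
    name: sample-cluster-builder
    kind: ClusterBuilder
  source:
    git:
      url: https://github.com/example/sample-app
      revision: main
```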
The image creation process needs certain prerequisites:
- Docker Registry
- Git-based source-code version control tool (e.g. GitHub, GitLab, Enterprise GitLab)
- Credentials configured for the Docker registry and Git repo
- The custom builder resources provisioned by kpack, installed in the cluster
- Kubernetes cluster (i.e. kubeconfig configured with an accessible cluster and user context)
- logs utility
- kubectl CLI
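A quick environment check might look like the following sketch (the kpack release version in the URL is an example; pick the latest release):

```shell
# Verify the kubeconfig points at a reachable cluster
kubectl cluster-info

# Install kpack (release version in the URL is an example)
kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/v0.13.0/release-0.13.0.yaml

# The kpack controller and webhook pods should be Running
kubectl get pods -n kpack
```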
Once the environment is prepared, the image can be built and monitored with the following steps:
- Select a builder to provision buildpack resources for the image build.
- Create a service account with secret tokens for the Docker registry and Git repo.
- Create and apply the image resource in the Kubernetes environment.
- Monitor the image-build progress via the logs utility from a local workstation.
- After the build reports success, verify the generated image in the Docker registry.
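The credential steps above can be sketched as follows (all names, URLs, and values are placeholders):

```yaml
# Registry credentials, annotated so kpack knows which registry they apply to
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-credentials
  annotations:
    kpack.io/docker-registry: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: my-user        # placeholder
  password: my-password    # placeholder
---
# Service account referenced by the image resource
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kpack-service-account
secrets:
  - name: docker-registry-credentials
```

After applying these along with the image resource (`kubectl apply -f ...`), the build can be followed from a local workstation with kpack’s logs utility, e.g. `logs -image sample-app-image -namespace default`.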
As a result, the image-build process generates the runnable OCI app image. The export phase also pushes the app image to the configured Docker registry.
Note: The first run of the image-build process might take longer than usual. That’s because all the dependencies are downloaded and cached to assist subsequent build phases as well as later build iterations.
Note: A detailed walkthrough of the image-creation PoC has been shared in another post, kpack – Cloud native way to build containerized app, at txconsole.
To summarize, the image-build process can be automated and optimized with buildpacks. We took a brief look at the components of the buildpack specification and the steps performed during an image build with CNB (kpack).