Recommended image base for spring-boot service coded in Kotlin - docker

I have a Spring Boot based service written in Kotlin, and I want to create a Dockerfile for it, but I'm not sure what to base it on in the opening FROM instruction. Any recommendations?
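Not an authoritative answer, but a common starting point is an official JRE image matching your Java version; the base tag and jar path below are illustrative assumptions:

```dockerfile
# A minimal sketch, assuming a fat jar produced by Gradle and Java 17.
# eclipse-temurin is one commonly used official JRE base; adjust the tag
# to the Java version your Spring Boot service targets.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY build/libs/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Since Kotlin compiles to ordinary JVM bytecode, nothing Kotlin-specific is needed in the image itself.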

Related

How to use docker images when building artefacts in Actions?

TL;DR: On a self-hosted Actions runner (itself a docker container on my docker engine), I would like to use specific docker images to build artefacts, move those artefacts between the build phases, and end with a standalone executable (not a docker container to be deployed). I do not know how to use docker containers as "building engines" in Actions.
Details: I have a home project consisting of a backend in Go (cross compiled to a standalone binary) and a frontend in Javascript (actually a framework: Quasar).
I develop on my laptop in Windows and use GitHub as the SCM.
The manual steps I do are:
build a static version of the frontend which lands in a directory spa
copy that directory to the backend directory
compile the executable that embeds the spa directory
copy (scp) this executable to the final destination
For development purposes this works fine.
I now would like to use Actions to automate the whole thing. I use docker based self-hosted runners (tcardonne/github-runner).
My problem: the containers do a great job isolating the build environment from the server they run on. They are, however, reused across build jobs, and this may create conflicts. More importantly, the default versions of software provided by these containers are not the right ones (usually just the latest).
The solution would be to run the build phases in disposable docker containers (that would base on the right image, shortening the build time as a collateral nice to have). Unfortunately, I do not know how to set this up.
Note: I do not want to ultimately create docker containers; I just want to use them as "building engines", extract the artefacts from them, and share those between the jobs (in my specific case, one job would build the frontend with Quasar and generate a directory, and the other would be a compilation ending in a standalone executable copied elsewhere).
Interesting premise, you can certainly do this!
I think you may be slightly mistaken with regards to:
They are however reused across build jobs and this may create conflicts
If you run a new container from an image, then you will start with a fresh instance of that container. Files, software, etc, all adhering to the original image definition. Which is good, as this certainly aids your efforts. Let me know if I have the wrong end of the stick in regards to the above though.
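A quick way to convince yourself of this (a sketch, assuming a local Docker daemon and the public alpine image):

```shell
# Each `docker run` starts from a fresh filesystem defined by the image:
docker run --rm alpine sh -c 'touch /marker && ls /marker'  # /marker exists in this run
docker run --rm alpine ls /marker                           # fails: the file is gone
```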
Base Image
You can define your own image for building, in order to mitigate shortfalls of public images that may not be up to date or may not suit your requirements. In fact, this is a common pattern for CI, and Google does something similar with their cloud build configuration. For either approach below, you will likely want to do something like the following to ensure you have all the build tools you may need.
As a rough example:
FROM golang:1.16.7-buster
RUN apt update && apt install -y \
    git \
    make \
    ... \
 && useradd myuser \
 && mkdir /dist
USER myuser
You could build and publish this with the following tag:
docker build . -t <containerregistry>:buildr/golang
It would also be recommended that you maintain a separate builder image for other types of projects, such as node, python, etc.
Approaches
Building with layers
If you're looking to leverage build caching for your applications, this will be the better option for you. Caching is only effective if nothing has changed, and since the projects will be built in isolation, it makes it relatively safe.
Building your app may look something like the following:
FROM <containerregistry>:buildr/golang as builder
COPY src/ .
RUN make dependencies
RUN make
RUN mv /path/to/compiled/app /dist
FROM scratch
COPY --from=builder /dist /dist
The gist of this is that you start building your app within the builder image, which includes all the build dependencies you require, and then use a multi-stage build to publish a final static container that includes your compiled code with no dependencies (using the scratch image as the smallest image possible).
Getting the final files out of your image is a bit harder with this approach: you would have to run an instance of the published container in order to mount the files and persist them to disk, or use docker cp to retrieve the files from a container (not an image) to your disk.
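For example, a docker cp based extraction might look like the following sketch, where myapp:latest stands in for whatever tag you gave the final image:

```shell
# Create (but don't start) a container from the built image,
# copy the compiled artefacts out, then clean up.
id=$(docker create myapp:latest)
docker cp "$id":/dist ./dist
docker rm "$id"
```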
In GitHub Actions, this would look like a step that builds a Docker container; the step can run anywhere Docker is accessible.
For example:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      ...
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: user/app:latest
Building as a process
This one cannot leverage build caching as well, but you may be able to do clever things like mounting a host npm cache into your container to aid in actions like npm restore.
This approach differs from the former in that the way you build your app will be defined via CI / a purposeful script, as opposed to the Dockerfile.
In this scenario, it makes more sense to define the CMD in the parent image and mount your source code in, so you do not have to maintain an image per project you are building.
This would shift the responsibility of building your application from the buildtime of the image, to the runtime. Retrieving your code from the container would be doable through volume mounting for example:
docker run -v /path/to/src:/src -v /path/to/dist:/dist <containerregistry>:buildr/golang
If the CMD was defined in the builder, that single script would execute and build the mounted in source code, and subsequently publish to /dist in the container, which would then be persisted to your host via that volume mapping.
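A builder image along those lines might look like this sketch, where build.sh is a hypothetical script you provide that compiles whatever is mounted at /src and writes the output to /dist:

```dockerfile
FROM golang:1.16.7-buster
# build.sh (hypothetical) reads sources from /src and writes binaries to /dist.
COPY build.sh /usr/local/bin/build.sh
RUN chmod +x /usr/local/bin/build.sh
CMD ["/usr/local/bin/build.sh"]
```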
Of course, this applies if you're building locally. It actually becomes a bit nicer in a GitHub Actions context if you wish to keep your build instructions there. You can choose to run steps within your builder container using something like the following:
jobs:
  ...
  container:
    runs-on: ubuntu-latest
    container: <containerregistry>:buildr/golang
    steps:
      - name: Run in container
        run: |
          echo This job does specify a container.
          echo It runs in the container instead of the VM.
Within that run: spec, you could choose to call a build script, or enter the commands that might be present in the script yourself.
What you do with the compiled source once acquired is entirely up to you πŸ‘
Chaining (Frontend / Backend)
You mentioned that you build static assets for your site and then embed them into your golang binary to be served.
Something like that introduces complications, of course, but nothing untoward. If you do not need your web files until you build your golang container, you may consider taking the first approach and copying the content from the published frontend image as part of a Docker COPY --from directive. This makes more sense if you have two separate projects, one for the frontend and one for the backend.
If everything is in one folder, then it sounds like you may just want to extend your build image to cover both Go and JS, take the latter approach, and define those build instructions in a script, a Makefile, or your run: config in your Actions file.
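Extending the builder image to cover both toolchains might look like the following sketch (the Node.js version, the NodeSource setup script, and installing the Quasar CLI globally are all assumptions):

```dockerfile
FROM golang:1.16.7-buster
# Add Node.js and the Quasar CLI alongside Go so a single image
# can build both the frontend and the backend.
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - \
 && apt-get install -y nodejs \
 && npm install -g @quasar/cli
```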
Conclusion
This is a lot of info; I hope it's digestible, and more importantly, I hope it gives you some ideas as to how you can tackle your current issue. Let me know in the comments if you would like any clarity.

Bazel - Build, Push, Deploy Docker Containers to Kubernetes within Monorepo

I have a monorepo with some backend (Node.js) and frontend (Angular) services. Currently my deployment process looks like this:
Check if tests pass
Build docker images for my services
Push docker images to container registry
Apply changes to Kubernetes cluster (GKE) with kubectl
I'm aiming to automate all those steps with the help of Bazel and Cloud Build. But I am really struggling to get started with Bazel:
To make it work I'll probably need to add a WORKSPACE file with my external dependencies and multiple BUILD files for my own packages/services? I need help with the actual implementation:
How to build my Dockerfiles with Bazel?
How push those images into a registry (preferably GCR)?
How to apply changes to Google Kubernetes Engine automatically?
How to integrate this toolchain with Google Cloud Build?
More information about the project
I've put together a tiny sample monorepo to showcase my use-case
Structure
β”œβ”€β”€ kubernetes
β”œβ”€β”€ packages
β”‚   β”œβ”€β”€ enums
β”‚   β”œβ”€β”€ utils
└── services
    β”œβ”€β”€ gateway
General
Gateway service depends on enums and utils
Everything is written in Typescript
Every service/package is a Node module
There is a Dockerfile inside the gateway folder, which I want to be built
The Kubernetes configuration is located in the kubernetes folder.
Note, that I don't want to publish any npm packages!
What we want is a portable Docker container that holds our Angular app along with its server and whatever machine image it requires, which we can bring up on any cloud provider. We are going to make the entire pipeline incremental. The Docker rules are fast: essentially, they provide incrementality by adding new Docker layers, so that the changes you make to the app are the only things sent over the wire to the cloud host. In addition, since Docker images are tagged with a SHA, we only re-deploy images that changed. To manage our production deployment, we will use Kubernetes, for which Bazel rules also exist. Building a Docker image from a Dockerfile using Bazel is, to my knowledge, not possible: it is disallowed by design due to the non-hermetic nature of Dockerfiles. (Source: Building deterministic Docker images with Bazel)
The changes made to the source code are going to get deployed to the Kubernetes cluster. This is one way to achieve that using Bazel.
Put Bazel in watch mode; deploy.replace tells the Kubernetes cluster to update the deployed version of the app:
ibazel run :deploy.replace
Then make any source code changes in the Angular app.
Bazel incrementally re-builds just the parts of the build graph that depend on the changed file. In this case, that includes the ng_module that was changed, the Angular app that includes that module, and the Docker nodejs_image that holds the server. As we have asked to update the deployment, after the build is complete Bazel pushes the new Docker container to Google Container Registry and the Kubernetes Engine instance starts serving it. Bazel understands the build graph, so it only re-builds what changed.
Here are a few snippet-level tips which may help.
WORKSPACE FILE:
Create a Bazel WORKSPACE file. The WORKSPACE file tells Bazel that this directory is a "workspace", which is like a project root. Things to be done inside the Bazel workspace are listed below.
β€’ The name of the workspace should match the npm package where we publish, so that these imports also make sense when referencing the published package.
β€’ Declare all the rules in the WORKSPACE using http_archive. As we are using Angular and Node, rules should be declared for rxjs, angular, angular_material, io_bazel_rules_sass, angular-version, build_bazel_rules_typescript and build_bazel_rules_nodejs.
β€’ Next, load the dependencies using load: sass_repositories, ts_setup_workspace, angular_material_setup_workspace, ng_setup_workspace.
β€’ Load the Docker base images as well; in our case "@io_bazel_rules_docker//nodejs:image.bzl".
β€’ Don't forget to declare the browser and web test repositories:
web_test_repositories()
browser_repositories(
chromium = True,
firefox = True,
)
"BUILD.bazel" file.
β€’ Load the modules which were downloaded: ng_module, the project module, etc.
β€’ Set the default visibility using default_visibility.
β€’ If you have any Jasmine tests, use ts_config and mention the dependencies inside it.
β€’ ng_module (assets, sources and dependencies should be mentioned here).
β€’ If you have any lazy-loading scripts, mention them as part of the bundle.
β€’ Mention the root directories in the web_package.
β€’ Finally, mention the data and the welcome page / default page.
Sample Snippet:
load("@angular//:index.bzl", "ng_module")
ng_module(
    name = "src",
    srcs = glob(["*.ts"]),
    tsconfig = ":tsconfig.json",
    deps = ["//src/hello-world"],
)
load("@build_bazel_rules_nodejs//:future.bzl", "rollup_bundle")
rollup_bundle(
    name = "bundle",
    deps = [":src"],
    entry_point = "angular_bazel_example/src/main.js",
)
Build the bundle using the command below:
bazel build :bundle
Pipeline : through Jenkins
We create the pipeline through Jenkins; a pipeline consists of stages, and each stage does a separate task. In our case, we use a stage to publish the image using bazel run.
pipeline {
    agent any
    stages {
        stage('Publish image') {
            steps {
                sh 'bazel run //src/server:push'
            }
        }
    }
}
Note :
bazel run :dev.apply
dev.apply maps to kubectl apply, which will create or replace an existing configuration (for more information see the kubectl documentation). This applies the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).
If you want to pull containers using the WORKSPACE file, use the below rule:
container_pull(
    name = "debian_base",
    digest = "sha256:**",
    registry = "gcr.io",
    repository = "google-appengine/debian9",
)
If GKE (Google Kubernetes Engine) is used, the gcloud SDK needs to be installed, and the cluster can be authenticated using the below command:
gcloud container clusters get-credentials <CLUSTER NAME>
The deployment object should be declared in the below format:
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
    name = "dev",
    kind = "deployment",
    template = ":deployment.yaml",
    images = {
        "gcr.io/rules_k8s/server:dev": "//server:image",
    },
)
Sources :
https://docs.bazel.build/versions/0.19.1/be/workspace.html
https://github.com/thelgevold/angular-bazel-example
https://medium.com/@Jakeherringbone/deploying-an-angular-app-to-kubernetes-using-bazel-preview-91432b8690b5
https://github.com/bazelbuild/rules_docker
https://github.com/GoogleCloudPlatform/gke-bazel-demo
https://github.com/bazelbuild/rules_k8s#update
https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/
https://github.com/bazelbuild/rules_k8s
A few months later and I've gone relatively far in the whole process.
Posting every detail here would just be too much!
So here is the open-source project which has most of the requirements implemented: https://github.com/flolu/fullstack-bazel
Feel free to contact me with specific questions! :)
Good luck
Flo, have you considered using terraform and a makefile for auto-building the cluster?
In my recent project, I automated infrastructure end to end with make & terraform. Essentially, that approach builds the entire cluster, build and deploys the entire project with one single command within 3 - 5 minutes. Depends on how fast gcp is on a given day.
There is a Google sample project showing the idea, although the Terraform config is outdated and needs to be replaced with a config adhering to the current 0.13 / 0.14 syntax.
https://github.com/GoogleCloudPlatform/gke-bazel-demo#build--deploy-with-bazel
The makefile that enables the one-command end to end automation:
https://github.com/GoogleCloudPlatform/gke-bazel-demo/blob/master/Makefile
Again, replace or customize the scripts for your project. I actually wrote two more scripts: one for checking / installing requirements on the client (i.e. git, kubectl & gcloud), and another for checking, configuring & authenticating gcloud in case it's not yet configured and authenticated. From there, the Terraform script takes over and builds the entire cluster, and once that's done, the usual auto-deployment kicks in.
I find the idea of layering make over terraform & bazel for end to end automation just brilliant.
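The layering described above can be sketched as a Makefile along these lines (target names, paths and scripts are illustrative assumptions, not taken from the linked repo; recipe lines must be tab-indented):

```make
# End-to-end: check client tooling, provision the cluster, then deploy.
all: deploy

check:          # verify git, kubectl and gcloud are installed and configured
	./scripts/check-requirements.sh

infra: check    # build the GKE cluster with Terraform
	cd terraform && terraform init && terraform apply -auto-approve

deploy: infra   # build, push and apply everything with Bazel
	bazel run //server:dev.apply
```

A single `make` then runs the whole chain, with each layer depending on the previous one.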

How to use a local file or retrieve it remotely (conditionally) in Dockerfile?

I'd like to be able to control the source of a file (Java Archive) in a Dockerfile which is either a download (with curl) or a local file on the same machine I build the Docker image.
I'm aware of ways to control RUN statements, e.g. Conditional ENV in Dockerfile, but since I need access to the filesystem outside the Docker build image, a RUN statement won't do. I'd need a conditional COPY or ADD or a workaround.
I'm interested in built-in Docker functions/features which avoid the use of more than one Dockerfile or wrapping the Dockerfile in a script using templating software (those just workarounds popping into my head).
You can use a multi-stage build, which is relatively new in Docker:
https://docs.docker.com/develop/develop-images/multistage-build/
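To sketch how a multi-stage build can make the COPY source conditional (the ARG name, download URL and file names here are illustrative assumptions): declare both a "download" stage and a "local" stage, then select one of them by name at build time, since an ARG declared before the first FROM may be used in FROM instructions.

```dockerfile
# Choose the jar source at build time:
#   docker build --build-arg JAR_SOURCE=local .
ARG JAR_SOURCE=remote

# Stage "remote": download the jar (URL is illustrative).
FROM alpine:3.19 AS remote
RUN apk add --no-cache curl \
 && curl -fsSL -o /app.jar https://example.com/app.jar

# Stage "local": take the jar from the build context.
FROM alpine:3.19 AS local
COPY app.jar /app.jar

# Alias whichever stage was selected, then copy from it.
FROM ${JAR_SOURCE} AS selected
FROM eclipse-temurin:17-jre
COPY --from=selected /app.jar /app.jar
```

Note the unused stage is still evaluated by the classic builder, so with the remote default the download happens even when unneeded; BuildKit skips stages the final image does not depend on.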

Is it possible to skip a FROM command in a multistage dockerfile?

Attempting to make a dynamic docker file, where the final image may need one of two previous images based on user input.
I don't think you can skip the FROM command. A build has to start from somewhere, even if it is scratch.
As for creating a dynamic Dockerfile: you can generate the Dockerfile using a shell script. I came across one such script, parity-deploy.sh, which dynamically creates a docker-compose.yml file based on configuration provided by the user.
Dockerfiles have been able to use ARGs to allow passing in parameters during a docker build using the CLI argument --build-arg for some time. But until recently (Docker's 17.05 release, to be precise), you weren't able to use an ARG to specify all or part of your Dockerfile's mandatory FROM command.
But since the pull request Allow ARG in FROM was merged, you can now specify an image / repository to use at runtime. This is great for flexibility, and as a concrete example, I used this feature to allow me to pull from a private Docker registry when building a Dockerfile in production, or to build from a local Docker image that was created as part of a CI/testing process inside Travis CI.
To use an ARG in your Dockerfile's FROM:
ARG MYAPP_IMAGE=myorg/myapp:latest
FROM $MYAPP_IMAGE
...
Then if you want to use a different image/tag, you can provide it at runtime:
docker build -t container_tag --build-arg MYAPP_IMAGE=localimage:latest .
If you don't specify --build-arg, then Docker will use the default value in the ARG.
Typically, it's preferred that you set the FROM value in the Dockerfile itself, but there are many situations (e.g. CI testing) where you can justify making it a runtime argument.
According to the documentation, you cannot skip it. It should be the first command in the Dockerfile as well.
As such, a valid Dockerfile must start with a FROM instruction
But notice that:
FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another.
You can edit the file dynamically (e.g. sed) to use the image/images that the user has specified.
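For example, with a placeholder in a template Dockerfile (the placeholder name and image tag are assumptions for illustration):

```shell
# Dockerfile.template carries a placeholder where the base image goes.
printf 'FROM BASE_IMAGE_PLACEHOLDER\nCMD ["echo", "hello"]\n' > Dockerfile.template

# Substitute the image the user picked, then build from the result.
sed 's|BASE_IMAGE_PLACEHOLDER|alpine:3.19|' Dockerfile.template > Dockerfile
# docker build -t myapp .
```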
It looks like Docker supports this now: https://github.com/docker/cli/issues/1134

Wrap origin public Dockerfile to manage build args, etc

I'm very new to Docker and related tooling, so I wonder if I can change official public source images from Docker Hub (which I use in the FROM directive) on the fly while using them in my own container builds, kind of like Chef's chef-rewind does?
For example, if I need to pass build-args to openresty/latest-centos to build it without modules I won't use. I need to put this
FROM openresty/latest-centos
in my Dockerfile, and what else should I do for openresty to be built only with modules I needed?
When you use the FROM directive in a Dockerfile, you are simply instructing Docker to use the named image as the base for the image that will be built with your Dockerfile. This does not cause the base image to be rebuilt, so there is no way to "pass parameters" to the build process.
If the openresty image does not meet your needs, you could:
Clone the openresty git repository,
Modify the Dockerfile,
Run docker build ... to build your own image
Alternatively, you can save yourself that work and just use the existing image and live with a few unused modules hanging around. If the modules are separate components, you could also issue the necessary commands in your Dockerfile to remove them.
