How do I use a custom container image in a Tekton step?

I'm new to Tekton and Tekton Pipelines. The examples I found use standard container images in the Tekton task steps. For example, the following step uses a standard ubuntu container to run a shell script:
steps:
- name: test-step
  image: ubuntu
  script: |
    #!/bin/sh
    echo "testing"
I would like to use my own custom container with custom applications in a Tekton step:
steps:
- name: custom-step
  image: custom-container
  script: |
    #!/bin/sh
    customCommand arg1 arg2
How do I do this? I found the Tekton "Tasks and Pipelines Container Contract", which describes the "contract" that a custom container must follow. However, I still don't understand how to define and use a custom container. Specifically:
How is the custom container image actually defined? An example Dockerfile would be helpful.
How do I tell Tekton where my custom container image is located? Do I need to save my custom container image in a Docker image repository?
Is there an example showing how to create and use a custom container image that I can refer to? Thanks.

How is the custom container image actually defined? An example Dockerfile would be helpful.
You can use any docker container. You can first test that it works on your local machine using docker run <container>. Think carefully about the exit code of the main process in the container: 0 means success, and any other number means failure. Tekton uses this exit code to determine whether the Task succeeded or failed.
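For illustration, a minimal Dockerfile for such an image could look like the sketch below (the customCommand binary and its install path are assumptions, not from the question):
FROM ubuntu:22.04
# Copy the custom application into the image (assumed to be built already).
COPY customCommand /usr/local/bin/customCommand
RUN chmod +x /usr/local/bin/customCommand
# No ENTRYPOINT/CMD is needed for script steps: per the container contract,
# Tekton injects the step's script and runs it, overriding the entrypoint.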
How do I tell Tekton where my custom container image is located? Do I need to save my custom container image in a Docker image repository?
The containers for Tekton are run in Kubernetes. You need to make your container image available in a container registry that your Kubernetes cluster can pull from. This is the same process you use to make container images available to Kubernetes for your own applications in the cluster, e.g. docker push <container-registry-and-image-name>.
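A sketch of that push step, assuming Docker Hub and a hypothetical your-user account:
# Build the image locally, then push it where the cluster can pull it.
docker build -t your-user/custom-container:v1 .
docker push your-user/custom-container:v1
The step's image: field would then reference your-user/custom-container:v1 (with a registry prefix if you are not on Docker Hub) instead of the bare custom-container.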

Related

Setting up a container registry for Kubeflow

I am using Ubuntu 20.04
while following this book -> https://www.oreilly.com/library/view/kubeflow-for-machine/9781492050117/
On page 17, it says the following (only the relevant parts), which I don't understand:
"You will want to store container images in what is called a container registry. The container registry will be accessed by your Kubeflow cluster."
I am going to use Docker Hub as the container registry.
"Next we'll assume that you've set your container registry via an environment variable $CONTAINER_REGISTRY in your shell." NOTE: If you use a registry that isn't on Google Cloud Platform, you will need to configure the Kubeflow Pipelines container builder to have access to your registry by following the Kaniko configuration guide -> https://oreil.ly/88Ep-
First, I do not understand how to set the container registry through an environment variable. Am I supposed to give it a URL?
Second, I've gone through the Kaniko configuration guide and did everything as told -> creating config.json with "auth": "my password for Docker Hub". After that, the book says:
"To make sure your docker installation is properly configured, you can write a one-line Dockerfile and push it to your registry."
Example 2.7 Specify the new container is built on top of Kubeflow's container
FROM gcr.io/kubeflow-images-public/tensorflow-2.1.0-notebook-cpu:1.0.0
Example 2.8 Build new container and push to registry for use
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
docker build -t "${IMAGE}" -f Dockerfile . docker push "${IMAGE}"
I've created a Dockerfile with the code from Example 2.7 inside it, then ran the code from Example 2.8; however, it is not working.
Make sure that:
- You set the environment variable using export CONTAINER_REGISTRY=docker.io/your_username in your terminal (or in your ~/.bash_profile, followed by source ~/.bash_profile).
- Your .docker/config.json does not have your password in plain text but in base64, for example the output of echo -n 'username:password' | base64.
- docker build and docker push are two separate commands; in your example they are run as one line, unlike in the book.
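Putting those together, a minimal sketch of the corrected session (docker.io/your_username is an assumed registry path):
# Point CONTAINER_REGISTRY at your Docker Hub namespace.
export CONTAINER_REGISTRY=docker.io/your_username
IMAGE="${CONTAINER_REGISTRY}/kubeflow/test:v1"
# Build and push as two separate commands.
docker build -t "${IMAGE}" -f Dockerfile .
docker push "${IMAGE}"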

GitLab CI - is image: needed if the runner runs on a VM with pre-installed docker-compose?

I've noticed that most tutorials for configuring .gitlab-ci.yml use image: docker or image: docker/compose.
In my case we have docker and docker-compose pre-installed on our virtual machine (Linux).
So is it necessary to use an image definition?
In other cases they often use the dind (Docker-in-Docker) functionality; is that necessary in my case?
If not, when do I use it / when is it useful?
So is it necessary to use an image definition?
No, as mentioned in "Using Docker images"
GitLab CI/CD in conjunction with GitLab Runner can use Docker Engine to test and build any application.
When used with GitLab CI/CD, Docker runs each job in a separate and isolated container using the predefined image that’s set up in .gitlab-ci.yml.
So you can use any image you need for your job to run.
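For example, with a runner that uses the shell executor on your VM, a job can call the pre-installed binaries directly and needs no image: key at all (a sketch; the job and tag names are assumptions):
deploy:
  tags:
    - vm-shell-runner   # route the job to the shell-executor runner on the VM
  script:
    - docker-compose version
    - docker-compose up -d --build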
In other cases they often use the dind (Docker-in-Docker) functionality; is that necessary in my case?
If not, when do I use it / when is it useful?
As documented in "Building Docker images with GitLab CI/CD", this is needed if your job is to build a docker image (as opposed to using an existing docker image).
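A minimal sketch of such an image-build job, following the pattern in the GitLab documentation (the image tags are illustrative, and the runner must allow privileged mode for dind):
build-image:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"   # per the GitLab docs dind pattern
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"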

Can Kubernetes ever create a Docker image?

I'm new to Kubernetes and I'm learning about it.
Are there any circumstances where Kubernetes is used to create a Docker image instead of pulling it from a repository ?
Kubernetes natively does not create images. But you can run a piece of software such as Kaniko in the Kubernetes cluster to achieve it. Kaniko is a tool to build container images from a Dockerfile, inside a container or a Kubernetes cluster.
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update image metadata
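A hedged sketch of running the executor as a one-off Kubernetes Pod (the git context, destination, and secret name are assumptions; see the Kaniko docs for the exact credential setup):
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=git://github.com/your-org/your-repo.git   # assumed source repo
    - --dockerfile=Dockerfile
    - --destination=registry.example.com/your-image:v1    # assumed target registry
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker   # kaniko reads registry credentials here
  volumes:
  - name: docker-config
    secret:
      secretName: regcred          # assumed secret containing config.json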
Several options exist to create docker images inside Kubernetes.
If you are already familiar with docker and want a mature project, you could use docker CE running inside Kubernetes. Check here: https://hub.docker.com/_/docker and look for the dind tag (docker-in-docker). Keep in mind there are pros and cons to this approach, so take care to understand them.
Kaniko seems to have potential but there's no version 1 release yet.
I've been using docker dind (docker-in-docker) to build docker images that run in a production Kubernetes cluster, with good results.
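For reference, a minimal sketch of that dind setup as a Kubernetes Pod (image tags are illustrative; note the privileged flag, which is one of the trade-offs mentioned above):
apiVersion: v1
kind: Pod
metadata:
  name: dind-build
spec:
  containers:
  - name: client
    image: docker:24
    command: ["sleep", "infinity"]   # exec in to run docker build/push
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375    # talk to the dind sidecar
  - name: dind-daemon
    image: docker:24-dind
    securityContext:
      privileged: true               # dind requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""                      # disable TLS for this local-only example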

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project, with sources in GitLab.
I created a GitLab CI/CD pipeline to build a docker image and put the image into the GitLab docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the docker containers on my production system after pushing the image to the GitLab docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; a lot of open source options are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we make an HTTP request to the hook URL, and the host pulls the updated image. One disadvantage with this is that you need a daemon to watch your hook task, so that it doesn't crash or go down. So my suggestion is to run this hook task as a docker container with its restart policy set to always.
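As a minimal sketch of what such a hook could execute on the production host (the paths and project directory are assumptions):
#!/bin/sh
# Hypothetical deploy hook: pull the freshly pushed image and restart the stack.
set -e
cd /srv/myapp            # assumed docker-compose project directory
docker-compose pull      # fetch the image the pipeline just pushed
docker-compose up -d     # recreate only the containers whose image changed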

How do I run startup scripts for Docker when deploying via Compute Engine instances?

I'm creating an instance template under GCP's Compute Engine section.
Usually when deploying a docker image, there's a Dockerfile that includes some startup scripts after specifying the base image to pull and build, but I can't see where I can either submit a Dockerfile or enter startup scripts.
I can see a field for startup scripts for the Compute Engine instance, but that's different from the scripts passed on for Docker's startup.
Are these perhaps to be filled in under "Command", "Command arguments", or "Environment Variables"?
For clarification, this is someone else's image of a Dockerfile that I pulled from Google Images. The part I wish to add is outlined in red: the RUN commands, but not these exact commands.
In my case, I would like to add something like
RUN python /pythonscript.py
If I understood well, you are trying to create a Docker image, not a compute instance image.
A compute instance can run a docker image that you have already built and pushed to either GCR or any other repository.
Try to build your docker image normally, push it to a docker repo, then use it.
You can run a startup script directly in the Docker container by using a 'command' section. If you need something installed after starting a container, for example Apache, you should use a Docker image that already has Apache.
If you need to pass some other arguments, like creating environment variables, here you can find the full list of flags available when creating a container image on a VM instance.
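To illustrate the distinction the question touches on, a minimal Dockerfile sketch (the base image, dependency, and script path are assumptions): RUN executes at image build time, while CMD is what the container runs at startup, which is the part the Compute Engine "Command" field can override.
FROM python:3.10-slim
COPY pythonscript.py /pythonscript.py
# RUN happens once, while the image is being built.
RUN pip install --no-cache-dir requests   # assumed build-time dependency
# CMD defines the startup command executed each time the container runs.
CMD ["python", "/pythonscript.py"]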
