How to build a jibri docker image from an unstable build?

Jitsi recently released build 5207, which supports streaming to any server. This build is currently unstable, and docker-jitsi-meet uses stable-5142. Now I want to build the jibri docker image using the 5207 build. I don't know where to make the changes to build the jibri docker image from the unstable build.
Please help.

I was able to create a docker image from the unstable build. Below are the steps.
First, create a docker image for base from the unstable build. Go to the base directory and run make JITSI_RELEASE=unstable JITSI_REPO=myimage, or export JITSI_RELEASE=unstable and export JITSI_REPO=myimage before running make; either way the result is myimage/base:latest. I followed the second approach here.
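For reference, the second approach is just this (run from the docker-jitsi-meet checkout; myimage stands for your repo name):
export JITSI_RELEASE=unstable
export JITSI_REPO=myimage
cd base
make    # produces myimage/base:latest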
Now we have to create the base-java image from myimage/base. For this, change ARG JITSI_REPO=jitsi to ARG JITSI_REPO=myimage in the Dockerfile in the base-java directory, then run make. It will create myimage/base-java:latest.
Now go to the jibri directory, change ARG JITSI_REPO=jitsi to ARG JITSI_REPO=myimage in its Dockerfile, and finally run make.
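If you'd rather script the two Dockerfile edits than make them by hand, a one-liner like this (assuming the directory layout described above) does the same thing:
sed -i 's/^ARG JITSI_REPO=jitsi$/ARG JITSI_REPO=myimage/' base-java/Dockerfile jibri/Dockerfile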
NOTE
Start the jibri container and check the jibri version using the command docker exec docker-jitsi-meet_jibri_1 dpkg -l | grep jibri
UPDATE
Simply run FORCE_REBUILD=1 JITSI_RELEASE=unstable JITSI_REPO=your_dockerhub_username make from the docker-jitsi-meet directory. No need to do anything else. You can control which images get built by editing the list of images in the Makefile. The base and base-java images are mandatory, so don't remove them from the list.

Related

How to ensure a sourced script is available when using a docker image for GitLab CI?

I use custom docker images (mostly based on phusion) for GitLab CI, and that works fine. But sometimes an image requires sourcing a shell file to work properly (to set PATH, LD_LIBRARY_PATH, etc.).
When running an interactive shell from the docker image (e.g. docker run -it <image_name> /bin/bash), this can be fixed by simply adding the appropriate source command to /etc/profile or similar. But the scripts in GitLab CI are not run in an interactive shell, so the paths are not properly set up. I work around this by adding the source (or .) command to the GitLab CI script itself, but this is something image-specific that should live in the image, not in the script.
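Concretely, the workaround is a line like this at the top of the CI job's script (the sourced path is whatever your image provides; /opt/mytool/env.sh is made up):
. /etc/profile            # or: source /opt/mytool/env.sh
some_build_command        # now runs with PATH, LD_LIBRARY_PATH, etc. set up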
Is there anything I can do that will effectively source the file directly on the image (or at least when GitLab CI runs the script on the image)? I could manually inspect what environment changes the sourced file introduces and put them in ENV instructions, but I'm looking for something less fragile when rebuilding the image from possibly updated sources.
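For reference, the manual approach mentioned above would look like this in the Dockerfile (values copied by hand from the sourced script, which is exactly what makes it fragile):
ENV PATH=/opt/mytool/bin:$PATH
ENV LD_LIBRARY_PATH=/opt/mytool/lib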

How to configure docker specific image dependencies which are managed in a different source code repository

How do I configure docker-specific artifact dependencies that are managed in a different source code repository? My docker image depends on jar files (say, from project-auth) and configuration (say, from project-theme) which are maintained in different repositories than the docker image itself.
What would be the best way to copy these dependencies into the docker image (say, in a project-deploy repo) prior to building it? Right now project-deploy needs the jar files and configuration to be mounted as a volume from the current folder.
I don't want the dependencies to be committed, as they tend to get stale, and I want the docker image creation to be part of the build process itself.
You can use Docker multi-stage builds for this purpose.
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image.
For example:
Suppose that the source code for dependencies is present in repo - "https://github.com/demo/demo.git"
Using multi-stage builds, you can create a stage in which you clone the git repo and create the dependency jar (or anything else that you need) at build time.
Finally, you can copy the jar into your final image.
# Use any base image. CentOS is used here.
FROM centos:7 as builder
# Install only those packages which are required.
RUN yum install -y maven git \
 && git clone <YOUR_GIT_REPO_URL> /myfolder
WORKDIR /myfolder
# Create the jar at build time. Update this step according to your project's requirements.
RUN mvn clean package
# From here the normal Dockerfile steps start.
FROM centos:7
# Add all the necessary steps required to build your image
.
.
.
# This is how you copy the jar created in the builder stage above into your final docker image.
COPY --from=builder SOURCE_PATH DESTINATION_PATH
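Then build as usual; only the final stage ends up in the image you tag (the tag name here is illustrative):
docker build -t project-deploy:latest .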
Please refer to the Docker documentation to get a better understanding of multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/

How do I pass a docker image from one TeamCity build to another?

I need to split a TeamCity build, which builds a docker image and pushes it to a docker registry, into two separate builds:
a) The one that builds the docker image and publishes it as an artifact
b) The one that accepts the docker artifact from the first build and pushes it into a registry
The log shows these three commands running:
docker build -t thingy -f /opt/teamcity-agent/work/55abcd6/docker/thingy/Dockerfile /opt/teamcity-agent/work/55abcd6
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest
There's plenty of other stuff going on, but I figured that this is the important part.
So I have copied the initial build two times, with the first command in the first build, and the next two in the second build.
I have set the first build as a snapshot dependency for the second build, and run it. And what I get is:
FileNotFoundError: [Errno 2] No such file or directory: 'docker': 'docker'
Which probably is because some of the files are missing.
Now, I did want to publish the docker image as an artifact and make the first build an artifact dependency, but I can't find where docker puts its files, and all the searches containing "docker" and "file" just lead to a bunch of articles about what a Dockerfile is.
So what can I do to make it so that the second build can use the resulting image and/or environment from the first build?
In all honesty, I didn't understand what exactly you are trying to do here.
However, this might help you:
You can save the image as a tar file:
docker save -o <image_file_name>.tar <image_tag>
This archive can then be moved and imported somewhere else.
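Applied to the commands from the question, the first build could save the image and publish the tar as its artifact, and the second build could load and push it (a sketch, assuming the tag from the log above):
docker save -o thingy.tar docker.thingy.net/thingy/thingy:latest
# ...publish thingy.tar as a build artifact, then in the second build:
docker load -i thingy.tar
docker push docker.thingy.net/thingy/thingy:latest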
You can get a lot of information about an image or a container with "docker inspect":
docker inspect <image_tag>
Hope this helps.

What is a Docker build stage?

As far as I understand, build stages in Docker are fundamental. I have a practical understanding of them, but I have trouble coming up with a proper definition, and I can't seem to find one either.
So: what is the definition of a Docker build stage?
Edit: I'm not asking "how do I use a build stage?" or "how can I use multi-build stages?" which people seem very eager to answer :-)
The reason I have this question is because I saw the following sentences in the docs:
"The FROM instruction initializes a new build stage"
"a name can be given to a new build stage"
Which left me wondering: what exactly is a build stage?
I don't think there will ever be a strict definition of a Docker build stage, because a build stage is in general something conceptual which:
can be defined by you
depends on your case (language / libraries)
In this question: Difference between build and deploy? one of the answers says...
Build means to Compile the project.
I think you can see it this way too. A build stage is any procedure that generates something which can later be taken and used.
The idea with docker multi-stage builds is to:
generate what you are going to need
leave behind what you don't need and use the product of step 1 in a more lightweight way
If you have read the docs, Alex Ellis has a nice example where the same logic takes place:
he starts with a golang image, adds libraries, builds his app (Go generates a binary executable file)
after that, he doesn't need golang and the libraries to ship/run it, so he picks an alpine image, adds the executable file from step 1, and ships his app in an image that has a much smaller size.
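In Dockerfile terms, that flow looks roughly like this (a sketch, not Alex Ellis's exact example; image tags and paths are illustrative):
FROM golang:1.21 as build
WORKDIR /src
COPY . .
# Go produces a self-contained binary we can ship on its own
RUN CGO_ENABLED=0 go build -o /bin/app .
FROM alpine:latest
COPY --from=build /bin/app /usr/local/bin/app
CMD ["app"]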
Since version 17.05, docker supports multiple stages during a single docker build execution.
This means that you no longer have to define only one source image in your Dockerfile and do the whole build in a single run; you can define multiple stages with different base images in your Dockerfile, one per FROM definition:
# Build stage
FROM microsoft/aspnetcore as build
# ...do the build with a dev image, creating the ./app artifact
# Publish - use a hardened, production image
FROM alpine:latest
# copy the artifact produced in the build stage (illustrative path)
COPY --from=build /app ./app
CMD ["./app"]
This lets you break your image building process into stages, each optimized for the task it performs - for example the stages could be (a skeleton follows the list):
use an image with extra linting dependencies to check your source
use a dev-image with all development dependencies already installed to build your source
use another image including test frameworks to run various tests on the artifacts
and once everything passes OK, use a minimal-sized, optimized, hardened image to capture the final artifacts for production
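A skeleton of such a staged Dockerfile might look like this (stage names, images, and commands are illustrative):
FROM node:lts as lint
COPY . /src
RUN cd /src && npx eslint .
FROM node:lts as build
COPY . /src
RUN cd /src && npm ci && npm run build
FROM node:lts as test
COPY --from=build /src /src
RUN cd /src && npm test
FROM nginx:alpine as release
COPY --from=build /src/dist /usr/share/nginx/html
Each stage can also be built on its own, e.g. docker build --target test . to stop after the tests.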
Read more details about multi-stage builds here:
https://docs.docker.com/develop/develop-images/multistage-build/
A stage is the creation of an image. In a multi-stage build, you go through the process of creating more than one image; however, you typically only tag a single one (exceptions being multiple builds, building a multi-architecture image manifest with a tool like buildx, and anything else docker releases after this answer).
Each stage, building a distinct image, starts from a FROM line in the Dockerfile. One stage doesn't inherit anything done in previous stages; it is based on its own base image. So if you have the following:
FROM alpine as stage1
RUN apk add your_tool
FROM alpine as stage2
RUN your_tool some args
you will get an error since your_tool is not installed in the second stage.
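To make it work, the second stage has to either start FROM stage1 or copy the tool in explicitly, e.g. (the binary path is illustrative, and an apk-installed tool may also need its shared libraries copied):
FROM alpine as stage1
RUN apk add your_tool
FROM alpine as stage2
COPY --from=stage1 /usr/bin/your_tool /usr/bin/your_tool
RUN your_tool some args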
Which stage do you get as output from the build? By default the last stage, but you can change that with docker image build --target stage1 . to build the stage with that name, stage1 in this example. The classic docker build will run from the top of the Dockerfile until it finishes the target stage. Buildkit builds a dependency graph and builds stages concurrently and only if needed, so do not depend on this ordering to control something like a testing workflow in your Dockerfile (buildkit can see that nothing in the test stage is needed in your release stage and skip building the test).
What's the value of multiple stages? Typically, it's done to separate the build environment from the runtime environment. It allows you to perform the entire build inside of docker. This has two advantages.
First, you don't require an external Makefile and various compilers and other tools installed on the host to compile the binaries that then get copied into the image with a COPY line; anyone with docker can build your image.
And second, the resulting image doesn't include all the compilers or other build-time tooling that isn't needed at runtime, resulting in smaller and more secure images. The typical example is a java app that needs maven and a full JDK to build, but a runtime with just the jar file and the JRE.
If each stage makes a separate image, how do you get the jar file from the build stage to the run stage? That comes from a new option to the COPY command, --from. An oversimplified multi-stage build looks like:
FROM maven as build
COPY src /app/src
WORKDIR /app/src
RUN mvn install
FROM openjdk:jre as release
COPY --from=build /app/src/target/app.jar /app/app.jar
CMD java -jar /app/app.jar
With that COPY --from=build we are able to take the artifact built in the build stage and add it to the release stage, without including anything else from that first stage (no layers of compile tools like JDK or Maven get added to our second stage).
How do FROM x as y and COPY --from=y /a /b work together? FROM x as y defines an image name for the duration of this build, in this case y. Anywhere later in the Dockerfile where you would put an image name, you can put y, and you'll get the result of this stage as your input. So you could say:
FROM upstream as mybuilder
RUN apk add common_tools
FROM mybuilder as stage2
RUN some_tool arg2
FROM mybuilder as stage3
RUN some_tool arg3
FROM minimal_base as release
COPY --from=stage2 /bin2 /
COPY --from=stage3 /bin3 /
Note how stage2 and stage3 are each built FROM mybuilder, which is the output of the first stage.
The COPY --from=y allows you to change the context where you are copying from to be another image instead of the build context. It doesn't have to be another stage. So, for example, you could do the following to get a docker binary in your image:
FROM alpine
COPY --from=docker:stable /usr/local/bin/docker /usr/local/bin/
Further documentation on this is available at: https://docs.docker.com/develop/develop-images/multistage-build/
A build stage starts at a FROM statement and ends at the step before the next FROM statement (or at the end of the Dockerfile, for the last stage).
stage | steɪdʒ |
noun
a point, period, or step in a process or development
Take a practical example: you want to build an image which contains a production ready web server with Typescript files compiled to Javascript. You want to build that Typescript within a Docker container to simplify dependency management. So you need:
node.js
Typescript
any dependencies needed for compilation
Webpack or whatever
nginx/Apache/whatever
In your final image you only really need the compiled .js files and, say, nginx. But to get there, you need all that other stuff first. If you build everything in one image and upload it, it will contain all the intermediate layers, even though they're unnecessary for the final product.
Docker build stages now allow you to actually separate those stages, or steps, into separate images, while still using just one Dockerfile and not needing to glue several Dockerfiles together with external shell scripts or such. E.g.:
FROM node as builder
RUN npm install ...
# whatever you need to build your files
FROM nginx as production
COPY --from=builder /final.js /var/www/html
The final result of this Dockerfile is a small image with nginx as its base plus just the final .js file. It does not contain all the unnecessary stuff like node.js and the npm dependencies.
builder here is the first stage, production is the second stage. In this case the first stage will be discarded at the end of the process, but you can also choose to build a specific stage using docker build --target=builder. A new FROM introduces a new, separate stage. They're essentially separate Dockerfiles, but they can share data using COPY --from.

Jenkins + Docker - How To Deal With Versions

I've got Jenkins set up to do 2 things in 2 separate jobs:
Build an executable jar and push to Ivy repo
Build a docker image, pulling in the jar from the Ivy repo, and push image to a private docker registry
During step 1 the jar gets a version which is appended to the filename (e.g. my-app-0.1-SNAPSHOT, my-app-1.0-RELEASE, etc.). The problem I'm facing is that the Dockerfile has to pull in the correct jar file based on the version number from the upstream build. Additionally, I would ideally like the docker image to be tagged with that same version number.
Would love to hear from the community about any possible solutions to this problem.
Thanks in advance!!
Obviously you need a unique version from (1) to refer to in (2).
0.1 -> 0.2 -> 0.3 -> ...
Not too complicated in terms of how things work together from a build / Docker point of view. I guess the far bigger challenge is to give up SNAPSHOT builds in the development workflow.
With your current Jenkins setup: release every build that you create a container for.
Much better alternative: choose a CI/CD server that uses build pipelines. And if you haven't already done so, take a look at the underlying concept.
You could use the Groovy Postbuild Plugin to extract, with a regular expression, the exact name of the generated .jar file at the end of step 1.
Then for step 2, you could have a Dockerfile template, replace a placeholder in it with the exact jar name, build the image, and push it to your registry.
Or, if you don't use a Dockerfile, you could keep in your Docker registry a premade Docker image which has everything but the jar file, and add the jar to it with these steps (sketched below):
create a container from the image
add the jar file into the container using the docker cp command
commit the container into a new image
push the new image to your docker registry
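In commands, those steps might look like this (image and file names are hypothetical):
docker create --name tmp my-registry/my-app-base:latest
docker cp my-app-1.0-RELEASE.jar tmp:/opt/app/my-app.jar
docker commit tmp my-registry/my-app:1.0-RELEASE
docker rm tmp
docker push my-registry/my-app:1.0-RELEASE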
My customer had the same need. We ended up putting placeholders in the Dockerfile, which are replaced using sed just before the docker build.
This way, you can use the version in multiple locations, either in the FROM or in any filenames.
Example:
FROM custom:#placeholder#
ENV VERSION #placeholder#
RUN wget ***/myjar-${VERSION}.jar && ...
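The replacement step just before the build can then be as simple as this (variable and tag names are illustrative):
sed -i "s/#placeholder#/${VERSION}/g" Dockerfile
docker build -t my-registry/my-app:${VERSION} .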
Regarding consistency, a single unique version is used:
from a job parameter (Jenkins)
to build the artifact (Maven)
to tag the Docker image (Docker)
to tag the Git repository containing the Dockerfile (Git)
