Testing an application that needs MySQL/MariaDB in Jenkins - docker

This is likely a standard task, but I've spent a lot of time googling and prototyping this without success.
I want to set up CI for a Java application that needs a database (MySQL/MariaDB) for its tests. Basically, it just needs a clean database it can write to. I have decided to use Jenkins for this. I have managed to set up an environment where I can compile the application, but I have failed to provide it with a database.
What I have tried is to use a Docker image with Java and MariaDB. However, I run into problems starting the MariaDB daemon, because at that point Jenkins has already switched to its own user (UID 1000), which doesn't have permission to start the daemon; only the root user can do that.
My Dockerfile:
FROM eclipse-temurin:17-jdk-focal
RUN apt-get update \
&& apt-get install -y git mariadb-client mariadb-server wget \
&& apt-get clean
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
The docker-entrypoint.sh is pretty trivial (and also chmod a+x'd, that's not the problem):
#! /bin/sh
service mysql start
exec "$#"
However, Jenkins fails with these messages:
$ docker run -t -d -u 1000:1001 [...] c8b472cda8b242e11e2d42c27001df616dbd9356 cat
$ docker top cbc373ea10653153a9fe76720c204e8c2fb5e2eb572ecbdbd7db28e1d42f122d -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
I have tried debugging this from the command line using the built Docker image c8b472cda8b. The problem is as described before: because Jenkins passes -u 1000:1001 to Docker, the docker-entrypoint.sh script no longer runs as root and therefore fails to start the daemon. Somewhere in Docker or Jenkins the error gets swallowed and is not shown, but the end result is that mysqld doesn't run and the script never gets to exec "$@".
If I execute exactly the same command as Jenkins, but without -u ... argument, leaving me as root, then everything works fine.
I'm sure there must be a simple way to start the daemon and/or set this up somehow completely differently (external database?), but can't figure it out. I'm practically new to Docker and especially to Jenkins.

My suggestion is:
Run the docker run command without -u (i.e. as root)
Create a jenkins user inside the container (via the Dockerfile)
At the end of the entrypoint.sh switch to the jenkins user with su - jenkins (see the sketch below)
One disadvantage is that every time you enter the container you will be the root user
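A minimal sketch of that approach, reusing the UID/GID pair (1000:1001) that Jenkins passes in your log; the user name and exact layout are assumptions for your setup:
# Dockerfile additions (sketch): create the user Jenkins expects
RUN groupadd -g 1001 jenkins \
 && useradd -m -u 1000 -g 1001 jenkins
docker-entrypoint.sh (sketch): start the daemon as root, then drop privileges:
#!/bin/sh
# start MariaDB as root, then hand the requested command to the jenkins user
service mysql start
exec su jenkins -c "$*"
Note that "$*" flattens the argument list into a single string, which is fine for the simple cat command Jenkins runs but not for arguments containing spaces.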

Related

Docker ROS automatic start of launch file

I developed a few ROS packages and I want to put the packages in a docker container because installing all the ROS packages all the time is tedious. Therefore I created a Dockerfile that uses a base ROS image, installed all the necessary dependencies, copied my workspace, built the workspace in the docker container and sourced everything afterwards. You can find the Dockerfile here:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install -y locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
&& rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the docker container with the "run" command, docker doesn't seem to know that I sourced my new ROS workspace, and therefore it cannot automatically launch my launch file. If I run the docker container as a bash shell with "run -it bash", I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my Dockerfile correctly so that my .launch file is launched automatically when I run the container? Thanks!
From the Docker docs:
Each RUN instruction runs independently and won't affect the next instruction, so by the time the last line runs, none of the environment (PATH, etc.) set up by ROS is preserved.
You need to source .bashrc, or whatever environment file you need, with source first, in the same shell that runs your command.
You can wrap everything you want (the source commands and the roslaunch command) inside a sh file and then just run that file at the end (see the sketch below).
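A minimal sketch of such a wrapper, reusing the workspace path and launch file from the question; the file name run_sim.sh is an assumption:
#!/bin/bash
# run_sim.sh: source the base ROS setup and the workspace, then launch
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
exec roslaunch master_launch sim_perception.launch
In the Dockerfile you would then COPY this file in, chmod +x it, and use CMD ["/run_sim.sh"] instead of calling roslaunch directly.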
If you review the convention of ros_entrypoint.sh you can see how best to source the workspace you would like inside the container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over some of the nuance of this interplay. This sucked forever for me; hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled on what seems like a sane approach that also lets you control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image, you set the start/launch behaviour towards the end: use a COPY line to add your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD so something runs by default when the container starts.
note: you'll (obviously?) need to re-run the docker build process for these changes to take effect
Dockerfile looks like this:
# all your other Dockerfile lines ^^
# .....
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
# basic ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
else
# from environment variable; should be an absolute path to the appropriate workspace's setup.bash
source "$SETUP"
fi
exec "$#"
Used in this way, the container will automatically source either the basic ROS bits, or, if you provide another workspace's setup.bash path in the $SETUP environment variable, that workspace will be used in the container.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
  network_mode: host
  environment:
    - SETUP=/absolute/path/to/the_workspace/devel/setup.bash # or whatever
  command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
  # command: /bin/bash # wake up and do something else

Source files are updated, but CMD does not reflect

I'm new to docker and am trying to dockerize an app I have. Here is the dockerfile I am using:
FROM golang:1.10
WORKDIR /go/src/github.com/myuser/pkg
ADD . .
RUN curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
RUN dep ensure
CMD ["go", "run", "cmd/pkg/main.go"]
The issue I am running into is that I will update source files on my local machine with some log statements, rebuild the image, and try running it in a container. However, the CMD (go run cmd/pkg/main.go) will not reflect the changes I made.
I looked into the container filesystem and I see that the source files are updated and match what I have locally. But when I run go run cmd/pkg/main.go within the container, I don't see the log statements I added.
I've tried using the --no-cache option when building the image, but that doesn't seem to help. Is this a problem with the golang image, or my dockerfile setup?
UPDATE: I have found the issue. The issue is related to using dep for vendoring. The vendor folder had outdated files for my package because dep ensure was pulling them from GitHub instead of locally. I will be moving to Go 1.11, which supports Go modules, to fix this.
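For reference, the module switch mentioned in the update would look roughly like this (a sketch, assuming Go 1.11+ and the module path implied by the Dockerfile's WORKDIR):
# run in the project root
go mod init github.com/myuser/pkg
go mod tidy
go run cmd/pkg/main.go
After that, the dep install and dep ensure steps (and the vendor folder) are no longer needed.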
I see several things:
According to your Dockerfile
Maybe you need a dep init before dep ensure
Probably you need to check if main.go path is correct.
According to docker philosophy
In my humble opinion, you should create the image with docker build -t <your_image_name> ., executing that where your Dockerfile is, but without the CMD line.
I would then pass the command at run time instead: docker run -d <your_image_name> go run cmd/pkg/main.go, or whatever your command is.
If something is wrong, you can check exited containers with docker ps -a and furthermore check the logs with docker logs <your_CONTAINER_name/id>.
Another way to check the logs is to access the container using bash and execute go run manually:
docker run -ti <your_image_name> bash
# go run blablabla
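Putting those suggestions together, a sketch of the full workflow; the image and container names here are placeholders, not from the question:
docker build -t mygoapp .                                          # build where the Dockerfile lives
docker run -d --name mygoapp-run mygoapp go run cmd/pkg/main.go    # pass the command explicitly at run time
docker ps -a                                                       # check for exited containers
docker logs mygoapp-run                                            # inspect the output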

Alpine Docker ERROR: Unable to lock database: Permission denied ERROR: Failed to open apk database: Permission denied

So I have used the default Docker image for TestCafe, which on Docker Hub is testcafe/testcafe, and I have to run a few TestCafe scripts.
However, I need the screenshot that fires on error to be uploaded somewhere where I can look at it later, after the container is done running.
I am using the Imgur program, which uses bash, so I re-did a few things to make it sh-compatible, and everything works except that I need curl. I tried running
apk add curl
but I'm getting the error
ERROR: Unable to lock database: Permission denied ERROR: Failed to open apk database:
Now I know this means that I do not have permission to do this, but can I get around it? Is there some way to become root? (This is in a Bitbucket pipeline.)
I do NOT really want to create my own Docker image.
Also note that all the questions I have found relating to this are about installing while creating the image; my question, however, is how to do this after the container is created. Thanks! (A fine answer would be another way to save the screenshot, but preferably not with ssh.)
For those seeing this error using a Dockerfile (and coming here via a Google search): add the following line to your Dockerfile:
USER root
Hopefully this will help anyone who is not interested in creating a new container.
If you are trying to enter your docker container like so:
docker exec -it <containername> /bin/sh
Instead, try this:
docker exec -it --user=root <containername> /bin/sh
docker exec -it --user=root {containername} bash
With this I was able to execute apk update.
The best fix is to place USER <youruser> AFTER the lines where your docker build is failing. In most cases it is safe to add the USER line directly above the command or entrypoint.
For example:
FROM python:3.8.0-alpine
RUN addgroup -S app && adduser -S -G app app
RUN apk add --no-cache libmaxminddb postgresql-dev gcc musl-dev
ADD . .
USER app
ENTRYPOINT ["scripts/entrypoint.sh"]
CMD ["scripts/gunicorn.sh"]
For those seeing this error when running through a Jenkins pipeline script (and coming here via a Google search), use the following when starting your Docker image:
node('docker') {
    docker.image('golang:1.14rc1-alpine3.11').inside(' -u 0') {
        sh 'apk add curl'
        ...
    }
}
For a Docker container it is easy:
docker exec -it --user root container-name sh
For Kubernetes pods, it is a bit more complicated. If your image is built with a non-root user and you also cannot run pods as root inside your cluster, you need to install the packages with this method:
Identify the user which the pod is using
Create a new Dockerfile
Configure it as such
FROM pod-image-name:pod-image-tag
USER root
RUN apk update && apk add curl
USER the-original-pod-user
Then build it
docker build -t pod-image-name:pod-image-tag-with-curl .
And change the image of your deployment/pod inside the cluster from pod-image-name:pod-image-tag to pod-image-name:pod-image-tag-with-curl
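For a Deployment, that last step could be done with something like the following; the deployment and container names are assumptions for your cluster:
kubectl set image deployment/your-deployment your-container=pod-image-name:pod-image-tag-with-curl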
I resolved the same problem by executing the docker build -t command as the root user:
# docker build -t $DOCKER_IMAGE .

Delay Docker Container RUN until tox environment is built

I am trying to find a way to delay the docker container from being reported as up until the task in the ENTRYPOINT has completed. To explain it further, I have a Dockerfile which has the entrypoint
ENTRYPOINT ["bash", "-c", "tox", "-e", "docker-server"]
When I run the container using
docker run -d -t -p 127.0.0.1:8882:8882 datawarehouse
it immediately brings the container up while the tox command is still building the environment. The problem with this is that if I trigger a cron job or run some Python code immediately, it will fail because the tox environment is still in the build phase. I want to avoid running anything until the ENTRYPOINT task is complete; can this be achieved in the Dockerfile or in the run command?
Yes, in the docker-compose file you can set it to sleep, or you can define dependencies.
https://docs.docker.com/compose/startup-order/
https://8thlight.com/blog/dariusz-pasciak/2016/10/17/docker-compose-wait-for-dependencies.html
I don't have an elegant solution, but here is what I did.
RUN <your dependencies>
# Then add a second RUN command with a sleep at the beginning:
RUN sleep 400 && gcloud dataproc jobs submit xxxxxx
Each RUN command runs in a separate container layer on a clean slate, hence the sleep && the actual entry-point command go together as one logical command.
But as you can see this is hard-coded; change the sleep duration accordingly.
I think this is an incorrect approach. When a container "starts" we should avoid installing dependencies, libraries, etc. The image build process is the moment to do that: we ensure that an image "AAAA" will always "work" if we install all dependencies and build all code during the image build. When a container runs, it should do only that: just "run".
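Applied to this question, a sketch of that idea would be to build the tox environment during the image build (assuming a Python base image and a tox.ini that defines the docker-server environment; file names are assumptions):
# Dockerfile sketch: create the tox environment at build time
COPY tox.ini ./
RUN pip install tox && tox --notest -e docker-server
COPY . .
ENTRYPOINT ["tox", "-e", "docker-server"]
The --notest flag creates the virtualenv and installs its dependencies without running the commands, so by the time the container starts the environment already exists and the ENTRYPOINT only has to run it.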

Docker: Why does wait-for always time out?

This page discusses how to control startup order using docker-compose. It recommends three tools: wait-for-it, dockerize or wait-for.
I have struggled to get either wait-for-it or wait-for working as expected, but in this question I'll focus on wait-for.
Each time my docker container starts, it quits with "Operation Timed Out".
Here's my very simple Dockerfile as an example:
FROM ubuntu
COPY ./wait-for.sh /
WORKDIR /
RUN chmod +x ./wait-for.sh
CMD sh -c './wait-for.sh www.eficode.com:80 -- echo "Eficode site is up"'
This should copy the script from the current directory to the root, make it executable and set the run command to execute the script and check the status of the eficode website (example taken from the eficode github page).
I've tried supplying the timeout flag, which does adjust the timeout, but doesn't affect the result. I've also tried running this script as part of a docker-compose command (following the example on the docker-compose documentation page linked above) but, again, with the same result.
What am I doing wrong?
You are missing the netcat package and nc isn't available in your example image. Add the following somewhere in your Dockerfile:
RUN apt-get -q update && apt-get -qy install netcat
As Andy mentions, you need nc to be installed. You can:
Manually install the package with his command
Switch to wait-for-it, which uses bash, since your base image is Ubuntu. This script doesn't need nc since bash can hit ports directly.
Switch to Alpine Linux if you don't need bash; it ships with nc. That just means changing the first line to FROM alpine (see the sketch below).
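For the third option, the Dockerfile from the question would only need its base image changed; a sketch:
FROM alpine
COPY ./wait-for.sh /
WORKDIR /
RUN chmod +x ./wait-for.sh
CMD sh -c './wait-for.sh www.eficode.com:80 -- echo "Eficode site is up"'
The chmod step and the CMD stay the same; Alpine's busybox provides both sh and nc, so no extra packages are needed.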
