Validating: problems found while running docker build - docker

When I try to build a Docker image using docker build -t audio:1.0.1 ., it produces an image (with an IMAGE ID, but not the name I intended) that automatically runs and stops (but does not get removed) immediately after the build process finishes, with the following last lines of output:
The image shows up without a TAG or a REPOSITORY when I execute docker images:
How do I troubleshoot this to build a "normal" image?
My Docker version is 18.09.1, and I am using it on macOS Mojave Version 10.14.1
Following is the content of my Dockerfile:
FROM ubuntu:latest
# Run a system update to get it up to speed
# Then install python3 and pip3 as well as redis-server
RUN apt-get update && apt-get install -y python3 python3-pip \
&& pip3 install --trusted-host pypi.python.org jupyter \
&& jupyter nbextension enable --sys-prefix widgetsnbextension
# Create a new system user
RUN useradd -ms /bin/bash audio
# Change to this new user
USER audio
# Set the container working directory to the user home folder
# WORKDIR /home/jupyter
WORKDIR /home/audio
EXPOSE 8890
# Start the jupyter notebook
ENTRYPOINT ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8890"]

You have the error right there in the screenshot: useradd failed to create the group because it already exists, so the docker build was aborted. Note that the audio group is a system group, so you may not want to reuse it.
So either create a user with a different name, or pass -g audio to the useradd command so it uses the existing group.
If you need to make the user creation conditional, you can use the getent command to check whether the user or group already exists, for example:
# create the user only if it doesn't already exist
RUN getent passwd audio || useradd -ms /bin/bash audio
# create the user, reusing the existing audio group when it is present
RUN if getent group audio; then useradd -ms /bin/bash -g audio audio; else useradd -ms /bin/bash audio; fi
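The RUN lines above branch purely on getent's exit status. A quick shell check illustrates the behavior (assuming a typical Linux environment where the root group exists and getent is available; the group name no_such_group_xyz is a made-up placeholder):

```shell
# getent exits 0 when the database entry exists and non-zero when it doesn't
if getent group root >/dev/null; then
    echo "group root exists"
fi
getent group no_such_group_xyz >/dev/null || echo "no such group"
```

This is why the plain `[ ! $(getent ...) ] && useradd ...` form is fragile: when the entry already exists, the `&&` chain exits non-zero and aborts the build, whereas the `||` / if-else forms above always succeed.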

Related

Docker build with same Dockerfile produces different results on Windows vs. Linux

I have a simple Dockerfile based on eclipse-temurin:11.0.15_10-jre-focal which is just creating a user and copying a jar and some config files into the user's home directory:
FROM eclipse-temurin:11.0.15_10-jre-focal
RUN apt-get update -y && apt-get install -y vim-tiny iputils-ping && rm -rf /var/lib/apt/lists/*
ARG APP_USR=bulkload
RUN useradd --user-group --create-home --base-dir /opt --shell /bin/bash $APP_USR
USER $APP_USR
COPY --chown=$APP_USR:$APP_USR target/${project.artifactId}-${project.version}-all.jar src/test/resources/bulk-load-config.json /opt/$APP_USR/
COPY --chown=$APP_USR:$APP_USR src/test/resources/*.properties /opt/$APP_USR/config/
COPY --chown=$APP_USR:$APP_USR content /opt/$APP_USR/content/
CMD ["bash"]
When I build it on Windows ("DockerVersion": "20.10.14"), everything is as expected. When I build it using Azure DevOps pipeline ("DockerVersion": "20.10.11" on Linux), there are anomalies:
User's home directory is owned by root
All the files and directories copied via COPY command are also owned by root (in spite of --chown switch)
I don't understand this behavior. The useradd command is executed inside the container, so it shouldn't matter whether the build is done on Windows or Linux. Furthermore, I would assume that the COPY commands should fail if the --chown option could not be applied, but they didn't. I suppose I must be doing something wrong, but what?

Why does simple Dockerfile give "permission denied"?

I am learning to use Docker with ROS, and I am surprised by this error message:
FROM ros:kinetic-robot-xenial
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN apt-get update
Gives this error message
Step 7/7 : RUN apt-get update
---> Running in 95c40d1faadc
Reading package lists...
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
The command '/bin/sh -c apt-get update' returned a non-zero code: 100
apt-get generally needs to run as root, but once you've run a USER command, commands don't run as root any more.
You'll frequently run commands like this at the start of the Dockerfile: you want to take advantage of Docker layer caching if you can, and you'll usually be installing dependencies the rest of the Dockerfile needs. Also for layer-caching reasons, it's important to run apt-get update and other installation steps in a single step. So your Dockerfile would typically look like
FROM ros:kinetic-robot-xenial
# Still root
RUN apt-get update \
&& apt-get install ...
# Copy in application (still as root, won't be writable by other users)
COPY ...
CMD ["..."]
# Now as the last step create a user and default to running as it
RUN adduser ros
USER ros
If you need to, you can explicitly USER root to switch back to root for subsequent commands, but it's usually easier to read and maintain Dockerfiles with less user switching.
Also note that neither sudo nor user passwords are really useful in Docker. It's hard to run sudo in a script just in general and a lot of Docker things happen in scripts. Containers also almost never run things like getty or sshd that could potentially accept user passwords, and they're trivial to read back from docker history, so there's no point in setting one. Conversely, if you're in a position to get a shell in a container, you can always pass -u root to the docker run or docker exec command to get a root shell.
Switch to the root user with:
USER root
and then every command should work.
Try putting this line at the end of your Dockerfile:
USER $USERNAME
Once this line appears in the Dockerfile, subsequent commands run with that user's permissions, and in this case the user does not have to install anything. By default you are root.
You add the user ros to the group sudo, but you try to run apt-get update without using sudo. Therefore you run the command unprivileged and you get the permission denied error.
Use sudo to run the command:
FROM ros:kinetic-robot-xenial
RUN whoami
RUN apt-get update
# create non-root user
RUN apt-get install sudo
RUN echo "ros ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
USER $USERNAME
RUN whoami
RUN sudo apt-get update
All in all, that does not make much sense, though. It is fine to prepare a Docker image (e.g. install software) as the root user. If you are concerned about security (which is a good thing), leave out the sudo setup and just make sure that the process(es) that run when the image is executed (i.e. when the container is created) run as your unprivileged user.
Also consider multi stage builds if you want to separate the preparation of the image from the actual runnable thing:
https://docs.docker.com/develop/develop-images/multistage-build/
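A multi-stage version of that idea might be sketched roughly like this (the build steps, paths, and start command are hypothetical placeholders, not tested): everything privileged happens in the first stage as root, and the final stage only copies the results and drops to an unprivileged user.

```dockerfile
# build stage: run everything as root, install whatever the build needs
FROM ros:kinetic-robot-xenial AS build
RUN apt-get update && apt-get install -y build-essential
COPY . /src
RUN make -C /src install DESTDIR=/opt/app

# runtime stage: copy only the built artifacts, then drop privileges
FROM ros:kinetic-robot-xenial
COPY --from=build /opt/app /opt/app
RUN adduser --disabled-password --gecos "" ros
USER ros
CMD ["/opt/app/bin/start"]
```

The runtime stage never needs sudo at all: anything requiring root happens before the USER line or in the build stage.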

Changing RUN to CMD stops the container from working

I am trying to add Glide to my Golang project but I'm not getting my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per #craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between: CMD, RUN and entrypoint: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work, though I didn't test this part (note that each argument must be its own array element, and the exec form does not expand variables, so the command is wrapped in an explicit shell):
CMD ["sh", "-c", "$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go"]
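The pitfall here is worth spelling out: the exec form of CMD runs the program directly with no shell, so environment variables are never expanded, while the shell form goes through /bin/sh -c. A sketch of the contrast (paths as in the answer above; only the last CMD in a Dockerfile takes effect, the pair is shown side by side purely for illustration):

```dockerfile
# exec form: no shell is involved, so "$GOPATH" is passed literally and never expanded
CMD ["$GOPATH/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]
# shell form: /bin/sh -c expands $GOPATH when the container starts
CMD $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
```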

Docker-compose cannot find Java

I am trying to use a Python wrapper for a Java library called Tabula. I need both Python and Java images within my Docker container. I am using the openjdk:8 and python:3.5.3 images. I am trying to build the file using Docker-compose, but it returns the following message:
/bin/sh: 1: java: not found
when it reaches the line RUN java -version within the Dockerfile. The line RUN find / -name "java" also doesn't return anything, so I can't even find where Java is being installed in the Docker environment.
Here is my Dockerfile:
FROM python:3.5.3
FROM openjdk:8
FROM tailordev/pandas
RUN apt-get update && apt-get install -y \
python3-pip
# Create code directory
ENV APP_HOME /usr/src/app
RUN mkdir -p $APP_HOME/temp
WORKDIR /$APP_HOME
# Install app dependencies
ADD requirements.txt $APP_HOME
RUN pip3 install -r requirements.txt
# Copy source code
COPY *.py $APP_HOME/
RUN find / -name "java"
RUN java -version
ENTRYPOINT [ "python3", "runner.py" ]
How do I install Java within the Docker container so that the Python wrapper class can invoke Java methods?
This Dockerfile cannot work, because the multiple FROM statements at the beginning don't mean what you think they mean. They don't mean that all the contents of the images referenced in the FROM statements will somehow end up in the image you're building; multiple FROM statements have actually meant two different things over the history of Docker:
In newer versions of Docker, multi-stage builds, which are a very different thing from what you're trying to achieve (but very interesting nonetheless).
In earlier versions of Docker, the ability to simply build multiple images from one Dockerfile.
The behavior you are describing makes me assume you are using such an earlier version. Let me explain what actually happens when you run docker build on this Dockerfile:
FROM python:3.5.3
# Docker: "The user wants me to build an image based on python:3.5.3. No problem!"
# Docker: "Ah, the next FROM statement is coming up, which means the user is done with this image."
FROM openjdk:8
# Docker: "The user wants me to build an image based on openjdk:8. No problem!"
# Docker: "Ah, the next FROM statement is coming up, which means the user is done with this image."
FROM tailordev/pandas
# Docker: "The user wants me to build an image based on tailordev/pandas. No problem!"
# Docker: "A RUN statement is coming up. I'll put it as a layer in the image the user is asking me to build."
RUN apt-get update && apt-get install -y \
    python3-pip
...
# Docker: "EOF reached, nothing more to do!"
As you can see, this is not what you want.
What you should do instead is build a single image where you will first install your runtimes (python, java, ..), and then your application specific dependencies. The last two parts you're already doing, here's how you could go about installing your general dependencies:
# Let's start from the Alpine Java Image
FROM openjdk:8-jre-alpine
# Install Python runtime
RUN apk add --update \
python \
python-dev \
py-pip \
build-base \
&& pip install virtualenv \
&& rm -rf /var/cache/apk/*
# Install your framework dependencies
RUN pip install numpy scipy pandas
... do the rest ...
Note that I haven't tested the above snippet, you may have to adapt a few things.
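If you want the build itself to confirm that both runtimes made it into the image, a small sanity-check layer (my addition, not part of the original answer) could be appended after the install steps, assuming the openjdk:8-jre-alpine base above:

```dockerfile
# fail the build early if either runtime is missing from the image
RUN java -version && python --version && pip --version
```

Because a failing RUN aborts the build, this catches a missing java or python at build time instead of at container startup, which is exactly when the original question's RUN java -version failed.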

How to view GUI apps from inside a docker container

When I try to run a GUI app, xclock for example, I get the error:
Error: Can't open display:
I'm trying to use Docker to run a ROS container, and I need to see the GUI applications that run inside of it.
I did this once just using a Vagrant VM and was able to use X11 to get it done.
So far I've tried putting approaches #1 and #2 into a Dockerfile, based on the info here:
http://wiki.ros.org/docker/Tutorials/GUI
Then I tried copying most of the dockerfile here:
https://hub.docker.com/r/mjenz/ros-indigo-gui/~/dockerfile/
Here's my current docker file:
# Set the base image to use to ros:kinetic
FROM ros:kinetic
# Set the file maintainer (your name - the file's author)
MAINTAINER me
# Set ENV for x11 display
ENV DISPLAY $DISPLAY
ENV QT_X11_NO_MITSHM 1
# Install an x11 app like xclock to test this
RUN apt-get update
RUN apt-get install x11-apps --assume-yes
# Stuff I copied to make a ros user
ARG uid=1000
ARG gid=1000
RUN export uid=${uid} gid=${gid} && \
groupadd -g ${gid} ros && \
useradd -m -u ${uid} -g ros -s /bin/bash ros && \
passwd -d ros && \
usermod -aG sudo ros
USER ros
WORKDIR /home/ros
# Sourcing this before .bashrc runs breaks ROS completions
RUN echo "\nsource /opt/ros/kinetic/setup.bash" >> /home/ros/.bashrc
# Copy entrypoint script into the image, this currently echos hello world
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
My personal preference is to inject the display variable and share the unix socket or X windows with something like:
docker run -it --rm -e DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my-gui-image
Sharing localtime just lets the timezone match up as well; I've been using this for email apps.
The other option is to spin up a VNC server, run your app on that server, and then connect to the container with a VNC client. I'm less a fan of that one, since you end up with two processes running inside the container, which makes signal handling and logging a challenge. It does have the advantage that the app is better isolated, so if it is hacked, it doesn't have access to your X display.
