cannot open RPM, skipping in Dockerfile - docker

I'm trying to create a Dockerfile to build our reusable image. What I have so far is:
FROM crystaltwix/centos-mono
MAINTAINER crystaltwix
ADD ./rpms/MyRpm.rpm ./rpms
RUN yum --nogpgcheck localinstall ./rpms/MyRpm.rpm
I get an error that says
Cannot open: ./rpms/MyRpm.rpm. Skipping.
What I don't understand is why it doesn't work, because if I run the image as a container:
sudo docker run -i -t -v /home/crystaltwix/projects/rpms:/opt/rpms crystaltwix/centos-mono /bin/bash
Then, in the shell of my container, I run the same command:
yum --nogpgcheck localinstall ./rpms/MyRpm.rpm
This works fine. It just doesn't work within my Dockerfile. Am I missing something specific about the way Dockerfile builds images? Thanks in advance.

From https://docs.docker.com/reference/builder/#add:
If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it will be considered a directory and the contents of <src> will be written at <dest>/base(<src>).
ADD ./rpms/MyRpm.rpm ./rpms results in ./rpms being the MyRpm.rpm file. Try ADD ./rpms/MyRpm.rpm ./rpms/ instead.
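For reference, the corrected Dockerfile from the question would then look something like this (untested sketch; the -y flag is an addition so yum doesn't prompt during the non-interactive build):
FROM crystaltwix/centos-mono
MAINTAINER crystaltwix
# the trailing slash makes ./rpms/ a directory, so the RPM keeps its filename
ADD ./rpms/MyRpm.rpm ./rpms/
RUN yum --nogpgcheck -y localinstall ./rpms/MyRpm.rpm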

Related

Docker ROS automatic start of launch file

I developed a few ROS packages and I want to put them in a Docker container, because installing all the ROS packages every time is tedious. I therefore created a Dockerfile that uses a base ROS image, installs all the necessary dependencies, copies my workspace, builds the workspace in the container, and sources everything afterward. You can find the Dockerfile here:
FROM ros:kinetic-ros-base
RUN apt-get update && apt-get install locales
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
RUN apt-get update && apt-get install -y \
&& rm -rf /var/lib/apt/lists/*
COPY . /catkin_ws/src/
WORKDIR /catkin_ws
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; catkin_make'
RUN /bin/bash -c '. /opt/ros/kinetic/setup.bash; source devel/setup.bash'
CMD ["roslaunch", "master_launch sim_perception.launch"]
The problem is: when I run the docker container with the "run" command, Docker doesn't seem to know that I sourced my new ROS workspace, and therefore it cannot automatically launch my launch file. If I run the container with "run -it bash", I can source my workspace again and then roslaunch my .launch file.
So can someone tell me how to write my dockerfile correctly so I launch my .launch file automatically when I run the container? Thanks!
From Docker Docs
Each RUN instruction runs independently and won't affect the next instruction, so by the time the last line runs, none of the environment set up by sourcing ROS has been preserved.
You need to source .bashrc, or whatever environment file you need, within the same shell invocation.
You can wrap everything you want (the source commands and the roslaunch command) inside a sh file, then just run that file at the end.
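For example, a minimal wrapper (the file name start.sh is made up here; the paths and launch file come from the question) might look like:
#!/bin/bash
# source the base ROS environment and the workspace overlay in the same shell
source /opt/ros/kinetic/setup.bash
source /catkin_ws/devel/setup.bash
# exec replaces the shell so signals reach roslaunch directly
exec roslaunch master_launch sim_perception.launch
Then COPY it into the image, chmod +x it, and use it as the CMD or ENTRYPOINT.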
If you review the convention of ros_entrypoint.sh, you can see how best to source the workspace you would like inside the container. We're all so busy learning how to make Docker and ROS do the real things that it's easy to skip over some of the nuance of this interplay. This sucked forever for me; hope this is helpful for you.
I looked forever and found what seemed like only bad advice, and in the absence of an explicit standard or clear guidance I've settled on what seems like a sane approach that also lets you control what launches at runtime with environment variables. I now consider this the right solution for my needs.
In the Dockerfile for the image whose start/launch behavior you want to set, towards the end, add a COPY line to insert your own ros_entrypoint.sh (example included), set it as the ENTRYPOINT, and then add a CMD so something runs by default when the container starts.
Note: you'll (obviously?) need to re-run the docker build process for these changes to take effect.
Dockerfile looks like this:
# all your other Dockerfile instructions ^^
# .....
# towards the end
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
Example ros_entrypoint.sh:
#!/bin/bash
set -e
# setup ros environment
if [ -z "${SETUP}" ]; then
# basic ros environment
source "/opt/ros/$ROS_DISTRO/setup.bash"
else
# from an environment variable; should be an absolute path to the appropriate workspace's setup.bash
source "$SETUP"
fi
exec "$#"
Used this way, the container will automatically source either the basic ROS bits or, if you provide another workspace's setup.bash path in the $SETUP environment variable, that workspace instead.
So a few ways to work with this:
From the command line prior to running docker
export SETUP=/absolute/path/to/the/setup.bash
docker run -it your-docker-image
From the command line (inline)
docker run --env SETUP=/absolute/path/to/the/setup.bash your-docker-image
From docker-compose
service-name:
  network_mode: host
  environment:
    - SETUP=/absolute/path/to/the_workspace/devel/setup.bash # or whatever
  command: roslaunch package_name launchfile_that_needed_to_be_sourced.launch
  # command: /bin/bash # wake up and do something else

Automatic build with Add from URL

I'm trying to do an automated build on hub.docker.com using ADD with files from a URL. I have the following Dockerfile on GitHub, and builds are being triggered:
FROM ubuntu:14.04
MAINTAINER Andy Cobley "andy@example.org"
ENV REFRESHED_AT 2015-29-04
RUN apt-get update
RUN apt-get install -y nginx
RUN mkdir -p /var/www/html
ADD http://example.org:8080/global.conf /etc/nginx/conf.d/
ADD http://example.org:8080/nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
ENTRYPOINT ["/usr/sbin/nginx"]
The files are not being added into the container. I can confirm the files do exist on the server and are accessible. Is there something I'm missing ?
As your first ADD ends with a /, Docker thinks the destination is a directory rather than the file to create. Try ADD http://example.org:8080/global.conf /etc/nginx/conf.d/global.conf instead.
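That is, spell out the destination filename for both remote files:
ADD http://example.org:8080/global.conf /etc/nginx/conf.d/global.conf
ADD http://example.org:8080/nginx.conf /etc/nginx/nginx.conf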
I think I've solved this: when building remotely in this situation, you need to do a docker pull before doing a docker run.

Dockerfile manual install of multiple deb files

Working with Docker, I notice that almost everywhere the "RUN" command starts with an apt-get upgrade && apt-get install etc.
What if you don't have internet access and simply want to do a "dpkg -i ./deb-directory/*.deb" instead?
Well, I tried that and I keep failing. Any advice would be appreciated:
dpkg: error processing archive ./deb-directory/*.deb (--install):
cannot access archive: No such file or directory
Errors were encountered while processing: ./deb-directory/*.deb
INFO[0002] The command [/bin/sh -c dpkg -i ./deb-directory/*.deb] returned a non-zero code: 1
To clarify, yes, the directory "deb-directory" does exist. In fact it is in the same directory as the Dockerfile where I build.
This is perhaps a bug; I'll open a ticket on their GitHub to find out.
Edit: I did it here.
Edit2:
Someone answered a better way of doing this on the github issue.
* is a shell metacharacter. You need to invoke a shell for it to be expanded.
docker run somecontainer sh -c 'dpkg -i /debdir/*.deb'
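The same applies inside a Dockerfile, although the shell form of RUN already runs under /bin/sh -c, so the glob expands as long as the .deb files have actually been copied into the image first. A minimal sketch (directory name taken from the question):
COPY deb-directory /deb-directory
RUN dpkg -i /deb-directory/*.deb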
!!! Forget the following, but I leave it here to keep track of my reflection steps !!!
The problem comes from the * statement, which doesn't seem to work well with the docker run dpkg command. I tried your command inside a container (using an interactive shell) and it worked well. It looks like dpkg is trying to install the so-called ./deb-directory/*.deb file, which doesn't exist, instead of installing all the .deb files contained there.
I just implemented a workaround: copy a .sh script into your container, chmod +x it, and then use it as your command.
(FYI, prefer using COPY instead of ADD when the file isn't remotely copied. Check the best practices for writing Dockerfiles for more info.)
This is my Dockerfile for example purpose:
FROM debian:latest
MAINTAINER Vrakfall <jeremy@artphotolaurent.be>
COPY install.sh /
#debdir is a directory
COPY debdir /debdir
RUN chmod +x /install.sh
CMD ["/install.sh"]
The install.sh (copied at the root directory) simply contains:
#!/bin/bash
dpkg -i /debdir/*.deb
And the following
docker build -t debiantest .
docker run debiantest
works well and installs all the packages contained in the /debdir directory.

docker: executable file not found in $PATH

I have a docker image which installs grunt, but when I try to run it, I get an error:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
If I run bash in interactive mode, grunt is available.
What am I doing wrong?
Here is my Dockerfile:
# https://registry.hub.docker.com/u/dockerfile/nodejs/ (builds on ubuntu:14.04)
FROM dockerfile/nodejs
MAINTAINER My Name, me@email.com
ENV HOME /home/web
WORKDIR /home/web/site
RUN useradd web -d /home/web -s /bin/bash -m
RUN npm install -g grunt-cli
RUN npm install -g bower
RUN chown -R web:web /home/web
USER web
RUN git clone https://github.com/repo/site /home/web/site
RUN npm install
RUN bower install --config.interactive=false --allow-root
ENV NODE_ENV development
# Port 9000 for server
# Port 35729 for livereload
EXPOSE 9000 35729
CMD ["grunt"]
This was the first result on Google when I pasted my error message, and it's because my arguments were out of order.
The image name has to come after all of the option arguments.
Bad:
docker run <image_name> -v $(pwd):/src -it
Good:
docker run -v $(pwd):/src -it <image_name>
When you use the exec format for a command (e.g., CMD ["grunt"], a JSON array with double quotes), it will be executed without a shell. This means that most environment variables will not be present.
If you specify your command as a regular string (e.g. CMD grunt) then the string after CMD will be executed with /bin/sh -c.
More info on this is available in the CMD section of the Dockerfile reference.
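For example, a minimal sketch contrasting the two forms:
# exec form: no shell is involved, so no shell variable expansion happens
CMD ["grunt", "serve"]
# shell form: runs as /bin/sh -c 'grunt serve', so shell features apply
CMD grunt serve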
I found the same problem. I did the following:
docker run -ti devops -v /tmp:/tmp /bin/bash
When I change it to
docker run -ti -v /tmp:/tmp devops /bin/bash
it works fine.
For some reason, I get that error unless I add the "bash" clarifier. Even adding "#!/bin/bash" to the top of my entrypoint file didn't help.
ENTRYPOINT [ "bash", "entrypoint.sh" ]
There are several possible reasons for an error like this.
In my case, it was due to the executable file (docker-entrypoint.sh from the Ghost blog Dockerfile) lacking the executable file mode after I'd downloaded it.
Solution: chmod +x docker-entrypoint.sh
I had the same problem. After lots of googling, I couldn't find out how to fix it.
Suddenly I noticed my stupid mistake :)
As mentioned in the docs, the last part of docker run is the command you want to run and its arguments after loading up the container.
NOT THE CONTAINER NAME !!!
That was my embarrassing mistake.
A Docker container might be built without a shell (e.g. https://github.com/fluent/fluent-bit-docker-image/issues/19).
In this case, you can copy-in a statically compiled shell and execute it, e.g.
docker create --name temp-busybox busybox:1.31.0
docker cp temp-busybox:/bin/busybox busybox
docker cp busybox mycontainerid:/busybox
docker exec -it mycontainerid /busybox sh
In the error message shown:
Error response from daemon: Cannot start container foo_1: \
exec: "grunt serve": executable file not found in $PATH
It is complaining that it cannot find the executable grunt serve, not that it could not find the executable grunt with the argument serve. The most likely explanation for that specific error is running the command with the json syntax:
[ "grunt serve" ]
in something like your compose file. That's invalid since the json syntax requires you to split up each parameter that would normally be split by the shell on each space for you. E.g.:
[ "grunt", "serve" ]
The other possible way you can get both of those into a single parameter is if you were to quote them into a single arg in your docker run command, e.g.
docker run your_image_name "grunt serve"
and in that case, you need to remove the quotes so it gets passed as separate args to the run command:
docker run your_image_name grunt serve
For others seeing this, the executable file not found means that Linux does not see the binary you are trying to run inside your container with the default $PATH value. That could have lots of possible causes; here are a few:
Did you remember to include the binary inside your image? If you run a multi-stage image, make sure that binary install is run in the final stage. Run your image with an interactive shell and verify it exists:
docker run -it --rm your_image_name /bin/sh
Your path when shelling into the container may be modified for the interactive shell, particularly if you use bash, so you may need to specify the full path to the binary inside the container, or you may need to update the path in your Dockerfile with:
ENV PATH=$PATH:/custom/dir/bin
The binary may not have execute bits set on it, so you may need to make it executable. Do that with chmod:
RUN chmod 755 /custom/dir/bin/executable
The binary may include dynamically linked libraries that do not exist inside the image. You can use ldd to see the list of dynamically linked libraries. A common reason for this is compiling with glibc (most Linux environments) and running with musl (provided by Alpine):
ldd /path/to/executable
If you run the image with a volume, that volume can overlay the directory where the executable exists in your image. Volumes do not merge with the image, they get mounted in the filesystem tree same as any other Linux filesystem mount. That means files from the parent filesystem at the mount point are no longer visible. (Note that named volumes are initialized by docker from the image content, but this only happens when the named volume is empty.) So the fix is to not mount volumes on top of paths where you have executables you want to run from the image.
If you run a binary for a different platform, and haven't configured binfmt_misc with the --fix-binary option, qemu will be looking for the interpreter inside the container filesystem namespace instead of the host filesystem. See this Ubuntu bug report for more details on this issue.
If the error is from a shell script, the issue is often with the first line of that script (e.g. the #!/bin/bash). Either the command doesn't exist inside the image for a reason above, or the file is not saved as ASCII or UTF-8 with Linux linefeeds. You can attempt dos2unix to fix the linefeeds, or check your git and editor settings.
In my case, I ordered the params wrong: move all switches before the image name.
I got this error message when I was building an Alpine-based image:
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "bash": executable file not found in $PATH: unknown
In my docker-compose file, I had a command directive that executed the command using bash, and bash does not come with the Alpine base image.
command: bash -c "python manage.py runserver 0.0.0.0:8000"
Then I realized it and executed the command using sh (shell) instead.
It worked for me.
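That is, something like:
command: sh -c "python manage.py runserver 0.0.0.0:8000"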
The problem is glibc, which is not part of the Alpine base image.
After adding it, things worked for me :)
Here are the steps to get the glibc
apk --no-cache add ca-certificates wget
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk
apk add glibc-2.28-r0.apk
Referring to the title.
My mistake was passing variables via --env-file during docker run. Among other things, the file contained a PATH extension: PATH=$PATH:something, which caused the PATH variable to literally contain PATH=$PATH:something (variable resolution had not been performed) instead of PATH=/usr/bin...:something.
I couldn't make the resolution work through --env-file, so the only way I see this working is by using ENV in the Dockerfile.
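For illustration (values made up): with an env file containing
PATH=$PATH:/opt/tools/bin
running
docker run --env-file env.list some-image /bin/sh -c 'echo "$PATH"'
prints the literal string $PATH:/opt/tools/bin, because --env-file passes values verbatim, whereas in a Dockerfile
ENV PATH=$PATH:/opt/tools/bin
is expanded at build time against the image's existing PATH.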
I ran into this issue using docker-compose. None of the solutions here or on this related question resolved my issue. Ultimately what worked for me was clearing all cached Docker artifacts with docker system prune -a and restarting Docker.
To make it work, add soft links into /usr/bin:
ln -s $(which node) /usr/bin/node
ln -s $(which npm) /usr/bin/npm

How can I make a host directory mount with the container directory's contents?

What I am trying to do is set up a docker container for ghost where I can easily modify the theme and other content. So I am making /opt/ghost/content a volume and mounting that on the host.
It looks like I will have to manually copy the theme into the host directory because when I mount it, it is an empty directory. So my content directory is totally empty. I am pretty sure I am doing something wrong.
I have tried a few different variations, including using ADD with the default themes folder and putting VOLUME at the end of the Dockerfile. I keep ending up with an empty content directory.
Does anyone have a Dockerfile doing something similar that is already working that I can look at?
Or maybe I can use the docker cp command somehow to populate the volume?
I may be missing something obvious or have made a silly mistake in my attempts to achieve this. But the basic thing is I want to be able to upload a new set of files into the ghost themes directory using a host-mounted volume and also have the casper theme in there by default.
This is what I have in my Dockerfile right now:
FROM ubuntu:12.04
MAINTAINER Jason Livesay "ithkuil@gmail.com"
RUN apt-get install -y python-software-properties
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get -qq update
RUN apt-get install -y sudo curl unzip nodejs=0.10.20-1chl1~precise1
RUN curl -L https://en.ghost.org/zip/ghost-0.3.2.zip > /tmp/ghost.zip
RUN useradd ghost
RUN mkdir -p /opt/ghost
WORKDIR /opt/ghost
RUN unzip /tmp/ghost.zip
RUN npm install --production
# Volumes
RUN mkdir /data
ADD run /usr/local/bin/run
ADD config.js /opt/ghost/config.js
ADD content /opt/ghost/content/
RUN chown -R ghost:ghost /opt/ghost
ENV NODE_ENV production
ENV GHOST_URL http://my-ghost-blog.com
EXPOSE 2368
CMD ["/usr/local/bin/run"]
VOLUME ["/data", "/opt/ghost/content"]
As far as I know, empty host-mounted (bind) volumes still will not receive the contents of directories set up during the build, BUT data containers referenced with --volumes-from WILL.
So now I think the answer is, rather than writing code to work around non-initialized host-mounted volumes, forget host-mounted volumes and instead use data containers.
Data containers use the same image as the one you are trying to persist data for (so they have the same directories etc.).
docker run -d --name myapp_data mystuff/myapp echo Data container for myapp
Note that it will run and then exit, so your data containers for volumes won't stay running. If you want to keep them running you can use something like sleep infinity instead of echo, although this will obviously take more resources and isn't necessary or useful unless you have some specific reason -- like assuming that all of your relevant containers are still running.
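That variant would look like this (same image as above; GNU sleep understands the infinity argument):
docker run -d --name myapp_data mystuff/myapp sleep infinity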
You then use --volumes-from to use the directories from the data container:
docker run -d --name myapp --volumes-from myapp_data mystuff/myapp
https://docs.docker.com/userguide/dockervolumes/
You need to place the VOLUME directive before actually adding content to it.
My answer is completely wrong! Look here: it seems there is actually a bug. If the VOLUME command happens after the directory already exists in the container, then changes are not persisted.
The Dockerfile should always end with a CMD or an ENTRYPOINT.
UPDATE
My solution would be to ADD files in the container home directory, then use a shell script as an entrypoint, in which I'll copy the files into the shared volume and do all the other tasks.
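A rough sketch of that idea (all names hypothetical, loosely based on the Ghost layout above):
#!/bin/bash
# entrypoint: seed the (possibly empty) mounted volume from defaults baked
# into the image, without overwriting files that already exist (-n), then start
cp -rn /opt/ghost/content-default/. /opt/ghost/content/
exec /usr/local/bin/run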
I've been looking into the same thing. The problem I encountered was that I was using a relative local mount path, something like:
docker run -i -t -v ../data:/opt/data image
Switching to an absolute local path fixed this up for me:
docker run -i -t -v /path/to/my/data:/opt/data image
Can you confirm whether you were doing a relative path, and whether this helps?
Docker v1.8.1 preserves data in a volume if you mount it with the run command. From the Docker docs:
Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization.
Example: an image defines /var/www/html as a volume and populates it with the data of a web application. Your Docker host provides a mount directory /my/host/dir. You start the image with
docker run -v /my/host/dir:/var/www/html image
and you will get all the data from /var/www/html in the host's /my/host/dir.
This data will persist even if you delete the container or the image.
