docker create | Error response from daemon: No command specified - docker

Attached is my Dockerfile. My intention is to use the following command:
docker build -t fbprophet . && \
docker create --name=awslambda fbprophet && \
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip . && \
docker rm awslambda
However, I always receive this error:
Error response from daemon: No command specified
Running the following commands instead does work, but I have to run them in different shells so the container doesn't stop before my export is done.
docker build -t fbprophet . && docker container rm awslambda && docker run -it --name=awslambda fbprophet bash
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip .
Dockerfile:
FROM lambci/lambda:build-python3.7
ENV VIRTUAL_ENV=/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x

Probably the easiest way to get files out of an image you've built is to mount a volume onto a container and make the main container process just be a cp command:
docker run \
--rm \
-v $PWD:/export \
fbprophet \
cp lambdatest.zip /export
(If you've built an application that uses ENTRYPOINT ["python"] or some such, you need to specify --entrypoint /bin/cp before the image name, and then put the arguments after the image name. Using CMD instead avoids this complication.)
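For instance, a sketch of that --entrypoint variant against the same image (assuming cp is at /bin/cp, as in most Linux bases):
docker run \
--rm \
-v $PWD:/export \
--entrypoint /bin/cp \
fbprophet \
lambdatest.zip /export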
Usually a Docker image has a packaged application (or a reasonable base one could build an application on), and running a container actually runs that application. An image is kind of an inconvenient way to just pass around files. You might find it easier and safer to run the same set of commands outside of Docker on your host to create a virtual environment, and you can just directly cp the file out of there when you're done.
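If you do go the host route, a rough sketch of the equivalent steps might look like this (illustrative only; it assumes a Linux host with Python 3.7 so the installed packages match the Lambda runtime):
python3.7 -m venv venv
. venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
cp lambda_function.py venv/lib/python3.7/site-packages/
cd venv/lib/python3.7/site-packages
zip -9qr ../../../../lambdatest.zip *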

Related

Copy files from container to local in Docker

I want to copy a file from a container to my local machine. The file is generated after a Python script executes, but because of the ENTRYPOINT the container exits right after it runs, so I can't use the docker cp command. Any idea how to prevent the container from exiting before I manage to copy the file? Below is my Dockerfile:
FROM python:3.9-alpine3.12
WORKDIR /app
COPY . /app/
RUN pip install --no-cache-dir -r requirements.txt && \
rm -f /var/cache/apk/*
ENTRYPOINT ["python3", "main.py"]
I use this command to run the image:
docker run -d -it --name test [image]
If the output file is stored in its own directory (say /app/output), you can run docker run -d -it -v $PWD/output:/app/output/ --name test [image] and the file will end up in the output directory under the current directory.
If it's not, then run the container with: docker run -d -it --name test [image]
Then copy the file to your own filesystem with docker cp test:/app/example.json ., which puts it in the current directory.
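Note that docker cp also works on a stopped container, so a minimal sketch that doesn't fight the ENTRYPOINT at all would be:
docker run --name test [image]    # main.py runs and the container exits
docker cp test:/app/example.json .    # works even though the container has stopped
docker rm test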
If you don't need a background container at all, you can also stream the file to stdout in one shot. With the ENTRYPOINT above you have to override it and run the script first so the file exists:
docker run --rm --entrypoint sh [image] -c 'python3 main.py >&2 && cat /app/example.json' > out_example.json

docker volume masks parent folder in container?

I'm trying to use a Docker container to build a project that uses rust, and I'm trying to build as my user. I have a Dockerfile that installs rust in $HOME/.cargo, and then I docker run the container, mapping the sources from $HOME/<some/subdirs/to/project> on the host to the same subfolder in the container. The Dockerfile looks like this:
FROM ubuntu:16.04
ARG RUST_VERSION
RUN \
export DEBIAN_FRONTEND=noninteractive && \
apt-get update && \
# install library dependencies
apt-get install [... a bunch of stuff ...] && \
curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain $RUST_VERSION && \
echo 'source $HOME/.cargo/env' >> $HOME/.bashrc && \
echo apt-get DONE
The build container is run something like this:
docker run -i -t -d --net host --privileged \
-v /mnt:/mnt -v /dev:/dev \
--volume /home/stefan/<path/to/project>:/home/stefan/<path/to/project>:rw \
--workdir /home/stefan/<path/to/project> \
-v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro -v /etc/shadow:/etc/shadow:ro \
-u 1000 \
--name <container-name> <image-name>
And then I try to exec into it and run the build script, but it can't find rust or $HOME/.cargo:
docker exec -it <container-name> bash
$ ls ~/.cargo
ls: cannot access '/home/stefan/.cargo': No such file or directory
It looks like the /home/stefan/<path/to/project> volume is masking the contents of /home/stefan in the container. Is this expected? Is there a workaround that lets me map the source code from a folder under $HOME on the host while keeping $HOME from the container?
I'm on Ubuntu 18.04, docker 19.03.12, on x86-64.
The Dockerfile expands that variable inside the build container, where the user is root, so it doesn't refer to your host user.
Try changing $HOME to /root:
echo 'source /root/.cargo/env' >> /root/.bashrc && \
I'll post this as an answer, since I seem to have figured it out.
When the Dockerfile is expanded, $HOME is /root, and the user is root. I couldn't find a way to reliably introduce my user in the build step / Dockerfile. I tried something like:
ARG BUILD_USER
ARG BUILD_GROUP
RUN mkdir /home/$BUILD_USER
ENV HOME=/home/$BUILD_USER
USER $BUILD_USER:$BUILD_GROUP
RUN \
echo "HOME is $HOME" && \
[...]
But didn't get very far, because inside the container, the user doesn't exist:
unable to find user stefan: no matching entries in passwd file
So what I ended up doing was to docker run as my user, and run the rust install from there - that is, from the script that does the actual build.
I also realized why writing to /home/$USER doesn't work: there is no /home/$USER in the container. Mapping /etc/passwd and /etc/group into the container teaches it about the user, but does not create any home directory. I could have mapped $HOME from the host, but then the container would control the rust versions on the host and would not be as self-contained. I also ended up needing to install rust in a non-standard location, since I don't have a writable $HOME in the container: I had to set CARGO_HOME and RUSTUP_HOME to do that.
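For reference, a sketch of what that non-standard install can look like from the build script (the exact paths are illustrative assumptions, not the ones actually used):
# run inside the container as the mapped-in user; keep rust under the mounted project tree
export CARGO_HOME=/home/stefan/<path/to/project>/.cargo
export RUSTUP_HOME=/home/stefan/<path/to/project>/.rustup
curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path --default-toolchain $RUST_VERSION
export PATH="$CARGO_HOME/bin:$PATH"
cargo build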

Docker : starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown

While launching a command on my Docker image (run), I get the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown.
The image is a JMeter image that I have created myself:
FROM hauptmedia/java:oracle-java8
MAINTAINER maisie
ENV JMETER_VERSION 5.2.1
ENV JMETER_HOME /opt/jmeter
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
RUN apt-get clean
RUN apt-get update
RUN apt-get -y install ca-certificates
RUN mkdir -p ${JMETER_HOME}
RUN cd ${JMETER_HOME}
RUN wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz
RUN tar -xvzf apache-jmeter-5.2.1.tgz
RUN rm apache-jmeter-5.2.1.tgz
The command that I am launching is:
#!/bin/bash
export volume_path=$(pwd)
export jmeter_path="/opt/apache-jmeter-5.2.1/bin"
docker run --volume ${volume_path}:${jmeter_path} my/jmeter -n -t ${jmeter_path}/TEST.jmx -l ${jmeter_path}/res.jtl
I really can't find any answer to my problem ...
Thank you in advance for any help.
The general form of the docker run command is
docker run [docker options] <image name> [command]
So you are running an image named my/jmeter, and the command you are having it run is -n -t .... You're getting this error because you've only given a list of options and not an actual command.
The first part of this is to include the actual command in your docker run line:
docker run --rm my/jmeter \
jmeter -n ...
There's also going to be a problem with how you install the software in the Dockerfile. (You do not need a docker run --volume to supply software that's already in the image.) Each RUN command starts in a new shell in a new environment (in a new container even), so saying e.g. RUN cd ... in its own line doesn't do anything. You need to use Dockerfile directives like WORKDIR and ENV to change the environment. The jmeter command isn't in a standard binary directory so you'll also have a little trouble running it. I might change:
# ...
# Run all APT commands in a single command
# (If apt-get update is cached in its own layer, a later install can use stale package lists)
RUN apt-get clean \
&& apt-get update \
&& apt-get -y install ca-certificates
# Download and unpack the JMeter tar file
# This is all in a single RUN command, so
# (1) the `cd` at the start takes (temporary) effect, and
# (2) the tar file isn't committed to an image before you `rm` it
RUN cd /opt \
&& wget ${JMETER_DOWNLOAD_URL} \
&& tar xzf apache-jmeter-${JMETER_VERSION}.tgz \
&& rm apache-jmeter-${JMETER_VERSION}.tgz
# Create a symlink to the jmeter process in a normal bin directory
RUN ln -s /opt/apache-jmeter-${JMETER_VERSION}/bin/jmeter /usr/local/bin
# Indicate the default command to run
CMD jmeter
Finally, there will be questions around where to store data files. It's better to store data outside the application directory; in a Docker context it's common enough to use short (if non-standard) directory paths like /data. Remember that any file path in a docker run command refers to a path in the container, but you need a docker run -v bind-mount option (your original --volume is equivalent) to make it visible on the host. That would give you a final command like:
docker run -v "$PWD:/data" atos/jmeter \
jmeter -n -t /data/TEST.jmx -l /data/res.jtl

run docker container as an arbitrary user passed to it while running the image

I want to run a docker container as an arbitrary user that is passed to the image at run time, for example docker run -u 1000 myimage.
The above is possible. However, I also want to create a home directory for this user 1000 while starting the container (possibly through CMD) and do my container service stuff within that directory.
Is this possible? Some pointers on how to achieve it would be useful.
First save your current user and group in variables:
export uid=$(id -u)
export gid=$(id -g)
Then, to run your image, you have two options:
1) Run the image from the location of the app directory itself:
sudo docker run -d \
--user $uid:$gid \
-v $(pwd):/home/$USER \
--workdir="/home/$USER" \
myimage
2) Create a new directory for the app, e.g. at /home/$USER/app, but then you will have to repeat the Dockerfile's CMD on the command line.
For example if this was your Dockerfile:
FROM node:7
WORKDIR /app
COPY package.json /app
COPY . /app
CMD node bin/www
You would run it like this:
sudo docker run -d \
--user $uid:$gid \
-v $(pwd):/home/$USER \
--workdir="/home/$USER" \
hello-express \
bash -c "cp -rf /app/* /home/$USER/; node bin/www"
Here you pass the user to the container using $uid:$gid and you mount the user's home directory as a volume and then set it as the working directory.
I know it's quite complex, but it's the only way to achieve exactly what you want.
If you want a simpler solution, consider planning it differently. See this example for running a docker container as a non-root user.
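One such alternative, offered here only as a sketch (it is not part of the answer above): pre-create a world-writable home directory in the image and point HOME at it, so whatever UID you pass with -u can write there. Using the example Dockerfile above, that means adding:
RUN mkdir -p /home/app && chmod 0777 /home/app    # deliberately permissive, for illustration
ENV HOME=/home/app
After which docker run -d -u 1000 myimage needs no extra volumes or command-line CMD.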

Why does docker "--filter ancestor=imageName" find the wrong container?

I have a deployment script that builds new images, stops the existing containers with the same image names, then starts new containers from those images.
I stop the container by image name using the answer here: Stopping docker containers by image name - Ubuntu
But this command stops containers that don't have the specified image name. What am I doing wrong?
Here is the dockerfile:
FROM ubuntu:14.04
MAINTAINER j#eka.com
# Settings
ENV NODE_VERSION 5.11.0
ENV NVM_DIR /root/.nvm
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install libs
RUN apt-get update
RUN apt-get install curl -y
RUN curl https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash \
&& chmod +x $NVM_DIR/nvm.sh \
&& source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
RUN apt-get clean
# Install app
RUN mkdir /app
COPY ./app /app
#Run the app
CMD ["node", "/app/src/app.js"]
I build like so:
docker build -t "$serverImageName" .
and start like so:
docker run -d -p "3000:"3000" -e db_name="$db_name" -e db_username="$db_username" -e db_password="$db_password" -e db_host="$db_host" "$serverImageName"
Why not use the container name to differentiate your environments?
docker run -d --rm --name nginx-dev nginx
40ca9a6db09afd78e8e76e690898ed6ba2b656f777b84e7462f4af8cb4a0b17d
docker run -d --rm --name nginx-qa nginx
347b32c85547d845032cbfa67bbba64db8629798d862ed692972f999a5ff1b6b
docker run -d --rm --name nginx nginx
3bd84b6057b8d5480082939215fed304e65eeac474b2ca12acedeca525117c36
Then use docker ps
docker ps -f name=nginx$
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3bd84b6057b8 nginx "nginx -g 'daemon ..." 30 seconds ago Up 28 seconds 80/tcp, 443/tcp nginx
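Stopping one environment then becomes a name filter rather than an ancestor filter, for example (a sketch, using the same filter style as above):
docker stop $(docker ps -q -f name=nginx-dev$)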
According to the docs, --filter ancestor matches any container whose image shares the given image as an ancestor, so it can find the "wrong" containers when the images are related; in fact, if the Dockerfiles are identical apart from the tag, the builds produce the very same image.
So to be sure my images are separate right from the start I added this line to the start of my dockerfile, after the FROM and MAINTAINER commands:
RUN echo DEVTESTLIVE: This line ensures that this container will never be confused as an ancestor of another environment
Then, in my build scripts, after copying the dockerfile to the distribution folder, I replace DEVTESTLIVE with the appropriate environment:
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
This seems to have worked; I now have containers for all three environments running simultaneously and can start and stop them automatically through their image names.
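Put together, one environment's deploy step can look roughly like this (a sketch only; $env and $serverImageName come from the surrounding build script, and the old containers are stopped before the rebuild, while the tag still points at their image):
docker stop $(docker ps -q --filter "ancestor=$serverImageName") 2>/dev/null
sed -i -e "s/DEVTESTLIVE/$env/g" ../dist/server/dockerfile
docker build -f ../dist/server/dockerfile -t "$serverImageName" ../dist/server
docker run -d -p 3000:3000 "$serverImageName"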

Resources