What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows a single container with the command "python2" and the name "gracious_snyder" was created, neither of which matches my image. It also shows the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost as if it's ignoring all the parameters in my docker-compose.yaml file and reverting to the defaults of the base python:2.7 image, but I don't know why that would be, since I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
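If you prefer the exec form, which avoids the wrapping shell, a minimal sketch (assuming myscript.py is executable and has a shebang) would be:
CMD ["./myscript.py", "--no-wait", "--traceback"]
Alternatively, copy docker-compose.yaml to the production server and start the service with docker-compose pull followed by docker-compose up, so the command: line from the compose file is applied there as well.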
Related
I am using Docker and have a docker-compose.yml and a Dockerfile. In my Dockerfile, I create a folder. When I build the image, I can see that the folder is created, but when I run the container, I can see all the files except the folder that was created during the build.
Both of these files are in the same location.
Here is docker-compose
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: tail -f /dev/null # command to keep the container running
Here is my Dockerfile
FROM alpine
WORKDIR /app
COPY . /app
RUN mkdir "test"
RUN ls
To build the image I use the command: docker-compose build --progress=plain --no-cache. The RUN ls step in the Dockerfile shows three entries: Dockerfile, docker-compose.yml, and the test directory created in the Dockerfile.
When the container is running and I enter it to check which files are there, the test directory is missing.
I probably have the volumes mounted incorrectly; when I remove them and then enter the container, I do see the test folder. However, I want the files in the container and on the host to stay in sync.
This is a simple example; I run into the same thing when installing dependencies in Node.js. I think the question written this way will be more helpful to others.
When you bind-mount the host directory over /app in docker-compose, the mount hides whatever the image put at that path, so the test folder created at build time is covered by the mounted host folder.
This may work for you (creating the folder from the docker-compose file), though I'm not sure it fits your use case:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: sh -c "mkdir -p /app/test && tail -f /dev/null" # wrapped in sh -c so a shell interprets the &&
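Another common pattern, if a directory that exists in the image needs to survive the bind mount, is to declare a nested anonymous volume for that path; Docker initializes it from the image contents, which is how node_modules is often preserved. A sketch based on the compose file above (note that an anonymous volume is not synced back to the host):
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app            # host files stay in sync with the container
      - /app/node_modules  # anonymous volume keeps the image's node_modules visible
    command: tail -f /dev/null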
Based on your comment, below is an example of how I would use a Dockerfile to build node packages and copy node_modules back to the host:
Dockerfile
FROM node:latest
WORKDIR /app
COPY . /app
RUN npm install
Shell script:
#!/bin/bash
docker build -t my-image .
container_id=$(docker run -d my-image)
docker cp $container_id:/app/node_modules .
docker stop $container_id
docker rm $container_id
Or, a simpler way I tend to use is to run the Docker image and open a shell inside it:
docker run --rm -it -p 80:80 -v $(pwd):/home/app node:14.19-alpine
Open a shell in the running container (for example with docker exec), perform the npm commands, then exit.
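A concrete variant, appending sh so the container drops straight into a shell instead of the Node REPL, and adding -w so it starts in the mounted directory (both flags beyond the original command are my additions), might look like:
docker run --rm -it -v $(pwd):/home/app -w /home/app node:14.19-alpine sh
# inside the container:
npm install   # or whatever npm/yarn commands you need
exit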
docker run -i -t testing bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown.
I created the image in Docker Hub; it is a private image.
FROM scratch
# Set the working directory to /app
WORKDIR Desktop
ADD . /Dockerfile
RUN ./Dockerfile
EXPOSE 8085
ENV NAME testing
This is in my Dockerfile
I tried to run it; when I run docker images I can see the image details.
I think you need to log in from the command prompt using the command below:
docker login -u username -p password url
Apart from the login (which should not cause this, since you built the image on your local system, so it already exists locally, and Docker only pulls an image if it is not present locally), the real reason is that you are building an image from scratch, and the scratch image contains no binaries at all, not even bash or sh.
Second mistake:
RUN ./Dockerfile
Your Dockerfile is a text file, not an executable, yet here you are trying to execute it with the RUN directive.
While scratch appears in Docker's repository on the hub, you can't pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile. For example, to create a minimal container using scratch:
FROM scratch
COPY hello /
CMD ["/hello"]
Here, hello can be any statically linked executable, for example one compiled from C++.
Docker scratch image
But to just say "hello" in Docker, I would suggest using BusyBox or Alpine as the base image; both ship with a shell and are under 5 MB.
FROM busybox
CMD ["echo","hello Docker!"]
now build and run
docker build -t hello-docker .
docker run --rm -it hello-docker
Below is my Dockerfile:
FROM node:10.15.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./build/release /usr/src/app/
RUN yarn
EXPOSE 3000
CMD [ "node", "server.js" ]
First I ran
docker build -t app .
and then
docker run -t -p 3000:3000 app
Everything works fine via localhost:3000 in my computer.
Then I try to export this image by
docker export 68719e2bb0cd > app.tar
and import again by
cat app.tar | docker import - app2
then run
docker run -t -d -p 2000:3000 app2
and the error came out
docker: Error response from daemon: No command specified.
Why this happened?
You're using the wrong commands: docker export and docker import only transfer the filesystem part of an image and not other data like environment variables or the default command. There's not really a good typical use case for these commands.
The standard way to do this is to set up a Docker registry or use a public registry server like Docker Hub, AWS ECR, GCR, ... Once you have this set up you can docker push an image to the registry from the system it was built on, and then docker pull it on the system you want to run it on (or directly docker run it, which will automatically pull the image if not present).
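As a sketch of that workflow (the registry host, repository path, and tag below are placeholders):
docker tag app registry.example.com/myteam/app:1.0
docker push registry.example.com/myteam/app:1.0
# on the target system:
docker pull registry.example.com/myteam/app:1.0
docker run -t -d -p 2000:3000 registry.example.com/myteam/app:1.0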
If you really can't set up a registry then the commands you actually want are docker save and docker load, which save complete images with all of their metadata. I've only ever wanted these in environments where I can't connect the systems I want to run images on to the registry server; otherwise a registry is almost always better. (Cluster environments like Docker Swarm and Kubernetes all but require a registry as well.)
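With save/load, the equivalent transfer would look roughly like this, using the app image name from the question:
docker save app > app.tar
# copy app.tar to the other machine, then:
docker load < app.tar
docker run -t -d -p 2000:3000 app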
Just pass the command to run, because docker export drops all of the image's metadata, so the default command is no longer available after importing it somewhere else.
The correct command would be something like:
docker run -t -d -p 2000:3000 app2 /path/to/something.sh
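In this particular case the lost default command was node server.js (the CMD from the original Dockerfile), and since the WORKDIR metadata is lost as well, an absolute path is the safest bet:
docker run -t -d -p 2000:3000 app2 node /usr/src/app/server.js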
I have some basic docker commands which I run in my terminal. Now what I want is to put all of those basic docker commands into one Dockerfile and then build that Dockerfile.
For eg.
Consider two docker files
File - Docker1, Docker2
Docker1 contains a list of commands to run.
Inside Docker2 I want to build Docker1 and run it as well.
Docker2 (consider the scenario with demo code):
FROM ubuntu:16.04
MAINTAINER abc@gmail.com
WORKDIR /home/docker_test/
RUN docker build -t Docker1 .
RUN docker run -it Docker1
I want to do something like this, but it throws: docker: Error response from daemon: OCI runtime create failed: container_linux.go ...
How can I do this? Where am I going wrong?
P.S - I'm new to Docker
Your example is mixing two steps, image creation and running an image, which can't be combined that way in a Dockerfile.
Image creation
A Dockerfile is used to create an image. Let's take this Alpine 3.8 Dockerfile as a minimal example:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
It's a base image; it's not based on another image, it starts FROM scratch.
Then a tar file is copied and unpacked (see ADD) and the shell is set as the starting command (see CMD). You can build this with:
docker build -t test_image .
Issue this from the folder where the Dockerfile is. You will also need rootfs.tar.xz in that folder; copy it from the Alpine link above.
Running a container
From that test_image you can now spawn a container with
docker run -it test_image
It will start up and give you the shell inside the container.
Docker Compose
Usually there is no need to build your images over and over again before spawning a new container. But if you really need to, you can do it with docker-compose. Docker Compose is intended to define and run a service stack consisting of several containers. The stack is defined in a docker-compose.yml file.
version: '3'
services:
  alpine_test:
    build: .
build: . takes care of building the image again before starting up, but usually it is sufficient to have just image: <image_name> and use an already existing image.
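For the usual case where the image already exists, a sketch of the same stack pointing at the prebuilt test_image from above (rather than rebuilding it each time) would be:
version: '3'
services:
  alpine_test:
    image: test_image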
I am new to docker, I installed docker as per the instructions provided in the official site.
# build docker images
docker build -t iky_backend:2.0.0 .
docker build -t iky_gateway:2.0.0 frontend/.
Now, when I run these commands in the terminal after installing Docker, I get the error below. I also tried adding sudo, but it made no difference.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/esh/Dockerfile: no such file or directory
Your docker images should build just fine (you may need sudo if you are unable to connect to the Docker daemon).
docker build requires a Dockerfile to be present in the directory you pass as the build context (you are executing from your home folder, which has no Dockerfile, so don't do that), or you need to use -f to specify the Dockerfile path explicitly.
Try this:
mkdir build
cd build
create your Dockerfile here.
docker build -t iky_backend:2.0.0 .
docker images
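Alternatively, if you keep the Dockerfile somewhere else, the -f flag lets you point at it explicitly (the path below is only an example):
docker build -t iky_backend:2.0.0 -f ./docker/Dockerfile .
The same applies to the frontend build, which expects frontend/Dockerfile to exist when you pass frontend/. as the context.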