I've installed Docker and Docker Compose v2 on WSL. Running docker --version, docker compose version, and docker-compose --version I get:
Docker version 20.10.12, build 20.10.12-0ubuntu4
Docker Compose version v2.14.2
docker-compose version 1.29.2, build unknown
But when I run docker compose up I get the following error:
docker: 'compose' is not a docker command.
See 'docker --help'
Why am I getting this error?
The tree view from my dir:
.
├── Dockerfile
├── README.md
├── data
├── docker-compose.yml
Try using docker-compose up instead.
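If you do want docker compose (with a space) to work, the Compose v2 binary has to be visible in one of the CLI plugin directories the docker client searches. A minimal check, assuming the standard plugin locations from the Docker docs:
ls ~/.docker/cli-plugins /usr/local/lib/docker/cli-plugins /usr/lib/docker/cli-plugins 2>/dev/null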
My OS is Ubuntu 20.04.3 LTS.
I've installed Docker Compose V2, and I can access it from the command line as my regular user:
$ docker compose version
Docker Compose version v2.2.2
I've also installed compose-switch according to the manual instructions here: https://docs.docker.com/compose/cli-command/#compose-switch and it's working fine:
$ docker-compose version
Docker Compose version v2.2.2
But if I use sudo, neither works:
$ sudo docker compose version
docker: 'compose' is not a docker command.
See 'docker --help'
$ sudo docker-compose version
docker: 'compose' is not a docker command.
See 'docker --help'
docker version is the same with or without sudo:
Version: 20.10.12
API version: 1.41
So, how can I get docker compose working with sudo?
I had installed docker-compose under my user's home directory. I had to move the file docker-compose from ~/.docker/cli-plugins to /usr/local/lib/docker/cli-plugins
$ sudo mkdir -p /usr/local/lib/docker/cli-plugins
$ sudo mv /home/<username>/.docker/cli-plugins/docker-compose /usr/local/lib/docker/cli-plugins/docker-compose
And now everything works as expected.
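An alternative sketch, if you would rather leave the plugin binary where it is: symlink it into the system-wide plugin directory instead of moving it (the <username> placeholder is the same as above):
$ sudo mkdir -p /usr/local/lib/docker/cli-plugins
$ sudo ln -s /home/<username>/.docker/cli-plugins/docker-compose /usr/local/lib/docker/cli-plugins/docker-compose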
The docker command you are running as your local user must be calling a different binary than the one it calls when running as another user (i.e. the root user).
When you invoke a command using sudo, it by default uses the root user's shell environment, which includes the PATH env variable.
I suspect you will see different path output from these two commands (type is a shell builtin, so under sudo it has to be wrapped in a shell):
type docker
sudo sh -c 'type docker'
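You can also compare the two PATH values directly, as a quick sketch:
echo "$PATH"
sudo sh -c 'echo "$PATH"'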
I've found that I cannot list the contents of a nested directory in a Docker container.
File structure:
├── Dockerfile
└── foo
    └── bar
        └── baz.txt
Dockerfile:
FROM debian
COPY foo /app/foo
WORKDIR /app
CMD ["ls", "foo/bar"]
I can run ls foo/bar/ successfully on the host machine:
$ ls foo/bar/
baz.txt
But when I run the container, I get a "Permission denied" error.
$ docker build -t foo .
$ docker run foo
ls: cannot open directory 'foo/bar': Permission denied
I don't understand why permission is denied, because I'm the root user in the container, and the directory seems to have the appropriate permissions:
$ docker run -it foo sh
# whoami
root
# cd foo
# ls -l
total 4
drwxrwxr-x. 2 root root 4096 Aug 3 19:13 bar
Environment:
$ docker --version
Docker version 19.03.8, build afacb8b
Host machine: Fedora 32
The solution from the comments:
The problem was caused by SELinux, which is enabled by default on Fedora.
Edit /etc/sysconfig/docker to remove --selinux-enabled from the OPTIONS variable.
Restart the docker daemon.
$ sudo systemctl restart docker
Run the container. There's no need to rebuild the image.
$ docker run foo
baz.txt
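If you would rather not remove --selinux-enabled for the whole daemon, a per-container alternative is to disable SELinux labeling only for this container (a sketch; --security-opt label=disable is documented in the docker run reference):
$ docker run --security-opt label=disable foo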
I'm trying to mount a volume via docker-compose, but after running docker-compose up the directory is empty.
Dockerfile
FROM alpine:3.8
COPY test.txt ./app/
docker-compose.yml
version: "3.7"
services:
test:
image: myrep/image:latest
volumes:
- "./app/:/app/"
My procedure:
Build docker image on client (docker build .)
Push docker image to my registry (docker tag xxx myrep/image && docker push myrep/image)
On the server I pull the image (docker pull myrep/image)
Run docker-compose up (docker-compose up)
Then when I look into the app folder there is no test.txt file
Any idea what I'm doing wrong?
You copied the file into the image, but when you start the container the bind mount of ./app is mounted over /app and hides it.
If you want the file to be there, don't mount the volume.
You can verify this by starting the service without the volume (comment out the volumes entry) and executing:
docker-compose exec test ls -l /app
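A related sketch to confirm the file really was baked into the image (using the tag from your push step, outside of compose so no volume is mounted):
docker run --rm myrep/image:latest ls -l /app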
Maybe you should try adding ./ before test.txt, as it is not copying the file to the root directory.
Hope it will work for you:
FROM alpine:3.8
COPY ./test.txt ./app/
What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows that a single container with the command "python2" and the name "gracious_snyder" was created, which doesn't match my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost like it's ignoring all the parameters in my docker-compose.yaml file and reverting to the defaults of the base python2.7 image, but I don't know why that would be, because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
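An exec-form equivalent of that CMD, as a sketch (this assumes myscript.py is executable and starts with a shebang line):
CMD ["./myscript.py", "--no-wait", "--traceback"]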
I am new to Docker. I installed Docker as per the instructions provided on the official site.
# build docker images
docker build -t iky_backend:2.0.0 .
docker build -t iky_gateway:2.0.0 frontend/.
Now, when I run these commands in the terminal after installing Docker, I get the error below. I also tried adding sudo, but it did not help.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/esh/Dockerfile: no such file or directory
Your docker commands should execute just fine (they may require sudo if you are unable to connect to the Docker daemon).
docker build requires a Dockerfile to be present in the same directory (you are executing it from your home folder - don't do that), or you need to use -f to specify the Dockerfile path instead of .
Try this:
mkdir build
cd build
create your Dockerfile here.
docker build -t iky_backend:2.0.0 .
docker images
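If you would rather keep the Dockerfile somewhere else, a sketch of the -f variant (the /path/to placeholders are hypothetical; point them at wherever the Dockerfile actually lives):
docker build -t iky_backend:2.0.0 -f /path/to/Dockerfile /path/to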