docker-compose fails to run on remote host from local - docker

From my local machine I would like to run docker-compose against my remote machine. I have found two ways that should accomplish this, but I am running into errors with both.
First, I am running the following versions:
me:api$ docker-compose -v
docker-compose version 1.29.2, build unknown
me:api$ docker -v
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2
The first way, and the way I would prefer this to work, is to use the --context flag. Based on this blog post, this should be possible.
I created the context like so:
docker context create prod --docker "host=ssh://user@host.com"
I can then run the following and get a listing of the running containers:
docker --context prod ps
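For what it's worth, the standard context subcommands can confirm that the context exists and which endpoint it points at:
docker context ls
docker context inspect prod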
However, running through docker-compose with the same context fails:
docker-compose --context prod -f docker-compose.yml -f docker-compose.prod.yml up -d
ERROR: Context 'prod' not found
The other option was to use the -H flag, based on this SO answer, to set the host I want to execute the commands on. (Yes, I can SSH into the machine with the user I am using.)
docker-compose -H ssh://user@host -f docker-compose.yml -f docker-compose.prod.yml up -d
/bin/sh: 1: ssh: Permission denied

Well, it does not look great. I have not been able to get either method to work with docker-compose. However, there is a new (at the time of writing) docker compose replacement from Docker; the instructions are here. I was hoping that the --context flag would work, but it is not found in the new version either, which leads me to believe it did not work before. The -H flag led to the same Permission denied error as before.
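One thing that may be worth checking: since the replacement ships as a plugin of the docker CLI rather than a standalone binary, the context flag might belong on docker itself instead of the compose subcommand, along these lines:
docker --context prod compose -f docker-compose.yml -f docker-compose.prod.yml up -d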
The only way I was able to get this to work was by setting the DOCKER_HOST environment variable.
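For completeness, a minimal sketch of that workaround, reusing the same user and host from above:
# Point the Docker CLI and docker-compose at the remote daemon over SSH
export DOCKER_HOST=ssh://user@host.com
# Both tools now talk to the remote engine
docker ps
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# Unset the variable to go back to the local daemon
unset DOCKER_HOST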

Related

Why does docker-compose up --scale set instances to 0?

I am using a public docker-compose plugin to run a docker-compose job on a service machine.
I see this line of code
docker-compose -f docker-compose.yml -p buildkite18e15a6103824eb89c747d49151c7eea up --scale build-premade=0 build-premade
What is the purpose of running --scale build-premade=0? I get an error when I try to run this locally (see the docker-compose up docs and grep for --scale):
no container found for project "{project_name}": not found.
Has anyone seen this before, or does anyone know how to get around it? I am guessing there is a Docker setting or config somewhere.
reference to GH Issue - https://github.com/buildkite-plugins/docker-compose-buildkite-plugin/issues/322
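For context, a sketch of what --scale does in general (the worker service here is hypothetical, not taken from the plugin):
# Start three instances of the worker service
docker-compose up -d --scale worker=3
# Request zero instances of worker while still naming it explicitly;
# this is the shape of the plugin command above
docker-compose up --scale worker=0 worker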

How to debug a failed docker-compose up?

My Docker configuration is returning an error message with docker-compose up:
ERROR: for proxy Container "d23a4ae03365" is unhealthy.
One of the containers failed to start. Now, the problem is how to debug it. Should I find the container (I know exactly which one) in my docker-compose.yml and build up a manual docker command like:
docker run -i -t -v ... -v ... <Image Name> /bin/bash
Then I could go inside the container and work out why it didn't start.
But that would require me to manually fill in the volume paths. Otherwise, how do I debug it? I just need to see the error message.
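A minimal sketch of ways to get at the error message without reconstructing the run command by hand, using the container ID from the message above (the service name is a placeholder):
# The container's own output often contains the failure reason
docker logs d23a4ae03365
# For an 'unhealthy' container, the health-check log records each failed probe
docker inspect --format '{{json .State.Health}}' d23a4ae03365
# Or start a throwaway copy of a service with its compose-defined volumes
docker-compose run --rm <service> /bin/bash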

CI & Docker-in-a-Docker

I am trying to integrate docker into my CI platform. After getting this working properly with a Docker-in-Docker solution, I came across a blog post by one of the Docker maintainers, where he says that instead of using Docker-in-Docker for my CI, I should instead simply mount /var/run/docker.sock into my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
Using jenkins as my CI container.
When running the above command, jenkins starts up properly, and I can jump into the container to see that the docker.sock file is located in the /var/run/ path.
However, when I run the docker command inside the container, it returns the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work so long as Docker and its dependencies are installed in the container. In my case, I ended up writing a simple Dockerfile, which included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker on the container, and when I ran docker images from within the container, I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
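A minimal sketch of such a Dockerfile, assuming the stock jenkins image of that era (only the curl line is taken from the text above; the rest is illustrative):
FROM jenkins
# The install script needs root; the jenkins image runs as the jenkins user
USER root
# Install Docker inside the container so the CLI is available; the daemon
# actually used is still the host's, reached through the mounted socket
RUN curl -sSL https://get.docker.com/ | sh
USER jenkins
Depending on the host's permissions on /var/run/docker.sock, the jenkins user may also need to be added to the group that owns the socket.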

Docker-compose ps not showing any output

I am trying to run docker-compose ps and docker-compose logs and neither is showing any output. I was able to run docker-compose up and verified with docker ps that the correct containers are started. However, docker-compose logs and ps don't show anything.
> sudo docker-compose -f /opt/docker-compose/server1-compose.yml ps
Name Command State Ports
------------------------------
> sudo docker-compose -f /opt/docker-compose/server1-compose.yml logs
Attaching to
Neither command is returning the intended output. What is wrong here?
docker-compose version: 1.4.2
Docker version 1.7.1, build 786b29d
Thanks to @dnephin for posting a response. For completeness, here it is:
dnephin:
"I suspect what's happening here is that the project name is different. The default project name is the basename of a directory, so if you run docker-compose from a different directory you might get a different project name.
You can set it with either -p or the COMPOSE_PROJECT_NAME environment variable. If you look at the first part of the container names (before the first underscore), that's the project name.
There are open issues to configure the project name from a file. I think we'll be looking to implement that soon."
Adding the -p switch to my compose command fixed the issue.
i.e.: sudo docker-compose -f /opt/docker-compose/server1-compose.yml -p projectname logs
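Equivalently, a sketch using the environment variable from the answer (the project name is a placeholder):
# Match the project name the containers were originally created under
export COMPOSE_PROJECT_NAME=projectname
# -E asks sudo to preserve the variable
sudo -E docker-compose -f /opt/docker-compose/server1-compose.yml logs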

docker-compose up not adding code to remote container

I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
794c90928b97 composetest_web "/bin/sh -c 'python About a minute ago Exited (2) About a minute ago composetest_web_1
2c70bd687dfc redis "/entrypoint.sh redi About a minute ago Up About a minute 6379/tcp composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing you can do with the regular command line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
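With the Machine-provisioned host from above, that might look roughly like this (the target path is an assumption; the bind mount source resolves on the docker host, so the code must sit at the path the client-side "." expands to):
# Copy the project onto the docker host so the bind mount has content
docker-machine scp -r . composetest:/path/the/mount/resolves/to
# Or log in and clone/edit the source there directly
docker-machine ssh composetest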
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand). To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. By rebuilding the image, it will pick up the latest version of the code. Mounting a volume in production breaks because you've hidden the code in the image with an empty directory.
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is found in this Using Compose in production doc. By extending services in Compose, you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
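A minimal sketch of that extends pattern, with file names that are illustrative rather than from the doc (note that Compose v1 requires links to be declared in the extending file, not the shared one):
# common.yml -- shared definition, code baked into the image
web:
  build: .
  ports:
    - "5000:5000"
# docker-compose.yml -- development, adds the live-mounted source
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
  volumes:
    - .:/code
redis:
  image: redis
# production.yml -- same service without the mount
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
redis:
  image: redis
Locally, a plain docker-compose up picks up the development file; against the remote VM, docker-compose -f production.yml up -d runs the code baked into the image.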
I ran into this "can't open file 'app.py'" problem while following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in Docker's settings.
Source: see the "Shared Drive" section of Docker Settings in the docs
