I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
794c90928b97 composetest_web "/bin/sh -c 'python About a minute ago Exited (2) About a minute ago composetest_web_1
2c70bd687dfc redis "/entrypoint.sh redi About a minute ago Up About a minute 6379/tcp composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Under the hood, Compose does pretty much the same thing you can do with the regular command-line interface. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume path is relative to the Docker host, not the client. So it will mount the working directory on the remote VM, not on your local machine, and in your case that directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
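For instance, one way to do that (a sketch: docker-machine scp needs a reasonably recent Docker Machine, the remote path assumes a boot2docker-style VM whose login user is docker, and ~/composetest stands in for your local project directory):
# copy the project to the remote VM, then log in and work on it there
docker-machine scp -r ~/composetest composetest:/home/docker/composetest
docker-machine ssh composetest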
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand.) To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. Rebuilding the image means it picks up the latest version of the code. Mounting a volume in production breaks things because you're hiding the code in the image behind an empty directory.
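For example, the deploy step could look roughly like this (a sketch; production.yml is a hypothetical name for a Compose file that does not mount the local directory):
# point the Docker client at the remote host created with docker-machine
eval "$(docker-machine env composetest)"
# rebuild so the image contains the current code, then start it detached
# (production.yml is hypothetical: a Compose file without the .:/code volume)
docker-compose -f production.yml build
docker-compose -f production.yml up -d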
It's also worth pointing out that Docker doesn't currently advise using Compose in production.
What I'm really looking for is covered in the Using Compose in production doc. By extending services in Compose, you can develop interactively with a local VM and deploy to a remote VM without changing the Dockerfile or docker-compose.yml.
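For reference, a minimal sketch of that pattern using the extends feature available in Compose 1.3 (the file names, the exact split, and putting links in the per-environment files are assumptions; see the linked doc for the authoritative layout):
# common.yml: settings shared by every environment
web:
  build: .
  ports:
    - "5000:5000"
redis:
  image: redis

# docker-compose.yml: development only, mounts the code so edits are picked up
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
  volumes:
    - .:/code
redis:
  extends:
    file: common.yml
    service: redis

# production.yml: no volume, so the code baked into the image at build time is used
web:
  extends:
    file: common.yml
    service: web
  links:
    - redis
redis:
  extends:
    file: common.yml
    service: redis
Development is then a plain docker-compose up against the local VM, while deployment is docker-compose -f production.yml up -d against the remote host.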
I ran into this "can't open file 'app.py'" problem while following the Getting Started tutorials. In my case it was because I'm running Docker on Windows: I needed to make sure I'd shared the drive containing my project directory in the Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs
Related
I'm new to Docker. I want to copy files between my local machine and a Docker container on a remote machine, without having to scp the files from my local machine to the remote host and then use docker cp to copy them into the container. My container does not have an SSH server installed, and I don't want to rebuild my image to include one.
I tried the solution given in the second answer to How to SSH into Docker?. I ran the following command on the remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used: ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
All in all, I'm not sure if what I'm doing is even remotely correct.
Following up on your comment replying to David's answer, here is how to bind-mount the directory for your visualization files into your container:
On the host system, create a directory, e.g. mkdir /home/sarah/viz. Then mount it into your Docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place its files in the directory /data/viz; they then land in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp, rsync, or however else you can connect to the remote machine.
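For example, from your local machine (names and paths taken from your own commands; adjust if yours differ):
# copy the generated files from the remote host down to the local machine
scp -r remote_account_name@remote_ip:/home/sarah/viz ./viz
# or keep a local copy in sync incrementally
rsync -avz remote_account_name@remote_ip:/home/sarah/viz/ ./viz/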
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.
I am using jenkinssci/docker to set up some build automation on a server for a Laravel project.
Using the command docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts, everything boots up fine: I create the admin login, create the project, and link all of that together.
Yesterday I installed libraries in the container that this command gave me, using docker exec -u 0 -it <container_name_or_id> /bin/bash to get into the container as root and install things like PHP, Composer, and Node.js/npm. After this was done, I built the project and got a successful build.
Today I started the Docker container using the same command as above and built the project, but the build fails. The container no longer has any of the installed libraries (PHP, Composer, Node.js).
It is my understanding that by including jenkins_home:/var/jenkins_home in the command that starts the container, the data would persist. Is this wrong?
So my question is: how can I keep these libraries in the Docker container that it builds?
I just started learning about these tools yesterday, so I'm not entirely sure I'm even doing this the best way. All I need is to be able to log into the Jenkins server and build the project/ship the code to our staging/live servers.
Side note: I am not currently using a Dockerfile. As mentioned here, I am able to install tools in the container as root.
Your understanding is correct: you should use a persistent volume, otherwise you will lose your data every time the container is recreated.
I understand that you are running the container on a single machine with Docker. You need to use a full or relative path for the local folder in the volume definition to be sure the data persists; try:
docker run -p 8080:8080 -p 50000:50000 -v ./jenkins_home:/var/jenkins_home jenkins/jenkins:lts
Note the ./ prefix on the local folder.
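A quick way to check that the bind mount is doing its job (a sketch, assuming the container has already started at least once):
# the Jenkins state should now live on the host and survive container recreation
ls ./jenkins_home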
Here is the docker-compose.yml I've been using for a long time:
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - ./jenkins:/var/jenkins_home
    ports:
      - 80:8080
      - 50000:50000
It's basically the same, but in YAML format.
I am using Windows 10 Pro with Docker installed. I pulled the rocker/shiny image (docker pull rocker/shiny) on my computer and started it as described in the documentation at https://hub.docker.com/r/rocker/shiny/, using the following command:
docker run -d -p 80:3838 -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/srv/shiny-server/ -v C:\\Users\\<My name>\\Documents\\R\\Rprojects\\ShinyHelloWorld\\:/var/log/shiny-server/ rocker/shiny
The container created successfully:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0ee402966b9 rocker/shiny "/usr/bin/shiny-serv…" 2 minutes ago Up 2 minutes 0.0.0.0:80->3838/tcp youthful_banach
I created the ShinyHelloWorld application using RStudio, and the folder on the local host that I mounted into the Docker container basically contains one file, app.R, with the default Shiny application created by RStudio.
Now the problem is: I can't open this application in my browser using the address http://localhost:3838/ShinyHelloWorld/.
When I use the URL http://localhost:3838 it returns a web page with the single line Index of /. So something is listening.
Did I run the Shiny server correctly?
I suppose I am using an incorrect URL in my browser to access the server. How do I do it correctly?
Do I need to install my Shiny app on the server somehow?
Is it possible to run the Shiny server using a token, as with:
http://localhost:8888/?token=44dab68c1bc7b1662041853573f37cfa03f13d029d397816
as described, e.g., in Cook, J.: Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server. Apress, 2017.
How do I find the token if it exists?
Suppose I want to use a docker-compose.yml and then run docker-compose up. Please help me complete the file below so that it executes the same command as above.
version: "3"
services:
image: rocker/shiny
volumes:
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/srv/shiny-server/
- C:\\Users\\aabor\\Documents\\R\\Rprojects\\ShinyHelloWorld:/var/log/shiny-server/
ports:
- 80:3838
container_name: rocker-shiny-container
Look at the ports column: 0.0.0.0:80->3838/tcp means your host port 80 maps to port 3838 in the container, so you should try http://localhost first.
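For example, a quick check from the Docker host (assuming curl is available):
# the container is published on host port 80, not 3838
curl http://localhost/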
I resolved the issue myself. The problem was with the folder path.
This command creates the Docker container correctly:
docker run -d -p 3838:3838 -v //c/Users/<My Name>/Documents/R/Rprojects:/srv/shiny-server/ -v //c/Users/<My Name>/Documents/R/Rprojects:/var/log/shiny-server/ rocker/shiny
Then, if I use the URL http://localhost:3838/ShinyHelloWorld/ in my browser, the Shiny application starts.
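For completeness, a sketch of a docker-compose.yml equivalent to that corrected command (the service name shiny is an assumption; the other settings mirror the docker run flags above):
version: "3"
services:
  shiny:
    image: rocker/shiny
    ports:
      - "3838:3838"
    volumes:
      - "//c/Users/<My Name>/Documents/R/Rprojects:/srv/shiny-server/"
      - "//c/Users/<My Name>/Documents/R/Rprojects:/var/log/shiny-server/"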
Is there any way to find the source of the script that started a Docker container? I have a setup where I cannot find any docker-compose.yml file, bash script, etc. that would have started the Docker containers currently running. I have a virtual machine that starts Docker containers on startup, but I have no idea which file is actually run.
I don't think there is an option to see which docker-compose file was used.
But you can check each of your project folders manually.
docker-compose works by matching containers against the docker-compose.yml file in the current folder. So if you run sudo docker-compose ps in each of your project folders, docker-compose compares the file used by the running containers with the file in that folder: if they match, the containers are listed; if not, nothing is displayed.
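For example (the label check is an assumption that the containers were actually started by a reasonably recent Compose, since only Compose sets these labels):
# run inside each candidate project folder; containers are only listed when that
# folder's docker-compose.yml matches them
sudo docker-compose ps

# alternatively, inspect a running container for the Compose project label
docker inspect --format '{{ index .Config.Labels "com.docker.compose.project" }}' <container_name>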
If the containers start automatically on reboot and you have no cron job, bash profile, rc.local, or other startup script, that may mean they are containers with the --restart option set. You can change that by running the commands below:
docker ps -q | xargs docker update --restart no
docker ps -q | xargs docker stop
Then restart the machine. The containers should not start. If they do, then you have a script somewhere that is starting them.
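You can also check a container's restart policy before changing anything, for example:
# prints the policy, e.g. always, unless-stopped or on-failure; "no" means it will not auto-restart
docker inspect --format '{{ .HostConfig.RestartPolicy.Name }}' <container_name>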
I am trying to integrate Docker into my CI platform. After getting this working properly with a Docker-in-Docker solution, I came across a blog post by one of the Docker maintainers, who says that instead of using Docker-in-Docker for CI, I should simply mount /var/run/docker.sock into my CI container.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So I tried this. I ran the following command:
docker run -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins
Using jenkins as my CI container.
When running the above command, jenkins starts up properly, and I can jump into the container to see that the docker.sock file is located in the /var/run/ path.
However, when I run the docker command, I get the following message:
bash: docker: command not found
Does anyone know what I am missing in order to make this work per the author's instructions?
I am using Docker v. 1.11.1, on a fresh CentOS 7 box.
Thanks in advance
Figured this out today. The above command will work as long as Docker and its dependencies are installed in the container. In my case, I ended up writing a simple Dockerfile, which also included the line:
RUN curl -sSL https://get.docker.com/ | sh
This installed Docker on the container, and when I ran docker images from within the container, I could see all of the images from my host machine. I am now able to use all of the docker commands from within the container.
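For anyone wanting a concrete starting point, a minimal sketch of such a Dockerfile (the base tag and the user switch are assumptions; strictly speaking only the Docker client is needed inside the container, since the daemon runs on the host through the mounted socket):
FROM jenkins
# switch to root to install packages, then drop back to the jenkins user
USER root
RUN curl -sSL https://get.docker.com/ | sh
# depending on the host's socket permissions, the jenkins user may also need
# access to the group that owns /var/run/docker.sock
USER jenkins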