I am trying to create a local development environment using Docker Compose. I started with this example https://github.com/b00giZm/docker-compose-nodejs-examples/tree/master/03-express-gulp-watch and it is working like a charm. No problem there.
However, the structure in that example is too simple and doesn't fit my needs. I am planning to run my application with CoreOS in production, so I need a bunch of other config files as well. This is roughly how I changed the example above:
application
  app
    bin
    public
    routes
    views
    app.js
    gulpfile.js
    package.json
  vm
    coreos (production configs here)
    docker (development configs here)
      app
        Dockerfile
      docker-compose.yml
The Dockerfile for the actual application lives in vm/docker/app, because I would like to use separate Dockerfiles for production and development.
I also changed my docker-compose.yml to this:
web:
  build: app
  volumes:
    - "../../app:/src/app"
  ports:
    - "3030:3000"
    - "35729:35729"
After this "docker-compose build" goes ok, but "docker-compose up" doesn't. I get an error saying, that gulpfile cant be found. In my logic this is because of volume mounts, they don't work with parent directories as I assume.
Any idea what I am doing wrong? Or I you have working example for this situation, please share it.
You are probably hitting the issue of using volumes too early and trying to access them in cascading Docker images.
See this:
https://github.com/docker/docker/issues/3639
dnephin was right. Removing old containers did the trick. Thanks!
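For anyone landing here later, the clean-up boils down to something like this (a sketch; exact flags can differ between Compose versions):
# stop and remove the stale containers so their old volumes aren't reused
docker-compose stop
docker-compose rm -f -v
# rebuild the image and bring everything back up
docker-compose build
docker-compose up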
I have this docker-compose.yml:
volumes:
  - D:/Docker/config:/config
  - D:/Downloads:/downloads
I can run this with docker-compose up without any issue.
But in a Portainer stack, I get an error:
Deployment error
failed to deploy a stack: Named volume "D:/Docker/config:/config" is used in service "test" but no declaration was found in the volumes section. : exit status 1
Basically I want to map my host folder D:/Docker/config. How do I do this in Portainer?
I spent a long time figuring this out.
In Docker Compose on Windows, running Docker Desktop in WSL2 mode, when entering a mount point for a bind mount you have to format it as:
- /mnt/DRIVE-LETTER/directory/to/location:/container/path
An example would be
- /mnt/k/docker/tv/xteve/config:/home/xteve/config
You also have the option of using relative paths from where the Compose file is located, but with Portainer that isn't an option. I know, I tried everything I could think of. Then I was looking at tutorials and saw the same thing #warheat1990 posted here, and experimented with that.
Portainer tells you to paste your Docker Compose, but the paths are different. The /mnt paths will not work inside Portainer; they are either ignored or placed somewhere you can't get to from Windows, unless you remove the "/mnt" and start with the drive letter:
- /DRIVE-LETTER/directory/to/location:/container/path
An example would be
- /k/docker/tv/xteve/config:/home/xteve/config
I tested it & outside of Portainer without the "/mnt" it fails, but within Portainer it can't be there. So far I'm fairly confident that there is no way to do it that works for both. Which is super annoying because Portainer makes it easy to paste your Compose or actually import the file, but then you must edit it, just nobody tells you that...
Hope that helps
Use /d/Downloads to make it work, thanks to #xerx593.
Edit: I came here with the same question and forgot it was about Portainer. The answer below worked with Docker Compose on Windows.
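Applied to the volumes from the question, that would be something like this (a sketch, not tested against your exact setup):
volumes:
  - /d/Docker/config:/config
  - /d/Downloads:/downloads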
Not sure how helpful this is, but as I was trying to deploy FileRun in Docker Desktop on Windows, I ran into this issue myself.
The goal was to have my containers' data in one place in a Windows folder, but the FileRun files were on a different drive.
I fumbled a lot, but this is what worked for me in the end with Docker Compose:
volumes:
  - D:\myfolder:/mnt/myfolder
  - ./filerun/html:/var/www/html
  - ./filerun/user-files:/user-files # (default, could be pointed directly to mnt)
The ./filerun entries define paths relative to where the docker-compose.yml is stored and executed, so I keep persistence in my Windows folder even when removing a compose stack (this is not intended for intensive production stuff).
And my files were accessible in FileRun through /mnt/myfolder.
Not sure if Docker Desktop has been updated on that since the previous answer was given.
I'm working on a project with multiple collaborators; to share code and compute environment, we've set up a GitHub repository which includes a Dockerfile and docker-compose.yml file. I can work on code, and my collaborators can just pull the repository, run docker-compose up, and have access to my Jupyter notebooks in the same environment that I develop them in.
The only problem with this is that, because we are working at different sites, the data that we are computing over is in different locations. So on my end, I want my docker-compose.yml to include:
volumes:
  - /mnt/shared/data:/data
while my collaborators need it to say something like
volumes:
  - /Volumes/storage/data:/data
I get that one way to do this would be to use an environment variable; in the docker-compose.yml file:
volumes:
  - "$DATA_PATH":/data
This forces them to run something like:
DATA_PATH=/Volumes/storage/data docker-compose up
As a solution, this isn't necessarily a problem, but it feels clunky to me, and fails to be self-documenting in the repository. I can wrap docker-compose in a shell script (a potential solution to almost any problem), but this also feels clunky. I can't help but suspect that there's a better solution here. Does docker-compose allow for this kind of functionality? Is there a best-practices way of accomplishing this? If not, I'm curious if anyone knows what the motivation behind excluding this functionality might be and/or why it isn't considered a good idea.
Thanks in advance.
You are extremely close. What I would add is a host-specific .env file (see Environment variables in Compose) on each computer, in the same folder as the docker-compose.yml, with
DATA_PATH=/mnt/shared/data
or whatever value for DATA_PATH you like. Just add that .env to your .gitignore, so that every host keeps its own config out of the repository, and that's it.
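For illustration, a minimal sketch of how the pieces fit together (the service name jupyter is just a placeholder for whatever your compose file already defines):
# .env (one per host, listed in .gitignore)
DATA_PATH=/Volumes/storage/data

# docker-compose.yml (shared in the repository)
jupyter:
  build: .
  volumes:
    - "${DATA_PATH}:/data"
With that in place, a plain docker-compose up picks up DATA_PATH from the local .env file automatically.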
I am pretty new to Docker. After reading specifically what I needed, I figured out how to create a pretty nice Docker setup: one in which I can start up multiple systems using a single docker-compose.yml file.
I am currently using this for testing specific PHP code on different PHP and MySQL versions. The file structure looks something like this:
./mysql55/Dockerfile
./mysql56/Dockerfile
./mysql57/Dockerfile
./php53/Dockerfile
./php54/Dockerfile
./php56/Dockerfile
./php70/Dockerfile
./php71/Dockerfile
./php72/Dockerfile
./web (shared folder with test files available on all php machines)
./master_web (web interface to send test request to all possible versions using one call)
./docker-compose.yml
In the docker-compose file I set up different containers, most referring to the local Dockerfiles, some referring to online image names. When I run docker-compose up, all containers start as expected in the configured network configuration and I'm able to use it as desired.
I would first of all like to know what this setup is called. Is this a "Docker swarm", or is such a setup called something else?
Secondly, I'd like to make one "compiled/combined" file (image, container, swarm, engine, machine, or however it is called) of this, which I can save without having to depend on external sources again. Of course the docker-compose.yml file will work as long as all the referred external sources are still available, but I'd like to publish my fully configured setup as is. How do I do that?
You can publish built images with a Docker registry. You can set up your own or use a third-party service.
After that, you need to prefix your image names with your registry IP/DNS in docker-compose.yml. This way you can deploy it anywhere docker-compose is installed (and docker-compose itself can be run as a Docker container too); you just need to copy your docker-compose.yml file there.
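As a rough sketch of that workflow (the registry address and image names below are placeholders):
# build locally, then tag and push one of the images to your registry
docker-compose build
docker tag myproject_php72 registry.example.com/myproject/php72:1.0
docker push registry.example.com/myproject/php72:1.0

# in docker-compose.yml, reference the pushed image instead of a local build
php72:
  image: registry.example.com/myproject/php72:1.0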
docker-machine is a tool to deploy to multiple machines, as is Docker Swarm.
I have a web application XY which consists of nginx, php-fpm and MariaDB. I successfully split everything up into its own container using docker-compose and it's running like a charm. For development purposes I just mounted a local directory that contains the actual source/PHP code. When deploying this to a staging or production environment, the Docker docs told me to bake the source code into the actual image. In this case I have to copy the source code into both the nginx and the php-fpm image when building them, because both of them need it.
When the application itself gets bigger (more assets and libraries), both the nginx and php-fpm images grow. In my opinion this somehow violates the "keep the image as small as possible" rule, and it seems deeply wrong to me. I've always learned not to repeat myself, to store logic in one place, to decouple things and so on.
Is this the right way to do it, or am I missing something?
In this case I'd probably create a new container which contains the source code. This container can export the source code's directory as a volume that both the nginx and php-fpm containers can mount.
There is an interesting writeup on Dockerise your PHP application with Nginx and PHP7-FPM. This example uses volumes to share code between PHP and Nginx.
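A rough sketch of that idea in the older v1/v2 Compose format (volumes_from was dropped in v3; image names and paths are placeholders):
code:
  image: my-app-code    # placeholder image that only holds the PHP source under /var/www/html
  command: "true"       # exits immediately; the container only exists so its volume can be shared
  volumes:
    - /var/www/html

php:
  image: php:7-fpm
  volumes_from:
    - code

nginx:
  image: nginx
  volumes_from:
    - code
  ports:
    - "8080:80"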
Your point about not repeating yourself isn't a bad one, but consider you may not always want the same number of Nginx containers and PHP containers. Maybe the PHP part of your application will be under more load than the part that serves static assets and you'll want to scale that up independently. If you use something like Docker Swarm, you aren't even guaranteed all of your PHP containers will be on the same host.
Your images are deployment artifacts, there isn't anything wrong with having the same static content baked into multiple images.
We are trying to run two apps via docker-compose. These apps are (obviously) in separate folders, each of them having its own docker-compose.yml. On the filesystem it looks like this:
dir/app1/
  - ...
  - docker-compose.yml
dir/app2/
  - ...
  - docker-compose.yml
Now we need a way to compose these guys together, because they have some nitty-gritty integration via HTTP.
The issue with the default docker-compose behaviour is that it treats all relative paths with respect to the folder it is being run from. So if you go to dir from the example above and run
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up
you'll be out of luck if either of your docker-compose.yml files uses relative paths to env files or anything else.
Here's the list of ways that actually work, but have their drawbacks:
1. Run those apps separately, and use networks.
   This is described in full at Communication between multiple docker-compose projects.
   I've tested that just now, and it works. Drawbacks:
   - you have to mention the network in docker-compose.yml and push it to the repository some day, rendering the entire app un-runnable without the app that publishes the network;
   - you have to come up with some clever way for those apps to actually wait for each other.
2. Use absolute paths. Well, it is just bad and does not need any elaboration.
3. Expose the ports you need on the host machine and make the apps talk to the host without knowing a thing about each other. That is, obviously, also meh.
So, the question is: how can one manage this task with just docker-compose?
Thanks to everyone for your feedback. Within our team we have agreed to the following solution:
Use networks & override
Long story short, your original docker-compose.yml files should not change a bit. All you have to do is make a docker-compose.override.yml next to each of them, which publishes the network and hooks your services into it.
So, whoever wants to have a standalone app runs
docker-compose -f docker-compose.yml up
But when you need to run the apps side by side, communicating with each other, you should go with
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
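For completeness, a rough sketch of what such an override could look like for one of the apps (assuming both files use the version 2 format; the service and network names are placeholders, and the shared network is created beforehand with docker network create shared):
# dir/app1/docker-compose.override.yml (app2 gets an equivalent file)
version: "2"

services:
  web:                 # whatever service app1 already defines
    networks:
      - shared

networks:
  shared:
    external: true     # the pre-created network both apps join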