I am working on a project where I create a web app that is hackable (via XSS, SQL injection etc.) for demonstration purposes. I also have a version of the app that is "secure". They both exist in the same Git repository, but on different branches. To be able to run both apps at the same time, I wanted to start them in separate, disjoint docker-compose instances (each consisting of a frontend, backend and database). But since they live in the same directory on my PC and I just swap the contents with git, the instance I created first gets overridden by the newer one. My guess is that Docker tags the docker-compose instances by naming them after the directory where the docker-compose.yml is found.
Here I have a picture to illustrate what I mean by a "docker-compose instance". It gets overridden every time I do a docker-compose build followed by a docker-compose up. I want to be able to have two disjoint instances of it: one being "SecureForum" and one being "HackableForum".
How can I use a command like docker-compose build in a way that results in two different instances, even though I'm running it in the same repository? Is there maybe a way to tag the different docker-compose instances so they don't override each other?
I want to avoid creating a copy of the whole repository just to run the docker-compose build command from different directories so the instances would not override each other.
Docker-Compose version: 3.8
I hope it is understandable what my problem is and what I am trying to achieve. I am new to Stack Overflow, so any help with making my question more precise is appreciated :)
This seems to be a suboptimal use of git branches, so I would merge them, have differently named files (code, Dockerfiles, etc.) for the secure and insecure versions of your app, and then roll it all into one big docker-compose file that uses two separate networks (one for the secure apps, the other for the insecure ones).
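A minimal sketch of that single-file layout, assuming hypothetical per-version Dockerfiles; the service and network names are illustrative, not from the question:

# docker-compose.yml (one big file, as suggested above)
version: "3.8"

services:
  secure-app:
    build:
      context: .
      dockerfile: Dockerfile.secure    # hypothetical Dockerfile for the secure version
    networks:
      - secure
  hackable-app:
    build:
      context: .
      dockerfile: Dockerfile.hackable  # hypothetical Dockerfile for the insecure version
    networks:
      - insecure

networks:
  secure:
  insecure: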
However, if you want to stay with your current setup, simply create a new folder (with a different name from your current folder), clone the same repository into it, switch to the insecure branch and start docker-compose in the new folder.
Now switch back to the original folder, check out the secure branch and start docker-compose there.
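For example (the repository URL and branch names are placeholders; the folder names come from the question):

git clone <repo-url> HackableForum
cd HackableForum
git checkout <insecure-branch>
docker-compose up -d --build

cd ..
git clone <repo-url> SecureForum
cd SecureForum
git checkout <secure-branch>
docker-compose up -d --build

This works because Compose derives its default project name from the directory name, so the two checkouts no longer collide.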
We frequently spin up quick exploratory Docker-based projects for a day or two that we'd like to easily discard when done, without disturbing our primary ongoing containers and images.
At any point in time we have quite a few 'stable' docker images and containers that we do NOT want to rebuild all the time.
How can one remove all the images and containers associated with the current directory's Dockerfile and docker-compose.yml file, without disturbing other projects' images and containers?
(All the Docker documentation I see shows how to discard them ALL, or requires finding and listing a bunch of IDs manually and discarding them manually. This seems primitive and time-wasting... In a project folder, the Dockerfile and docker-compose.yml file have all the info needed to identify "all images and containers that were created when building and running THIS project", so it seems there should be a quick command to remove that project's docker dregs when done.)
As an example, right now I have rarely-revised Docker images and containers for several production Rails 5 apps that I'd like to keep untouched, but a half-dozen folders with short-term Rails 6 experiments that represent dozens of images and containers I'd like to discard.
Is there any way to tell Docker: "here's a Dockerfile and a docker-compose.yml file; stop and remove any/all containers and images that were created by them"?
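For the Compose part of this there is at least one close fit: docker-compose down, run from the project folder. A hedged pointer rather than a complete answer, since images built with plain docker build outside Compose are not covered:

docker-compose down --rmi all --volumes --remove-orphans
# --rmi all         also remove the images used by the services
# --volumes         remove named volumes declared in the file and anonymous volumes
# --remove-orphans  remove containers for services no longer in the file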
We are running Docker Swarm + Ceph. Everything runs okay. We plan to move more of our internally developed as well as third-party applications to Swarm. Now the issue I have is as follows:
Let's say I deployed 10 third-party applications as Swarm stacks. I do this with the docker stack deploy command, supplying a docker-compose file to it (often, in its turn, composed out of multiple docker-compose-*.yml files merged via docker-compose config). Now, if I want to change something in the stack, I often need my initial compose files. Is there any kind of registry for docker-compose deployment descriptors? Like a Docker image registry, but for docker-compose descriptors?
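For context, the merge-and-deploy workflow described above looks roughly like this (the file and stack names are illustrative):

docker-compose -f docker-compose.yml -f docker-compose-prod.yml config > stack.yml
docker stack deploy -c stack.yml my_stack

Checking the generated stack.yml into version control is essentially the repository idea discussed below.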
One of the ideas I have is to maintain a Git or Mercurial repository with some directory structure to version the descriptors. This idea looks interesting, but works (semi-)well only for third-party applications. With our own applications it adds a problem, as we often use CI/CD, and it would mean checking out one more repository during deployment, replacing the deployment descriptors for our apps and committing them. This may be a little tricky, as it may potentially lead to merge conflicts, etc.
Ideally, the solution I am looking for should provide an easy way to get the latest versions of the deployment descriptors of a particular (deployed) stack while also keeping their previous versions.
How do you manage your docker-compose files when there are too many of them?
We have a project which uses a docker-compose.yml file to run several services. Our developers use it to run and test the software with their local Docker.
However, when running this on our server I need some changes: we have a docker network set up, plus some env variables which have to be set.
Normally, we would have to create a copy of the file and apply those changes there. This means that when a dev changes the regular docker-compose.yml file, he has to remember that there's another file to be changed as well. That never works. I know we could make a git hook that monitors for changes in this file and asks for similar changes in the other one, but I can also see changes coming up that should not be reflected in the other file.
I wanted to know if there's a way to create a separate docker-compose file which takes the original one and adds some changes to it.
Thanks
Krystian
Yes, docker-compose supports inheritance.
Compose supports two methods of sharing common configuration:
Extending an entire Compose file by using multiple Compose files
Extending individual services with the extends field
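A minimal sketch of the multiple-files method applied to the setup in this question; the service, variable and network names are illustrative, not from the post:

# docker-compose.server.yml -- only the server-specific deltas
version: "3.8"

services:
  web:
    environment:
      - APP_ENV=production      # hypothetical env variable
    networks:
      - server_net

networks:
  server_net:
    external: true              # the pre-existing docker network on the server

Developers keep using the base file alone, while the server runs both files together:

docker-compose -f docker-compose.yml -f docker-compose.server.yml up -d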
We are trying to run two apps via docker-compose. These apps are (obviously) in separate folders, each of them having its own docker-compose.yml. On the filesystem it looks like this:
dir/app1/
  ...
  docker-compose.yml
dir/app2/
  ...
  docker-compose.yml
Now we need a way to compose these guys together, since they have some nitty-gritty integration via HTTP.
The issue with the default docker-compose behaviour is that it treats all relative paths with respect to the folder it is run from. So if you go to dir from the example above and run
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up
you'll end up being out of luck if any of your docker-compose.yml files uses relative paths to env files or whatever.
Here's the list of ways that actually work, but have their drawbacks:
1. Run those apps separately, and use networks.
It is described in full at Communication between multiple docker-compose projects
I've tested that just now, and it works. Drawbacks:
you have to mention the network in docker-compose.yml and push that to the repository some day, rendering the entire app un-runnable without the app that publishes the network.
you have to come up with some clever way for those apps to actually wait for each other
2. Use absolute paths. Well, that is just bad and does not need any elaboration.
3. Expose the ports you need on the host machine and make the apps talk to the host without knowing a thing about each other. That, too, is obviously meh.
So, the question is: how can one manage the task with just docker-compose?
Thanks to everyone for your feedback. Within our team we have agreed on the following solution:
Use networks & override
Long story short, your original docker-compose.yml files should not change a bit. All you have to do is create a docker-compose.override.yml next to each of them, which publishes the network and hooks your services into it.
So, whoever wants to have a standalone app runs
docker-compose -f docker-compose.yml up
But when you need to run the apps side by side and communicating with each other, you should go with
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
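What such an override file can look like, as a sketch; the service name app and the network name shared_net are placeholders:

# docker-compose.override.yml
version: "3.8"

services:
  app:
    networks:
      - shared_net

networks:
  shared_net:
    external: true   # create it once beforehand with: docker network create shared_net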
One microservice stays in one docker container. Now, let's say that I want to upgrade the microservice: for example, some configuration has changed and I need to re-run it.
I have two options:
I can try to re-use the existing image by having a script that runs on container startup and updates the microservice by reading the new config (if there is one) from some shared volume. After the update, the script runs the microservice.
I can simply drop the existing image and container and create a new image (with a new name) and a new container with the updated configuration/code.
Solution #2 seems more robust to me. There is no 'update' procedure, just a single container creation.
However, what bothers me is whether this re-creation of the image has some bad side effects, like a lot of dangling images or something similar. Imagine that this may happen very often while the user plays with the app: for example, if a developer is trying something out, he wants to play with different configurations of the microservice and will restart it often. But once it is configured, this will not change. Also, when I say configuration I don't mean just config files, but also user code etc.
For production changes you'll want to deploy a new image for each change. This ensures your process is repeatable.
However, building a new image every time you write a new line of code would be a nightmare. The best option is to run your docker container and mount your source directory from the host file system into the container. That way, when you make changes in your editor, the code in the container updates too.
You can achieve this like so:
docker run -v /Users/me/myapp:/src myapp_image
That way you only have to build myapp_image once and can easily make changes thereafter.
Now, if you have a running container that was not mounted and you want to make changes to a file, you can do that too. It's not recommended, but it's easy to see why you might want to.
If you run:
docker exec -it <my-container-id> bash
This will put you into the container, and you can make changes in vim/nano/the editor of your choice while you're inside.
Your option #2 is definitely preferable for a production environment. Ideally you should have some automation around this process, typically something like a blue-green deploy where you replace the containers based on the old image one by one with those from the new, testing as you go, and only when you are satisfied with the new deployment do you clean up the containers from the previous version and remove the image. That way you can quickly roll back to the previous version if needed.
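As for the question's worry about dangling images piling up during frequent re-creation, Docker's built-in pruning handles routine cleanup:

docker image prune        # remove dangling images
docker container prune    # remove stopped containers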
In a development environment you may want to take a different approach, where you bind mount the application into the container at runtime, allowing you to make updates dynamically without having to rebuild the image. There is a nice example in the Docker Compose docs that illustrates how you can have a common base compose YML and then extend it so that you get different behavior in development and production scenarios.
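Roughly what that development-time extension looks like, with illustrative file names and paths:

# docker-compose.dev.yml -- development-only additions
version: "3.8"

services:
  app:
    volumes:
      - ./src:/app/src   # bind mount the source so edits show up without a rebuild

Run it together with the base file:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up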