Making a new container with the same configuration as the old one - docker

Let's say I make a container with some flags. For instance,
docker run -v my_volume:/data my_cool_image
Now, let's say my_cool_image is updated to a new version. Is there a nice way to make a new container with the same -v flag as the old one? The container has been properly configured so that no data is stored inside the container itself, meaning deleting the old container is not a concern.
The best solution I can find is to use docker-compose, but that seems a bit silly for single-container systems.

I'd use a shell script or a Docker Compose YAML file. (Compose isn't really overkill; if you add some error handling and write out one option per line for readability, the shell script and the YAML file wind up being about the same length.)
There's nothing built in to Docker that can extract the docker run options from an existing container.
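For the single-container case here, a minimal sketch of the shell-script version could look like this (only the -v flag comes from the question; the container name, -d, and the pull/rm steps are additions for re-creating the container):
#!/bin/sh
# run.sh (sketch): recreate the container with the same options every time
docker pull my_cool_image
docker rm -f my_cool_container 2>/dev/null
docker run -d \
  --name my_cool_container \
  -v my_volume:/data \
  my_cool_image
Re-running the script pulls the updated image and replaces the container with identical options.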

Related

How to extract a docker-compose file from a running docker stack

I have a docker stack started with docker stack deploy --compose-file ...
and later manually edited via the Portainer UI.
I'd like to write a script that updates the docker image tag of one of the services.
To do that I need to "download" the latest "docker-compose" stack definition; however, I cannot find the appropriate docker command.
I do know that the best approach would be to stop changing the stack manually and rely on its definition stored in git, but unfortunately, that is not up to me.
Please point me to the appropriate docker command or confirm that it is not available.
As far as I know, there is no command that gets you the compose file from a running container directly; at least, nothing implemented out of the box in Docker. You could try to parse all the relevant information from docker inspect and a few other commands that list/inspect the relevant objects.
I once came across a similar situation where we had a running container but no run/compose command, and we needed to update it. At the time (roughly a year ago) I found and used docker-autocompose, which did a very good job. We only had to manually verify and adjust a few things, but it got all the difficult parts with the run parameters done for us.
It could help you automate this if your compose configs are simple enough.
But if you wanted to fully automate it to mimic CD, then I would not recommend the approach above. In that case I would check whether you could use the Portainer API, as #LinFelix recommended. Or store compose files somewhere, prepared with parameters ($IMAGE_TAG) (in git/on the server), so you can generate temporary compose files with the full configuration and then remove the current one.
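For reference, a typical docker-autocompose invocation looks roughly like this (a sketch; the red5d/docker-autocompose image name and its options may have changed since):
# Generate a compose definition from a running container (sketch)
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  red5d/docker-autocompose your_container_name > docker-compose.generated.yml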

Accessing the host's API from inside a container

I'm trying to make a build environment with Docker, and I want to make it automatic. I've written a custom Go binary to handle the build steps, and I've built an image which has the Go binary, Maven, and the Java 8 SDK installed.
The steps that binary does are:
Clone a git repo
Run the build command
Extract the build artifacts to the host (not done yet)
I'm passing the repo URL as a parameter to the binary while running the container, and it does the build.
But the problem is I need those artifacts in order to run the built app.
I know I can use volumes, but I don't want to use them because once the build is done the volumes become dangling, and it takes an extra job to delete those dangling volumes.
I thought I could create an API for saving files onto the host (which means I would have to run that API on the host machine itself), and my custom Go binary could send the files to the API, which would do the saving.
But when it comes to calling the host from inside a container, I've hit a problem: I'm getting a "connection refused to port xx" error.
Is there a better way to do it, or should I change my approach?
Found an answer at accessing-host-machine-as-localhost-from-a-docker-container-thats-also-inside.
Running the container with the --add-host option is the answer.
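For example (a sketch; the host-gateway special value needs Docker 20.10 or newer, and my-build-image is a placeholder):
# Make the host reachable from inside the container under a fixed name
docker run --add-host=host.docker.internal:host-gateway my-build-image
# Inside the container, an API on the host is then reachable at host.docker.internal:<port>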
While you could use
docker cp CONTAINER:SRC_PATH DEST_PATH
to get the files out of your container, I still believe using a volume is the better idea. Instead of using an anonymous volume, mount a host directory:
docker run -v /local/host/dir:/build/output YOURIMAGE
This allows you to pick up the artifacts on your host from /local/host/dir.
https://docs.docker.com/engine/tutorials/dockervolumes/#locate-a-volume

Recover docker container's run arguments

I often find myself needing to re-create a container with minor modifications to the arguments originally used to docker run the container (things like changing published ports, the network, or the memory amount).
Now I am making new images and running them in place of the old containers.
This works fine, but I don't always have the original params to docker run saved, and sometimes (especially when there are a lot of things to define) it becomes a pain to recover them.
Is there any way to recover the docker run arguments from an existing container?
Sorry for being a couple of years late, but I had a similar question and found no satisfying answer, so I still needed to find my own way out.
I've found two sources addressing the issue:
A gist
To run it, save the template to a file, e.g. run.tpl, and do docker inspect --format "$(<run.tpl)" name_or_id_of_running_container
A docker image
Quick run:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
Both solutions are quite simple to use, but the second one failed to generate the command for an Nginx container, because it did not quote the arguments properly, like this: "nginx" "-g" "daemon off;"
So I focused on the first solution, which is a Go template intended to feed the --format parameter of docker inspect. I liked it because it is simple, elegant, and requires no other tool.
I've made some improvements in my forked gist and notified the original author about it.
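For a flavor of how such a template works, a heavily stripped-down version might look like this (a sketch only; the real gist also reconstructs ports, volumes, restart policy, and more):
# run.tpl (stripped-down sketch): rebuild the env vars and image of the run command
docker run {{range .Config.Env}}-e "{{.}}" {{end}}{{.Config.Image}}
Feeding this to docker inspect --format as shown above prints a docker run command containing the container's environment variables and image.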
A couple of answers to this. First, run your containers using docker-compose; then you can just run the compose files and retain all your configuration. Obviously Compose is designed for multi-container applications, but it is massively underrated for single-container, complex-run-argument use cases.
The second is to put your run command into a LABEL on the image. Take a look at Label Schema's docker.cmd etc. Then you can easily retrieve it from the image (or from your Dockerfile).
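A sketch of the label approach (the run command and image name are placeholders; the key follows the Label Schema convention):
# In the Dockerfile: record the intended run command as an image label
LABEL org.label-schema.docker.cmd="docker run -d -p 8080:80 my-image"
# Later, read it back from the image:
docker inspect --format '{{ index .Config.Labels "org.label-schema.docker.cmd" }}' my-image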
The best way to do this is not to type the commands manually. Put them into a shell script: a .sh file on Linux/Mac, or a .cmd file on Windows. Then you just run the script to create your container, and you never have to worry about re-typing the commands and options, you'll never get them wrong, etc.
Personally, I write my scripts as "npm scripts" in my package.json file, but the same thing can be done with any tool that can run a command-line program with arguments.
I do this along with a few other tricks to make sure I never fail to build my images or run my containers. It makes life with Docker so much easier. :)
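For instance, a package.json along these lines (image and container names are placeholders):
{
  "scripts": {
    "docker:build": "docker build -t my-image .",
    "docker:run": "docker run -d -p 8080:80 --name my-app my-image"
  }
}
Then npm run docker:run recreates the container with the exact same options every time.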
You can use docker inspect to get the container's configuration. Reconstructing the docker run command from that can be somewhat tedious though.
Another option is to search your shell history using either history | grep "docker run" or ctrl+r (if you use bash). That way, you don't need to go out of your way to save the commands but can still recover them quickly.

Docker-Compose: Initialize vs Run

I'm migrating an existing Rails application to Docker and docker-compose. There are a few scripts that need to run only at the creation of the containers, for instance a script that copies the prod db into a volume and indexes it in Elasticsearch.
From then on, when I start the containers locally for development, I only want to run the Rails development server and not all the db init scripts. I could make two docker-compose files (say init and run) that are the same except for the command: option on the webapp container.
Is there a better way?
The base Docker system doesn't have an "on run" concept for custom scripts.
What you can do is one of these approaches:
Add a check to your script for whether it has already done the work (see the sketch after this list); then it doesn't matter if you re-run it again and again.
Build the db into the Docker image and ship it with the data already loaded.
Make a two-part Docker setup: the first image would be the one you have now, with a possible ONBUILD instruction so that the second one runs the script. The second image inherits from the original one and runs the script, with or without the ONBUILD above. In docker-compose you would have a local build which would trigger the import while creating the local Docker image.
Just some ideas.
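A minimal sketch of the first approach, assuming a marker file on the data volume (the paths, the import script, and the final command are all placeholders):
#!/bin/sh
# entrypoint.sh (sketch): do the expensive init work only once per volume
MARKER=/data/.initialized
if [ ! -f "$MARKER" ]; then
  ./scripts/import_prod_db.sh   # hypothetical db import + indexing step
  touch "$MARKER"
fi
exec bundle exec rails server -b 0.0.0.0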
You can use extends in your compose *.yml files.
See the extends documentation and examples.
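A rough sketch of what that could look like, assuming a webapp service and an init_db.sh script (extends in this form needs the version 2 compose file format):
# docker-compose.init.yml (sketch): same service, different command
version: "2"
services:
  webapp:
    extends:
      file: docker-compose.yml
      service: webapp
    command: ./init_db.sh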

Dynamically get docker version during image build

I'm working on a project that requires me to run Docker within Docker. Currently, I am just relying on the docker client running within docker, and passing in an environment variable with the TCP address of the docker daemon I want to communicate with.
The line in the Dockerfile that I use to install the client looks like this:
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
However, the problem is that this will always download the latest Docker version. Ideally, the Docker instance running this container will always be on the latest version, but occasionally it may be a version behind (for example, I haven't yet upgraded from 1.2 to 1.3). What I really want is a way to dynamically get the version of the Docker instance that's building this Dockerfile, and then pass that into the URL to download the appropriate version of Docker. Is this at all possible? The only thing I can think of is an ENV instruction at the top of the Dockerfile, which I would need to set manually, but ideally I was hoping it could be set dynamically based on the actual version of the Docker instance.
While your question makes sense from an engineering point of view, it is at odds with the intention of the Dockerfile. If the build process depended on the environment, it would not be reproducible elsewhere. There is not a convenient way to achieve what you ask.
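That said, if you do accept an environment-dependent build, one hedged workaround is a small wrapper script that reads the daemon's version and passes it in as a build argument (this assumes a Docker version with ARG support; the versioned download URL pattern is also an assumption):
# build.sh (sketch): pin the client download to the host daemon's version
DOCKER_VERSION=$(docker version --format '{{.Server.Version}}')
docker build --build-arg DOCKER_VERSION="$DOCKER_VERSION" -t my-dind-image .
# In the Dockerfile:
ARG DOCKER_VERSION
RUN curl -s https://get.docker.io/builds/Linux/x86_64/docker-$DOCKER_VERSION -o /usr/local/bin/docker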
