I am a beginner with docker and have created a docker-compose.yml file. Everything is running well, but I want to move the container that is generated by "docker-compose up" to another machine. How can I save/export the container that runs the services from my docker-compose.yml and move it to another machine?
Thanks
To commit a running container to an image with a new tag, you can use this command:
$ docker commit <containerID> new_image_name:tag
To save new_image_name:tag to a file, use:
$ docker save -o new_file_name.tar new_image_name:tag
Now copy your docker-compose.yml and your new_file_name.tar to the same folder on the other machine.
On the target machine where these files are, run:
$ docker load --input new_file_name.tar
In docker-compose.yml, change the image: entry to your new_image_name:tag.
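For example, a minimal docker-compose.yml on the target machine might look roughly like this (the web service name and the port mapping are placeholders for your own setup, not part of the original question):

version: "3"
services:
  web:
    image: new_image_name:tag
    ports:
      - "80:80"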
If you lose track of the name, use:
$ docker images
and continue with the $ docker load step from above.
The last step is to run:
$ docker-compose up -d
I run some tests inside a docker container; at the end, test reports get generated in a directory called 'allure_test_results', and I would like those reports to be available on the host machine.
1. Command in a bash script that I run as the entrypoint in a Dockerfile:
behave -f allure_behave.formatter:AllureFormatter -o allure_test_results service/features/
2. The docker image will also be run in Jenkins CI, and I would like the same thing to happen there.
3. Solutions I tried (container is not running):
docker cp <container ID>:/allure_test_results/ allure_test_results/
docker run <image id> cp /allure_test_results/:/<repo root>/allure_test_results/
PS. It would be great if the copy could be done inside the Dockerfile or docker-compose.
I would really appreciate any help.
Thank you guys so much
I just figured it out. Thank you, great community.
I added this to the docker-compose file:
volumes:
- ./<host dir>/:/<container dir>/allure_test_results/
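For context, a complete compose service using that mapping could look roughly like this (the tests service name and the image name are placeholders, not taken from the original setup):

version: "3"
services:
  tests:
    image: my_tests_image:latest
    volumes:
      - ./allure_test_results/:/allure_test_results/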
You can map internal directories to host directories. With plain docker, use the following:
docker run -v <host_directory_path>:/allure_test_results/ docker_image:tag
In docker-compose, use the volumes mapping as Aziz said:
volumes:
  - <host_directory_path>:/allure_test_results/
Volume mounting is the way to do this in Docker:
docker run -v <jenkins_workspace_path>:/allure_test_results <docker_image:tag>
We will map the volume to the Jenkins workspace, and then you can publish those results in Jenkins.
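For example, inside a Jenkins job the WORKSPACE environment variable points at the job's workspace, so a run step could look like the sketch below (the image name is a placeholder):

docker run --rm -v "$WORKSPACE/allure_test_results":/allure_test_results docker_image:tag

After the run finishes, the reports are in $WORKSPACE/allure_test_results and can be published with whatever report plugin you use.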
Check this docker container:
https://github.com/fescobar/allure-docker-service
https://github.com/fescobar/allure-docker-service/tree/master/allure-docker-python-behave-example
Is there any way to find the source of the script that launches a Docker container? I have a setup where I cannot find any docker-compose.yml file, nor the bash script, etc., that would have run all the Docker containers currently running. I have a virtual machine that starts Docker containers on startup, but I have no idea which file is actually run.
I think there is no option to find out which docker-compose file was used.
But you can check manually in every project folder.
The docker-compose mechanism works by matching the docker-compose.yml file: if you run sudo docker-compose ps in each of your project folders, docker-compose matches the compose file used by the containers against the compose file in that folder. If they match, the containers are displayed; if not, nothing is displayed.
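A rough sketch of checking all project folders in one pass, assuming they live under ~/projects and each contains a docker-compose.yml (adjust the path to your layout):

for d in ~/projects/*/; do
  if [ -f "$d/docker-compose.yml" ]; then
    echo "== $d =="
    (cd "$d" && sudo docker-compose ps)
  fi
done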
If the containers are running automatically on reboot and you have no cron job, bash profile, rc.local, or any other startup script, then that may mean they are containers with the --restart option set. You can change that by running the commands below:
docker ps -q | xargs docker update --restart no
docker ps -q | xargs docker stop
Then restart the machine. The containers should not start. If they do, then you have some script somewhere which is starting them.
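If you want to see which containers actually have a restart policy set before changing anything, docker inspect can print it for each running container:

docker ps -q | xargs docker inspect -f '{{.Name}}: {{.HostConfig.RestartPolicy.Name}}'

Anything reporting always or unless-stopped will come back up on reboot without any external script.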
I have created a docker image of the working environment that I use for my project.
Now I am running the docker using
$ docker run -it -p 80:80 -v ~/api:/api <Image ID> bash
I do this because I don't want to develop on the command line; this way I can have my project in the api volume and can run the project from inside too.
Now, when I commit the container to share the latest development with someone, it doesn't pack the api volume.
Is there any way I can commit the shared volume along with the container?
Or is there any better way than the one I am using (a shared volume) to develop from the host and continuously have the changes reflected inside Docker?
One way to go is the following:
Dockerfile:
FROM something
...
COPY ./api /api
...
Then build:
docker build . -t myapi
Then run:
docker run -it -p 80:80 -v ~/api:/api myapi bash
At this point you have the myapi image with the first state (from when you copied with COPY), and at runtime the container has /api overridden by the bind mount.
Then, to share your image with someone, just build again, so you will get a new, updated myapi ready to be shared.
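To actually hand the rebuilt image over, you can either export it as a tarball or push it to a registry; the registry address below is only a placeholder:

docker build . -t myapi
docker save -o myapi.tar myapi        # send myapi.tar, load it on the other side with: docker load -i myapi.tar
docker tag myapi registry.example.com/myapi:latest   # or push to a registry instead
docker push registry.example.com/myapi:latest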
I have the following containers:
Data container, which is built directly on quay.io from a GitHub repo; it is basically a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (data container), it gets rebuilt (of course), and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to see the new content.
I started with a "backup approach", in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting/removing any service.
But the idea of moving the data from the data container onto the host really doesn't appeal to me, so I'm wondering if there is a "docker way", or a better way, of doing this.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Compiling the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
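For reference, here is how to check which anonymous volume each container is actually mounting (the old nginx container keeps the volume created by the first mustela-data container, while the recreated mustela-data gets a fresh one):

docker inspect -f '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}' nginx
docker inspect -f '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}' mustela-data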
Is there a way to save the contents of a container's "internal" HD? I have tried to use docker commit, but when I shut down the container and turn it on again, the contents that I downloaded or generated inside the container (logs, etc.) are gone.
When you start the container back up do you use docker start or docker run?
docker run -i -t docker/image /bin/bash will start a NEW container from the original image file.
docker start {dockercontainerID} will restart a previously running container. You can get a list of previous containers with docker ps -a.
If you have saved a container with docker commit {runningdocker} docker/image2, you will use the new image name, i.e. docker run -ti docker/image2 /bin/bash
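Putting that together, a minimal round trip (the container and image names are only examples) looks like:

docker run -it --name mywork docker/image /bin/bash
# work inside the container, download files, generate logs, then exit
docker commit mywork docker/image2
docker run -it docker/image2 /bin/bash    # the files you created are present in the new image

Note that anything written to a mounted volume is not included in the commit; only changes to the container's own filesystem are.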