I am running RethinkDB inside a Docker container with a bind mount (a directory bound to the host).
When RethinkDB runs outside a container, taking a database dump with a shell script is easy.
But now RethinkDB runs inside a Docker container, and I want to take the dump with a shell script there.
Since RethinkDB runs inside the container, all rethinkdb commands run inside the container (not on the host system).
So how do I set up automated dumps of RethinkDB running inside the container?
You can use the same script that you use normally when the database is not containerized. You can achieve that by copying the script into the container and then executing it inside.
docker cp dump.sh <container-name>:<script-container-path>
docker exec -it <container-name> <script-container-path>
The above commands will copy the dump script and execute it inside the container.
The resulting dump will be inside the container; you can copy it back to the host using docker cp <container-name>:<dump-path> .
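For example, here is a minimal host-side sketch (the container name rethinkdb, the /data path, and the ./backups directory are assumptions, and the image must include the rethinkdb dump utility):

#!/bin/bash
# dump.sh - run on the host, not in the container
set -e
STAMP=$(date +%Y%m%d-%H%M%S)
# create the archive inside the container...
docker exec rethinkdb rethinkdb dump -f /data/dump-$STAMP.tar.gz
# ...and copy it back to the host
docker cp rethinkdb:/data/dump-$STAMP.tar.gz ./backups/

To automate it, schedule the script with cron on the host, e.g. 0 2 * * * /path/to/dump.sh for a nightly dump. If /data is your bind-mounted directory, the dump is already visible on the host and the docker cp step can be dropped.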
I am using docker-compose for deployment.
I want to restart my "centos-1" container from the "centos-2" container. Both containers are running on the same host.
Please suggest how I could achieve this in the simplest, most automated way.
I followed How to run shell script on host from docker container? and tried to run a script on the host from the "centos-2" container, but the script executed inside the container, not on the host.
Script:
#!/bin/bash
sudo docker container restart centos-1
Error:
line 2: docker: command not found
(Docker isn't installed inside the centos-2 container)
You need to:
1. Install the Docker CLI (command-line interface) in the second container. Don't confuse this with a full-scale installation: you don't need the Docker daemon, only the command-line tool (the docker executable).
2. Share your host's Docker daemon (service) to make it accessible in the second container. That is achieved by simply sharing /var/run/docker.sock when launching the second container, for example:
docker run ... -v "/var/run/docker.sock:/var/run/docker.sock" container2 ...
Now you can execute any docker command, like docker stop, from the second container, and these commands are happily passed to your main (and only) Docker daemon.
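Since you deploy with docker-compose, the same share can live in the compose file; a minimal sketch (service and image names are assumptions):

services:
  centos-2:
    image: centos:7
    volumes:
      # hand the host's Docker daemon to this container
      - /var/run/docker.sock:/var/run/docker.sock

After that, your script (with the Docker CLI installed in centos-2) can run docker container restart centos-1 against the host's daemon.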
There is an approach from the CI context for controlling the host's Docker daemon from a running container, called Docker-out-of-Docker (DooD):
1. Install the Docker client inside your container.
2. Map the Docker socket from the host into your container using a volume:
-v /var/run/docker.sock:/var/run/docker.sock
Now each docker command inside your container is executed against the host's Docker installation. E.g., if you type docker image list inside your container, you should see the same list as when you run the command on your host.
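As a quick check (a sketch; the docker:cli image is an assumption, and any image that ships the Docker client works the same way):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker image list

This should print the host's image list, because the client inside the container talks to the host's daemon through the shared socket.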
E.g., my local setup is to run a server (listening on :1111) and a Docker container (with -p 5555:5555). The purpose is that the server can also send requests to the Docker container (to :5555).
I think the typical way to deploy a server is to wrap it in a Docker image and run that image in the cloud. How can I do the same thing, but have my server run my custom Docker container automatically (e.g., add a docker run command to a Dockerfile)?
I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root#host $ docker container ls
123456789 app_mage .... app
I would like to create a container on the host from within app:
root#123456789 $ docker run --name app2 ...
root#host docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to run on Docker and to launch arbitrary applications in an isolated environment (but I'd rather avoid Docker-in-Docker).
A majority of the Docker community veers away from these types of designs; however, it is very doable.
Similar to Starting and stopping docker container from other container, you can simply mount the docker.sock file from the host machine into the container, giving it access to the Docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container, which would in turn access the Docker daemon on the host machine hosting the container that you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    # talks to the host's daemon through the mounted docker.sock;
    # detach=True returns immediately while test2 keeps running
    docker.from_env().containers.run("image2", name="test2", detach=True)
This example starts container test1 and runs the method above inside it, which in turn creates a new container, test2, running on the same host as test1.
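Note that the snippet assumes the Docker SDK for Python is installed in test1's image:

pip install docker

It reaches the host's daemon only because test1 was started with docker.sock mounted, as in the docker run line above.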
I am creating a Spring Boot monitoring agent that collects Docker metrics. The agent can be attached through a POM dependency to any client Spring Boot application that runs inside a Docker container.
In the agent, I am trying to programmatically run docker stats.
But it fails to execute because the container doesn't have the Docker client installed in it.
So how can I run docker commands in a Docker container? Please note, I can't make changes to the client's Dockerfile.
You can execute docker commands within the container by exposing the Docker socket to the container.
Run the container and mount docker.sock in the following manner:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
So mainly, you have to mount docker.sock in order to run docker commands within the container.
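For example, once the socket is mounted (and assuming the Docker CLI is present in the image), the agent can shell out to something like:

docker stats --no-stream --format "{{.Name}}: {{.CPUPerc}} {{.MemUsage}}"

If you can't get the CLI into the client image, the daemon's HTTP API is reachable over the same socket, e.g. (assuming curl is available):

curl --unix-socket /var/run/docker.sock http://localhost/containers/json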
How can I access a path of a container from docker-machine? I have the docker-machine IP and I want to connect remotely into a Docker image, e.g.:
When I connect with ssh docker@5.5.5.5, all the files are the docker-machine's, but I want to connect to a Docker image via SSH.
When I use the command docker exec -u 0 -it test bash, all the files from the image are there, but I want to access it over SSH using docker-machine.
How can I do it?
This is tricky, as Docker is designed to run a single process in the foreground, and the container dies when that process completes. This means Docker containers don't run anything beyond what you define in the Dockerfile or docker-compose.yml.
What you can try is using a docker-compose.yml file to expose port 22 to the outside world (this can also be done on the command line or in a Dockerfile). This is NOT guaranteed to work, as it requires the image to run an SSH daemon, and in most cases an image runs just its one process.
If you're looking to persist files used by containers, so that a re-deployed container starts where it left off, you can mount a folder from the host machine into the container as a volume.
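As a command-line sketch of that idea (the image name, user, and paths are assumptions, and the image must actually run sshd):

# expose the container's sshd on host port 2222 and persist /data
docker run -d --name test -p 2222:22 -v "$PWD/data:/data" my-image-with-sshd
# connect through the docker-machine IP
ssh root@$(docker-machine ip) -p 2222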