I've got a container running on one of my systems (outside of my Docker swarm) that links host directories as volumes:
docker run -d --name=plex --restart=always -v /plex/config:/config -v /movies:/movies --net=host -p 32400:32400 -e X_PLEX_TOKEN=$PLEXTOKEN wernight/plex-media-server:autoupdate
I'd like to get some high availability with my swarm so that if my Plex container goes down on one host, it comes up on another host. I'm using NFS for my mounts (movies and plex) and I've got them mounted on each host.
I've started with this:
docker service create --name plex --restart=any --mount /plex/config:/config --mount /movies:/movies -p 32400:32400 -e X_PLEX_TOKEN=$PLEXTOKEN wernight/plex-media-server:autoupdate
But this fails because --mount expects a key=value pair. The documentation is sparse, so I'm not sure where to go from here. Without the mounts the service starts up fine.
What is the associated command to create a service that will bring up my "Plex" instance on other nodes in my swarm in case of a failure?
I tend to agree that the docs here are pretty thin. Here's the syntax I've seen so far:
docker service create --name plex --restart-condition any \
--mount type=bind,source=/plex/config,target=/config \
--mount type=bind,source=/movies,target=/movies \
-p 32400:32400 -e X_PLEX_TOKEN=$PLEXTOKEN \
wernight/plex-media-server:autoupdate
There's also some discussion on changing this format over on GitHub.
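Since your movies and Plex config directories are NFS exports that are already mounted on every host, the bind-mount form above works as long as those mounts exist on each node. Another option (a sketch only; the NFS server name and export path below are placeholders, and I haven't tested this exact spec) is to have the service mount an NFS-backed named volume, so each node mounts the share itself:
docker service create --name plex \
--mount type=volume,source=movies,target=/movies,volume-driver=local,volume-opt=type=nfs,volume-opt=device=nfs.example.com:/export/movies,volume-opt=o=addr=nfs.example.com \
-p 32400:32400 -e X_PLEX_TOKEN=$PLEXTOKEN \
wernight/plex-media-server:autoupdate
A second --mount of the same form would cover /plex/config.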
I used to have a Docker volume for mariadb, which contained my database. As part of migration from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1. Copy the content of the named Docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2. Run podman run:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing the host-mounted volume to /opt/volumes/mydb_vol/_data and the container volume to /var/lib/mysql, both simultaneously and one at a time. None of them worked.
As a matter of fact, when I run "podman exec -ti container_digest bash" inside the resulting container, I can see that the table files are present in the specified container directory, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a docker volume, but instead mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb
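For step 1, note that the files of a named Docker volume live under its _data subdirectory, and MariaDB cares about ownership and permissions, so the copy can be done with something like this (a sketch; the destination matches the bind-mount source used above):
sudo mkdir -p /opt/volumes/mydb_vol/data
sudo cp -a /var/lib/docker/volumes/mydb_vol/_data/. /opt/volumes/mydb_vol/data/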
I have a setup with Docker in Docker and am trying to mount folders.
Let's say I have folders that I wish to share with the parent container. On the host, I created a file called foo in /tmp/dind. The host starts container 1, which starts container 2. This is the result I want to have:
Host         | Container 1 | Container 2
/tmp/dind    | /tmp/dind2  | /tmp/dind3
       <---------->  <---------->
Instead, I get
Host         | Container 1 | Container 2
/tmp/dind    | /tmp/dind2  | /tmp/dind3
       <---------->
       <------------------------->
Code here:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind2:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
This outputs nothing, while the next command, where I changed the mounted volume, prints foo:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
The question is, what do I need to do in order to use Container 1's path and not the host's? Or am I misunderstanding something about Docker here?
For all that you say “Docker-in-Docker” and “dind”, this setup isn’t actually Docker-in-Docker: your container1 is giving instructions to the host’s Docker daemon that affect container2.
Host                       Container1
  (Docker daemon) <------- docker CLI (talking over /var/run/docker.sock)
        |
        \--------> Container2   (a sibling of Container1, not a child)
(NB: this is generally the recommended path for CI-type setups. “Docker-in-Docker” generally means container1 is running its own, separate, Docker daemon, which tends to not be recommended.)
Since container1 is giving instructions to the host’s Docker, and the host’s Docker is launching container2, any docker run -v paths are always the host’s paths. Unless you know that some specific directory has already been mounted into your container, it’s hard to share files with “sub-containers”.
One way to get around this is to assert that there is a shared path of some sort:
docker run \
-v $PWD/exchange:/exchange \
-v /var/run/docker.sock:/var/run/docker.sock \
-e EXCHANGE_PATH=$PWD/exchange \
--name container1 \
...

# from within container1: the shared directory is visible at /exchange here,
# but the host's Docker daemon only knows it by its host path ($EXCHANGE_PATH)
mkdir /exchange/container2
echo hello world > /exchange/container2/file.txt
docker run \
-v $EXCHANGE_PATH/container2:/data \
--name container2 \
...
When I’ve done this in the past (for a test setup that wanted to launch helper containers) I’ve used a painstaking docker create, docker cp, docker start, docker cp, docker rm sequence, as sketched below. That’s extremely manual, but it has the advantage that the “local” side of a docker cp is always the current filesystem context, even if you’re talking to the host’s Docker daemon from within a container.
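For illustration, that sequence looks roughly like this (the image name and file paths are hypothetical):
docker create --name helper some-helper-image      # create the container, but don't start it yet
docker cp ./input.txt helper:/work/input.txt       # the "local" side is the filesystem you run docker cp from
docker start -a helper                             # start it and wait for it to exit
docker cp helper:/work/output.txt ./output.txt     # copy the results back out
docker rm helper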
It does not matter that container 2 binds the host path, because changes to the files in container 1 directly affect the host path as well; they all work on the same files.
So your setup is correct and will behave the same as if the directories were chained the way you described.
Update
If you want to make sure that the processes do not modify the host files, you could do the following:
Build a custom Docker image that copies all data from folder /a to folder /b and then runs your script against folder /b. Then mount the files with ./:/a. This way you keep the flexibility of choosing which files you bind into the container without letting the container modify the host files.
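A minimal sketch of that idea, without even building a custom image (the image name and script path are placeholders): mount the host directory read-only at /a, copy it to /b inside the container, and run the script against /b:
docker run --rm -v "$PWD":/a:ro some-image \
sh -c 'cp -r /a /b && /b/script.sh'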
I hope this answers your question :)
I've been doing a bit of reading up about setting up a dockerized RabbitMQ cluster and Google turns up all sorts of results for doing so on the same machine.
I am trying to set up a RabbitMQ cluster across multiple machines.
I have three machines with the names dockerswarmmodemaster1, dockerswarmmodemaster2 and dockerswarmmodemaster3
On the first machine (dockerswarmmodemaster1), I issue the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 -p 15672:15672 \
-p 25672:25672 --hostname dockerswarmmodemaster1 --name roger_rabbit \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
Now this starts up RabbitMQ just fine, and I can go to the admin page on port 15672 and see that it is working as expected.
I then SSH to my second machine (dockerswarmmodemaster2) and this is the bit I am stuck on. I have been trying variations on the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit -e CLUSTERED=true \
-e CLUSTER_WITH=rabbit@dockerswarmmodemaster1 \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
No matter what I try, the web page on both RabbitMQ machines says that there is no cluster under the 'cluster links' section. I haven't tried involving the third machine yet.
So - some more info:
The machine names are resolvable by DNS.
I have tried using the --net=host switch in the docker run command on both machines; no change.
I am not using docker swarm or swarm mode.
I do not have docker compose installed. I'd prefer not to use it if possible.
Is there any way of doing this from the docker run command or will I have to download the rabbit admin cli and manually join to the cluster?
You can use this plugin https://github.com/aweber/rabbitmq-autocluster to create a RabbitMQ docker cluster.
The plugin uses etcd2 or Consul for service discovery; this way you don't need to use the rabbitmqctl command line.
I used it with Docker Swarm, but Swarm is not required.
The official image does not seem to support the environment variables CLUSTERED and CLUSTER_WITH. It supports only the variables listed in the RabbitMQ Configuration documentation.
According to the official Clustering Guide, one possible solution is a configuration file, so you can just provide your own configuration to the container.
A modified default configuration for your case would look like this:
[
  { rabbit, [
    { loopback_users, [ ] },
    { cluster_nodes, {['rabbit@dockerswarmmodemaster1'], disc }}
  ]}
].
Save this snippet to, for example, /home/user/rmq/rabbitmq.config.
Hint: if you want to see the node in the management console, you need to add another file, /home/user/rmq/enabled_plugins, containing only the line
[rabbitmq_management].
After that, your command will look like this:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit \
-v /home/user/rmq:/etc/rabbitmq \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
PS: You may also need to consider setting the environment variable RABBITMQ_USE_LONGNAME.
In order to create a cluster, all RabbitMQ nodes that are to form the cluster must be able to reach each other by node name (hostname).
You need to specify a hostname for each Docker container with the --hostname option and add /etc/hosts entries for all the other containers; you can do this with the --add-host option or by manually editing the /etc/hosts file.
So, here is an example of a 3-node RabbitMQ cluster with Docker containers (rabbitmq:3-management image).
First, create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1. We are going to have the following:
3 docker containers named rab1con, rab2con and rab3con
their IPs will be 172.18.0.11, 172.18.0.12 and 172.18.0.13 respectively
each of them will have the host name respectively rab1, rab2 and rab3
all of them must share the same erlang cookie
Spin up the first one
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 --add-host rab2:172.18.0.12 --add-host rab3:172.18.0.13 --name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
second one
docker run -d --net mynet1 --ip 172.18.0.12 --hostname rab2 --add-host rab1:172.18.0.11 --add-host rab3:172.18.0.13 --name rab2con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
last one
docker run -d --net mynet1 --ip 172.18.0.13 --hostname rab3 --add-host rab2:172.18.0.12 --add-host rab1:172.18.0.11 --name rab3con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
Then, in container rab2con, do
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rab1
rabbitmqctl start_app
and the same in rab3con and that's it.
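If you prefer to run those commands from the host rather than opening a shell in each container, the same steps work through docker exec (container names as above):
docker exec rab2con rabbitmqctl stop_app
docker exec rab2con rabbitmqctl join_cluster rabbit@rab1
docker exec rab2con rabbitmqctl start_app

docker exec rab3con rabbitmqctl stop_app
docker exec rab3con rabbitmqctl join_cluster rabbit@rab1
docker exec rab3con rabbitmqctl start_app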
I've been following these two tutorials to understand a bit about Docker networking:
https://docs.docker.com/engine/examples/running_redis_service/
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks
The first tutorial says that the container does not expose ports because it does not use the -p or -P flags.
$ docker run --name redis-server -d <your username>/redis
And when running another container, it uses the --link flag to link to the "redis" container:
$ docker run --name redis-client --link redis:db -i -t ubuntu:14.04 /bin/bash
That way I can connect from the redis-client container to the redis-server container because they are linked. But while experimenting with other configurations, I ran another container, let's call it redis-client-2 -- just after I stopped and removed the redis-client container -- that doesn't use the --link flag:
$ docker run --name redis-client-2 -i -t ubuntu:14.04 /bin/bash
And I noticed that even without the --link flag set, I can connect to the redis-server container's Redis server from redis-client-2.
My question is, am I misunderstanding the concept of --link and exposed ports in Docker? Why can I still connect to the redis-server container with or without the --link flag?
Thanks in advance
Docker containers on the same Docker network as each other (the default bridge network, if none is set up) can communicate with each other freely. --link is a vestigial feature from before the days of first-class Docker networking.
The -p & -P options only relate to exposing ports outside of the Docker network (i.e. to the host) and have no bearing on container-to-container communication.
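For example, on the default bridge network you can look up the server container's IP and connect to it directly (a sketch; a Redis client would have to be installed inside the ubuntu container first):
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis-server
# then, from inside redis-client-2, using the IP printed above:
redis-cli -h <that-ip> -p 6379 ping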
I need to connect my db container with my server container. Now I just read about the legacy parameter --link, which works perfectly:
$> docker run -d -P --name rethinkdb1 rethinkdb
$> docker run -d --link rethinkdb:db my-server
But if this parameter will eventually be dropped, how would I do something like the above?
The docs say to use the docker network command instead (available since Docker 1.9.0, released 2015-11-03).
Instead of
$> docker run -d -P --name rethinkdb rethinkdb
$> docker run -d --link rethinkdb:rethinkdb my-server
you will now use
$> docker network create my-network
$> docker run -d -P --name rethinkdb1 --net=my-network rethinkdb
$> docker run -d --net=my-network my-server
Note that in the new form, container names are used, while before you were able to define an alias.
When two containers are part of the same user-defined network, Docker's embedded DNS resolves their container names (and any network aliases), so you can use the container names instead of their IP addresses.
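If my-server expects to reach the database under the name db (as it did with --link rethinkdb:db), a network-scoped alias can restore that name (a sketch; --network-alias only works on user-defined networks):
$> docker network create my-network
$> docker run -d -P --name rethinkdb1 --net=my-network --network-alias db rethinkdb
$> docker run -d --net=my-network my-server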