Docker RabbitMQ runs then stops - docker

I am trying to follow this tutorial on setting up Docker clusters: https://levelup.gitconnected.com/setting-up-rabbitmq-cluster-c247d61385ed
I get to the point of running the following command, which I will also need to run for the other two nodes:
docker run -d --rm --net rabbit -v C:\RabbitPrototype\config\rabbit-1/:/config/ -e RABBITMQ_CONFIG_FILE=/config/rabbitmq -e RABBITMQ_ERLANG_COOKIE=ABCDYJLFQNTHDRZEPLOZ --hostname rabbit-1 --name rabbit-1 -p 8081:15672 rabbitmq:3.9-management
I can see it run in the Docker graphical container view, but after a couple of seconds it disappears; it seems the container stops running. What do I need to do in order to keep it running, and are there any logs to look at to see why it stopped?
I have removed the --rm as mentioned by @Omer.
I get this error
2021-12-16 16:24:41.403174+00:00 [erro] <0.130.0> Failed to load advanced configuration file "/config/rabbitmq.config": 1: syntax error before: '.'
My config file that I am trying to load looks like this, copied from the tutorial page, so I am currently not sure what the issue is with the . (dot) in the loopback_users.guest part on line one:
loopback_users.guest = false
listeners.tcp.default = 5672
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbit-1
cluster_formation.classic_config.nodes.2 = rabbit@rabbit-2
cluster_formation.classic_config.nodes.3 = rabbit@rabbit-3

The issue, judging from the error message, seems to be that RabbitMQ thinks you are providing it an advanced configuration file instead of the normal configuration file - https://www.rabbitmq.com/configure.html#advanced-config-file. Even though RabbitMQ 3.7+ supports sysctl-style configuration files (the format you used), the advanced configuration file still uses the classic configuration format (https://www.rabbitmq.com/configure.html#config-file-formats), which explains the syntax error.
From the docs - https://www.rabbitmq.com/configure.html#configuration-files
I am not sure why it would pick up the value of RABBITMQ_CONFIG_FILE as the advanced config file instead of the default one.
Can you update the question with the full logs? Even after the container is dead, you can check its logs using
docker logs rabbit-1

I seem to have something running now with the command below. I renamed rabbitmq.config to rabbitmq.conf and also mounted it into /etc/rabbitmq/, which seems to be the default location:
docker run --net rabbit -v C:\\RabbitPrototype\config\rabbit-1\:/etc/rabbitmq/ -e RABBITMQ_ERLANG_COOKIE=ABCDYJLFQNTHDRZEPLOZ --hostname rabbit-1 --name rabbit-1 -p 8081:15672 rabbitmq:3.9-management
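Alternatively, it may be possible on recent RabbitMQ releases to keep the original /config/ mount and point RABBITMQ_CONFIG_FILE at the renamed file including its .conf extension, so the new-style parser is used. A rough, untested sketch based on the original command (it assumes rabbitmq.conf now sits in the mounted directory):
docker run -d --net rabbit -v C:\RabbitPrototype\config\rabbit-1/:/config/ -e RABBITMQ_CONFIG_FILE=/config/rabbitmq.conf -e RABBITMQ_ERLANG_COOKIE=ABCDYJLFQNTHDRZEPLOZ --hostname rabbit-1 --name rabbit-1 -p 8081:15672 rabbitmq:3.9-management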

Related

How do I use options in a SurrealDB instance started with Docker?

According to the documentation, you can use a bunch of options (like --log, --user, etc.) to start a SurrealDB instance if it's installed locally.
But can I use flag options if I want to use the Docker image? I've tried running docker container run -p 8000:8000 --name surreal surrealdb/surrealdb:latest "start -p root memory" but I get this message, which is from SurrealDB:
error: The subcommand 'start -p root memory' wasn't recognized
Did you mean 'start'?
If you believe you received this message in error, try re-running with 'surreal -- start -p root memory'
USAGE:
surreal [SUBCOMMAND]
For more information try --help
The documentation doesn't specify this use case of Docker SurrealDB instances.
The example command on the GitHub page https://github.com/surrealdb/surrealdb#run-using-docker does not wrap the start arguments in quotes, and it ran without error for me.
docker run --rm --name surrealdb -p 127.0.0.1:8000:8000 surrealdb/surrealdb:latest start --log trace --user root --pass root memory
Most likely you can adjust it to your needs.
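The quoting is the key difference: with "start -p root memory" in quotes, the shell hands Docker a single argument, which the surreal CLI then tries to interpret as one (unrecognized) subcommand name, exactly as the error message shows. Passing the tokens unquoted, for example (a sketch reusing the port mapping and container name from the question):
docker run -p 8000:8000 --name surreal surrealdb/surrealdb:latest start --user root --pass root memory
gives each option to the CLI as a separate argument, which is what the README command above does.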

How to store configuration and database outside orientdb docker container

I'm stuck trying to store the OrientDB database and configuration outside of the Docker container I'm running. This is the first time I'm using both Docker and OrientDB, so my confusion is multilevel.
Based on https://hub.docker.com/_/orientdb/ I have successfully run the command docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -e ORIENTDB_ROOT_PASSWORD=rootpwd orientdb but I'm stuck trying to specify where on my local disk to store data and configuration so it's not lost when the container is stopped/removed.
I tried adding the -v <databases_path>:/orientdb/databases option, but to no avail. I'm probably missing something very basic (since this is my first hands-on experience with Docker and OrientDB). Trying to set up volumes in Docker Desktop and other trial-and-error tests has also failed.
Can anyone help? Or point me to some tutorial where I can learn because I'm stuck.
Thanks to @nulldroid I finally figured it out. It was the syntax that messed me up, as usual. The following command worked for me. No need to set up named volumes etc., just a correctly formatted path to the directory I had already created, using "/d/" at the beginning for the Windows "D:" drive:
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /d/docker/test1/databases:/orientdb/databases -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest
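If you also want the configuration stored outside the container, the same -v syntax can be used for the image's config directory as well. A sketch along the same lines (the /orientdb/config target path is an assumption about the image layout; adjust both paths to your setup):
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 -v /d/docker/test1/databases:/orientdb/databases -v /d/docker/test1/config:/orientdb/config -e ORIENTDB_ROOT_PASSWORD=root orientdb:latest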

how to start osixia/openldap image with volumes mounted?

I am running osixia/openldap and osixia/phpldapadmin (volumes mounted) with these commands :
docker run -p 389:389 -p 636:636 --name ldap-service --volume /data/slapd/database:/var/lib/ldap --volume /data/slapd/config:/etc/ldap/slapd.d --hostname ldap-service --detach osixia/openldap:1.2.3 --copy-service --loglevel debug
docker run --name phpldapadmin-service --hostname phpldapadmin-service --link ldap-service:ldap-host --env PHPLDAPADMIN_LDAP_HOSTS=ldap-host --detach osixia/phpldapadmin:0.7.2
On the first run it starts, but on restarting the containers with the same commands I get the error
- /container/run/startup/slapd failed with status 34
where status 34 refers to LDAP_INVALID_DN_SYNTAX.
I have not been able to find a solution for this. Any help?
I have solved this problem.
If you look at the documentation, you will find that error 34 stands for an invalid DN.
When you initialize your LDAP server with Docker, if the startup script does not find your LDAP_BASE_DN environment variable, it will generate one from LDAP_DOMAIN; for example, LDAP_DOMAIN="xxx.com" will lead to LDAP_BASE_DN="dc=xxx,dc=com".
But if you stop your container and start another one with the old volume mounted, the startup script will not generate LDAP_BASE_DN from your LDAP_DOMAIN; when you look at the debug log, you will find it is starting up with an empty DN. That is exactly why it won't start normally.
So the solution is clear: set LDAP_BASE_DN every time. If you use a docker-compose file, just add it to your "environment" section.
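For example, re-using the docker run command from the question with the base DN passed explicitly (the domain and DN here are placeholders; substitute your own):
docker run -p 389:389 -p 636:636 --name ldap-service --volume /data/slapd/database:/var/lib/ldap --volume /data/slapd/config:/etc/ldap/slapd.d --env LDAP_DOMAIN=example.com --env LDAP_BASE_DN=dc=example,dc=com --hostname ldap-service --detach osixia/openldap:1.2.3 --copy-service --loglevel debug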

How to debug persistent data volume mount for Docker Odoo container?

I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and see other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project
This seems like a significant gap in Docker's standard use case, and users need better information on how to debug it:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what it expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D)
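A one-off variant of the same check, without keeping a shell open, should also work (a small sketch, assuming ls is on the image's PATH):
sudo docker exec odoo ls -dn /var/lib/odoo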
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying that the Odoo Docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.

Get TeamCity running on Docker

I'm brand new to both TeamCity and Docker. I'm struggling to get a Docker container with TeamCity running and usable on my local machine. I've tried several things, to no avail:
I installed Docker for Mac per the instructions here. I then tried to run the following command, documented here, for setting up TeamCity in Docker:
docker run -it --name teamcity-server-instance \
-v c:\docker\data:/data/teamcity_server/datadir \
-v c:\docker\logs:/opt/teamcity/logs \
-p 8111:8111 \
jetbrains/teamcity-server
That returned the following error: docker: Error response from daemon: Invalid bind mount spec "c:dockerdata:/data/teamcity_server/datadir": invalid mode: /data/teamcity_server/datadir.
Taking a different tack, I tried to follow the instructions here - I tried running the following command:
docker run -it --name teamcity -p 8111:8111 sjoerdmulder/teamcity
The terminal indicated that it was starting up a web server, but I can't browse to it at localhost, nor at localhost:8111 (error ERR_SOCKET_NOT_CONNECTED without the port, and ERR_CONNECTION_REFUSED with the port).
Since the website with the docker run command says to install Docker via Docker Toolbox, I then installed that at the location they pointed to (here). I then tried the
docker-machine ip default
command they suggested, but it didn't work, error "Host does not exist: "default"". That makes sense, since the website said the "default" vm would be created by running Docker Quickstart and I didn't do that, but they don't provide any link to Docker Quickstart, so I don't know what they are talking about.
To try to get the IP address the container was running on, I tried this command
docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
That listed the names of the running containers, each followed by a hyphen, then nothing. I also tried
docker ps -a
That listed running containers also, but didn't give the IP. Also, the port is blank, and the status says "exited (130) 4 minutes ago", so it doesn't seem like the container stayed alive after starting.
I also tried again with port 80, hoping that would make the site show up at localhost:
docker run -it --name teamcity2 -p 80:80 sjoerdmulder/teamcity
So at this point, I'm completely puzzled and blocked - I can't start the server at all following the instructions on hub.docker.com, and I can't figure out how to browse to the site that does start up with the other instructions.
I'll be very grateful for any assistance!
JetBrains now provides official docker images for TeamCity. I would recommend starting with those.
The example command for their TeamCity server image looks like this:
docker run -it --name teamcity-server-instance \
-v <path to data directory>:/data/teamcity_server/datadir \
-v <path to logs directory>:/opt/teamcity/logs \
-p <port on host>:8111 \
jetbrains/teamcity-server
That looks a lot like your first attempt. However, c:\docker\data is a Windows file path. You said you're running this on a Mac, so that's definitely not going to work.
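On a Mac you would substitute host paths that actually exist on your machine, for example (a sketch with made-up example directories; create them first or pick your own):
docker run -it --name teamcity-server-instance \
-v ~/teamcity/data:/data/teamcity_server/datadir \
-v ~/teamcity/logs:/opt/teamcity/logs \
-p 8111:8111 \
jetbrains/teamcity-server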
Once TeamCity starts, it should be available on port 8111. That's what the -p 8111:8111 part of the command does: it maps port 8111 on your machine to port 8111 in the VM that Docker for Mac creates to run your containers. ERR_CONNECTION_REFUSED could be caused by several things. The two most likely possibilities are:
TeamCity could take a little while to start up, and maybe you didn't give it enough time. The solution is to wait.
-it starts the TeamCity container in interactive mode. If you exit the terminal window where you ran the command, the container will probably also terminate and become inaccessible. The solution is to not close the window, or to run the container in detached mode (see the sketch below).
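A minimal sketch of the detached variant (same image and port mapping as above; volume mounts omitted for brevity):
docker run -d --name teamcity-server-instance -p 8111:8111 jetbrains/teamcity-server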
There is a good overview of the differences between Docker for Mac and Docker Toolbox here: Docker for Mac vs. Docker Toolbox. You don't need both, and for most cases you'll want to use Docker for Mac for testing stuff out locally.
