How to persist data in Prometheus running in a Docker container?

I'm developing something that needs Prometheus to persist its data between restarts. Following the instructions, I ran:
$ docker volume create a-new-volume
$ docker run \
--publish 9090:9090 \
--volume a-new-volume:/prometheus-data \
--volume "$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
I have a valid prometheus.yml in the right directory on the host machine and it's being read by Prometheus from within the container. I'm just scraping a couple of HTTP endpoints for testing purposes at the moment.
But when I restart the container it's empty, with no data from the previous run. What am I missing from my docker run ... command to persist the data into the a-new-volume volume?

Use the default data dir, which is /prometheus. To do that, use this line instead of what you have in your command:
...
--volume a-new-volume:/prometheus \
...
Found here: https://github.com/prometheus/prometheus/blob/master/Dockerfile
Surprisingly, this is not mentioned in the image docs.
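With that change, the asker's full command becomes (everything else unchanged):
$ docker volume create a-new-volume
$ docker run \
--publish 9090:9090 \
--volume a-new-volume:/prometheus \
--volume "$(pwd)"/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus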

I had the same issue today, but I was using a Docker Compose file. So here is a wrap-up of what was in the comments of the other answers and what worked for me, in case you are setting up Prometheus via a YAML Compose file.
First create a folder for the volume on the host machine, e.g.:
$ mkdir /tmp/prometheus
Then change the folder owner to nobody (use sudo if needed):
$ chown 65534:65534 /tmp/prometheus
Then add volume to the yaml configuration file:
prometheus:
  image: prom/prometheus
  container_name: prometheus
  ports:
    - 9090:9090
  volumes:
    - /tmp/prometheus:/prometheus
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
That should do it.
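Alternatively, if you would rather let Docker manage the storage than prepare /tmp/prometheus by hand, a named volume declared in a top-level volumes: block usually avoids the mkdir and chown steps, because Docker seeds a new named volume with the ownership of the image's /prometheus directory. A minimal sketch (the volume name prometheus-data is just an example):
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - 9090:9090
    volumes:
      - prometheus-data:/prometheus
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
volumes:
  prometheus-data: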

Related

Undefined volume with Docker Compose

I wanted to translate this docker CLI command (from smallstep/step-ca) into a docker-compose.yml file to run with docker compose (version 2):
docker run -d -v step:/home/step \
-p 9000:9000 \
-e "DOCKER_STEPCA_INIT_NAME=Smallstep" \
-e "DOCKER_STEPCA_INIT_DNS_NAMES=localhost,$(hostname -f)" \
smallstep/step-ca
This command successfully starts the container.
Here is the compose file I "composed":
version: "3.9"
services:
ca:
image: smallstep/step-ca
volumes:
- "step:/home/step"
environment:
- DOCKER_STEPCA_INIT_NAME=Smallstep
- DOCKER_STEPCA_INIT_DNS_NAMES=localhost,ubuntu
ports:
- "9000:9000"
When I run docker compose up (again, using v2 here), I get this error:
service "ca" refers to undefined volume step: invalid compose project
Is this the right way to go about this? I'm thinking I missed an extra step with volume creation in docker compose projects, but I am not sure what that would be, or if this is even a valid use case.
The Compose file also has a top-level volumes: block and you need to declare volumes there.
version: '3.9'
services:
  ca:
    volumes:
      - "step:/home/step"
    et: cetera
volumes:   # add this section
  step:    # does not need anything underneath this
There are additional options possible, but you do not usually need to specify these unless you need to reuse a preexisting Docker named volume or you need non-standard Linux mount options (the linked documentation gives an example of an NFS-mount volume, for example).
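For illustration, a named volume with non-standard options would be declared with driver options under the same top-level key. This is only a sketch mirroring the NFS example from the linked documentation; the address and export path are placeholders:
volumes:
  step:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.1,rw
      device: ":/path/to/dir"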
Citing the Compose specification:
To avoid ambiguities with named volumes, relative paths SHOULD always begin with . or ...
So it should be enough to make your volume's host path relative:
services:
  ca:
    volumes:
      - ./step:/home/step
If you don't intend to share the step volume with other containers, you don't need to define it in the top-level volumes key:
If the mount is a host path and only used by a single service, it MAY be declared as part of the service definition instead of the top-level volumes key.
It seems that docker-compose doesn't know about a volume you created manually with a command like sudo docker volume create my_xx_volume.
So instead you can simply mkdir a folder and chmod 777 <my_folder>, and your MySQL container will use it just fine.
(In a production environment, don't use chmod; use chown to set the file ownership instead.)
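If you do want Compose to reuse a volume that was created manually with docker volume create, declaring it as external in the top-level volumes: block is enough. A minimal sketch (the service name and mount path here are assumptions based on the MySQL example above):
services:
  mysql:
    image: mysql:8.0
    volumes:
      - my_xx_volume:/var/lib/mysql
volumes:
  my_xx_volume:
    external: true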

Docker for Windows - Mount directory is coming empty

I have an image with MySQL installed. I need to map the /var/lib/mysql directory to my host system. That directory is populated with MySQL data when I run the image with the following command:
docker run --rm -it --env-file=envProxy --network mynetwork --name my_db_dev -p 3306:3306 my_db /bin/bash
Now when I try to mount a directory from my host (Windows 10) by running another container from the same image, the mysql directory is blank.
docker run --rm -it --env-file=envProxy --network mynetwork -v D:/docker/data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
I also tried this, but neither works:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:\docker\data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
One thing I do see is that the mysql directory in that path is now owned by root, instead of mysql as in the previous case.
I want all the content from the existing container (the mysql directory) to be copied back to the host mount directory.
Is that possible? And how can that be achieved?
I had the same problem on Docker Desktop (2.0.0.3 (31259)). I got the solution from this issue.
I ensured the containers were stopped, opened Docker settings, selected "Shared Drives", removed the tick on "C" and added it again. Docker asked for the Windows account credentials and I entered the new ones. After that and starting the containers, the mounted volumes were fine. Problem solved.
You may be able to fix the problem more simply by just resetting the credentials in Docker Settings.
If you need to get files from the container to the host, it's better to use the docker cp command: https://docs.docker.com/engine/reference/commandline/cp/
It will look like:
docker cp my_db_dev1:/var/lib/mysql d:\docker\data
UPD
Actually I want to persist the database files across other containers,
so I wanted to use volumes
In this case you have to:
Start using docker-compose to orchestrate the containers.
In docker-compose.yml, create a volume which is shared between all containers. Something like:
docker-compose.yml
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - myvol:/data
  db2:
    image: whatever2
    volumes:
      - myvol:/data
volumes:
  myvol:
Description: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
Write Windows paths with backslashes ('\'), and it is recommended to use variables to specify the path. On Linux, on the other hand, use forward slashes ('/'). For example:
docker run -it -v %userprofile%\work\myproj\some-data:/var/data
First create a folder structure like below:
C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG
C:\Users\rajit\MYSQL_DATA\DATA_DIR
then adjust the commands like below:
docker pull mysql:8.0
docker run --name mysql-docker -v C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG:/etc/mysql/conf.d --env="MYSQL_ROOT_PASSWORD=root" --env="MYSQL_PASSWORD=root" --env="MYSQL_DATABASE=test_db" -v C:\Users\rajit\MYSQL_DATA\DATA_DIR:/var/lib/mysql -d -p 3306:3306 mysql:8.0 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Try turning off your antivirus program or firewall. Then click on "Reset credentials" under Settings / Shared Drives.
That worked for me.
Best regards.

Zeppelin Docker Interpreter Configuration

I have created a Zeppelin Docker image on my local system, configured the Spark interpreter through the Maven repositories, and ran Zeppelin; it worked. But when I stop the Docker container and run it again, the interpreter binding is gone. How do I solve this issue? I want to set the interpreter binding once, so that whenever I stop the container and run it again it keeps those interpreter bindings as they are.
You need 3 volumes for persisting configurations, notebooks and logs.
Note: If you added custom interpreters, you need an additional volume for your interpreter binaries.
docker volume create zeppelin-conf
docker volume create zeppelin-notebook
docker volume create zeppelin-logs
docker volume create zeppelin-interpreter
Run the container with the above volumes mounted.
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf -v zeppelin-notebook:/zeppelin/notebook -v zeppelin-logs:/zeppelin/logs -v zeppelin-interpreter:/zeppelin/interpreter apache/zeppelin:0.8.1
If you just want to persist configurations, you can use the following lines:
docker volume create zeppelin-conf
docker run -d --restart always -p 8080:8080 -v zeppelin-conf:/zeppelin/conf apache/zeppelin:0.8.1
Configurations: /zeppelin/conf
Notebooks: /zeppelin/notebook
Logs: /zeppelin/logs
Interpreters: /zeppelin/interpreter
Edit: The /zeppelin directory is the default Zeppelin home directory in the Docker image. See the Dockerfile. Therefore, you don't need to specify the ZEPPELIN_NOTEBOOK_DIR, ZEPPELIN_LOG_DIR or ZEPPELIN_INTERPRETER_DIR environment variables.
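For reference, a docker-compose equivalent of the run command above, with the same four named volumes, might look like this (a sketch, not part of the original answer):
version: '3'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - zeppelin-conf:/zeppelin/conf
      - zeppelin-notebook:/zeppelin/notebook
      - zeppelin-logs:/zeppelin/logs
      - zeppelin-interpreter:/zeppelin/interpreter
volumes:
  zeppelin-conf:
  zeppelin-notebook:
  zeppelin-logs:
  zeppelin-interpreter: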
Mounting a file into docker run is easy: just pass it to the --volume parameter. But in Zeppelin's case some parameters are pre-configured in that file, so replacing it with an empty file is most likely not what you want to achieve. So I recommend first getting that file with its default content from the container and then bind-mounting it in the next runs. Please follow these step-by-step instructions:
First we prepare the default config for the next runs.
Run a default container temporarily:
sudo docker run -d --name zeppelin-test apache/zeppelin:0.8.1
And get the default config from it:
mkdir -p conf
sudo docker exec zeppelin-test cat /zeppelin/conf/interpreter.json > conf/interpreter.json
Note 1: This temporary container will not be used for real work, so most of its parameters are unimportant. This only needs to be done once, for setup!
Note 2: Because that config is populated on start, unfortunately you can't obtain it in a single run like: sudo docker run --rm apache/zeppelin:0.8.1 cat /zeppelin/conf/interpreter.json
Now we can use it as a bind mount.
If you use the direct docker run method without docker-compose, add this option, among others: --volume $(pwd)/conf/interpreter.json:/zeppelin/conf/interpreter.json
But I recommend using docker-compose, where the option is placed under the volumes: key, like - ./conf/interpreter.json:/zeppelin/conf/interpreter.json. Full example:
version: '3.7'
services:
  zeppelin:
    image: apache/zeppelin:0.8.1
    ports:
      - "7077:7077"
      - "8080:8080"
    volumes:
      - ./logs:/logs
      - ./notebook:/notebook
      - ./conf/interpreter.json:/zeppelin/conf/interpreter.json
    environment:
      ZEPPELIN_NOTEBOOK_DIR: /notebook
      ZEPPELIN_LOG_DIR: /logs
And then just run from that directory:
docker-compose up -d
Interpreter bindings are stored in conf/interpreter.json, so you need to use an external interpreter.json file.

Docker RabbitMQ persistency

RabbitMQ running in Docker loses its data after the container is removed, even though the volumes are not removed.
My Dockerfile:
FROM rabbitmq:3-management
ENV RABBITMQ_HIPE_COMPILE 1
ENV RABBITMQ_ERLANG_COOKIE "123456"
ENV RABBITMQ_DEFAULT_VHOST "123456"
My run script:
IMAGE_NAME="service-rabbitmq"
TAG="${REGISTRY_ADDRESS}/${IMAGE_NAME}:${VERSION}"
echo $TAG
docker rm -f $IMAGE_NAME
docker run \
-itd \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
After removing the container, all data are lost.
How do I configure RabbitMQ in docker with persistent data?
Rabbitmq uses the hostname as part of the folder name in the mnesia
directory. Maybe add a --hostname some-rabbit to your docker run?
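Applied to the run script from the question, that would just mean adding one line (the hostname some-rabbit is only an example; any fixed name works):
docker run \
-itd \
--hostname some-rabbit \
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \
--name "service-rabbitmq" \
--dns=8.8.8.8 \
-p 8080:15672 \
$TAG
With a fixed hostname, the mnesia directory name stays the same across runs, so the data in the rabbitmq_data volume is picked up again.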
I had the same issue and I found the answer here.
TL;DR
Didn't do too much digging on this, but it appears that the simplest way to do this is to change the hostname as Pedro mentions above.
MORE INFO:
Using RABBITMQ_NODENAME
If you want to edit the RABBITMQ_NODENAME variable via Docker, it looks like you need to add a hostname as well since the Docker hostnames are generated as random hashes.
If you change the RABBITMQ_NODENAME var to something static like my-rabbit, RabbitMQ will throw something like an "nxdomain not found" error because it's looking for something like my-rabbit@<docker_hostname_hash>. If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value, like my-rabbit@<docker_hostname_hash>, I believe it would work.
UPDATE
I previously said,
If you know the Docker hostname and can automate pulling it into your RABBITMQ_NODENAME value, like my-rabbit@<docker_hostname_hash>, I believe it would work.
This would not work as described precisely because the default docker host name is randomly generated at launch, if it is not assigned explicitly. The hurdle would actually be to make sure you use the EXACT SAME <docker_hostname_hash> as your originating run so that the data directory gets picked up correctly. This would be a pain to implement dynamically/robustly. It would be easiest to use an explicit hostname as described below.
The alternative would be to set the hostname to a value you choose -- say, app-messaging -- AND ALSO set the RABBITMQ_NODENAME var to something like rabbit@app-messaging. This way you are controlling the full node name that will be used in the data directory.
Using Hostname
(Recommended)
That said, unless you have a reason NOT to change the hostname, changing the hostname alone is the simplest way to ensure that your data will be mounted to and from the same point every time.
I'm using the following Docker Compose file to successfully persist my setup between launches.
version: '3'
services:
  rabbitmq:
    hostname: 'mabbit'
    image: "${ARTIFACTORY}/rabbitmq:3-management"
    ports:
      - "15672:15672"
      - "5672:5672"
    volumes:
      - "./data:/var/lib/rabbitmq/mnesia/"
    networks:
      - rabbitmq
networks:
  rabbitmq:
    driver: bridge
This creates a data directory next to my compose file and persists the RabbitMQ setup like so:
./data/
  rabbit@mabbit/
  rabbit@mabbit-plugins-expand/
  rabbit@mabbit.pid
  rabbit@mabbit-feature_flags

Adding files to standard images using docker-compose

I'm unsure if something obvious escapes me or if it's just not possible, but I'm trying to compose an entire application stack with images from Docker Hub.
One of them is mysql, and it supports adding custom configuration files through volumes and running .sql files from a mounted directory.
But I have these files on the machine where I'm running docker-compose, not on the host. Is there no way to specify files from the local machine to copy into the container before it runs its entrypoint/cmd? Do I really have to create local images of everything just for this case?
Option A: Include the files inside your image. This is less than ideal since you are mixing configuration files with your image (that should really only contain your binaries, not your config), but satisfies the requirement to use only docker-compose to send the files.
This option is achieved by using docker-compose to build your image, and that build will send over any files from the build directory to the remote docker engine. Your docker-compose.yml would look like:
version: '2'
services:
  my-db-app:
    build: db/.
    image: custom-db
And db/Dockerfile would look like:
FROM mysql:latest
COPY ./sql /sql
The entrypoint/cmd would remain unchanged. You would need to run docker-compose up --build if the image already exists and you need to change the sql files.
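As a side note: if the intent is for the official mysql image to execute those .sql files automatically on first start, they can be copied to /docker-entrypoint-initdb.d, the directory the image's entrypoint scans during initial database setup, so the entrypoint/cmd still stays unchanged:
FROM mysql:latest
COPY ./sql /docker-entrypoint-initdb.d/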
Option B: Use a volume to store your data. This cannot be done directly inside of docker-compose. However it's the preferred way to include files from outside of the image into the container. You can populate the volume across the network by using the docker CLI and input redirection along with a command like tar to pack and unpack those files being sent over stdin:
tar -cC sql . | docker run --rm -i -v sql-files:/sql \
busybox /bin/sh -c "tar -xC /sql"
Run that via a script and then have that same script bounce the db container to reload that config.
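A minimal sketch of such a script (the volume name sql-files comes from the command above; the service name my-db-app is an assumption, and -i is used rather than -t because stdin is a pipe, not a TTY):
#!/bin/sh
# pack the local sql/ directory and unpack it into the sql-files named volume
tar -cC sql . | docker run --rm -i -v sql-files:/sql busybox /bin/sh -c "tar -xC /sql"
# bounce the db service so it re-reads the files
docker-compose restart my-db-app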
Option C: Use some kind of network attached filesystem. If you can configure NFS on the host where you are running your docker CLI, you can connect to those NFS shares from the remote docker node using one of the below options:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Option D: With swarm mode, you can include files as configs in your image. This allows configuration files, that would normally need to be pushed to any node in the swarm, to be sent on demand to the node where your service is running. This uses a docker-compose.yml file to define it, but swarm mode isn't using docker-compose itself, so this may not fit your specific requirements. You can run a single node swarm mode cluster, so this option is available even if you only have a single node. This option does require that each of your sql files are added as a separate config. The docker-compose.yml would look like:
version: '3.4'
configs:
  sql_file_1:
    file: ./file_1.sql
services:
  my-db-app:
    image: my-db-app:latest
    configs:
      - source: sql_file_1
        target: /sql/file_1.sql
        mode: 444
Then instead of a docker-compose up, you'd run a docker stack deploy -c docker-compose.yml my-db-stack.
If you cannot use volumes (e.g. you want a stateless docker-compose.yml and are deploying to a remote machine), you can have the config file written by the command.
Example for an nginx config with the official image:
version: "3.7"
services:
nginx:
image: nginx:alpine
ports:
- 80:80
environment:
NGINX_CONFIG: |
server {
server_name "~^www\.(.*)$$" ;
return 301 $$scheme://$$1$$request_uri ;
}
server {
server_name example.com
...
}
command:
/bin/sh -c "echo \"$$NGINX_CONFIG\" > /etc/nginx/conf.d/redir.conf; nginx -g \"daemon off;\""
The environment variable could also be saved in an .env file; you can use Compose's extend feature or load it from the shell environment (wherever you fetched it from):
https://docs.docker.com/compose/compose-file/#env_file
https://docs.docker.com/compose/compose-file/#variable-substitution
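A minimal sketch of the variable-substitution route (the SERVER_NAME variable and this file layout are my own example, not part of the answer above):
# .env
SERVER_NAME=example.com
# docker-compose.yml (fragment)
services:
  nginx:
    image: nginx:alpine
    environment:
      NGINX_SERVER_NAME: ${SERVER_NAME}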
To get a container's original default command (its CMD):
docker container inspect [container] | jq --raw-output .[0].Config.Cmd
To investigate which file to modify this usually will work:
docker exec --interactive --tty [container] sh
This is how I'm doing it with volumes:
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - ./shell_scripts:/shell_scripts
I think you have to do this in a compose file:
volumes:
  - ./src/file:/dest/path
As a more recent update to this question: with a Docker swarm hosted on Amazon, for example, you can define a volume that can be shared by services and is available across all nodes of the swarm (using the cloudstor driver, which in turn has AWS EFS underneath it for persistence).
version: '3.3'
services:
  my-db-app:
    command: /shell_scripts/go.sh
    volumes:
      - shell_scripts:/shell_scripts
volumes:
  shell_scripts:
    driver: "cloudstor:aws"
With Compose V2 you can simply do (as in the documentation):
docker compose cp src [service:]dest
Before v2, you can use a workaround with docker cp, as explained in the associated issue:
docker cp /path/to/my-local-file.sql "$(docker-compose ps -q mycontainer)":/file-on-container.sql
