My docker-compose file defines two containers and looks like this:
version: '3'
services:
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - '8000:8000'
    networks:
      - testnetwork
  audit-server:
    image: audit-dynamo
    environment:
      DYNAMO_URL: 'http://0.0.0.0:8000'
    command: node app.js
    ports:
      - '3000:3000'
    depends_on:
      - dynamodb
    # restart: always
    networks:
      - testnetwork
networks:
  testnetwork:
My goal is to mount local data into a volume; currently I lose all data on docker-compose down.
That image uses an in-memory DynamoDB by default, which you can see by running docker inspect on the image:
"CMD [\"-jar\", \"DynamoDBLocal.jar\", \"-inMemory\"]"
So if you want to keep your data, you need something like this in your docker-compose:
version: '3'
volumes:
  dynamodb_data:
services:
  dynamodb:
    image: amazon/dynamodb-local
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/data/
    volumes:
      - dynamodb_data:/home/dynamodblocal/data
    ports:
      - "8000:8000"
You can try this docker-compose config:
version: '3'
volumes:
  dynamodb_data:
services:
  dynamodb:
    image: amazon/dynamodb-local
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal
    volumes:
      - dynamodb_data:/home/dynamodblocal
    ports:
      - "8000:8000"
To preserve data across Docker installations, create the volume with docker itself:
docker volume create --driver local --opt type=none \
--opt device=/var/opt/dynamodb_data --opt o=bind dynamodb_data
and use the external option:
version: "3"
volumes:
  dynamodb_data:
    external: true
services:
  dynamodb-local:
    image: amazon/dynamodb-local
    command: ["-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/home/dynamodblocal/data"]
    volumes:
      - dynamodb_data:/home/dynamodblocal/data
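One detail worth checking in the original question's file: from inside the audit-server container, http://0.0.0.0:8000 refers to that container itself, not to DynamoDB. On a shared compose network the service name is the hostname, so the environment entry would presumably look like this (a sketch; service and network names are taken from the question):

```yaml
services:
  audit-server:
    image: audit-dynamo
    environment:
      # the compose service name "dynamodb" resolves on the shared network
      DYNAMO_URL: 'http://dynamodb:8000'
    networks:
      - testnetwork
```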
I installed Portainer via Docker Compose. I followed basic directions where I created a portainer_data docker volume:
docker volume create portainer_data
Then I used the following Docker Compose file to set up Portainer and the Portainer agent:
version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
But then, when I look at the volumes inside Portainer, it has created a new volume instead.
What is wrong in my configuration that it creates a new volume and ignores the one I set up?
Thanks.
By default, Docker Compose prepends the project name to volume names; this is expected behaviour (https://forums.docker.com/t/docker-compose-prepends-directory-name-to-named-volumes/32835). To avoid it you have two options: set the project name when you run docker-compose up, or update the docker-compose.yml file to reference the volume you created by name:
version: '3.3'
services:
  portainer-ce:
    ports:
      - '8000:8000'
      - '9443:9443'
    container_name: portainer
    restart: unless-stopped
    command: -H tcp://agent:9001 --tlsskipverify
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    image: 'portainer/portainer-ce:latest'
  agent:
    container_name: agent
    image: portainer/agent:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    ports:
      - "9001:9001"
volumes:
  portainer_data:
    name: portainer_data
    external: false
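The first option (setting the project name so the prefix matches the pre-created volume) can be sketched as a one-line .env file next to docker-compose.yml; the project name here is illustrative:

```ini
# .env (picked up automatically by docker-compose)
COMPOSE_PROJECT_NAME=portainer
```

Equivalently, docker-compose -p portainer up -d sets the same project name for a single run.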
I am trying to set up Redis from the Redismod image and struggling to translate the following command into docker-compose:
$ docker run \
-p 6379:6379 \
-v /home/user/data:/data \
-v /home/user/redis.conf:/usr/local/etc/redis/redis.conf \
redislabs/redismod \
/usr/local/etc/redis/redis.conf
What I have so far:
version: "3.2"
services:
  redis:
    image: "redislabs/redismod"
    container_name: 'redis-local'
    hostname: 'redis-local'
    volumes_from:
      - redis_data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    args:
      - /usr/local/etc/redis/redis.conf
    restart: always
    ports:
      - "6379:6379"
volumes:
  redis_data:
But I get the following error: ERROR: Service "redis" mounts volumes from "redis_data", which is not the name of a service or container. Presumably this is because I didn't pass the last line, /usr/local/etc/redis/redis.conf.
And a second question: how do I translate --loadmodule and --dir from below? These aren't Redis commands:
$ docker run \
-p 6379:6379 \
-v /home/user/data:/data \
redislabs/redismod \
--loadmodule /usr/lib/redis/modules/rebloom.so \
--dir /data
UPDATE
I changed my docker-compose.yml to the following and it started to work, but Redis doesn't seem to see the redis.conf file and keeps running with the default configuration. What am I doing wrong?
version: "3.2"
services:
  redis:
    image: "redislabs/redismod"
    container_name: 'redis-local'
    hostname: 'redis-local'
    volumes:
      - redis_data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    build:
      context: .
      args:
        - /usr/local/etc/redis/redis.conf
    restart: always
    ports:
      - "6379:6379"
The first error was because you used volumes_from instead of volumes. The former pulls the volume configuration from an existing container; the latter defines the volumes. In your last version, redis_data is a Docker volume and redis.conf is a bind mount. Your second problem is that you are using build and args, which are meant for building images, when it looks like you wanted to run a command.
Try:
version: "3.2"
services:
  redis:
    image: "redislabs/redismod"
    container_name: 'redis-local'
    hostname: 'redis-local'
    volumes:
      - redis_data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    command: /usr/local/etc/redis/redis.conf
    restart: always
    ports:
      - "6379:6379"
volumes:
  redis_data:
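The second question (translating --loadmodule and --dir) follows the same pattern: everything after the image name in docker run becomes command. A sketch, reusing the paths from the question:

```yaml
services:
  redis:
    image: "redislabs/redismod"
    volumes:
      - redis_data:/data
    # docker run's trailing arguments map one-to-one onto command
    command: ["--loadmodule", "/usr/lib/redis/modules/rebloom.so", "--dir", "/data"]
volumes:
  redis_data:
```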
For more info about volumes, bind mounts and docker compose reference see:
https://docs.docker.com/storage/volumes/
https://docs.docker.com/storage/bind-mounts/
https://docs.docker.com/compose/compose-file/compose-file-v3/#command
Massive Docker noob here in dire need of help. There are two Docker containers: simple-jar and elk. simple-jar produces log files in the /logs directory within its container, and another application, elk, needs to access these log files to do some processing on them.
How can I share the /logs directory so that the elk container can access it?
This is the Dockerfile for simple-jar:
FROM openjdk:latest
COPY target/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar /usr/src/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar
EXPOSE 6650
CMD java -jar /usr/src/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar
docker-compose.yml:
version: '3.2'
services:
  elk:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
  simple-jar:
    build:
      context: pulsar_logging_consumer/
    volumes:
      - type: bind
        source: ./pulsar_logging_consumer/logs
        target: /usr/share/logs
        read_only: true
    ports:
      - "6500:6500"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch:
You have a couple of options.
1. Create an external named volume. This has to be created by you (the user) beforehand, otherwise compose fails. Use the following command:
docker volume create --driver local \
--opt type=none \
--opt device=/var/opt/my_data_logs \
--opt o=bind logs_data
Select the volume type that fits; there are different types such as nfs and ext3, plus third-party plugins.
In your docker-compose.yml file:
version: '3'
volumes:
  logs_data:
    external: true
services:
  app:
    image: yourimage:latest
    ports:
      - 80:80
    volumes:
      - logs_data:/your/path
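If you would rather not pre-create the volume with docker volume create, the same bind-backed volume can be declared directly in the compose file with driver_opts (a sketch; the host path is the one from the command above and must already exist):

```yaml
volumes:
  logs_data:
    driver: local
    driver_opts:
      type: none
      device: /var/opt/my_data_logs
      o: bind
```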
2. Share volumes: start a container using volumes defined by another (top-level volumes):
version: '3'
volumes:
  logs_data:
    external: true
services:
  app1:
    image: appimage:latest
    ports:
      - 80:80
    volumes:
      - logs_data:/your/path:ro
  app2:
    image: yourimage:latest
    ports:
      - 8080:80
    volumes:
      - logs_data:/your/path:ro
You can do this by using --link (see "how to link container in docker?").
A better way is to use volumes: https://docs.docker.com/storage/volumes/
I don't know how to write the docker-compose equivalent of my command:
docker run -d --name=server --restart=always --net network --ip 172.18.0.5 -p 5003:80 -v $APP_PHOTO_DIR:/app/mysql-data -v $APP_CONFIG_DIR:/app/config webserver
I've done this:
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
Are you sure you need an IP address for the container? It is not a recommended practice; why do you want to set it explicitly?
docker-compose.yml
version: '3'
services:
  server: # correct, this would be the container's name
    image: webserver # this should be the image name from your command line
    ports:
      - "5003:80" # correct, but only needed if you must reach the service from outside
    volumes: # the volumes just repeat your command line; you can use env vars
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    command: ["python", "/app/app.py"] # JSON notation strongly recommended
    restart: always
Then docker-compose up -d and that's it. You can access your service from the host at localhost:5003; there is no need for the internal IP.
For networks, I always include the network specification in the docker-compose file. If the network already exists, Docker will not create a new one.
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
networks:
  app_net:
    name: NETWORK_NAME
    driver: bridge
    ipam:
      config:
        - subnet: NETWORK_SUBNET
volumes:
  VOLUME_NAME:
    driver: local
And you will need to add the volumes separately to match the docker run command.
version: '3.4'
services:
  kafka_exporter:
    image: danielqsj/kafka-exporter
    command: --kafka.server=xx.xx.xx.xx:9092 --kafka.server=xx.xx.xx.xx:9092
    ports:
      - 9308:9308
    links:
      - prometheus
  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    volumes:
      - ./mount/prometheus:/etc/prometheus
    command: --config.file=/etc/prometheus/prometheus.yml
Above is my docker-compose.yml file.
I am able to spin up both images.
However, I am not able to access localhost:9308 (kafka_exporter) from localhost:9090 (prometheus).
Do I need to link/network the images?
Inside the compose network you address the other service as container_name:port, i.e.:
kafka_exporter:9308
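Concretely, the scrape target in ./mount/prometheus/prometheus.yml should use the compose service name rather than localhost (a sketch; the job name is illustrative):

```yaml
scrape_configs:
  - job_name: "kafka"
    static_configs:
      # the compose service name resolves on the default network
      - targets: ["kafka_exporter:9308"]
```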