docker-compose doesn't clean cache or temporary files

Using docker-compose, after joining a cluster of Rabbitmq using:
docker-compose up
docker exec -it rabbitmq3 bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq2
rabbitmqctl start_app
Every time I restart the services with docker-compose, the cluster is still formed.
Even after removing the containers and pruning the system:
docker-compose down
docker kill $(docker ps -q)
docker rm $(docker ps -a --format "{{.ID}}")
docker volume prune
docker system prune
How can I reset the containers?
version: "3.2"
services:
rabbitmq2:
image: rabbitmq:3.11-rc-management-alpine
hostname: rabbitmq2
container_name: 'rabbitmq2'
ports:
- "5672:5672"
- "15672:15672"
- "5552:5552"
- "15692:15692"
volumes:
- type: bind
source: $PWD/advanced/rabbitmq2/advanced.config
target: /etc/rabbitmq/advanced.config
- type: bind
source: $PWD/history/rabbitmq2/.bash_history
target: /var/lib/rabbitmq/.bash_history
- type: bind
source: $PWD/cookie/rabbitmq2/.erlang.cookie
target: /var/lib/rabbitmq/.erlang.cookie
networks:
- rabbitmq_net
environment:
- RABBITMQ_DEFAULT_USER=rabbit_admin
- RABBITMQ_DEFAULT_PASS=.123-321.
- RABBITMQ_CONFIG_FILES=/etc/rabbitmq/rabbitmq.conf
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
- RABBITMQ_NODENAME=rabbit#rabbitmq2
rabbitmq3:
image: rabbitmq:3.11-rc-management-alpine
hostname: rabbitmq3
container_name: 'rabbitmq3'
depends_on:
- rabbitmq2
links:
- rabbitmq2
ports:
- "5673:5672"
- "15673:15672"
- "5553:5552"
- "15693:15692"
volumes:
- type: bind
source: $PWD/advanced/rabbitmq3/advanced.config
target: /etc/rabbitmq/advanced.config
- type: bind
source: $PWD/history/rabbitmq3/.bash_history
target: /var/lib/rabbitmq/.bash_history
- type: bind
source: $PWD/cookie/rabbitmq3/.erlang.cookie
target: /var/lib/rabbitmq/.erlang.cookie
- type: bind
source: $PWD/conf/rabbitmq3/rabbitmq.conf
target: /etc/rabbitmq/rabbitmq.conf
networks:
- rabbitmq_net
environment:
- RABBITMQ_DEFAULT_USER=rabbit_admin
- RABBITMQ_DEFAULT_PASS=.123-321.
- RABBITMQ_CONFIG_FILES=/etc/rabbitmq/rabbitmq.conf
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
- RABBITMQ_NODENAME=rabbit#rabbitmq3
networks:
rabbitmq_net:
driver: bridge

You are using host folders (bind mounts), not Docker named volumes, for persistence.
This means that docker volume prune has no effect on your setup.
Each time you start the services they map the host folders into the containers; when you shut down the containers, the files still exist on the host.
For a clean start you need to manually delete all the host folders that are mentioned in docker-compose.yml. No docker command will do that for you.
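For example, a minimal reset sketch, assuming the bind-mount sources live next to the compose file (the advanced/, history/, conf/ and cookie/ paths are taken from the compose file above; recreate files such as .erlang.cookie afterwards, since the bind mounts expect them to exist):
docker-compose down
# Remove the bind-mounted host folders listed under source: in docker-compose.yml
rm -rf "$PWD/advanced" "$PWD/history" "$PWD/conf" "$PWD/cookie"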

Related

How to export Yii2 migrations into docker container

I have successfully containerized my basic Yii2 application with docker and it runs on localhost:8000. However, I cannot use the app effectively as most of its data are stored in migration files. Is there a way I could export the migrations into docker after running it? (or during execution)
This is my docker compose file
version: '2'
services:
  php:
    image: yiisoftware/yii2-php:7.1-apache
    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/app:delegated
    ports:
      - '8000:80'
    networks:
      - my-network
  db:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_DATABASE=my-db
      - MYSQL_PASSWORD=password
      - MYSQL_ROOT_PASSWORD=password
    ports:
      - '3306:3306'
    expose:
      - '3306'
    volumes:
      - mydb:/var/lib/mysql
    networks:
      - my-network
  memcached:
    container_name: memcached
    image: memcached:latest
    ports:
      - "0.0.0.0:11211:11211"
volumes:
  restatdb:
networks:
  my-network:
    driver: bridge
and my Dockerfile
FROM alpine:3.4
ADD . /
COPY ./config/web.php ./config/web.php
COPY . /var/www/html
# Let docker create a volume for the session dir.
# This keeps the session files even if the container is rebuilt.
VOLUME /var/www/html/var/sessions
It is possible to run yii commands in Docker. First, let the yii2 container run in the background or in another tab of the terminal. The yii commands can then be run with docker exec, which lets us interact with the running container:
sudo docker exec -i <container-ID> php yii migrate/up
You can get the container ID using
sudo docker ps
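With Docker Compose you can also address the container by service name instead of looking up its ID; a minimal sketch, assuming the service is named php as in the compose file above:
# Run the migrations inside the running php service container
sudo docker-compose exec php php yii migrate/up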

What is the difference between docker run --config vs docker compose config?

So what I am looking at is a docker run command being used to create a Docker container for OpenTelemetry that passes in a config file, and the code looks like...
$ git clone git@github.com:open-telemetry/opentelemetry-collector.git; \
cd opentelemetry-collector/examples; \
go build main.go; ./main & pid1="$!";
docker run --rm -p 13133:13133 -p 14250:14250 -p 14268:14268 \
-p 55678-55679:55678-55679 -p 4317:4317 -p 8888:8888 -p 9411:9411 \
-v "${PWD}/local/otel-config.yaml":/otel-local-config.yaml \
--name otelcol otel/opentelemetry-collector \
--config otel-local-config.yaml; \
kill $pid1; docker stop otelcol
(https://opentelemetry.io/docs/collector/getting-started/#docker)
What I don't understand is how a non-Docker-related config file (the OpenTelemetry config) fits into the "docker run --config" or "docker compose config" commands. Below is the OpenTelemetry config file, which seems to be non-Docker-related:
extensions:
  memory_ballast:
    size_mib: 512
  zpages:
    endpoint: 0.0.0.0:55679
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
  memory_limiter:
    # 75% of maximum memory up to 4G
    limit_mib: 1536
    # 25% of limit up to 2G
    spike_limit_mib: 512
    check_interval: 5s
exporters:
  logging:
    logLevel: debug
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging]
  extensions: [memory_ballast, zpages]
https://github.com/open-telemetry/opentelemetry-collector/blob/main/examples/local/otel-config.yaml
Now I have looked at these Docker links
https://docs.docker.com/engine/swarm/configs/#how-docker-manages-configs
https://nickjanetakis.com/blog/docker-tip-43-using-the-docker-compose-config-command
but I couldn't figure out how to get the docker run --config command in the open telemetry example to start working in docker compose with docker compose config. Here is my docker compose
version: "3.9"
services:
opentelemetry:
container_name: otel
image: otel/opentelemetry-collector:latest
volumes:
- ~/source/repos/CritterTrackerProject/DockerServices/OpenTelemetry/otel-collector-config.yml:/otel-local-config.yml
config:
- otel-local-config.yml
ports:
- 13133:13133
- 14250:14250
- 14268:14268
- 55678-55679:55678-55679
- 4317:4317
- 8888:8888
- 9411:9411
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
jaeger:
# restart: unless-stopped
container_name: jaeger
image: jaegertracing/all-in-one:latest
ports:
- 16686:16686
# - 14250:14250
# - 14268:14268
# - 5775:5775/udp
- 6831:6831/udp
# - 6832:6832/udp
# - 5778:5778
# - 9411:9411
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
postgres:
restart: always
container_name: postgres
image: postgres:latest
environment:
- POSTGRES_USER=code
- POSTGRES_PASSWORD=code
ports:
- 5432:5432
volumes:
- postgres:/var/lib/postgresql/data
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
nginx:
restart: always
container_name: webserver
image: nginx:latest
build:
context: ~/source/repos/CritterTrackerProject
dockerfile: DockerServices/Nginx/Dockerfile
ports:
- 80:80
- 443:443
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- my-network
volumes:
postgres:
networks:
my-network:
external: true
name: my-network
Here is my error after running docker compose up in a Git Bash terminal
$ docker compose -f ./DockerServices/docker-compose.yml up -d
services.opentelemetry Additional property config is not allowed
The general form of docker run is
docker run [docker options] image [command]
And if you look at your original command, it matches this pattern:
docker run \
  --rm -p ... -v ... --name ... \  # Docker options
  otel/opentelemetry-collector \   # Image
  --config otel-local-config.yaml  # Command
So what looks like a --config option is really the command part of the container setup; it overrides the Dockerfile CMD, and it is passed as additional arguments to the image's ENTRYPOINT.
In a Compose setup, then, this would be the container's command:
services:
  opentelemetry:
    image: otel/opentelemetry-collector:latest
    command: --config otel-local-config.yaml
Since this is an application-specific command string, it's unrelated to the docker-compose config command, which is a diagnostic tool that just dumps out parts of your Compose configuration.
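As a quick illustration of the difference, docker-compose config never reaches the application; it only validates and prints the resolved Compose file:
docker-compose config             # print the merged, variable-substituted Compose file
docker-compose config --services  # list only the service names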
What the docker run command is doing is the following mount:
${PWD}/local/otel-config.yaml on the local host is bound to /otel-local-config.yaml inside the container.
You can achieve the same behavior with the volumes option in docker compose:
volumes:
  - "${PWD}/local/otel-config.yaml:/otel-local-config.yaml"

Sharing a created directory within a Docker container to another Docker container?

Massive Docker noob here in dire need of help. There are two docker containers: simple-jar and elk. simple-jar produces log files in /logs directory within its container, and another application, elk, needs to access these log files to do some processing on them.
How can I share the /logs directory so that elk docker container can access it?
This is the Dockerfile for simple-jar:
FROM openjdk:latest
COPY target/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar /usr/src/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar
EXPOSE 6650
CMD java -jar /usr/src/pulsar_logging_consumer-1.0-SNAPSHOT-jar-with-dependencies.jar
docker-compose.yml:
version: '3.2'
services:
elk:
build:
context: elasticsearch/
args:
ELK_VERSION: $ELK_VERSION
volumes:
- type: bind
source: ./elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
simple-jar:
build:
context: pulsar_logging_consumer/
volumes:
- type: bind
source: ./pulsar_logging_consumer/logs
target: /usr/share/logs
read_only: true
ports:
- "6500:6500"
networks:
- elk
depends_on:
- elasticsearch
networks:
elk:
driver: bridge
volumes:
elasticsearch:
You have a couple of options:
1. Create an external named volume. This needs to be created by you (the user), otherwise the compose file fails. Use the following command:
docker volume create --driver local \
--opt type=none \
--opt device=/var/opt/my_data_logs \
--opt o=bind logs_data
Select the volume type that fits your needs; there are different types such as nfs and ext3, as well as 3rd party plugins.
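For instance, a sketch of an NFS-backed volume; the server address and export path are placeholders to replace with your own:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exports/logs \
  logs_data
# 192.168.1.10 and /exports/logs are placeholders for your NFS server and export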
In your docker-compose.yml file:
version: '3'
volumes:
  logs_data:
    external: true
services:
  app:
    image: yourimage:latest
    ports:
      - 80:80
    volumes:
      - logs_data:/your/path
2. Share volumes: start a container using volumes defined by another service (top-level volumes):
version: '3'
volumes:
  logs_data:
    external: true
services:
  app1:
    image: appimage:latest
    ports:
      - 80:80
    volumes:
      - logs_data:/your/path:ro
  app2:
    image: yourimage:latest
    ports:
      - 8080:80
    volumes:
      - logs_data:/your/path:ro
You can do this by using --link; see how to link containers in Docker.
A better way is to use volumes: https://docs.docker.com/storage/volumes/

docker-compose run with specified network name

I have a docker-compose file with three services (Solr, PostgreSQL and pgAdmin), all sharing a Docker network.
version: '2'
services:
  solr:
    image: solr:7.7.2
    ports:
      - '8983:8983'
    networks:
      primus-dev:
        ipv4_address: 10.105.1.101
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - primus
      - /opt/solr/server/solr/configsets/sample_techproducts_configs
    environment:
      - SOLR_HEAP=2048m
    logging:
      options:
        max-size: 5m
  db:
    image: "postgres:11.5"
    container_name: "primus_postgres"
    ports:
      - "5432:5432"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.102
    volumes:
      - primus_dbdata:/var/lib/postgres/data
    environment:
      - POSTGRES_DB=primus75
      - POSTGRES_USER=primus
      - POSTGRES_PASSWORD=primstav
  pgadm4:
    image: "dpage/pgadmin4"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.103
    ports:
      - "3050:80"
    volumes:
      - /home/nils/docker-home:/var/docker-home
    environment:
      - PGADMIN_DEFAULT_EMAIL=nils.weinander@kulturit.se
      - PGADMIN_DEFAULT_PASSWORD=dev
networks:
  primus-dev:
    driver: bridge
    ipam:
      config:
        - subnet: 10.105.1.0/24
volumes:
  data:
  primus_dbdata:
This works just fine after docker-compose up (at least pgAdmin can talk to PostgreSQL).
But then I have a script (actually a make target, but that's not the point here) which builds, runs and deletes a container with docker-compose run:
docker-compose run -e HOME=/app -e PYTHONPATH=/app/server -u 0 --rm backend \
bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'
This does not work, as reindex_mp.py cannot reach Solr on 10.105.1.101: the one-shot container is not on the same Docker network. So, is there a way to tell docker-compose to use a named network with docker-compose run? docker run has an option --network, but that is not available for docker-compose.
You can create a docker network outside your docker-compose and use that network while running services in docker-compose.
docker network create my-custom-created-network
Now, inside your docker-compose file, use this network like this:
services:
  serv1:
    image: img
    networks:
      - my-custom-created-network
networks:
  my-custom-created-network:
    external: true
The network creation example creates a bridge network.
To access containers across hosts, use an overlay network.
You can also use the network created inside docker-compose and connect containers to that network.
Docker creates a default network for each docker-compose project, and services that do not have any network configuration specified use the default network created by Docker for that compose file.
you can find the network name by executing this command:
docker network ls
Use the appropriate network name when starting a container, like this:
docker run [options] --network <network-name> <image-name>
Note: containers on the same network are reachable by container name; you can leverage this instead of using IPs.
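Applied to the question above, a sketch of the one-shot run; the network name and image name are assumptions (Compose usually prefixes the network with the project directory, so check docker network ls and adjust):
# primus_primus-dev and your-backend-image are placeholders
docker run --rm -u 0 -e HOME=/app -e PYTHONPATH=/app/server \
  --network primus_primus-dev \
  your-backend-image \
  bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'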

Mounting volumes before executing commands with docker-compose and boot2docker

I'm using OSX and I have installed Kitematic, which uses boot2docker to run Docker and containers. I have created a container which needs to mount some local folders into the container, and I'm doing that with docker-compose:
web:
  build: .
  ports:
    - "9001:9001"
    - "9002:9002"
  volumes:
    - /projects/test /somepath
    - /projects/test2 /someotherpath
  command: ant clean all;./server.sh start
When I run docker-compose up, it seems that the volumes are not mounted before the command phase executes, because I am getting error logs saying that /somepath and /someotherpath cannot be found.
I do not understand what is wrong with my docker-compose configuration.
I think you need to replace the space with a colon when mapping volumes. For example:
kafka:
  image: wurstmeister/kafka:0.8.2.1
  hostname: kafka
  ports:
    - "9092:9092"
  links:
    - zookeeper:zk
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ADVERTISED_PORT: 9092
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
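Applied to the compose file in the question, the volume entries would then presumably read:
volumes:
  - /projects/test:/somepath
  - /projects/test2:/someotherpath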
