I want to use RabbitMQ, and for this I'm using this docker-compose.yml file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /tmp_data:/var/lib/rabbitmq
It works as expected.
I'm creating some users through the admin GUI.
But when I delete the container, I was expecting the created users to still be there.
It seems, however, that RabbitMQ is not saving them in the folder I specified.
I've been going through the documentation, but I haven't found any other folder where this configuration is saved.
Thanks for your help.
I think you need these three volumes, which cover all the configs, and you need to add one more environment variable:
environment:
  - RABBITMQ_NODENAME=MYNODE@rabbitmq
volumes:
  - ./rabbitmq:/var/lib/rabbitmq
  - ./definitions.json:/opt/definitions.json
  - ./rabbitmq.config:/etc/rabbitmq/rabbitmq.config
see this
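For context, the reason the users disappear is that RabbitMQ keeps its database (users, vhosts, queues) under /var/lib/rabbitmq/mnesia/<nodename>, and the default node name is rabbit@<hostname>. Every recreated container gets a new random hostname, so the new node looks in a fresh directory even though the old data is still in the mounted folder. Pinning either RABBITMQ_NODENAME or the container hostname keeps that path stable. A minimal sketch, assuming a fixed hostname is acceptable:

version: '2'
services:
  rabbitmq:
    image: rabbitmq:management
    hostname: rabbitmq    # default node name becomes rabbit@rabbitmq, so the data path stays stable
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./rabbitmq:/var/lib/rabbitmq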
I am trying to create two databases in a docker-compose.yml file: one for the app and the other for the test part. In the Java Spring framework I use a URL like "jdbc:postgresql://localhost:5401/webTest", but it does not work.
From the command line I can connect to database-user with no problem and the tables are there, but I cannot connect to database-test. Is there something specific I am missing?
#service
services:
  database-user:
    #container_name: postgres-user
    image: postgres
    ports:
      - 5401:5432
    volumes:
      - postgres-user:/var/lib/postgresql/data
      - ./scripts/create-table-db.sql:/docker-entrypoint-initdb.d/create-table-db.sql
    environment:
      - POSTGRES_USER=webAppUser
      - POSTGRES_PASSWORD=user
      - POSTGRES_DB=webApp
  database-test:
    #container_name: postgres-test
    image: postgres
    ports:
      - 5402:5432
    volumes:
      - postgres-test:/var/lib/postgresql/data
      - ./scripts/create-table-db.sql:/docker-entrypoint-initdb.d/create-table-db.sql
    environment:
      - POSTGRES_USER=webAppTest
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=webTest
volumes:
  postgres-user:
  postgres-test:
I did try to follow some examples, like here, but it's not clear.
(The database-user service does also work from the Java side.)
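For reference, each Postgres service is reachable from the host on the port it publishes, so with the mappings above the JDBC URLs would presumably be (assuming the host is localhost):

jdbc:postgresql://localhost:5401/webApp    # database-user (5401 -> 5432)
jdbc:postgresql://localhost:5402/webTest   # database-test (5402 -> 5432)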
I am new to parse-server and to the Docker world, and I think that either I did not understand this properly or it is not working. Sorry if this comes across as a stupid question.
From the Docker documentation, I understand that if I want to bind a folder on my host to a location inside the container, I have to do something like this:
volumes:
  - /host/path/to/folder:/docker/path/to/folder
But the thing I am missing is that after I create all my containers and bind the volume paths like this, nothing is saved into those folders when I add new rows to my MongoDB database. Can anyone explain to me what I am doing wrong?
Basically, I am trying to save all my changes from MongoDB and the server into a local folder.
My docker-compose:
version: '3.9'
services:
  database:
    image: mongo:6.0.1
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    volumes:
      - ./data/mongodb:/data/mongodb
  server:
    restart: always
    image: parseplatform/parse-server:5.2.5
    ports:
      - 1337:1337
    environment:
      - PARSE_SERVER_APPLICATION_ID=COOK_APP
      - PARSE_SERVER_MASTER_KEY=MASTER_KEY_1
      - PARSE_SERVER_DATABASE_URI=mongodb://admin:admin@mongo/parse_server?authSource=admin
      - PARSE_ENABLE_CLOUD_CODE=yes
      - PARSE_SERVER_URL=http://10.0.2.2:1337/parse
    links:
      - database:mongo
    volumes:
      - ./data/server:/data/server
  dashboard:
    image: parseplatform/parse-dashboard:4.1.4
    ports:
      - "4040:4040"
    depends_on:
      - server
    environment:
      - PARSE_DASHBOARD_APP_ID=COOK_APP
      - PARSE_DASHBOARD_APP_NAME=COOK_APP
      - PARSE_DASHBOARD_MASTER_KEY=MASTER_KEY_1
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=admin
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=true
      - PARSE_DASHBOARD_SERVER_URL=http://localhost:1337/parse
    volumes:
      - ./data/dashboard:/data/dashboard
UPDATE:
I've checked your response: ./data/mongodb:/data/db is only working partially, in the sense that I have these two cases.
If I use it as data/mongodb:/data/db, without the leading ., so that the data is saved into my root directory, then everything works fine. But I would like to save it in the local directory where all my projects are.
If I do as you said and use ./data/mongodb:/data/db in order to save it into the local directory, MongoDB does not start and I get this error message for some unknown reason:
{"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
{"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"STORAGE", "id":28595, "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"1: Operation not permitted"}}
{"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"ASSERT", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":702}}
{"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"ASSERT", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
Any idea why?
If you check the "Where to Store Data" section of the mongo image documentation (https://hub.docker.com/_/mongo), you will see that, inside the container, mongo actually saves its data in the folder /data/db (not /data/mongodb). So you will have to bind that path instead:
volumes:
  - ./data/mongodb:/data/db
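Once the container is recreated with that mapping, a quick way to confirm the bind mount is working is to look for MongoDB's data files in the host folder. A minimal check, assuming the service name database from the compose file above:

docker-compose up -d database
ls ./data/mongodb    # should now contain WiredTiger, collection-*.wt, index-*.wt, etc.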
I have a docker-compose LAMP stack made up of three services: a webserver, PHP and MySQL.
The apache2 webroot inside the container is shared with my local machine using a volume, like so:
volumes:
  - ./public_html:/usr/local/apache2/htdocs
When the stack is running, though, I can't edit files inside the shared volume, since my local user is different from the user inside the apache2 container. Additionally, the installer of my CMS (ProcessWire) is unable to acquire permissions for the directories it needs for installation.
The Apache container uses Alpine (2.4.35).
I've built my docker-compose file according to this tutorial:
https://medium.com/@thivi/creating-a-lamp-stack-using-docker-compose-13ca4e3950e1
Below I have attached my docker-compose.yml.
version: '3.7'
services:
  apache:
    build: './apache'
    restart: always
    ports:
      - 80:80
      - 443:443
    networks:
      - frontend
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./cert/:/usr/local/apache2/cert/
    depends_on:
      - php
      - mysql
  php:
    build: './php'
    restart: always
    networks:
      - backend
    volumes:
      - ./public_html:/usr/local/apache2/htdocs
      - ./tmp:/usr/local/tmp
  mysql:
    build: './mysql'
    restart: always
    ports:
      - 3306:3306
    expose:
      - 3306
    networks:
      - backend
    volumes:
      - ./database:/var/lib/mysql
networks:
  backend:
  frontend:
Is there any way to fix this issue? I'd be grateful for answers; I've been dealing with this issue for the past two days without getting anywhere, and I'm also kind of surprised that such an essential feature as directory sharing is this complicated.
Edit:
I've also noticed something interesting: when I open a shell inside the apache container, the ownership of Apache's document root is set to nobody:nobody, which probably also isn't right.
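One quick way to see the mismatch is to compare the numeric owner of the document root inside the container with your own UID on the host. A minimal check, assuming the service is named apache as in the compose file above:

docker-compose exec apache ls -ln /usr/local/apache2/htdocs   # numeric UID/GID inside the container
id -u                                                         # your UID on the host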
I already have a docker-compose.yml file like this:
version: "3.1"
services:
  memcached:
    image: memcached:alpine
    container_name: dl-memcached
  redis:
    image: redis:alpine
    container_name: dl-redis
  mysql:
    image: mysql:5.7.21
    container_name: dl-mysql
    restart: unless-stopped
    working_dir: /application
    environment:
      - MYSQL_DATABASE=dldl
      - MYSQL_USER=docker
      - MYSQL_PASSWORD=docker
      - MYSQL_ROOT_PASSWORD=docker
    volumes:
      - ./../:/application
    ports:
      - "8007:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: dl-phpmyadmin
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=dl-mysql
      - PMA_PORT=3306
      - MYSQL_USER=docker
      - MYSQL_PASSWORD=docker
      - MYSQL_ROOT_PASSWORD=docker
    restart: always
    ports:
      - 8002:80
    volumes:
      - /application
    links:
      - mysql
  elasticsearch:
    build: phpdocker/elasticsearch
    container_name: dl-es
    volumes:
      - ./phpdocker/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "8003:9200"
  webserver:
    image: nginx:alpine
    container_name: dl-webserver
    working_dir: /application
    volumes:
      - ./../:/application:delegated
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./logs:/var/log/nginx:delegated
    ports:
      - "9003:80"
  php-fpm:
    build: phpdocker/php-fpm
    container_name: dl-php-fpm
    working_dir: /application
    volumes:
      - ./../:/application:delegated
      - ./phpdocker/php-fpm/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
      - ./../docker/php-fpm/certs/store_stock/:/usr/local/share/ca-certificates/
      - ./logs:/var/log:delegated # nginx logs
      - /application/var/cache
    environment:
      XDEBUG_CONFIG: remote_host=host.docker.internal
      PHP_IDE_CONFIG: "serverName=dl"
  node:
    build:
      dockerfile: dl/phpdocker/node/Dockerfile
      context: ./../
    container_name: dl-node
    working_dir: /application
    ports:
      - "8008:3000"
    volumes:
      - ./../:/application:cached
    tty: true
My goal is to have two isolated environments working at the same time on the same server with the same docker-compose file. I wonder if that's possible?
I want to be able to stop and update one environment while the other one is still running and receiving traffic.
Maybe I need another approach in my case?
There are a couple of problems with what you're trying to do. Rather than starting multiple instances of your project, I think a better solution would be to use the scaling features available in docker-compose. In particular, if your goal is to put some services behind a load balancer, you probably don't want multiple instances of things like your database.
If you combine this with a dynamic front-end proxy like Traefik, you can make the configuration largely automatic.
Consider a very simple example consisting of a backend container running a simple webserver and a traefik frontend:
---
version: "3"
services:
  webserver:
    build:
      context: web
    labels:
      traefik.enable: true
      traefik.port: 80
      traefik.frontend.rule: "PathPrefix:/"
  frontend:
    image: traefik
    command:
      - --api
      - --docker
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "80:80"
      - "127.0.0.1:8080:8080"
If I start it like this, I get a single backend and a single frontend:
docker-compose up
But I can also ask docker-compose to scale out the backend:
docker-compose up --scale webserver=3
In this case, I get a single frontend and three backend servers. Traefik will automatically discover the backends and will round-robin connections between them. You can download this example and try it out.
Caveats
There are a few aspects of your configuration that would need to change in order to make this work (and in fact, you would need to change them even if you were to create multiple instances of your project as you have proposed in your question).
Conflicting paths
Take for example the configuration of your webserver container:
volumes:
  - ./logs:/var/log/nginx:delegated
If you start two instances of this service, both containers will mount ./logs on /var/log/nginx. If they both attempt to write to /var/log/nginx/access.log, you're going to have problems.
The easiest solution here is to avoid bind mounts for things like log directories (and any other directories to which you will be writing), and instead use named docker volumes.
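A minimal sketch of what that could look like for the nginx logs (the volume name nginx-logs is just an example):

services:
  webserver:
    image: nginx:alpine
    volumes:
      - nginx-logs:/var/log/nginx

volumes:
  nginx-logs: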
Hardcoding container names
In some places, you are hardcoding the container name, like this:
mysql:
  image: mysql:5.7.21
  container_name: dl-mysql
This will cause problems if you attempt to start multiple instances of this project or multiple instances of the mysql container. Don't statically set the container name.
Deprecated links syntax
Your configuration is using the deprecated links syntax:
links:
  - mysql
Don't do that. In modern docker, containers on the same network can simply refer to each other by name. In other words, if your compose configuration has:
mysql:
  image: mysql:5.7.21
  restart: unless-stopped
  working_dir: /application
  environment:
    - MYSQL_DATABASE=dldl
    - MYSQL_USER=docker
    - MYSQL_PASSWORD=docker
    - MYSQL_ROOT_PASSWORD=docker
  volumes:
    - ./../:/application
  ports:
    - "8007:3306"
Other containers in your compose stack can simply use the hostname mysql to refer to this service.
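For example, the phpmyadmin service from your file could presumably point at the service name instead of the hardcoded container name:

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    - PMA_HOST=mysql   # service name instead of dl-mysql
    - PMA_PORT=3306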
You won't be able to run the same compose file twice on one host without changing the port mappings, because that would cause port conflicts. I'd recommend creating a base compose file and using extends to override the port mappings for different environments.
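If you do run two copies side by side, the key points are a distinct project name per environment, no fixed container_name entries, and different host ports. A rough sketch of launching them, using a separate compose file per environment rather than extends (the file and project names are just examples):

# first environment, uses the existing file
docker-compose -p env1 -f docker-compose.yml up -d

# second environment: a copy of the file with different host ports
# and without the container_name entries
docker-compose -p env2 -f docker-compose.env2.yml up -d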
I want to delete all previous information from Drone and do a completely fresh installation. So what I'm doing is this.
With this docker-compose file:
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 80:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_HOST=http://drone-server:8000
      - DRONE_GITEA=true
      - DRONE_GITEA_URL=http://web:3000
      - DRONE_SECRET=${DRONE_SECRET}
  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
  web:
    image: gitea/gitea:1.3.2
    volumes:
      - ./data:/data
    ports:
      - "3000:3000"
      - "22:22"
    depends_on:
      - db
    restart: always
  db:
    image: mariadb:10
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_DATABASE=gitea
      - MYSQL_USER=gitea
      - MYSQL_PASSWORD=changeme
    volumes:
      - ./db/:/var/lib/mysql
I ran docker-compose up.
Then, to delete everything, I ran docker-compose down and made sure to delete every volume and every container manually. But when I run docker-compose up again, the old information is still there. Why? Where is Drone getting that information? I'm new to Drone and Docker, so I'm probably doing something wrong, because this doesn't make sense. Maybe I'm forgetting to delete something.
Can you help me with that?
Drone stores details about its runs in an SQLite database in /var/lib/drone by default, which you have mounted as a volume, so the data it saves there is kept on your machine when you spin things down and handed back to the new containers when you create them.
If you want to completely reset everything, you need to remove the files in your host machine's /var/lib/drone folder too.
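A minimal sketch of a full reset, based on the bind mounts in the compose file above (only remove the Gitea and MariaDB folders if you want those wiped as well):

docker-compose down
sudo rm -rf /var/lib/drone     # Drone's SQLite database
sudo rm -rf ./data ./db        # Gitea and MariaDB data, optional
docker-compose up -d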