My goal is to have a program running on the host machine write data to an SQLite DB that is then mounted into a Grafana container run via docker-compose.
It is possible to do this with the following configuration:
services:
  grafana:
    container_name: grafana
    networks:
      - backend
    image: grafana/grafana:latest
    volumes:
      - ../database/database.sqlite:/home/grafana/database.sqlite
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yaml
    ports:
      - 3000:3000

networks:
  backend:

volumes:
  grafana_data:
    external: true
However, this only mounts the DB as it exists at the time the container is created; any new changes written to the DB are not reflected in the container.
How can I solve this?
It is possible to mount a live DB (in this case I'm using SQLite for some dashboard visualization in Grafana). The SQLite DB exists on the host machine, where it is written to and read from; it is in turn bind-mounted into the Docker container. The docker-compose.yml is as follows:
services:
  grafana:
    container_name: grafana
    networks:
      - backend
    image: grafana/grafana:latest
    volumes:
      - type: bind
        source: ./database/database.sqlite # needs to be absolute, or ./-relative to the compose file
        target: /home/grafana/database.sqlite
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yaml
    ports:
      - 3000:3000

networks:
  backend:

volumes:
  grafana_data:
    external: true
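For reference, the mounted datasource.yml could look like the sketch below. This is an assumption on my part: Grafana has no built-in SQLite datasource, so this presumes the frser-sqlite-datasource plugin is installed in the image, and the name is arbitrary.

apiVersion: 1
datasources:
  - name: SQLite # arbitrary display name
    type: frser-sqlite-datasource # assumes this plugin is installed in the image
    jsonData:
      path: /home/grafana/database.sqlite # the bind-mount target from the compose file above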
Keep in mind that if you're using PRAGMA journal_mode=WAL;, the Docker container will not see new writes until the WAL journal is checkpointed back into the main database file.
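If the container must pick up writes promptly, one option is to keep this database out of WAL mode, or to checkpoint it explicitly. A minimal sketch with the sqlite3 CLI, run on the host against the mounted file:

sqlite3 database/database.sqlite "PRAGMA journal_mode;" # prints the current mode, e.g. wal
sqlite3 database/database.sqlite "PRAGMA journal_mode=DELETE;" # switch back to the rollback journal
sqlite3 database/database.sqlite "PRAGMA wal_checkpoint(TRUNCATE);" # or force a checkpoint while staying in WAL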
I have this docker-compose.yml file:
version: '3.3'
services:
  redis:
    container_name: redis
    image: 'redis:latest'
    environment:
      X_REDIS_PORT: ${REDIS_PORT}
    ports:
      - ${REDIS_PORT}:${REDIS_PORT}
    volumes:
      - redis:/data
networks:
  app_network:
    external: true
    driver: bridge
volumes:
  postgres:
  pgadmin:
  supertoken:
  redis:
I want the cached data to be saved inside the Redis container, but it is not getting saved in the container; instead it gets saved on my local machine.
How do I change this behaviour?
Inspect your volume:
docker volume inspect redis
The mountpoint is under /var/lib/docker/volumes/; check the Redis volume's data there.
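The output looks roughly like the following (illustrative; the actual volume name is usually prefixed with the compose project name, e.g. myproject_redis):

[
    {
        "CreatedAt": "2023-01-01T00:00:00Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/redis/_data",
        "Name": "redis",
        "Options": null,
        "Scope": "local"
    }
]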
I use a folder bind mount instead; there is a demo in my GitHub repo:
volumes:
  - ./data:/data
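For context, a minimal Redis service with such a bind mount might look like the sketch below. The appendonly flag is an assumption on my part: Redis only writes meaningfully to /data when persistence is enabled.

services:
  redis:
    image: redis:latest
    command: redis-server --appendonly yes # enable AOF persistence so /data is actually written
    volumes:
      - ./data:/data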
I've recently been trying to get a local Jira instance to run in a Docker container on an Apple Silicon M1 chip.
I'm using Postgres for the database (I also tried MariaDB) and an arm image of Jira that I found on GitHub.
However, whenever I run docker-compose up on the setup, I run into an error 500, "Error writing database configuration file."
Both the Jira and the DB containers seem to start up fine.
I guess that the database might not be reachable, but I have no idea how to check that.
TLDR: How can I check whether my DB is reachable from my Jira container, OR rather, how do I fix the error 500 from Jira, "Error writing database configuration file"?
Below is the compose file I'm using:
services:
  jira:
    image: dchevell/jira-software-arm64
    #image: ghcr.io/eugenmayer/jira:${JIRA_VERSION}
    depends_on:
      - db
    container_name: jirasoftwarevomeugen
    volumes:
      - jiradata:/var/atlassian/jira
    ports:
      - '80:8080'
    environment:
      - 'JIRA_DATABASE_URL=postgresql://jira@db/jiradb'
      - 'JIRA_DB_PASSWORD=jellyfish'
      - 'CATALINA_OPTS= -Xms256m -Xmx1g'
      - 'JIRA_PROXY_NAME='
      - 'JIRA_PROXY_PORT='
      - 'JIRA_PROXY_SCHEME='
      # needed for the wait-for-db statement
      - 'JIRA_DB_HOST=db'
      - 'JIRA_DB_PORT=5432'

  db:
    image: postgres
    hostname: postgresql
    volumes:
      - postgresqldata:/var/lib/postgresql/data
    environment:
      - 'POSTGRES_USER=jira'
      - 'POSTGRES_PASSWORD=jellyfish'
      - 'POSTGRES_DB=jiradb'
      - 'POSTGRES_ENCODING=UTF8'
      - 'POSTGRES_COLLATE=C'
      - 'POSTGRES_COLLATE_TYPE=C'

  # uncomment this to run against mysql
  # db:
  #   image: mariadb:10.3
  #   hostname: mysql
  #   volumes:
  #     - mysqldata:/var/lib/mysql
  #   environment:
  #     - 'MYSQL_ROOT_PASSWORD=verybigsecretrootpassword'
  #     - 'MYSQL_DATABASE=jiradb'
  #     - 'MYSQL_USER=jira'
  #     - 'MYSQL_PASSWORD=jellyfish'

volumes:
  jiradata:
    external: false
  postgresqldata:
    external: false
  mysqldata:
    external: false
  newdb:
    external: false
When Jira starts up, it will try to write configuration files into the Jira data directory. This type of problem can occur if the directory you have mounted is not writable by the Jira user.
You may want to investigate the UID:GID of the Jira user in the image you have chosen, then spin up a standalone container running a shell (which also mounts the jiradata volume), chown -R the mounted directory to the correct user:group (if it is not what you expect), and then try restarting Jira.
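A minimal sketch of that procedure; the UID:GID of 2001:2001 is purely an assumption here, so substitute whatever the id command reports:

# check which user the Jira image runs as
docker run --rm --entrypoint id dchevell/jira-software-arm64
# fix ownership of the data volume from a throwaway container
# (note: compose prefixes volume names, so it may be <project>_jiradata)
docker run --rm -v jiradata:/var/atlassian/jira alpine chown -R 2001:2001 /var/atlassian/jira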
I made a build for the arm architecture and uploaded it to Docker Hub (yasinmert/atlassian-jira-software-arm64). You can use the following docker-compose file as an example.
version: "3.1"
volumes:
jira_volume:
postgres_volume:
services:
postgres:
image: postgres:10
ports:
- 5435:5432
volumes:
- postgres_volume:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: admin
jira:
image: yasinmert/atlassian-jira-software-arm64
ports:
- 7030:8080
volumes:
- jira_volume:/var/atlassian/application-data/jira
environment:
ATL_TOMCAT_CONTEXTPATH: /jira
ATL_JDBC_URL: jdbc:postgresql://postgres:5432/jira
ATL_JDBC_USER: postgres
ATL_JDBC_PASSWORD: admin
ATL_DB_DRIVER: org.postgresql.Driver
depends_on:
- postgres
I am trying to learn Kong using docker-compose. I am able to run Kong + Konga and create services, but whenever I do docker-compose down and then up again, I lose all my data:
kong:
  container_name: kong
  image: kong:2.1.4-alpine
  restart: unless-stopped
  networks:
    kong-net:
      ipv4_address: 172.1.1.40
  volumes:
    - kong_data:/usr/local/kong/declarative
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-database
    KONG_PG_USER: kong
    KONG_PG_PASSWORD: password
    KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
    KONG_DB_UPDATE_FREQUENCY: 1m
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
  depends_on:
    - kong-migration
  ports:
    - "8001:8001"
    - "8444:8444"
    - "8000:8000"
    - "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep data when your Kong docker-compose stack is down, it is better to run Kong in database mode.
You will then create a persistent volume for your database, and it will keep your changes.
Per the Kong manual, there are two supported database types: PostgreSQL and Cassandra.
PostgreSQL is my choice for small projects, as I'm not planning for huge horizontal scale with a Cassandra database.
As you will find in the manual, starting your project with Docker and a database is very simple.
But remember to add a volume to your database service, because the sample in the manual has no volume.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command,
or
volumes:
  postgres-data:
    driver: local

services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
Answer: you should use a Docker volume for persistent data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
The first step is to create a volume through which your host and Docker container can share data:
docker volume create new-volume
The second step is to use that volume in a docker-compose file (in your case).
A single docker-compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created. The same volume will be reused on subsequent invocations.
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
What I'm trying to do
I normally start my postgres database (local-db) by running docker-compose up, which by default uses the docker-compose.yml file. In the same project directory I have also created a docker-compose.data.yml file. I'd like to use this second compose file to spin up a different database (data-db) for testing purposes when local-db is not running.
What I've tried
In docker-compose.yml:
version: "3.7"
services:
proxy:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: bind
source: ./config/app.conf
target: /etc/nginx/conf.d/app.conf
read_only: true
db:
image: postgres:10
ports:
- "5432:5432"
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=local-db
- POSTGRES_PORT=5432
volumes:
- type: volume
source: postgres-local-db
target: /var/lib/postgresql/data
volumes:
postgres-local-db:
In docker-compose.data.yml:
version: "3.7"
services:
proxy:
image: nginx:alpine
ports:
- "80:80"
volumes:
- type: bind
source: ./config/app.conf
target: /etc/nginx/conf.d/app.conf
read_only: true
db:
image: postgres:10
ports:
- "5432:5432"
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=data-db
- POSTGRES_PORT=5432
volumes:
- type: volume
source: postgres-data-db
target: /var/lib/postgresql/data
volumes:
postgres-data-db:
Then I run docker-compose -f docker-compose.data.yml up
Expected results
I want to spin up the data-db database specified in the docker-compose.data.yml file.
Actual results
local-db is spun up again, and trying to connect to data-db via PyCharm results in the error FATAL: database "data-db" does not exist.
It's a wild guess, but I think you either manually copied data from local-db to data-db, or the first launch of the data-db container ran with the POSTGRES_DB variable set to local-db. POSTGRES_PASSWORD, POSTGRES_USER and POSTGRES_DB only take effect when PGDATA contains no database; otherwise they are simply ignored. You can test this theory by changing something in local-db and then querying the same value in data-db. This should clearly establish whether or not it is the same database.
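For instance, a quick check from the data-db container (a sketch; the service name db and user postgres come from the compose files above):

# list the databases visible inside the data-db container; if local-db shows up
# here, both compose files ended up reusing the same data directory
docker-compose -f docker-compose.data.yml exec db psql -U postgres -l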
If those two are actually different databases (as they should be) you may re-initialise the data-db container with this:
# destroy containers along with persistent data on volumes
docker-compose -f docker-compose.data.yml down -v
# then create anew
docker-compose -f docker-compose.data.yml up
This time it will respect the POSTGRES_DB variable and create a database named data-db.
I also recommend adding a container_name property to your database containers so you can easily tell them apart. This won't solve the problem, but it will help you understand which one is up.
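A sketch of that suggestion (the container names are arbitrary):

# in docker-compose.yml
  db:
    container_name: local-db
# in docker-compose.data.yml
  db:
    container_name: data-db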
I have 2 applications that are separate codebases, and they each have their own database on the same db server instance.
I am trying to replicate this in docker, locally on my laptop. I want to be able to have both apps use the same database instance.
I would like
both apps to start in docker at the same time
both apps to be able to access the database on localhost
the database data is persisted
be able to view the data in the database using an IDE on localhost
So each of my apps has its own dockerfile and docker-compose file.
On app1, I start the docker instance of the app which is tied to the database. It all starts fine.
When I try to start app2, I get the following error:
ERROR: for app2_mssql_1 Cannot start service mssql: driver failed programming external connectivity on endpoint app2_mssql_1 (12d550c8f032ccdbe67e02445a0b87bff2b2306d03da1d14ad5369472a200620): Bind for 0.0.0.0:1433 failed: port is already allocated
How can I have them both running at the same time? Both apps need to be able to access each other's database tables!
Here are the docker-compose.yml files.
app1:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app1_db:/var/lib/mssql/data
volumes:
app1_db:
and here is app2:
version: "3"
services:
web:
build:
context: .
args:
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=P455w0rd!
volumes:
- app2_db:/var/lib/mssql/data
volumes:
app2_db:
Should I be using the same volume in each docker-compose file?
I guess the problem is that each app spins up its own DB instance, when in reality I just want one that is used by all my apps?
The ports section in a docker-compose file binds the container port to the host's port, which causes the port conflict in your case.
You need to remove the ports section from at least one of the compose files. This way, both stacks can be up at the same time and you can access both apps at once. But remember that the two apps will be placed on separate network bridges.
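For example, the second file's mssql service could drop the host binding entirely (a sketch based on the compose files above):

  mssql:
    image: 'microsoft/mssql-server-linux'
    # no "ports:" section: the DB stays reachable from containers on the same
    # network as mssql:1433, but is no longer published on the host
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=P455w0rd!
    volumes:
      - app2_db:/var/lib/mssql/data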
How docker-compose up works:
Suppose your app is in a directory called myapp and contains a docker-compose.yml. When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web's configuration. It joins the network myapp_default under the name web.
A container is created using db's configuration. It joins the network myapp_default under the name db.
If you run the second docker-compose.yml in a different folder, myapp2, then the network will be myapp2_default.
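You can observe this with docker network ls (illustrative output; IDs elided):

$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
<id>           myapp_default    bridge    local
<id>           myapp2_default   bridge    local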
Your current configuration creates two volumes, two database containers and two apps. I don't think you actually want two database containers and two volumes. If you make the apps run on the same network and run the database as a single container, it will work.
Approach 1:
docker-compose.yml as a single compose file:
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
depends_on:
- mssql
app2:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app2.
ports:
- "3032:3000"
depends_on:
- mssql
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
volumes:
app_docker_db:
Approach 2:
To isolate things further while still running them as separate compose files, create three compose files that share one network.
docker-compose.yml for the database, with the network:
version: "3"
services:
mssql:
image: 'microsoft/mssql-server-linux'
ports:
- '1433:1433'
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SqlServer1234!
volumes:
- app_docker_db:/var/lib/mssql/data
networks:
- test_network
volumes:
app_docker_db
networks:
test_network:
docker-compose.yml for app1:
Remove the database container and add the lines below to your compose file.
version: "3"
services:
app1:
build:
context: .
args:
volumes:
- .:/app # give the path depending up on the docker file of app1.
ports:
- "3030:3000"
networks:
default:
external:
name: my-pre-existing-network
Do the same for the other app by adjusting its docker-compose file the same way.
There are other ways to arrange docker-compose networking as well: see "Configure the default network" and "Use a pre-existing network" in the Compose documentation.
You're exposing the same port (1433) to the host machine twice (this is what "ports:..." does). That is not possible, as it would bind the same port on your host twice (that's what the message says).
I think the most common approach in these cases is to link your DBs to your apps (see https://docs.docker.com/compose/compose-file/#links). By doing this your applications can still access the databases on their usual port (1433), but the databases are no longer accessible from the host (only from the containers linked to them).
Another error I see in your docker-compose files is that both applications are exposed on the same host port (3000). This is not possible for the same reason. I would suggest changing one of them to "3001:3000", so you can access that application on port 3001.
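A sketch of that change in app2's compose file:

  web:
    ports:
      - "3001:3000" # host port 3001 -> container port 3000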