I am new to parse-server and to the Docker world, and I think that either I did not understand this properly or it is not working. Sorry if this comes across as a stupid question.
From the Docker documentation, I understand that if I want to bind a host folder to a folder inside the container, I have to do something like this:
volumes:
  - /host/path/to/folder:/docker/path/to/folder
But here is what I am missing: after I create all my containers and bind the volume paths like this, when I add new rows to my MongoDB database nothing is saved into those folders. Can anyone explain to me what I am doing wrong?
Basically, I am trying to save all my changes from MongoDB and the server into a local folder.
My docker-compose:
version: '3.9'
services:
  database:
    image: mongo:6.0.1
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: admin
    volumes:
      - ./data/mongodb:/data/mongodb
  server:
    restart: always
    image: parseplatform/parse-server:5.2.5
    ports:
      - 1337:1337
    environment:
      - PARSE_SERVER_APPLICATION_ID=COOK_APP
      - PARSE_SERVER_MASTER_KEY=MASTER_KEY_1
      - PARSE_SERVER_DATABASE_URI=mongodb://admin:admin@mongo/parse_server?authSource=admin
      - PARSE_ENABLE_CLOUD_CODE=yes
      - PARSE_SERVER_URL=http://10.0.2.2:1337/parse
    links:
      - database:mongo
    volumes:
      - ./data/server:/data/server
  dashboard:
    image: parseplatform/parse-dashboard:4.1.4
    ports:
      - "4040:4040"
    depends_on:
      - server
    environment:
      - PARSE_DASHBOARD_APP_ID=COOK_APP
      - PARSE_DASHBOARD_APP_NAME=COOK_APP
      - PARSE_DASHBOARD_MASTER_KEY=MASTER_KEY_1
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=admin
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=true
      - PARSE_DASHBOARD_SERVER_URL=http://localhost:1337/parse
    volumes:
      - ./data/dashboard:/data/dashboard
UPDATE:
After checking your response, ./data/mongodb:/data/db is only working partially for me. I have these two cases.
If I use it as data/mongodb:/data/db, without the leading ., so that the data is saved into my root directory, then everything works fine. But I would like to save it in the local project directory where all my projects live.
If I use ./data/mongodb:/data/db as you said, in order to save it into the local directory, MongoDB does not start and I get this error message for some unknown reason:
{"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."} 2022-09-07T16:05:52.524152876Z {"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"STORAGE", "id":28595, "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"1: Operation not permitted"}} 2022-09-07T16:05:52.524168870Z {"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"ASSERT", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":702}} 2022-09-07T16:05:52.524183328Z {"t":{"$date":"2022-09-07T16:05:52.523+00:00"},"s":"F", "c":"ASSERT", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
Any idea why?
If you check the "where to store data" section of the mongo image documentation (https://hub.docker.com/_/mongo), you will see that, inside the container, mongo actually saves the data in the folder /data/db (and not /data/mongodb). So you will have to bind it:
volumes:
  - ./data/mongodb:/data/db
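For reference, here is a minimal sketch of the corrected database service from the compose file in the question, assuming the same image and credentials; only the container-side path of the bind mount changes:

database:
  image: mongo:6.0.1
  environment:
    MONGO_INITDB_ROOT_USERNAME: admin
    MONGO_INITDB_ROOT_PASSWORD: admin
  volumes:
    # host folder next to the compose file -> mongo's actual data directory
    - ./data/mongodb:/data/db

You can then verify persistence by inserting a document, running docker-compose down (without -v) followed by docker-compose up, and checking that the data is still there; the files under ./data/mongodb outlive the container.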
Related
I'm using docker and docker-compose to easily set up parse-server for a project using the parseplatform/parse-server image.
So far everything runs perfectly and I can go and manually add the classes I need for the app via the parse-dashboard, but I would like to automate this.
Is there any way to pass a schema on parse first run to populate the database with the required classes? I'm keeping PARSE_SERVER_ALLOW_CLIENT_CLASS_CREATION as false.
Any ENV variables that I'm not aware of? Is the only alternative pulling the repo and building the image with some custom config?
As far as my search went, I can't find any info on the subject. Parse seems to be a very powerful tool, but the docs tend to be very simplistic as far as I've seen.
Thanks for any input/help.
docker-compose.yml
version: '3'
services:
  mongodb:
    image: mongo
    container_name: parse-mongo
    volumes:
      - ./mongodb:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD
  parse:
    image: parseplatform/parse-server
    container_name: parse-server
    ports:
      - 1337:1337
    links:
      - mongodb:mongo
    depends_on:
      - mongodb
    environment:
      - PARSE_SERVER_APPLICATION_ID
      - PARSE_SERVER_APP_NAME
      - PARSE_SERVER_MASTER_KEY
      - PARSE_SERVER_DATABASE_URI
      - PARSE_SERVER_MOUNT_GRAPHQL
      - PARSE_SERVER_MOUNT_PLAYGROUND
      - PARSE_SERVER_ALLOW_CLIENT_CLASS_CREATION
.env
# MONGO DB
MONGO_INITDB_ROOT_USERNAME=###
MONGO_INITDB_ROOT_PASSWORD=###
# PARSE SERVER
PARSE_SERVER_APPLICATION_ID=###
PARSE_SERVER_APP_NAME=###
PARSE_SERVER_MASTER_KEY=###
PARSE_SERVER_DATABASE_URI="mongodb://${MONGO_INITDB_ROOT_USERNAME}:${MONGO_INITDB_ROOT_PASSWORD}#mongo:27017"
PARSE_SERVER_MOUNT_GRAPHQL=1
PARSE_SERVER_MOUNT_PLAYGROUND=1
PARSE_SERVER_ALLOW_CLIENT_CLASS_CREATION=0
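Since the compose file above relies on pass-through environment variables from the .env file, one quick way to check that the interpolation resolves as intended (a usage sketch, not part of the original question) is to render the effective configuration without starting anything:

# print the compose file with .env values substituted
docker-compose config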
I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue I'm having is that when I stop everything, delete my database container, and then do docker-compose up again, the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also messed around with the path of the volume; instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but still the same thing happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
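Alternatively, since the compose file in the question already declares a named volume (sssvolume) that is never referenced by the service, a minimal sketch of wiring it up instead of a bind mount could look like this; the volume name is taken from the question, everything else is unchanged:

mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    # named volume mapped onto the directory MSSQL actually writes to
    - sssvolume:/var/opt/mssql
  ports:
    - 1433:1433

volumes:
  sssvolume: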
I'm using docker-compose to create a Docker network of containers with InfluxDB, a python script and Grafana to harvest and visualize response codes, query times & other stats of different websites.
I am using the Grafana 7.3.0 image with a volume, and I have modified the paths environment variables so that I only need one volume to save all the data.
When I start the Grafana container it logs:
GF_PATHS_CONFIG='/etc/grafana/grafana.ini' is not readable.
GF_PATHS_DATA='/etc/grafana/data' is not writable.
GF_PATHS_HOME='/etc/grafana/home' is not readable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/etc/grafana/plugins': Permission denied
But here is the thing: I'm not migrating from below 5.1, I'm not even migrating at all!
So I tried to follow their instructions to change the file permissions, but it did not work.
I tried to set the user ID in the docker-compose file, but it did not help.
(As said in the docs, 472 == post-5.1 and 104 == pre-5.1, but neither worked.)
I can't even change permissions manually (which is not a satisfying solution btw) because the container is crashing.
I normally don't ask questions because they already have answers but I've seen no one with this trouble using 7.3.0 so I guess it's my time to shine Haha.
Here is my docker-compose.yml (only the grafana part)
version: '3.3'
services:
  grafana:
    image: grafana/grafana:7.3.0
    ports:
      - '3000:3000'
    volumes:
      - './grafana:/etc/grafana'
    networks:
      - db-to-grafana
    depends_on:
      - db
      - influxdb_cli
    environment:
      - GF_PATHS_CONFIG=/etc/grafana/grafana.ini
      - GF_PATHS_DATA=/etc/grafana/data
      - GF_PATHS_HOME=/etc/grafana/home
      - GF_PATHS_LOGS=/etc/grafana/logs
      - GF_PATHS_PLUGINS=/etc/grafana/plugins
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    user: "472"
Thank you very much for your potential help!
Edit: I've been wondering if there is a grafana user in the latest version (8.0). I think that building a home dir for Grafana via a Dockerfile could be the solution; I just need to find that user.
I'm here to close this subject.
So this was kind of a noob mistake but I could not have known.
The problem came from the fact that Grafana won't chown and chmod the volume folder itself. The error may not occur, but it still won't work because the data is not saved.
The solution was to remove the env variables and change the permissions of the local './grafana' folder which backed the volume.
So I did
chown -R <personal local user> /path/to/local/volume/folder && \
chmod -R 777 /path/to/local/volume/folder
And now it works normally.
Here is my new docker compose
docker-compose.yml
grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  volumes:
    - './grafana:/var/lib/grafana'
  networks:
    - db-to-grafana
  depends_on:
    - db
    - influxdb_cli
Thanks everybody for your help!
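As a side note, not part of the answer above: another way to avoid the host-permission problem is to let Docker manage the storage with a named volume, since named volumes are initialized with the ownership of the image's directory on first use. A minimal sketch, assuming the volume name grafana-data:

grafana:
  image: grafana/grafana
  ports:
    - '3000:3000'
  volumes:
    # Docker-managed volume; no manual chown/chmod on the host is needed
    - grafana-data:/var/lib/grafana

volumes:
  grafana-data: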
Just use your own user's ID, which you can get with the following command:
$ id -u
I'm running 'id -u' in my terminal and getting '1000'. So, I replaced user: "xxxx" with user: "1000" in docker-compose.yml:
version: '3.3'
services:
  grafana:
    image: grafana/grafana:7.3.0
    ports:
      - '3000:3000'
    volumes:
      - './grafana:/etc/grafana'
    networks:
      - db-to-grafana
    depends_on:
      - db
      - influxdb_cli
    environment:
      - GF_PATHS_CONFIG=/etc/grafana/grafana.ini
      - GF_PATHS_DATA=/etc/grafana/data
      - GF_PATHS_HOME=/etc/grafana/home
      - GF_PATHS_LOGS=/etc/grafana/logs
      - GF_PATHS_PLUGINS=/etc/grafana/plugins
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    user: "1000"
I am trying to get my Spring Boot, Angular and MySQL containers working together using docker-compose (locally it is working). The Spring Boot image as well as the Angular image work correctly after executing docker-compose up. I can see my Angular app in the browser and I can make successful REST calls to my Spring API. The main problem is that if I make a request from Angular to the API, there is no successful REST call anymore...
The problem could be with the db... first it says:
/usr/sbin/mysqld: ready for connections. Version: '8.0.21' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
in the console. But a bit later, as the last console output for the db, it says:
mbind: Operation not permitted
I don't know if this is a problem, because I can make some REST calls from the browser (not Angular) successfully, as written earlier.
Another assumption I have is that the ports have to be configured in another way... but I already tried a lot of different combinations, also with the Spring application, always creating a new Spring image.
What can also be an issue is that the db throws some SQL errors like
Error executing DDL "alter table userrole add constraint userIdReference foreign key (`user_id`) references `user` (`user_id`)" via JDBC Statement
But I can still make some REST calls... and for instance within MySQL Workbench I can import the sql file without any problems, and start Spring Boot + Angular locally to successfully run the project.
springpart_1 | Hibernate: select * from product where product.current_name = ?
Messages like the one above do appear on the console after starting docker-compose up, but nothing is loaded into the Angular client.
GET http://localhost:8077/products net::ERR_CONNECTION_REFUSED
Other than that I have no real clue what the problem could be... probably also because I am new to Docker. Thank you in advance for your help.
docker-compose file
services:
  springpart:
    image: ce153fc5b589
    ports:
      - '8077:8077'
    environment:
      - DATABASE_HOST=db
      - DATABASE_PORT=3306:3306
    networks:
      - backend
      - frontend
    depends_on:
      - db
    restart: on-failure
  db:
    image: mysql:8.0
    volumes:
      - .src/main/resources/guitarshop/currentGuitarshopData:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=guitarshop
    networks:
      - backend
  angularpart:
    image: b8140c7fedec
    ports:
      - '4200:80'
    networks:
      - frontend
networks:
  frontend:
  backend:
Angular image creation Dockerfile
FROM node:alpine As builder
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build --prod
FROM nginx:alpine
COPY --from=builder /usr/src/app/dist/guitarShopAngular/ /usr/share/nginx/html
EXPOSE 80
application.properties file
spring.datasource.url=jdbc:mysql://db:3306/guitarshop?serverTimezone=UTC&useLegacyDatetimeCode=false?autoReconnect=true&failOverReadOnly=false&maxReconnects=10
spring.datasource.initialization-mode=never
spring.datasource.username = root
spring.datasource.password = mypassword
spring.datasource.platform=mysql
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL8Dialect
spring.jpa.show-sql = true
server.port = 8077
spring.main.banner-mode=off
spring.jackson.serialization.fail-on-empty-beans=false
spring.servlet.multipart.enabled=true
spring.servlet.multipart.max-file-size=500KB
spring.servlet.multipart.max-request-size=500KB
spring.servlet.multipart.resolve-lazily=false
If you need any more information, just let me know.
Try changing your volume bind mount like so:
"./src/main/resources/guitarshop/currentGuitarshopData:/var/lib/mysql"
As mentioned in the comments, I had no issues getting the app up and running. In terms of the data, it seems like the db files included in the project don't contain any data, but I was able to add data manually and then see that data through the app. The only real issue in your compose file was the bind mount path; once I fixed that, the data I added persisted after recreating the containers.
Here are some suggestions, as ideally you should be able to clone the repo, run "docker-compose up -d", and have the app running. Right now that isn't possible, since you first have to build your Spring app locally, then manually build the Spring and Angular Docker images, then run compose up.
Create a new root folder and move the backend and frontend folders into it.
Move the compose file to the root folder.
Look into building your Spring app with a multi-stage build, as explained here: https://spring.io/blog/2018/11/08/spring-boot-in-a-container/#multi-stage-build (see the Dockerfile sketch after the compose file below).
Modify your compose file to something like this:
"
version: '3'
services:
  springpart:
    build: ./GuitarShopBackend
    ports:
      - '8077:8077'
    environment:
      - DATABASE_HOST=db
      - DATABASE_PORT=3306
    networks:
      - backend
      - frontend
    depends_on:
      - db
    restart: on-failure
  db:
    image: 'mysql:8.0.17'
    volumes:
      # This will initialize your database if it doesn't already exist with your provided sql file
      - ./GuitarShopBackend/src/main/resources/guitarshop/initialGuitarshopData:/docker-entrypoint-initdb.d
      # This will persist your database across container restarts
      - ./GuitarShopBackend/src/main/resources/guitarshop/currentGuitarshopData:/var/lib/mysql
    ports:
      - '3306:3306'
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=guitarshop
    networks:
      - backend
  angularpart:
    build: ./GuitarShopAngular
    ports:
      - '4200:80'
    networks:
      - frontend
networks:
  frontend: null
  backend: null
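To illustrate the multi-stage build suggestion above, here is a rough sketch of what GuitarShopBackend/Dockerfile could look like; the Maven/JRE image tags and the standard pom.xml/src layout are assumptions, not details from the question:

# build stage: compile and package the Spring Boot jar inside the image
FROM maven:3.8-eclipse-temurin-17 AS build
WORKDIR /workspace
COPY pom.xml .
RUN mvn -B dependency:go-offline
COPY src ./src
RUN mvn -B package -DskipTests

# runtime stage: only a JRE and the built jar
FROM eclipse-temurin:17-jre
COPY --from=build /workspace/target/*.jar /app/app.jar
EXPOSE 8077
ENTRYPOINT ["java", "-jar", "/app/app.jar"]

With build: ./GuitarShopBackend in the compose file, docker-compose up --build would then compile and package the backend without needing a local Maven install.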
I want to use RabbitMQ; for this I'm using this docker-compose.yml file:
version: '2'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /tmp_data:/var/lib/rabbitmq
It works as expected.
I'm creating some users via the admin GUI.
But when I delete the container, I was expecting the created users to still be there.
However, it seems that RabbitMQ is not saving them in the folder I specified.
I was going through the documentation, but I haven't found any other folder where these configurations are saved.
Thanks for your help.
I think you need these three volumes, which cover all the configs, and you need to add one more ENV. Pinning the node name matters because RabbitMQ keeps its data under a directory named after the node, and the default node name changes with the container's hostname:
environment:
  - RABBITMQ_NODENAME=MYNODE@rabbitmq
volumes:
  - ./rabbitmq:/var/lib/rabbitmq
  - ./definitions.json:/opt/definitions.json
  - ./rabbitmq.config:/etc/rabbitmq/rabbitmq.config
see this
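For completeness, a minimal sketch of what the mounted rabbitmq.config could contain so that the management plugin loads the exported definitions (users, vhosts, permissions, queues) at startup; the contents below follow the classic Erlang-term config format and are an assumption, not something given in the answer:

%% /etc/rabbitmq/rabbitmq.config
[
  {rabbitmq_management, [
    {load_definitions, "/opt/definitions.json"}
  ]}
].

The definitions.json itself can be exported from a running broker via the management UI (Overview -> Export definitions) or with the rabbitmqadmin export command, so users created through the GUI can be re-imported automatically on the next start.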