I've configured a little cluster using docker-compose, consisting of parse-server, mongo and parse-dashboard:
version: "3"
services:
  myappdb:
    image: mongo
    ports:
      - 27017:27017
  myapp-parse-server:
    image: parseplatform/parse-server
    environment:
      - PARSE_SERVER_MASTER_KEY=xxxx
      - PARSE_SERVER_APPLICATION_ID=myapp
      - VERBOSE=0
      - PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
      - PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
    depends_on:
      - myappdb
    ports:
      - 5000:1337
  parse-dashboard:
    image: parseplatform/parse-dashboard
    ports:
      - 5001:4040
    environment:
      - PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=1
      - PARSE_DASHBOARD_SERVER_URL=http://myapp-parse-server:1337/parse
      - PARSE_DASHBOARD_APP_ID=myapp
      - PARSE_DASHBOARD_MASTER_KEY=xxxx
      - PARSE_DASHBOARD_USER_ID=admin
      - PARSE_DASHBOARD_USER_PASSWORD=xxxx
Try as I might, however, I cannot get the deployed parse-dashboard to connect to the myapp-parse-server. After I log in to the dashboard using my browser (at localhost:5001), the dashboard app informs me that it is 'unable to connect to server'.
I've tried pinging the host 'myapp-parse-server' from the parse-dashboard container, and it can see the container just fine. Similarly, it can see the endpoint http://myapp-parse-server:1337/parse; wget returns the expected 403.
If I use a copy of parse-dashboard running on my host machine, it works just fine against http://localhost:5000/parse. So the forwarded port from my host to the parse-server works.
I've also tried configuring the dashboard using a parse-dashboard-config.json mounted into the container. It yields exactly the same result.
I'm at a loss as to what I'm doing wrong here. Can anybody shed some light on this?
It looks like you have some issues with your docker-compose file:
PARSE_SERVER_URL points to myapp-parse-server, but it should point to http://localhost:1337/parse instead (unless you somehow modified the hosts file on the container, which I don't see here)
Your myapp-parse-server should link to your database using links (see the sketch below)
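For example, a minimal sketch of what those two fixes could look like applied to your service (service names are taken from your file; whether you also keep depends_on is up to you):
myapp-parse-server:
  image: parseplatform/parse-server
  environment:
    - PARSE_SERVER_MASTER_KEY=xxxx
    - PARSE_SERVER_APPLICATION_ID=myapp
    - PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
    - PARSE_SERVER_URL=http://localhost:1337/parse
  links:
    - myappdb
  ports:
    - 5000:1337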
Here is an example of a docker-compose file from a blog I wrote on how to deploy parse-server to Google Container Engine:
version: "2"
services:
  # Node.js parse-server application image
  app:
    build: ./app
    command: npm start -- /parse-server/config/config.json
    container_name: my-parse-app
    volumes:
      - ./app:/parse-server/
      - /parse-server/node_modules
    ports:
      - "1337:1337"
    links:
      - mongo
  # MongoDB image
  mongo:
    image: mongo
    container_name: mongo-database
    ports:
      - "27017:27017"
    volumes_from:
      - mongodata
  # MongoDB image volume for persistence
  mongodata:
    image: mongo
    volumes:
      - ./data/db:/data/db
    command:
      - --break-mongo
You can see from the example above that I use links and also create and attach a volume for the database disk.
Also, I personally think it's better to run parse-server with a config file, in order to decouple all the configuration. My config file looks like the following (in my docker-compose file above you can see that I run parse-server with a config file and not with env variables):
{
  "databaseURI": "mongodb://localhost:27017/my-db",
  "appId": "myAppId",
  "masterKey": "myMasterKey",
  "serverURL": "http://localhost:1337/parse",
  "cloud": "./cloud/main.js",
  "mountPath": "/parse",
  "port": 1337
}
Finally, in my parse-dashboard image I also use a config file: I simply mount it as a volume, replacing the default config file with my own. Because this step was not covered in my blog posts, your final docker-compose file should look like the following:
version: "2"
services:
  # Node.js parse-server application image
  app:
    build: ./app
    command: npm start -- /parse-server/config/config.json
    container_name: my-parse-app
    volumes:
      - ./app:/parse-server/
      - /parse-server/node_modules
    ports:
      - "1337:1337"
    links:
      - mongo
  # MongoDB image
  mongo:
    image: mongo
    container_name: mongo-database
    ports:
      - "27017:27017"
    volumes_from:
      - mongodata
  # MongoDB image volume for persistence
  mongodata:
    image: mongo
    volumes:
      - ./data/db:/data/db
    command:
      - --break-mongo
  dashboard:
    image: parseplatform/parse-dashboard:1.1.0
    volumes:
      - ./dashboard/dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json
    environment:
      PORT: 4040
      PARSE_DASHBOARD_ALLOW_INSECURE_HTTP: 1
      ALLOW_INSECURE_HTTP: 1
      MOUNT_PATH: "/parse"
And dashboard-config.json (the config file mounted above) should be:
{
  "apps": [
    {
      "serverURL": "http://localhost:1337/parse",
      "appId": "myAppId",
      "masterKey": "myMasterKey",
      "appName": "My App"
    }
  ],
  "users": [
    {
      "user": "myuser",
      "pass": "mypassword"
    }
  ],
  "useEncryptedPasswords": false
}
I know that it's a little bit long, so I really encourage you to read the series of blog posts.
Hope it helps you.
Related
Hello, I want to publish the index.php from the local folder C:\html\index.php with docker-compose.yml.
At localhost I get the typical Apache "It works" page, but I do not get the content of my local folder. What am I doing wrong?
Here is my docker-compose file:
version: "3"
services:
  # --- MySQL 5.7
  #
  mysql:
    container_name: "dstack-mysql"
    image: bitnami/mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=root
    ports:
      - '3306:3306'
  php:
    container_name: "dstack-php"
    image: bitnami/php-fpm:8.1
  # --- Apache 2.4
  #
  apache:
    container_name: "dstack-apache"
    image: bitnami/apache:2.4
    ports:
      - '80:8080'
      - '443:8443'
    depends_on:
      - php
    volumes:
      - C:/html:/var/www/html
  phpmyadmin:
    container_name: "dstack-phpmyadmin"
    image: bitnami/phpmyadmin:latest
    depends_on:
      - mysql
    ports:
      - '81:8080'
      - '8143:8443'
    environment:
      - DATABASE_HOST=host.docker.internal
volumes:
  dstack-mysql:
    driver: local
Update:
volumes:
  - ./html:/var/www/html
Doesn't work.
I want to have a web-development Docker environment where I edit the file C:\html\index_hello.html on my computer and see the changes in the browser. My expectation is that I can open http://localhost:8080/index_hello.html in the browser and see what I wrote. Did I do something wrong? Should I edit other files, e.g. apache.conf?
I would suggest avoiding hardcoding directories and using relative directories.
If you place your docker-compose.yml into your C:/html folder and then change your volume to read:
volumes:
  - .:/var/www/html
if you run the following:
cd C:/html
docker-compose up -d
you are telling docker-compose to use ., meaning the current directory.
If you put the docker-compose.yml in the C:/ directory instead, you can change the volume to:
volumes:
  - ./html:/var/www/html
then the docker compose command should remain the same.
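Putting it together with the docker-compose.yml kept in C:/, the apache service from the question would look roughly like this (a sketch; only the volume line changes):
apache:
  container_name: "dstack-apache"
  image: bitnami/apache:2.4
  ports:
    - '80:8080'
    - '443:8443'
  depends_on:
    - php
  volumes:
    - ./html:/var/www/html
Note that with the '80:8080' mapping, container port 8080 is published on host port 80, so the page would be reachable at http://localhost/index_hello.html rather than http://localhost:8080/.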
I am having issues adding auth to Solr in a Docker container. I have tried copying the security.json file into the Solr container's $SOLR_HOME folder, but http://localhost:8983/solr/admin/authentication returns this response:
{
  "responseHeader": {
    "status": 0,
    "QTime": 0
  },
  "errorMessages": ["No authentication configured"]
}
security.json:
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    },
    "realm": "My Solr users",
    "forwardCredentials": false
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {
        "name": "security-edit",
        "role": "admin"
      }
    ],
    "user-role": {
      "solr": "admin"
    }
  }
}
I'm mounting the file via a volume in the docker-compose.yml:
version: "3"
services:
  index:
    image: solr:8.11.1
    ports:
      - "8983:8983"
    volumes:
      - data:/var/solr
      - ./security/security.json:/opt/solr-8.11.1/server/solr/security.json
    command:
      - solr-precreate
      - archive_poc_core
volumes:
  data:
When I go into the container and check whether the file is there with the settings, I can find it, so I don't think that's the problem. I suspect the file is mounted after Solr has already started, but I'm not sure how to get the security file onto the container beforehand, or what the correct way of doing this is.
Any help, guidance or advice would be appreciated.
Guides I looked at:
https://solr.apache.org/guide/8_1/basic-authentication-plugin.html#enable-basic-authentication
https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#using-security-json-with-solr
I managed to get this working with help from a colleague of mine. We ended up using ZooKeeper to manage the Solr security configuration.
docker-compose.yml
version: "3"
services:
  solr1:
    build:
      context: .
      dockerfile: solr.Dockerfile
    container_name: solr1
    ports:
      - "8983:8983"
    volumes:
      - data:/var/solr
    environment:
      - ZK_HOST=zoo1:2181
    depends_on:
      - zoo1
    tty: true
    stdin_open: true
  zoo1:
    tty: true
    image: zookeeper:3.6.2
    container_name: zoo1
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181
volumes:
  data:
solr.Dockerfile:
It copies over the files I needed, like security.json and solr-security.sh, and sets the entrypoint when building.
FROM solr:8.11.1
COPY security/security.json security.json
COPY scripts/solr-security.sh /usr/bin/solr-security.sh
ENTRYPOINT ["/usr/bin/solr-security.sh"]
solr-security.sh:
This uploads the security configuration for Solr via ZooKeeper; you can find out more here: https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#in-solrcloud-mode
It then starts the default Solr entrypoint after the authentication has been set up.
#!/bin/bash
solr zk cp /opt/solr-8.11.1/security.json zk:security.json -z zoo1:2181
exec /opt/docker-solr/scripts/docker-entrypoint.sh "$@"
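As a quick check once the containers are up (a sketch; the credentials hash in the security.json above is the stock example from the Solr docs, i.e. user solr with password SolrRocks, so substitute your own if you changed it):
# rebuild the image so the new entrypoint is picked up, then start everything
docker-compose up -d --build
# should now return the authentication config instead of "No authentication configured"
curl -u solr:SolrRocks http://localhost:8983/solr/admin/authentication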
Everything worked as expected when browsing to Solr; it showed the login screen. I hope this helps someone else who was trying to resolve this.
There is a Ruby on Rails application which uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to run it in remote containers, it throws this error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services:
redis, mongodb, db, rails
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and try to run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here; the mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name and not to a port on localhost. For example, if you were connecting to redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
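For example, if your .env/development.env currently points at localhost, a sketch of what it might contain with service names instead (the variable names here are hypothetical; use whatever keys your app actually reads):
# hostnames are the service names from docker-compose.yml
MONGODB_URI=mongodb://mongodb:27017/proj_development
DATABASE_URL=postgres://postgres:postgres@db:5432/proj_development
REDIS_URL=redis://redis:6379/0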
If this still doesn't help, you can try adding links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links allow hostnames to be defined in the containers.
The links flag is legacy, and in new versions of docker-engine it's not required for user-defined networks. Also, links are ignored in the case of a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try when troubleshooting.
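If you are on a recent engine, the user-defined network alone gives you the same name resolution, so a trimmed sketch of the file above without any links would be:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    env_file:
      - .env/development.env
    networks:
      - front-end
volumes:
  db-data:
  mongo-data:
networks:
  front-end: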
I have a Docker image in a GitLab registry.
When I run the following (after logging in on a target machine):
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working, and when I enter the container everything looks fine.
But I don't have any services running. So I had the idea to create a YAML file, docker-compose-gitlab.yml, for docker-compose to set things up:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left in a volumes directive which overwrote my entire application with an empty directory.
You can just leave it out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the networking of the containers by listing the networks with docker network ls,
then, when the list is shown, inspecting the compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
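A sketch of that flow (the network name compose generates is usually <projectfolder>_default; the name below is hypothetical, use the one from your own list):
# list networks and find the compose-generated one
docker network ls
# inspect it to see which containers are attached
docker network inspect proj_default
# recreate the stack so both containers land on the same network
docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d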
I am trying to build a set of Docker images that includes an installation of Magento 2 and MariaDB. In rare cases it succeeds (although this could be due to small changes in the app), but in most cases it is stuck on the following:
magento2-db | Version: '10.3.11-MariaDB-1:10.3.11+maria~bionic' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
I see that someone else had this issue, but the cause was actual RUN commands for MariaDB installation, which I don't directly call. There doesn't seem to be anything in the log to indicate an error either.
The last lines in the log are:
[16:49:18.424][Moby ][Info ] [25693.252573] br-83922f7da47b: port 2(vethac51834) entered blocking state
[16:49:18.453][Moby ][Info ] [25693.290035] br-83922f7da47b: port 2(vethac51834) entered forwarding state
[16:49:18.637][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy << POST /v1.25/containers/67175238f0e7a75ef527dbebbb1f5d992f1d01ee166643186dc5f727638aa66b/start (1.0560013s)\n"
[16:49:18.645][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy >> GET /v1.25/events?filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dmagento2%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D\n"
It seems to actually finish executing all steps in the Dockerfile, but I suspect there might be a problem in my docker-compose file, which looks like this:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - magento2-db
    env_file:
      - .docker/env
    depends_on:
      - magento2-db
  magento2-db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - magento2-db-data:/var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Is there anything obviously wrong with my setup, and is there a good way to troubleshoot this, maybe look for something specific in the log?
Maybe the way you're building your compose file is the problem.
Try using this one:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - db
    env_file:
      - .docker/env
    depends_on:
      - db
  db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - /var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Avoid using service names like 'blabla-something'; if you need a name, setting container_name is enough. Also, depends_on and links should always reference the service name, not the container name.
I hope this helps you.
Try setting -e MYSQL_INITDB_SKIP_TZINFO=1; refer to this issue.
e.g.
docker run -it --rm -e MYSQL_INITDB_SKIP_TZINFO=1 ... mariadb:10.4.8
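In compose form, the equivalent would be adding the variable to the magento2-db service from the question (a sketch; it could equally go into your .docker/env file):
magento2-db:
  container_name: 'magento2-db'
  image: mariadb:latest
  environment:
    # skip the timezone-table import that can hang the init scripts
    - MYSQL_INITDB_SKIP_TZINFO=1
  ports:
    - "9809:3306"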