I am trying to build a set of Docker images that includes an installation of Magento 2 and MariaDB. In rare cases it succeeds (although this could be due to small changes in the app), but in most cases it is stuck on the following:
magento2-db | Version: '10.3.11-MariaDB-1:10.3.11+maria~bionic' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
I see that someone else had this issue, but there the cause was explicit RUN commands installing MariaDB, which I don't call directly. There doesn't seem to be anything in the log to indicate an error either.
The last lines in the log are:
[16:49:18.424][Moby ][Info ] [25693.252573] br-83922f7da47b: port 2(vethac51834) entered blocking state
[16:49:18.453][Moby ][Info ] [25693.290035] br-83922f7da47b: port 2(vethac51834) entered forwarding state
[16:49:18.637][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy << POST /v1.25/containers/67175238f0e7a75ef527dbebbb1f5d992f1d01ee166643186dc5f727638aa66b/start (1.0560013s)\n"
[16:49:18.645][ApiProxy ][Info ] time="2018-11-28T16:49:18+02:00" msg="proxy >> GET /v1.25/events?filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dmagento2%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D\n"
It seems to actually finish executing all steps in the Dockerfile, but I suspect there might be a problem in my docker-compose file, which looks like this:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - magento2-db
    env_file:
      - .docker/env
    depends_on:
      - magento2-db
  magento2-db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - magento2-db-data:/var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Is there anything obviously wrong with my setup, and is there a good way to troubleshoot this, maybe look for something specific in the log?
Maybe the problem is in the way you're building your Compose file.
Try this one:
version: '3.0'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    container_name: 'magento-2.2.6'
    ports:
      - "80:80"
    volumes:
      - magento2-test-env:/var/www/html/magento2 # will be mounted on /var/www/html
    links:
      - db
    env_file:
      - .docker/env
    depends_on:
      - db
  db:
    container_name: 'magento2-db'
    image: mariadb:latest
    ports:
      - "9809:3306"
    volumes:
      - /var/lib/mysql/data
    env_file:
      - .docker/env
volumes:
  magento2-db-data:
  magento2-test-env:
    external: true
Avoid service names like 'blabla-something'; if you need a specific name, setting container_name is enough. Also, links and depends_on should always reference the service name, not the container name.
I hope this helps you.
Try setting -e MYSQL_INITDB_SKIP_TZINFO=1, refer to this issue.
e.g.
docker run -it --rm -e MYSQL_INITDB_SKIP_TZINFO=1 ... mariadb:10.4.8
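If you start the database with Compose rather than docker run, the same variable can be set in the service definition. A minimal sketch, applied to the magento2-db service from the question:

  magento2-db:
    container_name: 'magento2-db'
    image: mariadb:latest
    environment:
      - MYSQL_INITDB_SKIP_TZINFO=1   # skip importing timezone tables on first init
    ports:
      - "9809:3306"
    volumes:
      - magento2-db-data:/var/lib/mysql/data
    env_file:
      - .docker/env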
I am having issues adding auth to Solr in a Docker container. I have tried copying the security.json file to the Solr container's $SOLR_HOME folder, but http://localhost:8983/solr/admin/authentication returns this response:
{
  "responseHeader": {
    "status": 0,
    "QTime": 0
  },
  "errorMessages": ["No authentication configured"]
}
security.json:
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    },
    "realm": "My Solr users",
    "forwardCredentials": false
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {
        "name": "security-edit",
        "role": "admin"
      }
    ],
    "user-role": { "solr": "admin" }
  }
}
I'm mounting the file as a volume in docker-compose.yml:
version: "3"
services:
index:
image: solr:8.11.1
ports:
- "8983:8983"
volumes:
- data:/var/solr
- ./security/security.json:/opt/solr-8.11.1/server/solr/security.json
command:
- solr-precreate
- archive_poc_core
volumes:
data:
When I go into the container and check whether the file is there with the settings, I can find it, so I don't think that's the problem. I suspect the file is only picked up after Solr has started, but I'm not sure how to get the security file onto the container before startup, or what the correct way of doing this is.
Any help, guidance or advice would be appreciated.
Guides I looked at:
https://solr.apache.org/guide/8_1/basic-authentication-plugin.html#enable-basic-authentication
https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#using-security-json-with-solr
I managed to get this working with help from a colleague of mine. We ended up using ZooKeeper to manage the security configuration on the Solr side.
docker-compose.yml
version: "3"
services:
solr1:
build:
context: .
dockerfile: solr.Dockerfile
container_name: solr1
ports:
- "8983:8983"
volumes:
- data:/var/solr
environment:
- ZK_HOST=zoo1:2181
depends_on:
- zoo1
tty: true
stdin_open: true
zoo1:
tty: true
image: zookeeper:3.6.2
container_name: zoo1
restart: always
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181
volumes:
data:
solr.Dockerfile:
It copies over the files I needed, like security.json and solr-security.sh, and sets the entrypoint at build time.
FROM solr:8.11.1
COPY security/security.json security.json
COPY scripts/solr-security.sh /usr/bin/solr-security.sh
ENTRYPOINT ["/usr/bin/solr-security.sh"]
solr-security.sh:
This sets up authentication for Solr via ZooKeeper (you can find out more here: https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#in-solrcloud-mode) and then starts the default Solr entrypoint once authentication has been put in place.
#!/bin/bash
# Push security.json to ZooKeeper so authentication is configured before Solr starts
solr zk cp /opt/solr-8.11.1/security.json zk:security.json -z zoo1:2181
# Hand off to the stock Solr entrypoint, forwarding any arguments
exec /opt/docker-solr/scripts/docker-entrypoint.sh "$@"
Everything worked as expected: when browsing to Solr, it showed the login screen. I hope this helps someone else who was trying to resolve this.
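As a quick check, you can query the authentication endpoint with the credentials from the security.json above; the hash shown there is the well-known example from the Solr reference guide for user solr with password SolrRocks. Getting a normal JSON response (instead of "No authentication configured") confirms the plugin is active:

curl -u solr:SolrRocks http://localhost:8983/solr/admin/authentication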
I use Docker with WSL2 on a Debian VM and I'm trying to install Passbolt.
I'm following the steps in this guide: https://help.passbolt.com/hosting/install/ce/docker.html.
When I run docker-compose up it works, and I can reach the database with telnet, but it's impossible to reach the Passbolt instance, either with telnet or from my browser.
It's strange, because both containers, mariadb and passbolt, are running.
This is my docker-compose.yml:
version: '3.4'
services:
  db:
    image: mariadb:10.3
    env_file:
      - env/mysql.env
    volumes:
      - database_volume:/var/lib/mysql
    ports:
      - "127.0.0.1:3306:3306"
  passbolt:
    image: passbolt/passbolt:latest-ce
    # Alternatively you can use rootless:
    # image: passbolt/passbolt:latest-ce-non-root
    tty: true
    container_name: passbolt
    restart: always
    depends_on:
      - db
    env_file:
      - env/passbolt.env
    volumes:
      - gpg_volume:/etc/passbolt/gpg
      - images_volume:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 80:80
      - 443:443
      # Alternatively for non-root images:
      # - 80:8080
      # - 443:4433
volumes:
  database_volume:
  gpg_volume:
  images_volume:
If anybody can help me, thanks!
Your docker-compose file looks quite ordinary and I don't see any issues with it.
Can you please attach your passbolt.env and mysql.env (with any sensitive information removed, of course)?
Also, the passbolt.conf (VirtualHost) might be useful.
Make sure that the DNS A record is valid and that no firewall is blocking the ports.
Error logs would be appreciated as well.
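A few commands that can help collect that information, assuming the compose file from the question (service names db and passbolt):

docker-compose ps              # confirm both containers are up and check the port mappings
docker-compose logs db         # mariadb logs
docker-compose logs passbolt   # passbolt logs, including startup errors
docker-compose exec passbolt env   # dump the effective environment (mind the secrets)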
I have a Docker image on a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any supporting services running. So I had the idea to create a yml file for docker-compose run to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my yml, like php artisan config:clear, the error gets even less clear: it says it cannot find artisan, as if the command were executed outside the container, and exits with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I had left over a volume directive which overwrites my entire application with an empty directory.
You can simply leave it out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
You can debug the containers' networking by listing the networks with docker network ls,
then inspecting the Compose network with docker network inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up, as sketched below.
If they do turn out to be in the same network, try using the container name instead of localhost to reach each other.
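A sketch of that sequence, assuming the compose file above is named docker-compose-gitlab.yml (the Compose network name is usually prefixed with the project directory name):

docker network ls                                  # look for the network Compose created
docker network inspect <ComposeNetworkID>          # both containers should appear under "Containers"
docker-compose -f docker-compose-gitlab.yml down   # remove the containers (and the network)
docker-compose -f docker-compose-gitlab.yml up     # recreate everything on a shared network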
I've configured a little cluster using docker-compose, consisting of parse-server, mongo and parse-dashboard:
version: "3"
services:
myappdb:
image: mongo
ports:
- 27017:27017
myapp-parse-server:
image: parseplatform/parse-server
environment:
- PARSE_SERVER_MASTER_KEY=xxxx
- PARSE_SERVER_APPLICATION_ID=myapp
- VERBOSE=0
- PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
- PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
depends_on:
- myappdb
ports:
- 5000:1337
parse-dashboard:
image: parseplatform/parse-dashboard
ports:
- 5001:4040
environment:
- PARSE_DASHBOARD_ALLOW_INSECURE_HTTP=1
- PARSE_DASHBOARD_SERVER_URL=http://myapp-parse-server:1337/parse
- PARSE_DASHBOARD_APP_ID=myapp
- PARSE_DASHBOARD_MASTER_KEY=xxxx
- PARSE_DASHBOARD_USER_ID=admin
- PARSE_DASHBOARD_USER_PASSWORD=xxxx
Try as I might, however, I cannot get the deployed parse-dashboard to connect to the myapp-parse-server. After I log in to the dashboard using my browser (at localhost:5001), the dashboard app informs me that it is 'unable to connect to server'.
I've tried pinging the host 'myapp-parse-server' from the parse-dashboard container, and it can see the container just fine. Similarly, it can see the endpoint http://myapp-parse-server:1337/parse; wget returns the expected 403.
If I use a copy of parse-dashboard running on my host machine, it works just fine against http://localhost:5000/parse. So the forwarded port from my host to the parse-server works.
I've also tried configuring the dashboard using a parse-dashboard-config.json mounted into the container; it yields exactly the same result.
I'm at a loss as to what I'm doing wrong here. Can anybody shed some light on this?
It looks like you have some issues with your docker-compose file:
PARSE_SERVER_URL points to myapp-parse-server; it should point to http://localhost:1337/parse instead (unless you somehow modified the hosts file in the container, which I don't see).
Your myapp-parse-server should also link to your database using links, as sketched below.
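A minimal sketch of those two changes applied to the service from the question, everything else unchanged:

  myapp-parse-server:
    image: parseplatform/parse-server
    environment:
      - PARSE_SERVER_MASTER_KEY=xxxx
      - PARSE_SERVER_APPLICATION_ID=myapp
      - VERBOSE=0
      - PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
      - PARSE_SERVER_URL=http://localhost:1337/parse   # was http://myapp-parse-server:1337/parse
    links:
      - myappdb
    depends_on:
      - myappdb
    ports:
      - 5000:1337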
Here is an example docker-compose file from a blog post I wrote on how to deploy parse-server to Google Container Engine:
version: "2"
services:
# Node.js parse-server application image
app:
build: ./app
command: npm start -- /parse-server/config/config.json
container_name: my-parse-app
volumes:
- ./app:/parse-server/
- /parse-server/node_modules
ports:
- "1337:1337"
links:
- mongo
# MongoDB image
mongo:
image: mongo
container_name: mongo-database
ports:
- "27017:27017"
volumes_from:
- mongodata
# MongoDB image volume for persistence
mongodata:
image: mongo
volumes:
- ./data/db:/data/db
command:
- --break-mongo
You can see from the example above that I use links and also create and attach a volume for the database disk.
Also, I personally think it's better to run parse-server with a config file in order to decouple all the configuration, so my configuration file looks like the following (in my docker-compose you can see that I run parse-server with a config file and not with env variables):
{
  "databaseURI": "mongodb://localhost:27017/my-db",
  "appId": "myAppId",
  "masterKey": "myMasterKey",
  "serverURL": "http://localhost:1337/parse",
  "cloud": "./cloud/main.js",
  "mountPath": "/parse",
  "port": 1337
}
Finally, in my parse-dashboard image I also use a config file: I simply mount it as a volume, replacing the default config file with my own. Because this step was not covered in my blog posts, your final docker-compose file should look like the following:
version: "2"
services:
# Node.js parse-server application image
app:
build: ./app
command: npm start -- /parse-server/config/config.json
container_name: my-parse-app
volumes:
- ./app:/parse-server/
- /parse-server/node_modules
ports:
- "1337:1337"
links:
- mongo
# MongoDB image
mongo:
image: mongo
container_name: mongo-database
ports:
- "27017:27017"
volumes_from:
- mongodata
# MongoDB image volume for persistence
mongodata:
image: mongo
volumes:
- ./data/db:/data/db
command:
- --break-mongo
dashboard:
image: parseplatform/parse-dashboard:1.1.0
volumes:
- ./dashboard/dashboard-config.json:/src/Parse-Dashboard/parse-dashboard-config.json
environment:
PORT: 4040
PARSE_DASHBOARD_ALLOW_INSECURE_HTTP: 1
ALLOW_INSECURE_HTTP: 1
MOUNT_PATH: "/parse"
And dashboard-config.json (the config file) should be:
{
  "apps": [
    {
      "serverURL": "http://localhost:1337/parse",
      "appId": "myAppId",
      "masterKey": "myMasterKey",
      "appName": "My App"
    }
  ],
  "users": [
    {
      "user": "myuser",
      "pass": "mypassword"
    }
  ],
  "useEncryptedPasswords": false
}
I know this is a little bit long, so I really encourage you to read the blog series.
Hope it helps you.
I have successfully created docker containers and they work when loaded using:
sudo docker-compose up -d
The yml is as follows:
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./static:/static
    links:
      - node:node
  node:
    build: ./node
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./node:/usr/src/app
      - /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought that the containers would restart automatically if restart was set to always.
FYI: the yml is inside a projects directory in the home of the base user, ubuntu.
I tried checking for solutions on Stack Overflow but could not find anything appropriate. Thanks.
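One thing worth checking: restart: always only takes effect while the Docker daemon itself is running, so the daemon must be enabled to start at boot. A quick check on a systemd-based host such as Ubuntu:

sudo systemctl is-enabled docker   # should print "enabled"
sudo systemctl enable docker       # enable the daemon at boot if it isn't
docker ps                          # after a reboot, restart: always containers should come back up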