So I tried building out a Docker container for Keycloak, using a MariaDB container for the database storage as well. (I'm really not sure it's worth replacing the database that's built into Keycloak with MariaDB at the server size I'll be running, but I wanted to keep things consistent.) My question is: has anyone had luck with this? I can get both containers to start (the Keycloak one and the MariaDB one). Keycloak will initially work and create the required database and tables after connecting to the MariaDB server. From what I can tell it's an encoding/collation issue between Keycloak and MariaDB. The logs mostly throw a lot of errors resembling:
[Failed SQL: (1194) ALTER TABLE keycloak.CLIENT_SESSION ADD REALM_ID VARCHAR(255) NULL]
and the container gets stuck in a restart loop.
Now if I mix it up and configure it with a Postgres container instead, it works fine (a rough sketch of that variant is below).
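For reference, the Postgres variant that booted cleanly looked roughly like this (a sketch from memory; the image tag, service, and credential names here are illustrative, not my exact file):

keycloak_postgres_production:
  image: postgres:15
  environment:
    POSTGRES_DB: keycloak
    POSTGRES_USER: keycloak
    POSTGRES_PASSWORD: change_me
# and on the Keycloak side:
#   KC_DB: postgres
#   KC_DB_URL: jdbc:postgresql://keycloak_postgres_production:5432/keycloak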
These are the Compose settings I was trying to get working with MariaDB:
keycloak_mariadb_production:
  container_name: keycloak_mariadb_production
  image: mariadb:10.7.7-focal
  restart: unless-stopped
  ports:
    - 33306:3306
  expose:
    - 3306 # the container listens on 3306; 33306 is only the host-side mapping
  environment:
    MARIADB_ROOT_PASSWORD: "${KEYCLOAK_DATABASE_ROOT_PASSWORD}"
    MYSQL_DATABASE: "${KEYCLOAK_DATABASE_PRODUCTION}"
    MYSQL_USER: "${KEYCLOAK_DATABASE_SU_USERNAME}"
    MYSQL_PASSWORD: "${KEYCLOAK_DATABASE_SU_PASSWORD}"
  command: --init-file /data/application/init.sql
  volumes:
    - ./mysql-keycloak-data:/var/lib/mysql
    - ./backend/app/init_keycloak.sql:/data/application/init.sql
  healthcheck:
    test: "mysql -u${KEYCLOAK_DATABASE_SU_USERNAME} -p${KEYCLOAK_DATABASE_SU_PASSWORD} -e 'SHOW DATABASES'"
    interval: 5s
    timeout: 5s
    retries: 20
  networks:
    keycloak_mariadb_production_network:
      aliases:
        - keycloak_mariadb_production_network
keycloak_frontend_production:
  container_name: keycloak_frontend_production
  image: quay.io/keycloak/keycloak:20.0.3
  restart: unless-stopped
  ports:
    - 8080:8080
  expose:
    - 8080
  command: ["start-dev"]
  environment:
    PROXY_ADDRESS_FORWARDING: "true"
    KC_DB: mariadb
    KC_DB_URL: jdbc:mariadb://keycloak_mariadb_production:3306/su_keycloak_production
    KC_DB_USERNAME: "root"
    KC_DB_PASSWORD: "test"
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: Pa55w0rd
  depends_on:
    - keycloak_mariadb_production
  networks:
    - keycloak_mariadb_production_network
networks:
  keycloak_mariadb_production_network:
    driver: bridge
I'm just really trying to keep things consistent and keep what I can in a MariaDB database. If I have to go the Postgres route due to some limitation, I can. But I wanted to pick the brains of anyone better at this than me and see if there's been a reliable solution, as I've been reading that it could somehow be an issue with the MariaDB JDBC connector.
Thanks in advance for any advice.
EDIT:
So I tried a base configuration that excluded the local volume under the volumes: stanza, and it boots up fine. If I'm guessing correctly, it seems like MariaDB isn't playing well with the local file system.
mysql:
  image: docker.io/mariadb:10
  environment:
    MARIADB_DATABASE: keycloak
    MARIADB_ROOT_PASSWORD: rootpassword
    MARIADB_PASSWORD: password
    MARIADB_USER: keycloak
  ports:
    - 3306:3306
keycloak:
  image: quay.io/keycloak/keycloak:20.0
  environment:
    KC_HOSTNAME: localhost
    KC_HOSTNAME_PORT: 8080
    KC_HOSTNAME_STRICT_BACKCHANNEL: "true"
    KC_DB: mariadb
    KC_DB_URL: jdbc:mariadb://mysql:3306/keycloak?characterEncoding=UTF-8
    KC_DB_USERNAME: keycloak
    KC_DB_PASSWORD: password
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: adminpassword
    KC_HEALTH_ENABLED: "true"
    KC_LOG_LEVEL: info
  healthcheck:
    test: [ "CMD", "curl", "-f", "http://localhost:8080/health/ready" ]
    interval: 15s
    timeout: 2s
    retries: 15
  command: start-dev
  ports:
    - 8080:8080
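If it really is the bind mount, a named volume sidesteps host-filesystem ownership entirely; a sketch of that change (the volume name is illustrative):

mysql:
  image: docker.io/mariadb:10
  volumes:
    - keycloak-db-data:/var/lib/mysql
  # ...rest of the service as above
volumes:
  keycloak-db-data: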
EDIT: More logs, after attempting to change the permissions of the mysql folder.
Updating the configuration and installing your custom providers, if any. Please wait.
2023-02-03 20:03:51,600 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 8475ms
2023-02-03 20:03:53,867 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: , Hostname: localhost, Strict HTTPS: false, Path: , Strict BackChannel: true, Admin URL: , Admin: , Port: 8080, Proxied: false
2023-02-03 20:03:55,742 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-02-03 20:03:57,062 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-02-03 20:03:57,150 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-02-03 20:03:57,174 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-02-03 20:03:57,186 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-02-03 20:03:57,426 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-02-03 20:03:57,948 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_251187, Site name: null
2023-02-03 20:03:57,966 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1146-42S02: Table 'keycloak.migration_model' doesn't exist
2023-02-03 20:03:58,756 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1146-42S02: Table 'keycloak.databasechangelog' doesn't exist
2023-02-03 20:03:59,371 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1146-42S02: Table 'keycloak.databasechangeloglock' doesn't exist
2023-02-03 20:03:59,442 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1146-42S02: Table 'keycloak.databasechangelog' doesn't exist
2023-02-03 20:03:59,443 INFO [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2023-02-03 20:04:02,225 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1194-HY000: Table 'client_session' is marked as crashed and should be repaired
2023-02-03 20:04:02,433 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to update database
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: liquibase.exception.MigrationFailedException: Migration failed for change set META-INF/jpa-changelog-1.1.0.Beta1.xml::1.1.0.Beta1::sthorger@redhat.com:
Reason: liquibase.exception.DatabaseException: (conn=5) Table 'client_session' is marked as crashed and should be repaired [Failed SQL: (1194) ALTER TABLE keycloak.CLIENT_SESSION ADD REALM_ID VARCHAR(255) NULL]
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Migration failed for change set META-INF/jpa-changelog-1.1.0.Beta1.xml::1.1.0.Beta1::sthorger@redhat.com:
Reason: liquibase.exception.DatabaseException: (conn=5) Table 'client_session' is marked as crashed and should be repaired [Failed SQL: (1194) ALTER TABLE keycloak.CLIENT_SESSION ADD REALM_ID VARCHAR(255) NULL]
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: (conn=5) Table 'client_session' is marked as crashed and should be repaired [Failed SQL: (1194) ALTER TABLE keycloak.CLIENT_SESSION ADD REALM_ID VARCHAR(255) NULL]
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: (conn=5) Table 'client_session' is marked as crashed and should be repaired
2023-02-03 20:04:02,434 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
2023-02-03 20:05:24,940 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: , Hostname: localhost, Strict HTTPS: false, Path: , Strict BackChannel: true, Admin URL: , Admin: , Port: 8080, Proxied: false
2023-02-03 20:05:26,529 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-02-03 20:05:27,490 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-02-03 20:05:27,555 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-02-03 20:05:27,622 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-02-03 20:05:27,648 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-02-03 20:05:27,872 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-02-03 20:05:28,399 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: node_228690, Site name: null
2023-02-03 20:05:28,413 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1146-42S02: Table 'keycloak.migration_model' doesn't exist
2023-02-03 20:05:29,787 INFO [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Updating database. Using changelog META-INF/jpa-changelog-master.xml
2023-02-03 20:05:30,379 WARN [org.mariadb.jdbc.message.server.ErrorPacket] (main) Error: 1877-HY000: Table keycloak/client_session is corrupted. Please drop the table and recreate.
2023-02-03 20:05:30,596 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to update database
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: liquibase.exception.MigrationFailedException: Migration failed for change set META-INF/jpa-changelog-1.1.0.Beta1.xml::1.1.0.Beta1::sthorger@redhat.com:
Reason: liquibase.exception.DatabaseException: (conn=25) Table keycloak/client_session is corrupted. Please drop the table and recreate. [Failed SQL: (1877) DELETE FROM keycloak.CLIENT_SESSION]
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Migration failed for change set META-INF/jpa-changelog-1.1.0.Beta1.xml::1.1.0.Beta1::sthorger@redhat.com:
Reason: liquibase.exception.DatabaseException: (conn=25) Table keycloak/client_session is corrupted. Please drop the table and recreate. [Failed SQL: (1877) DELETE FROM keycloak.CLIENT_SESSION]
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: (conn=25) Table keycloak/client_session is corrupted. Please drop the table and recreate. [Failed SQL: (1877) DELETE FROM keycloak.CLIENT_SESSION]
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: (conn=25) Table keycloak/client_session is corrupted. Please drop the table and recreate.
2023-02-03 20:05:30,597 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--
MariaDB container logs:
2023-02-03 21:43:50 5 [ERROR] InnoDB: Table keycloak/user_entity contains 2 indexes inside InnoDB, which is different from the number of indexes 1 defined in the .frm file. See https://mariadb.com/kb/en/innodb-troubleshooting/
2023-02-03 21:43:50 5 [ERROR] InnoDB: Table keycloak/user_entity contains 2 indexes inside InnoDB, which is different from the number of indexes 2 defined in the .frm file. See https://mariadb.com/kb/en/innodb-troubleshooting/
2023-02-03 21:43:51 5 [ERROR] InnoDB: Table keycloak/keycloak_role contains 2 indexes inside InnoDB, which is different from the number of indexes 1 defined in the .frm file. See https://mariadb.com/kb/en/innodb-troubleshooting/
2023-02-03 21:43:51 5 [ERROR] InnoDB: Table keycloak/client contains 2 indexes inside InnoDB, which is different from the number of indexes 1 defined in the .frm file. See https://mariadb.com/kb/en/innodb-troubleshooting/
2023-02-03 21:43:51 5 [ERROR] InnoDB: Table keycloak/realm contains 2 indexes inside InnoDB, which is different from the number of indexes 1 defined in the .frm file. See https://mariadb.com/kb/en/innodb-troubleshooting/
That looks like a permission problem with your volume. Try chmod 777 ./mysql-keycloak-data, and if that works, then use minimal permissions for that folder.
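If chmod 777 confirms it, a tighter fix is usually to hand the directory to the UID the database runs as; the official mariadb image runs as the mysql user (UID 999) by default. A sketch:

sudo chown -R 999:999 ./mysql-keycloak-data
sudo chmod -R 750 ./mysql-keycloak-data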
Related
I'm really new to Docker (and Postgres) and still finding my feet. I get an error and can't seem to get one of my Postgres services running, although when I start it, I'm able to access pgAdmin and Airflow via the browser. I think there is some sort of conflict happening, but I'm not sure where. I have a docker-compose.yml file that starts a few containers, as well as the Postgres one in question, which has the service name db:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  db:
    image: postgres:13.0-alpine
    restart: always
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: admin_user
      POSTGRES_PASSWORD: secret_password
      # PGDATA: /var/lib/postgresql/data
    volumes:
      - ./db-data:/var/lib/postgresql/data
    ports:
      - "5433:5432"
  pgadmin:
    image: dpage/pgadmin4:4.27
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin_user@test_email.com
      PGADMIN_DEFAULT_PASSWORD: test_password
      PGADMIN_LISTEN_PORT: 1111
    ports:
      - "1111:1111"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    links:
      - "db:pgsql-server"
  webserver:
    image: l/custom_airflow:1.5
    container_name: l_custom_airflow
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=n
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      - ./db-data:/usr/local/airflow/db-data
      - ./pgadmin-data:/usr/local/airflow/pgadmin-data
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
volumes:
  db-data:
  pgadmin-data:
The relevant part is this:
db:
  image: postgres:13.0-alpine
  restart: always
  environment:
    POSTGRES_DB: postgres
    POSTGRES_USER: admin_user
    POSTGRES_PASSWORD: secret_password
    # PGDATA: /var/lib/postgresql/data
  volumes:
    - ./db-data:/var/lib/postgresql/data
  ports:
    - "5433:5432"
[I already have two versions of Postgres on my local machine, and I saw that they use ports 5432 and 5433, so it looks like the latest one goes to 5433. Similarly, I have another service (Airflow) that depends on an older version of Postgres to run, so I assume that since that one comes first it takes 5432, and the new Postgres service I want will likely be mapped to 5433 by default. Please correct me if I'm wrong.]
But when I run docker-compose up -d and check my containers with docker container ls -a, I see that this particular container is continuously restarting. I ran docker logs --tail 50 --follow --timestamps pipeline_5_db_1 (the container name for the db service) and I see the following error:
2020-10-28T08:46:29.730973000Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:30.468640800Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:31.048144200Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:31.803571400Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:32.957604600Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:34.885928500Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:38.479922200Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:45.384436400Z chmod: /var/lib/postgresql/data: Operation not permitted
2020-10-28T08:46:58.612202300Z chmod: /var/lib/postgresql/data: Operation not permitted
I googled the error and saw a couple of other SO posts, but I can't find a clear explanation. This post and this post are a bit unclear to me (might be because I'm not so familiar with Docker yet), so I'm not sure how to use the responses to solve this issue.
You've got db-data defined as a named volume at the bottom of the compose file, but you're using ./db-data within each service, which is a bind mount. You might try using the named volume instead of the shared directory in your db and webserver services, like this:
volumes:
  - db-data:/var/lib/postgresql/data
A bind mount should also work but can be troublesome if permissions on the mounted directory aren't quite right, which might be your problem.
The above also applies to pgadmin-data where the pgadmin service is using a named volume but webserver is using the bind mount (local directory). In fact, it's not clear why the webserver would need access to those data directories. Typically, a webserver would connect to the database via port 5432 (which doesn't even need to be mapped on the host). See for instance the bitnami/airflow docs on Docker Hub.
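Concretely, the change might look like this (a sketch showing only the relevant parts):

db:
  image: postgres:13.0-alpine
  volumes:
    - db-data:/var/lib/postgresql/data
webserver:
  # drop the ./db-data and ./pgadmin-data bind mounts here entirely
  # and talk to the database over port 5432 instead
volumes:
  db-data:
  pgadmin-data: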
I have a docker-compose file that brings up two containers: one with MariaDB and one with WordPress.
The problem
I get a connection failure; apparently the user gets lost and cannot authenticate.
wp-mysql | 2019-08-09 13:21:16 18 [Warning] Aborted connection 18 to db: 'unconnected' user: 'unauthenticated' host: '172.31.0.3' (This connection closed normally without authentication)
Situation
When I go to http://localhost:8010 the WordPress service is available, but with an error connecting to the database.
The docker-compose.yml:
version: '3'
services:
  db:
    container_name: wp-mysql
    image: mariadb
    volumes:
      - $PWD/data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 12345678
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    ports:
      - "3307:3306"
    networks:
      - my_net
    restart: on-failure
  wp:
    depends_on:
      - db
    container_name: wp-web
    volumes:
      - "$PWD/html:/var/www/html"
    image: wordpress
    ports:
      - "8010:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    networks:
      - my_net
networks:
  my_net:
Error:
wp-mysql | 2019-08-09 13:21:16 18 [Warning] Aborted connection 18 to db: 'unconnected' user: 'unauthenticated' host: '172.31.0.3' (This connection closed normally without authentication)
Where is the configuration error?
Why can't the WordPress container use the user created in the MariaDB container's environment?
Finally solved it.
After going around in circles, and with help from the user @JackNavaRow, the solution came out.
It was as simple as rebooting the system and deleting the volumes.
Bring the containers up and everything works OK.
I leave it here in case anyone runs into this problem, so they don't keep going around in circles.
It may be due to database files corrupted by an unexpected shutdown; you can delete the database volume.
Warning: this action will drop all your database data.
You could use docker-compose down -v to remove the volumes and then execute docker-compose up -d to bring everything back up.
In your case, you are not using a named volume to store your database data, so you can remove the data directory and try again:
rm -rf $PWD/data
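So the full (destructive) reset for this compose file would be something like:

docker-compose down
rm -rf $PWD/data   # drops all database data
docker-compose up -d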
I am trying to move a working Rails app to a Docker environment.
Following the UNIX (and Docker) philosophy, I would like to have each service in its own container.
I managed to get Redis and Postgres working fine, but I am struggling to get Solr and Rails talking to each other.
In the file app/models/spree/sunspot/search_decorator.rb, when this line executes:
@solr_search.execute
the following error appears on the console:
Errno::EADDRNOTAVAIL (Cannot assign requested address - connect(2) for "localhost" port 8983):
While researching a solution I have found people just installing Solr in the same container as their Rails app, but I would rather have it in a separate container.
Here are my config/sunspot.yml
development:
  solr:
    hostname: localhost
    port: 8983
    log_level: INFO
    path: /solr/development
and docker-compose.yml files
version: '2'
services:
  db:
    (...)
  redis:
    (...)
  solr:
    image: solr:7.0.1
    ports:
      - "8983:8983"
    volumes:
      - solr-data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - mycore
    networks:
      - backend
  app:
    build: .
    env_file: .env
    environment:
      RAILS_ENV: $RAILS_ENV
    depends_on:
      - db
      - redis
      - solr
    ports:
      - "3000:3000"
    tty: true
    networks:
      - backend
volumes:
  solr-data:
  redis-data:
  postgres-data:
networks:
  backend:
    driver: bridge
Any suggestions?
Your config/sunspot.yml should have the following:
development:
  solr:
    hostname: solr # since our solr instance is linked as solr
    port: 8983
    log_level: WARNING
    solr_home: solr
    path: /solr/mycore
    # this path comes from the last command of our entrypoint as
    # specified in the last parameter for our solr container
If you see
Solr::Error::Http (RSolr::Error::Http - 404 Not Found
Error: Not Found
URI: http://localhost:8982/solr/development/select?wt=json
Create a new core using the admin interface at:
http://localhost:8982/solr/#/~cores
or using the following command:
docker-compose exec solr solr create_core -c development
I wrote a blog post on this: https://gaurav.koley.in/2018/searching-in-rails-with-solr-sunspot-and-docker
Hopefully that helps those who come here at a later stage.
When you declare services in a docker-compose file, containers get their service name as their hostname. So your solr service will be available, inside the backend network, as solr.
What I'm seeing from your error is that the Ruby code is trying to connect to localhost:8983, while it should connect to solr:8983.
You'll probably also need to change the hostname inside config/sunspot.yml, but I don't work with Solr so I'm not sure about this.
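A quick way to confirm the in-network hostname resolves (assuming a shell, getent, and curl are available in the app image, which may not be the case for slim images):

docker-compose exec app getent hosts solr
docker-compose exec app curl -s http://solr:8983/solr/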
I am trying to set up an extensible docker production environment for a few projects on a virtual machine.
My setup is as follows:
Front end (this works as expected; thanks to Tevin Jeffery for this):
# ~/proxy/docker-compose.yml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - '/etc/nginx/vhost.d'
      - '/usr/share/nginx/html'
      - '/etc/nginx/certs:/etc/nginx/certs:ro'
      - '/var/run/docker.sock:/tmp/docker.sock:ro'
    networks:
      - nginx
  letsencrypt-nginx-proxy:
    container_name: letsencrypt-nginx-proxy
    image: 'jrcs/letsencrypt-nginx-proxy-companion'
    volumes:
      - '/etc/nginx/certs:/etc/nginx/certs'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    volumes_from:
      - nginx-proxy
    networks:
      - nginx
networks:
  nginx:
    driver: bridge
Database (planning to add Postgres to support Rails apps as well):
# ~/mysql/docker-compose.yml
version: '2'
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
    # ports:
    #   - 3036:3036
    networks:
      - db
networks:
  db:
    driver: bridge
And finally a WordPress blog to test if everything works:
# ~/wp/docker-compose.yml
version: '2'
services:
  wordpress:
    image: wordpress
    # external_links:
    #   - mysql_db_1:mysql
    ports:
      - 8080:80
    networks:
      - proxy_nginx
      - mysql_db
    environment:
      # for nginx and dockergen
      VIRTUAL_HOST: gizmotronic.ca
      # wordpress setup
      WORDPRESS_DB_HOST: mysql_db_1
      # WORDPRESS_DB_HOST: mysql_db_1:3036
      # WORDPRESS_DB_HOST: mysql
      # WORDPRESS_DB_HOST: mysql:3036
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wordpress
networks:
  proxy_nginx:
    external: true
  mysql_db:
    external: true
My problem is that the WordPress container cannot connect to the database. I get the following error when I try to start the WordPress container with docker-compose up:
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 22
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
wp_wordpress_1 exited with code 1
UPDATE:
I was finally able to get this working. My main problem was relying on the container defaults for the environment variables. This created an automatic data volume without a database or user for WordPress. After I added explicit environment variables to the mysql and WordPress containers, I removed the data volume and restarted both containers. This forced the mysql container to recreate the database and user.
To ~/mysql/docker-compose.yml:
environment:
  MYSQL_ROOT_PASSWORD: wordpress
  MYSQL_USER: wordpress
  MYSQL_PASSWORD: wordpress
  MYSQL_DATABASE: wordpress
and to ~/wp/docker-compose.yml:
environment:
  # for nginx and dockergen
  VIRTUAL_HOST: gizmotronic.ca
  # wordpress setup
  WORDPRESS_DB_HOST: mysql_db_1
  WORDPRESS_DB_USER: wordpress
  WORDPRESS_DB_PASSWORD: wordpress
  WORDPRESS_DB_NAME: wordpress
One problem with docker-compose is that although your application may be linked to your database, docker-compose will NOT wait for the database to be up and ready before starting your application. Here is the official Docker page on startup order:
https://docs.docker.com/compose/startup-order/
I've faced a similar problem where my test application would fail because it couldn't connect to the database server, simply because the server wasn't up and running yet.
I made a workaround similar to the one in the article linked above, by running a shell script that polls the DB address until it is available. This script should be the last CMD command in your application:
#!/bin/sh
# Wait until the MySQL server answers a ping before starting the app.
# Note: the original script polled with curl and waited for an HTTP 200,
# but MySQL does not speak HTTP on port 3306, so that check can never
# succeed; mysqladmin ping (shipped with the mysql/mariadb client) is a
# working equivalent. YOUR_MYSQL_DATABASE is a placeholder hostname.
until mysqladmin ping -h "YOUR_MYSQL_DATABASE" --silent; do
  sleep 2
  echo "MySQL is not ready yet... retrying"
done
# Once we know the server's up, we can start our application
exec your-app-start-command   # placeholder: substitute your real start command
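As a hedged alternative, if your Compose version supports the long depends_on form, you can let Compose itself gate startup on a database healthcheck instead of a wait script (a sketch; the credentials match the mysql compose file above):

services:
  db:
    image: mariadb
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-uroot", "-pwordpress"]
      interval: 5s
      timeout: 3s
      retries: 20
  wordpress:
    depends_on:
      db:
        condition: service_healthy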
I'm not 100% sure this is the problem you're having. Another way to debug it is to run docker-compose in detached mode with the -d flag and then run docker ps to see if your database is even running. If it is running, run docker logs $YOUR_DB_CONTAINER_ID to see if MySQL is giving you any errors when starting.
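Spelled out, that debugging flow is (the container ID is a placeholder):

docker-compose up -d
docker ps
docker logs $YOUR_DB_CONTAINER_ID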
I'm trying to use a docker-compose.yml for launching MariaDB and phpMyAdmin. When I edit something in phpMyAdmin it kicks me out to the login page.
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: Pass123
  restart: always
  volumes:
    - "./.data/db:/var/lib/mysql/:rw"
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  links:
    - db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: Pass123
    PMA_HOST: mysql
I've tried a volume container with busybox to keep the MySQL data, and swapping the mariadb image for the mysql one, but I still haven't found the solution. What should I do to solve this?
Thanks in advance
The set of environmental variables supported by the phpmyadmin/phpmyadmin Docker image is different from that of the mariadb image. Try replacing the MYSQL_USERNAME and MYSQL_ROOT_PASSWORD variables of your phpmyadmin service with PMA_USER and PMA_PASSWORD, respectively.
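For example (a sketch, with the values carried over from your compose file):

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  links:
    - db:mysql
  ports:
    - 8181:80
  environment:
    PMA_HOST: mysql
    PMA_USER: root
    PMA_PASSWORD: Pass123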
I don't understand the meaning of the link:
links:
  - db:mysql
The configuration file of phpmyadmin/phpmyadmin (/www/config.inc.php) says that, by default, the host name of the database server is 'db':
$hosts = array('db');
Since you named the database server 'db', the link should be written like this:
links:
  - db
If your database container is not named 'db', you should add the environment variable PMA_HOST (or PMA_HOSTS for multiple DB servers) with the right name.
All the other environment variables are unnecessary (even in the db config, I think).
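Putting that together, a minimal sketch without the link alias (the service name db doubles as the hostname on the default network):

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  ports:
    - 8181:80
  environment:
    PMA_HOST: db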