Can't get access to DB via phpmyadmin - docker

I'm pretty new to docker and I guess I have made a proper beginner mistake here, but I really can't get my head around what's wrong...
I have successfully created a docker container with a running Wordpress installation. The link to the DB does work there. I can also access phpmyadmin, but I can't get in. The following errors appear:
Invalid hostname for server 1. Please review your configuration.
Connection for controluser as defined in your configuration failed.
This is my docker-compose.yml:
version: "2"
services:
my-wpdb:
image: mariadb
environment:
MYSQL_ROOT_PASSWORD: letmein
my-wp:
image: wordpress
volumes:
- ./:/var/www/html
ports:
- "8080:80"
links:
- my-wpdb:mysql
environment:
WORDPRESS_DB_PASSWORD: letmein
phpmyadmin:
image: corbinu/docker-phpmyadmin
links:
- my-wpdb:mysql
ports:
- 8181:80
environment:
MYSQL_USERNAME: letmein
MYSQL_ROOT_PASSWORD: letmein
I'm trying to log in with: root, letmein
Thanks! Any help appreciated!

Your phpmyadmin is probably trying to connect to mysql using a different hostname from what you expect (localhost, probably?).
In your specific case you need to set it to use my-wpdb; more specifically, you want to set $MYSQL_PORT_3306_TCP_ADDR to point to your database.
From the source code of that (deprecated) docker image it is not quite clear, but I'm guessing you need to specify that with:
phpmyadmin:
  image: corbinu/docker-phpmyadmin
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: letmein
    MYSQL_ROOT_PASSWORD: letmein
    MYSQL_PORT_3306_TCP_ADDR: my-wpdb
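If that image keeps giving you trouble, the official phpmyadmin/phpmyadmin image reads its connection details from PMA_* variables instead; a rough sketch (not tested against your setup) would be:
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  ports:
    - 8181:80
  environment:
    PMA_HOST: my-wpdb    # service name of the database container
    PMA_USER: root
    PMA_PASSWORD: letmein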

Related

Installing Magento 2 using docker: the server already has a docker-compose.yml, should I write a separate one for Magento 2 in the magento folder?

I have a docker-compose.yml in the VPS server root:
version: '3'
services:
  mysql:
    image: mariadb:10.3.17
    command: --max_allowed_packet=256M --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./data/db:/var/lib/mysql:delegated"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    restart: always
  litespeed:
    image: litespeedtech/litespeed:${LSWS_VERSION}-${PHP_VERSION}
    env_file:
      - .env
    volumes:
      - ./lsws/conf:/usr/local/lsws/conf
      - ./lsws/admin/conf:/usr/local/lsws/admin/conf
      - ./bin/container:/usr/local/bin
      - ./sites:/var/www/vhosts/
      - ./acme:/root/.acme.sh/
      - ./logs:/usr/local/lsws/logs/
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
      - 7080:7080
    restart: always
    environment:
      TZ: ${TimeZone}
  phpmyadmin:
    image: bitnami/phpmyadmin:5.0.2-debian-10-r72
    ports:
      - 8080:80
      - 8443:443
    environment:
      DATABASE_HOST: mysql
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
    restart: always
volumes:
  esdata:
It has the server configuration in the code above. Should I write my Magento 2 configuration in the same file? My configuration is shown below:
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - /path/to/magento:/app:cached
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
If not, then what should the scenario be?
1- Should I create a new docker-compose-magento.yml in the root or inside the magento folder?
2- If I write a docker-compose.yml inside the magento folder, how can I connect it to the server root docker setup so that I can also use elasticsearch?
First, you need to know what application is already running with the existing docker-compose file. You can check that in the existing virtual host configuration files, which you will find inside the "sites" directory that is mapped to the LiteSpeed web server virtual host path "/var/www/vhosts" in the volume mapping.
If an application is definitely running with that docker-compose file, then you have to create a separate docker-compose file for running Magento. In this case a separate docker network will be created for all the Magento 2 docker-compose services, and you cannot access a service (Elasticsearch) on another network (in a separate docker-compose), so you have to add ES to the Magento 2 docker-compose as well, for example as sketched below.
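As a rough sketch, that could simply mean copying the elasticsearch service from the root file into the Magento 2 docker-compose, with the host port and volume name adjusted so they do not clash with the instance that is already running (9201 and esdata-magento below are only placeholders, and esdata-magento must also be declared under the Magento file's top-level volumes: key):
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node
    ports:
      - 9201:9200    # host port moved off 9200, which the root compose already publishes
    volumes:
      - esdata-magento:/usr/share/elasticsearch/data
    restart: always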
If nothing is running on the existing docker-compose, then you could merge both docker-compose files as per your requirements and understanding, or you could run only your new Magento 2 docker-compose file.
So the main thing here is the usage of two different networks, and docker containers can only talk to other containers on the same network.
Also, LiteSpeed is a web server that uses the same port numbers as Apache (webdevops/php-apache-dev:ubuntu-16.04), so there will be a port conflict if you create a new docker-compose and try to run both simultaneously. You need to manage that by using different host ports. If this is a production server that is not really an option, because people are not going to access web URLs using non-default port numbers.
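For example, a hedged sketch of the web service in the Magento compose with non-default host ports (8081 and 8444 are just placeholder values):
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    ports:
      - "8081:80"     # host port moved off 80, which LiteSpeed already publishes
      - "8444:443"    # host port moved off 443
      - "32823:22"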
The solution for this is Kubernetes, where you can run multiple applications all using the same public ports with no conflict, because in Kubernetes you divide your single physical server machine into multiple virtual machines and hence avoid port conflicts.
See this article for Kubernetes setup https://technicallysound.in/how-to-setup-a-static-site-on-kubernetes/
See this article for Magento setup on Docker https://technicallysound.in/how-to-setup-magento-2-on-docker-for-development/

Cannot log in to the MySQL server through browser

Each time I try to access phpmyadmin from the browser I receive this error:
"Cannot log in to the MySQL server"
I tried changing networks and restarting Docker.
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - ./public_html:/public_html
      - ./conf.d:/etc/nginx/conf.d
    networks:
      - nginxphp
  php:
    image: php:7.1.11-fpm-alpine
    volumes:
      - ./public_html:/public_html
    expose:
      - 9000
    networks:
      - nginxphp
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    depends_on:
      - db
    ports:
      - "8080:80"
networks:
  nginxphp:
Cannot log in to the MySQL server
mysqli_real_connect(): The server requested authentication method unknown to the client [caching_sha2_password]
mysqli_real_connect(): (HY000/2054): The server requested authentication method unknown to the client
Disclaimer: I'm not a Docker user!
You didn't mention whether you're using the browser on the same computer as the server or remotely. You'll need access to the mysql server (mysqld) through a terminal (command prompt). If this is a new install, it must be on the computer that is running the mysql server.
In the Docker mysql page:
"The default configuration for MySQL can be found in /etc/mysql/my.cnf, which may include additional directories such as /etc/mysql/conf.d or /etc/mysql/mysql.conf.d."
Try looking in the /etc/mysql/my.cnf file on the server you're trying to access first. You're looking for:
bind-address = x.x.x.x
This is the address that the mysql server will listen on ("bound to"). It's typically "localhost" or "127.0.0.1".
To eliminate the error message you're seeing, I had to do two things:
1) Change bind-address to 0.0.0.0.
This allows the server to accept connections from any computer. However, THIS IS A SECURITY RISK. Once you get it working, go read about bind addresses on the mysql website and set it appropriately.
2) Create a new account 'user'@'ipaddr', where user is the new username and ipaddr is the IPv4 address of the computer you're trying to connect from, i.e.:
CREATE USER 'user'@'192.168.1.68' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'192.168.1.68';
Now try accessing mysql through the terminal program using the new username on the computer with the ip you entered above by typing:
mysql -uuser -ppassword -hx.x.x.x
Hopefully, this will help point you in the right direction. There's a ton of information about bind addresses and security on the web.
The phpmyadmin container is connecting to the default host, localhost, but your mysql server is located in another container (which means you cannot reach the mysql server via localhost). So in the phpmyadmin service you have to set PMA_HOST=db. See the full list of env variables: https://hub.docker.com/r/phpmyadmin/phpmyadmin/
Full docker-compose.yml:
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - ./public_html:/public_html
      - ./conf.d:/etc/nginx/conf.d
    networks:
      - nginxphp
  php:
    image: php:7.1.11-fpm-alpine
    volumes:
      - ./public_html:/public_html
    expose:
      - 9000
    networks:
      - nginxphp
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    depends_on:
      - db
    ports:
      - "8080:80"
networks:
  nginxphp:
If you are using phpmyadmin with the latest mysql version you will have some login issues:
Cannot log in to the MySQL server mysqli_real_connect():
The server requested authentication method unknown to the client [caching_sha2_password] mysqli_real_connect(): (HY000/2054):
The server requested authentication method unknown to the client
The solution is to downgrade to mysql 5.7 or another version.
I tested with mysql 5.7 and it works for me.
If you want, you can test with other versions and let the community know.
Below is the docker-compose file that builds and runs Nginx + PHP 7.1 + mysql 5.7 + phpmyadmin:
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - 80:80
    volumes:
      - ./public_html:/public_html
      - ./conf.d:/etc/nginx/conf.d
    networks:
      - nginxphp
  php:
    image: php:7.1.11-fpm-alpine
    volumes:
      - ./public_html:/public_html
    expose:
      - 9000
    networks:
      - nginxphp
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3306
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    depends_on:
      - db
    ports:
      - "8080:80"
networks:
  nginxphp:
Hope this will save some time for someone!
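Another option, if you would rather stay on the latest mysql image, is to switch the server back to the old authentication plugin instead of downgrading. This is only a sketch I have not verified, relying on the mysql image passing command arguments straight to mysqld:
  db:
    image: mysql:8.0
    # revert MySQL 8's default caching_sha2_password so phpMyAdmin's mysqli client can log in
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"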

Docker ports not being exposed properly

version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
    # volumes:
    #   - ./mysql:/var/lib/mysql/
  nginx:
    image: nginx
    ports:
      - "80:80"
This is my docker-compose.yml. The weirdest thing is happening. I can visit localhost:8000 and get the redditaurus app without any issue. However, if I try to do the same thing with localhost:80, or localhost:3306 from a mysql terminal, I'll get access denied or ERR_EMPTY_RESPONSE.
If I try 0.0.0.0:80, I get the default nginx page, so that's okay, but why won't localhost work?
MySQL refuses to be served on either localhost or 0.0.0.0. I've tried accessing it from Sequel Pro, from inside a linked container, and from my host machine's console, and nothing can get into it. If I exec into the SQL container, I can log in just fine, so it's not a password issue.
Why can't I get to my containers normally? :(
You are missing some configuration properties. Try this:
version: '3'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/redditaurus
    environment:
      - REDDIT_KEY=${REDDIT_KEY}
      - REDDIT_SECRET=${REDDIT_SECRET}
    links:
      - mysql:mysql
  mysql:
    image: mysql
    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=mysql_native_password']
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ALLOW_EMPTY_PASSWORD: "YES"
    ports:
      - "3306:3306"
  nginx:
    image: nginx
    ports:
      - "80:80"
If you want to connect to mysql via the terminal, run this:
mysql -uroot -proot --protocol tcp
Next thing: your nginx binding on port 80 works correctly.
The problem here is not docker-compose; it may be in your OS configuration.
I used mysql:5.7 tag in docker-compose, and that allowed the container to work. I guess the latest branch has some issue with my local env.
Still not sure what's up with nginx, but it's not an issue.

How to link MySQL RDS in docker-compose.yml file?

Instead of the mysql container DB link below, I want to link an AWS MySQL RDS instance in the docker yml file. Is it possible?
mysql_db:
  image: mysql:5.6
  container_name: shishir_db
  environment:
    MYSQL_ROOT_PASSWORD: "xxxxxxxx"
    #MYSQL_USER: "shishir"
    #MYSQL_DATABASE: "shishir1"
    MYSQL_PASSWORD: "xxxxxxxx"
  ports:
    - "3306:3306"
There are a couple of ways that you can link to an AWS RDS MySQL instance from your docker-compose.yml.
The first, and perhaps simplest way is to set environment variables on the containers that need access to the RDS MySQL instance. So for example, you could update your OperationEngine service definition to look something like:
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
    MYSQL_HOST: "your-mysql-cname.rds.amazon.com"
    MYSQL_USER: "username"
    MYSQL_PASSWORD: "password"
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/
You can then update the configuration in that service to read database connection details from the environment, e.g. ${MYSQL_HOST}.
The obvious downside to this approach is that you have connection details stored as plain text in your docker-compose.yml, which is not great, but may be acceptable depending on your requirements.
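One way to soften that is to move the values into an env_file that you keep out of version control; a sketch, assuming a hypothetical ./secure/rds.env file that holds the MYSQL_* values:
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  env_file:
    - ./secure/rds.env   # hypothetical file containing MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD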
The 2nd approach (and the one I tend to favour) is to bind mount the database configuration into the running container.
Most applications support reading database connection details from a properties file. As an example: let's say that on start-up your application reads from /config/database.properties and requires the following properties to connect to the database:
config.db.host=your-mysql-cname.rds.amazon.com
config.db.user=foo
config.db.password=bar
I would set up my environment so that, at runtime, I bind mount a properties file that provides all of the required values to the container:
OperationEngine:
  volumes:
    - /secure/config/database.properties:/config/database.properties
The /secure/config directory is part of the filesystem on your Docker host. How that directory gets created and populated is up to you. Typically I approach this by having my environment setup scripts make the directory and then clone a private Git repository into it which contains the correct configuration for that environment. Naturally, only those with the required permission levels can view the Git repositories that contain sensitive configuration details, i.e. for the production system.
Hope that helps.
No, I have not created networking in my yml file. Below is the entire content of my yml file, so what changes would be required?
mysql_db:
  image: mysql:5.6
  container_name: shishir_db
  environment:
    MYSQL_ROOT_PASSWORD: "xxxxxxxx"
    #MYSQL_USER: "shishir"
    #MYSQL_DATABASE: "shishir1"
    MYSQL_PASSWORD: "xxxxxxxx"
  ports:
    - "3306:3306"
zookeeper:
  image: wurstmeister/zookeeper
  container_name: zookeeper
  ports:
    - "2181:2181"
kafka:
  image: shishir/kafkaengine:0.10
  container_name: kafka
  links:
    - zookeeper
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}
    KAFKA_ZOOKEEPER_CONNECT: ${DOCKER_HOST_IP}:2181
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_CREATE_TOPICS: ${KAFKA_TOPICS}
    KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS: 12000
    KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 12000
flywaydb:
  image: shishir/flywaydb
  container_name: flywaydb
  links:
    - mysql_db
OperationEngine:
  image: shishir/operationengine:${RELEASE_OTA_VERSION}
  container_name: operation_engine
  links:
    - mysql_db
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST_IP: ${DOCKER_HOST_IP}
    JAVA_OPTS: ${JAVA_OPTS}
  volumes:
    - ${HOME}/operationengine/logs/:/usr/local/tomcat/logs/

docker-compose phpmyadmin kicks out

I'm trying to use a docker-compose.yml for launching mariadb and phpmyadmin. When I edit something in phpmyadmin it kicks me out to the login page.
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: Pass123
  restart: always
  volumes:
    - "./.data/db:/var/lib/mysql/:rw"
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  links:
    - db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: Pass123
    PMA_HOST: mysql
I've tried a busybox volume container to keep the mysql data, and I've swapped mariadb for the mysql image, but I still haven't found the solution. What should I do to solve this?
Thanks in advance
The set of environment variables supported by the phpmyadmin/phpmyadmin Docker image is different from that of the mariadb image. Try replacing the MYSQL_USERNAME and MYSQL_ROOT_PASSWORD variables of your phpmyadmin service with PMA_USER and PMA_PASSWORD, respectively.
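Something along these lines should work (a sketch based on your compose file, not tested):
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  links:
    - db:mysql
  ports:
    - 8181:80
  environment:
    PMA_HOST: mysql
    PMA_USER: root
    PMA_PASSWORD: Pass123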
I don't understand the meaning of the link:
links:
  - db:mysql
The configuration file of phpmyadmin/phpmyadmin (/www/config.inc.php) says that, by default, the host name of the database server is 'db':
$hosts = array('db');
As you name the database server 'db', the link should be written like this:
links:
  - db
If your database container is not named 'db', you should add the environment variable PMA_HOST (or PMA_HOSTS for multiple db servers) with the right name.
All the other environment variables are useless (even in the db config, I think).
