Cannot connect Spring Boot application to Docker Mysql container - unknown database - docker

In Docker I created a network:
docker network create mysql-network
Then I created a MySQL container:
docker container run -d -p 3306:3306 --net=mysql-network --name mysql-hibernate -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=test -v hibernate:/var/lib/mysql mysql
When I run docker ps, everything looks OK.
This is my application.properties:
spring.jpa.hibernate.ddl-auto=create
useSSL=false
spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=root
spring.datasource.password=password
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL57Dialect
I also tried
spring.datasource.url=jdbc:mysql://mysql-hibernate:3306/test
But I always get an error on startup:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'test'
How is it possible that it doesn't know the database 'test'? I specified the name in Docker with -e MYSQL_DATABASE=test.
What am I missing?

I know it is a bit late, but I'll answer anyway so people coming here can benefit ;)
Your configuration overall seems alright. When you get an error like this, you can append the createDatabaseIfNotExist parameter, set to true, to the datasource URL in your application.properties.
So you will end up with something like this:
spring.datasource.url=jdbc:mysql://localhost:3306/test?createDatabaseIfNotExist=true
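If the flag alone doesn't do it, another possibility (an assumption, based on the official mysql image only applying MYSQL_DATABASE during first-time initialization of an empty data directory) is that the hibernate volume already held data from an earlier run, so the test database was never created. A minimal sketch of a reset, with the commands built as strings so you can review them before pasting into a terminal:

```shell
# Assumption: stale data in the "hibernate" volume prevented first-run init.
# WARNING: running these deletes the MySQL data stored in that volume.
STEP1='docker rm -f mysql-hibernate'   # stop and remove the container
STEP2='docker volume rm hibernate'     # drop the stale data volume
echo "$STEP1"
echo "$STEP2"
```

After removing both, re-run the original docker container run command from the question and the test database will be created on first startup.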
Hope this helps!

Related

Grafana in Docker: not all Environment variables are transferred to grafana.ini

I have quite a weird problem and I'm really not sure where it comes from. I'm trying to run Grafana inside a Docker container and want to set some grafana.ini values through environment variables in the docker run command.
docker run -d -v grafana_data:/var/lib/grafana -e "GF_SECURITY_ADMIN_PASSWORD=123456" -e "GF_USERS_DEFAULT_THEME=light" --name=grafana -p 3000:3000 grafana/grafana
The default theme gets changed as wanted, but the admin_password stays the same. I've checked for typos like a million times and could not find one. I've tried with quotes ('123456') and without, all with the same result. Is there a reason why I can't change this value?
Thanks in Advance!
https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#admin_password
The password of the default Grafana Admin. Set once on first-run.
Please note the last sentence: it is used only on the first run. It won't change the admin password if you already changed it before, because that new password is stored in the database.
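If the instance has already been initialized, one option (a sketch; it assumes the grafana-cli binary bundled in the official image and the container name grafana from the question) is to reset the stored password directly. The command is built as a string here so you can review it before running:

```shell
# Reset the admin password persisted in Grafana's database.
# NEWPASS is a placeholder for whatever password you want.
RESET='docker exec -it grafana grafana-cli admin reset-admin-password NEWPASS'
echo "$RESET"
```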

Fixing my connection a Docker postgres after moving to Apple silicon

I have a local project early in development which uses Nestjs and TypeORM to connect to a Docker postgres instance (called 'my_database_server'). Things were working on my old computer, an older Macbook Pro.
I've just migrated everything onto a new Macbook Pro with the new M2 chip (Apple silicon). I've downloaded the version of Docker Desktop that's appropriate for Apple silicon. It runs fine, it still shows 'my_database_server', it can launch that fine, and I can even use the Terminal to go into its Postgres db and see the data that existed in my old computer.
But, I can't figure out how to adjust the config of my project to get it to connect to this database. I've read from other articles that because Docker is running on Apple silicon now and is using emulation, that the host should be different.
This is what my .env used to look like:
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_USER=postgres
On my new computer, the above doesn't connect. I have tried these other values for POSTGRES_HOST, many inspired by other SO posts, but these all yield Error: getaddrinfo ENOTFOUND _____ errors:
my_database_server (the container name)
docker (since I didn't use a docker-compose.yaml file - see below - I don't know what the 'service name' is in this case)
192.168.65.0/24 (the "Docker subnet" value in Docker Desktop > Preferences > Resources > Network)
Next, for some other values I tried, the code is trying to connect for a longer time, but it's getting stuck on something later in the process. With these, eventually I get Error: connect ETIMEDOUT ______:
192.168.65.0
172.17.0.2 (from another SO post, I tried the terminal command docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 78f6e532b324 - the last part being the container ID of my_database_server)
In case it helps, I originally set up this docker container using the script I found here, not using a docker-compose.yaml file. Namely, I ran this script once at the beginning:
#!/bin/bash
set -e
SERVER="my_database_server";
PW="mysecretpassword";
DB="my_database";
echo "stop & remove old docker [$SERVER] and start new fresh instance of [$SERVER]"
(docker kill $SERVER || :) && \
(docker rm $SERVER || :) && \
docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
-e PGPASSWORD=$PW \
-p 5432:5432 \
-d postgres
# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 3;
# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
What should be my new db config settings?
I never figured the above problem out, but it was blocking me, so I found a different way around.
Per other SO questions, I decided to go with the more typical route of using a docker-compose.yml file to create the Docker container. In case it helps others in this problem, this is what the main part of my docker-compose.yml looks like:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    container_name: postgres-db
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "54320:5432"
I then always run this with docker-compose up -d, not starting the container through the Docker Desktop app (though after that command, you should see the new container light up in the app).
Then in .env, I have this critical part:
POSTGRES_HOST=localhost
POSTGRES_PORT=54320
I mapped Docker's internal 5432 to the localhost-accessible 54320 (a suggestion I found here). Doing "5432:5432" as other articles suggest was not working for me, for reasons I don't entirely understand.
Other articles will suggest changing the host to whatever the service name is in your docker-compose.yml (for the example above, it would be db) - this also did not work for me. I believe the "54320:5432" part maps the ports correctly so that host can remain localhost.
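A quick way to sanity-check the "54320:5432" mapping from the host, assuming the psql client is installed on the Mac (shown as a string to review before running; the password is whatever POSTGRES_PASSWORD you configured):

```shell
# List databases over the published port; success means the mapping works.
CHECK='psql -h localhost -p 54320 -U postgres -c "\l"'
echo "$CHECK"
```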
Hope this helps others!

docker phpmyadmin - how to add remote server

I can't seem to find the real installation path of my phpmyadmin.
I bash into the phpmyadmin container like this: (I'm on windows)
winpty docker exec -it pma_container_name sh
By default that puts me in /var/www/html,
and I can see all the phpMyAdmin files there.
I also noticed that there's a phpmyadmin directory in /etc/phpmyadmin containing 3 config files: config.inc.php, config.secret.inc.php, and config.user.inc.php.
There's also a phpmyadmin directory in /usr/src/phpmyadmin containing all the phpMyAdmin files.
Now, in /var/www/html, I just ran:
cp config.sample.inc.php config.inc.php
Then I created a sample file like:
touch phpinfo.php
and I can access it in the browser at localhost:8082/phpinfo.php,
and it totally works.
Now, since I knew it was reading my new file, I added some config at the bottom:
$i++;
$cfg['Servers'][$i]['host'] = ''; // remote ip address here
$cfg['Servers'][$i]['user'] = '';
$cfg['Servers'][$i]['password'] = '';
$cfg['Servers'][$i]['auth_type'] = 'cookie';
but still nothing changed in phpMyAdmin.
I can't seem to add or choose a remote server.
Any idea why?
I also noticed that the container is in Alpine.
When you run the container, set the PMA_HOST environment variable to the host name of your MySQL server. You can also use PMA_USER and PMA_PASSWORD. For example:
docker run --name myadmin -d -e PMA_HOST=mydatabase.com -e PMA_USER=admin -e PMA_PASSWORD=password -p 8080:80 phpmyadmin/phpmyadmin
If you want a custom configuration file, use:
-v /some/local/directory/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
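For reference, a minimal sketch of what that mounted config.user.inc.php could contain to add an extra server entry — the host name here is a placeholder, not a value from the question:

```php
<?php
// Appends one more server to phpMyAdmin's server list.
$i++;
$cfg['Servers'][$i]['host'] = 'remote-db.example.com'; // placeholder host
$cfg['Servers'][$i]['port'] = '3306';
$cfg['Servers'][$i]['auth_type'] = 'cookie';           // prompt for user/password
```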
For more information see the Docker image description:
https://hub.docker.com/r/phpmyadmin/phpmyadmin/

Docker for Windows - Mount directory is coming empty

I have an image with MySQL installed. I need to map the /var/lib/mysql directory to my host system. Here is what I see within that directory when I use the following command:
docker run --rm -it --env-file=envProxy --network mynetwork --name my_db_dev -p 3306:3306 my_db /bin/bash
Now when I try to mount a directory from my host (Windows 10) by running another container from the same image, the mysql directory is blank:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:/docker/data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
I also tried this, but neither works:
docker run --rm -it --env-file=envProxy --network mynetwork -v D:\docker\data:/var/lib/mysql --name my_db_dev1 -p 3306:3306 my_db /bin/bash
One thing I see is that the mysql directory in that path is now owned by root, instead of mysql as in the previous case.
I want all the content from the existing container's mysql directory to be copied back to the host mount directory.
Is that possible? And how can that be achieved?
Same problem on Docker Desktop (2.0.0.3 (31259)). I got the solution from this issue.
I ensured the containers were stopped, opened Docker settings, selected "Shared Drives", removed the tick on "C" and added it again. Docker asked for the Windows account credentials and I entered the new ones. After that and starting the containers, the mounted volumes were OK. Problem solved.
You may be able to fix the problem more simply by just resetting the credentials in Docker Settings.
If you need to get files from the container onto the host, it's better to use the docker cp command: https://docs.docker.com/engine/reference/commandline/cp/
It will look like:
docker cp my_db_dev1:/var/lib/mysql d:\docker\data
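Worth noting (as general docker cp behavior, not specific to this setup): the copy also works while the container is stopped, and the same syntax with the arguments swapped copies from host to container. Shown as a string to review before running:

```shell
# Host -> container direction; paths are the ones from the answer above.
CP='docker cp d:\docker\data my_db_dev1:/var/lib/mysql'
echo "$CP"
```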
UPD
Actually I want to persist the database files across other containers,
so I wanted to use volumes
In this case you have to:
Start using docker-compose to orchestrate containers.
In docker-compose.yml you create a volume, which is shared between all containers. Something like:
docker-compose.yml
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - myvol:/data
  db2:
    image: whatever2
    volumes:
      - myvol:/data
volumes:
  myvol:
Description: https://docs.docker.com/compose/compose-file/#volume-configuration-reference
On Windows, write paths with a backslash '\', and it is recommended to use variables to specify the path. On Linux, on the other hand, use a slash '/'. For example:
docker run -it -v %userprofile%\work\myproj\some-data:/var/data
First create a folder structure like below,
C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG
C:\Users\rajit\MYSQL_DATA\DATA_DIR
then adjust as below:
docker pull mysql:8.0
docker run --name mysql-docker -v C:\Users\rajit\MYSQL_DATA\MYSQL_CONFIG:/etc/mysql/conf.d --env="MYSQL_ROOT_PASSWORD=root" --env="MYSQL_PASSWORD=root" --env="MYSQL_DATABASE=test_db" -v C:\Users\rajit\MYSQL_DATA\DATA_DIR:/var/lib/mysql -d -p 3306:3306 mysql:8.0 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
Try turning off your antivirus program or firewall, then click on "Reset credentials" under Settings > Shared Drives.
That worked for me.
Best regards.

How to modify the password of elasticsearch in docker

I want to modify the password of the container created by the elasticsearch image. I executed the following command:
setup-passwords auto
but it didn't work:
unexpected response code [403] from GET http://172.17.0.2:9200/_xpack/security/_authenticate?pretty
Please help me. Thank you.
When using docker it is usually best to configure services via environment variables. To set a password for the elasticsearch service you can run the container using the env variable ELASTIC_PASSWORD:
docker run -e ELASTIC_PASSWORD=`openssl rand -base64 12` -p 9200:9200 --rm --name elastic docker.elastic.co/elasticsearch/elasticsearch-platinum:6.2.4
openssl rand -base64 12 generates a random value for the password.
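Since the generated password is random, capture it first if you go this route; a sketch of a follow-up check (it assumes port 9200 is published on localhost, and ES_PASS stands in for the generated value):

```shell
# Authenticate as the built-in "elastic" user to confirm the password works.
CHECK='curl -u elastic:$ES_PASS http://localhost:9200'
echo "$CHECK"
```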
