I created a docker-compose file to create a pgAdmin4 container and a postgres container.
version: '3.8'
services:
  postgres:
    container_name: pg_container
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: webquiver
      POSTGRES_PASSWORD: webquiver
      POSTGRES_DB: quiver_db
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
When running docker compose up I can go to localhost:5050 to reach pgAdmin4 and log in with the credentials you can see in the code.
But when I open the dropdown menu for servers, it is empty; nothing is created. And I cannot create any server there either; it does not let me. I get the error:
Unable to connect to server:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and
accepting
TCP/IP connections on port 5432?
could not connect to server: Address not available
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
please help. THX ^^
greetings
I've reproduced your docker-compose file and everything is fine with it.
I'm guessing there is a misunderstanding of how postgres and pgadmin interact with each other in the first place.
pgAdmin is only a frontend that you can use to connect to a postgres database; it does not automatically search for any existing postgres server.
I'm able to add the server inside pgAdmin with the host postgres and port 5432, because inside the Docker network the service name postgres is resolved as a hostname for the other service, pgadmin.
This configuration will be lost when the composition is restarted, as no volumes to persist the configuration are specified. Therefore, you have to follow the steps from Importing-servers and repeat them every time you start the composition, or automate the import (see the sketch below), e.g.:
build your own pgadmin image that specifies the JSON to import at startup
use the docker compose build: property to specify a source
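For illustration, here is a minimal sketch of that approach, assuming the servers.json import format and the /pgadmin4/servers.json mount point from the pgAdmin container documentation (the file name and the volume mount are my own choices, not part of your setup):

pgadmin:
  container_name: pgadmin4_container
  image: dpage/pgadmin4
  volumes:
    - ./servers.json:/pgadmin4/servers.json   # definitions imported at first launch

servers.json:

{
  "Servers": {
    "1": {
      "Name": "pg_container",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "Username": "webquiver",
      "MaintenanceDB": "quiver_db",
      "SSLMode": "prefer"
    }
  }
}

As far as I know, passwords are not part of this import, so you still enter the password once per session, but the server entry itself survives restarts.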
Ok, after a night of sleep I found the issue. For the server host I had written the database name and not the real container name, so pgAdmin could not find the container.
Have a nice day. :)
Related
I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a db inside a container and so far no luck.
Essentially, what I did is create the following docker-compose:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application; the db service contains the mysql instance, which runs on port 3307 because 3306 is already in use.
Then I have also installed phpmyadmin, and last but not least the Gatsby site. When I run docker-compose up --build and try to access phpmyadmin using:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix the situation was pass port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I mapped both container and host to 3307...
There are 2 things happening here.
MySQL is running on port 3306.
This is because you never told the mysql container to run on port 3307. The default configuration is to run on 3306.
phpMyAdmin can connect to MySQL on port 3306.
Of course it can. When you define multiple services within the same docker-compose file, they start on the same network. This means they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI), and for internal components just expose the port like this:
expose:
  - 3306
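For example, a minimal sketch (not your exact file) where db has no host port binding at all and is still reachable from the other services on its internal port:

db:
  image: mysql:5.7
  expose:
    - 3306            # reachable as db:3306 only from services on the same compose network
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    PMA_HOST: db
    PMA_PORT: 3306    # MySQL's default, internal port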
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal Docker networks which nothing from the outside can gain access to. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you write 3307:3307 you are telling the Docker daemon to listen on port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, MySQL is still internally (that is, inside the container) listening for traffic on port 3306. Any containers or services on the same Docker networks as your db service (the running mysql container(s)) would be able to access MySQL via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
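If you did want that, a rough sketch of the change, assuming mysqld's standard --port option (your character-set flags omitted for brevity):

db:
  image: mysql:5.7
  command: mysqld --port=3307
  ports:
    - "3307:3307"     # host 3307 now forwards to a container that really listens on 3307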
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port that MySQL is running on to a different port on your host: 3307:3306 for instance (always host:container).
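A sketch of that mapping in compose form:

db:
  image: mysql:5.7
  ports:
    - "3307:3306"     # host:container, i.e. host port 3307 forwards to MySQL's default 3306 inside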
I have a simple web app with a docker-compose.yml configuration like this:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "5432:5432"
I can access the database service from within the web app container with postgres://db:5432 because both containers share networking.
I'd like to access the database service from my host machine using postgres://db:5432. How do I map db:5432 from the container to db:5432 on my local machine? I've tried adding 127.0.0.1:5432 db:5432 to my /etc/hosts file, which does not seem to work.
The /etc/hosts file is simply a way to statically resolve names when no DNS server is present or resolved. It can't map port addresses.
Ref: https://serverfault.com/questions/54357/can-i-specify-a-port-in-an-entry-in-my-etc-hosts-on-os-x
It will work if you use:
127.0.0.1 db
Remove the port and it will work.
Also, you can access Postgres directly at localhost:5432 from the host machine.
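Putting both together, a sketch from the host side, assuming psql is installed there, the compose file above is up with its 5432:5432 mapping, and the default postgres superuser is usable:

# make the name "db" resolve to the local machine
echo "127.0.0.1 db" | sudo tee -a /etc/hosts

# the published 5432:5432 port makes the container reachable from the host
psql "postgres://postgres@db:5432/postgres"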
I am trying to connect MinIO with KeyCloak and I follow the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO Client, and a third one for the KeyCloak server.
As you can see in the following snippet the configuration of the Minio Client container is done correctly, since I can list the buckets available in the Minio Server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
I have an issue arising when I try to configure MinIO as depicted in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
It turns out that all I had to do was change the localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issues without having to manually hardcode the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
Connection refused occurs when the port is not accessible on the hostname or IP we specified.
Please try exposing the port using the --expose flag (with the port number you wish to expose) when using the docker CLI. Once it is exposed, you can access it on localhost.
I have checked many forum entries (e.g. on Stack Overflow) but I still cannot figure out what the problem is with my docker-compose file.
When I start my application (content-app) I get the following exception:
Failed to obtain JDBC Connection; nested exception is java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=content-database)(port=3306)(type=master) : Connection refused (Connection refused)
My application is a Spring boot app that tries to connect to the database, the JDBC URL is
url: jdbc:mariadb://content-database:3306/contentdb?autoReconnect=true
The Spring Boot app works fine locally (when no Docker is used) and can connect to the local mariadb.
So the content-app container doesn't see the content-database container. I read that if I specify a network and assign the containers to it, they should be able to connect to each other.
When I connect to the running content-app container, I can telnet to content-database:
root#894628d7bdd9:/# telnet content-database 3306
Trying 172.28.0.3...
Connected to content-database.
Escape character is '^]'.
n
5.5.5-10.4.3-MariaDB-1:10.4.3+maria~bionip/4X#wW/�#_9<b[~)N.:ymysql_native_passwordConnection closed by foreign host.
My docker-compose yaml file:
version: '3.3'
networks:
  net_content:
services:
  content-database:
    image: content-database:latest
    build:
      context: .
      dockerfile: ./database/Dockerfile
    networks:
      - net_content
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
  content-redis:
    image: content-redis:latest
    build:
      context: .
      dockerfile: ./redis/Dockerfile
    networks:
      - net_content
  content-app:
    image: content-app:latest
    build:
      context: .
      dockerfile: ./content/Dockerfile
    networks:
      - net_content
    depends_on:
      - "content-database"
Any hint please?
Thanks!
I guess your MariaDB is listening on port 3307 rather than the default 3306, which means your application has to connect to that port as well. I guess this is the case because you are mapping port 3307 of your container to "the outside".
Change the port in your connection string:
url: jdbc:mariadb://content-database:3307/contentdb?autoReconnect=true
You have to expose the port on which content-database is listening in the Dockerfile at ./database/Dockerfile
I am trying to start Concourse CI with a custom docker-compose file:
version: '2'
services:
  concourse-web:
    image: concourse/concourse
    container_name: concourse-web
    command: web
    network_mode: host
    volumes: ["./keys/web:/concourse-keys"]
    environment:
      CONCOURSE_BASIC_AUTH_USERNAME: concourse
      CONCOURSE_BASIC_AUTH_PASSWORD: changeme
      CONCOURSE_EXTERNAL_URL: http://my.internal.ip:8092
      CONCOURSE_BIND_PORT: 8092
      CONCOURSE_POSTGRES_DATA_SOURCE: |-
        postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
  concourse-worker:
    image: concourse/concourse
    container_name: concourse-worker
    network_mode: host
    privileged: true
    command: worker
    volumes: ["./keys/worker:/concourse-keys"]
    environment:
      CONCOURSE_BIND_PORT: 8092
And the worker can't connect to the web part.
Can you please help me with this?
P.S. The postgresql database is running on port 5432 on the host machine, and the connection to it works fine.
Worker errors:
{"timestamp":"1487953300.400844336","source":"tsa","message":"tsa.connection.channel.forward-worker.register.failed-to-fetch-containers","log_level":2,"data":{"error":"invalid character '\u003c' looking for beginning of value","remote":"127.0.0.1:57960","session":"4.1.1.582"}}
You need to set CONCOURSE_TSA_HOST: concourse-web on the worker as an environment variable so that it knows which host to connect to. Right now it is trying to connect to the web part on localhost, which is incorrect.
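A minimal sketch of the worker's environment block with that variable added (service and port names taken from your compose file):

concourse-worker:
  environment:
    CONCOURSE_TSA_HOST: concourse-web
    CONCOURSE_BIND_PORT: 8092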
Another issue with your configuration is that you're trying to connect to Postgres through localhost:
CONCOURSE_POSTGRES_DATA_SOURCE: |-
  postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
but your Postgres instance is running on the host machine. The host machine is not available on localhost inside a docker container, as a docker container has its own private network. It should instead be:
CONCOURSE_POSTGRES_DATA_SOURCE: |-
  postgres://odoo:odoo@my.internal.ip:5432/concourse?sslmode=disable
The block scalar
|-
  postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable
should have that |- prefix removed entirely. Replace it with:
CONCOURSE_POSTGRES_DATA_SOURCE: postgres://odoo:odoo@localhost:5432/concourse?sslmode=disable