I am using a Symfony app and connecting to a local DynamoDB instance in a Docker container.
I keep getting the error AWS HTTP error: cURL error 7: Failed to connect to db port 8889: Connection refused.
My docker-compose file is simply:
version: '3'
services:
  web:
    depends_on:
      - db
    build: .
    ports:
      - "8000:8000"
  db:
    image: "amazon/dynamodb-local"
    ports:
      - "8889:8889"
Honestly, I always get confused by the port mapping, but I don't think that should matter here. I'm trying to connect to http://db:8889. To make things simpler, I executed the following inside my web container:
# curl http://db:8889
curl: (7) Failed to connect to db port 8889: Connection refused
I'm kinda stumped, and I think this is such a simple thing most of the docs skim right over it. (or maybe I do)
The image documentation suggests the DynamoDB server runs on port 8000, so you should access it as http://db:8000. You don't have to publish it on port 8000 or on any port at all, but you need to use the container-side port number to reach it from other containers.
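For example (a quick sketch, assuming the default DynamoDB Local setup and the service names above), from inside the web container you would hit the container-side port directly:
# curl http://db:8000
If you also want to reach it from the host on 8889, the mapping would be "8889:8000" (host port on the left, container port on the right); that mapping is only an illustration and is not needed for container-to-container traffic.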
Related
I have two Docker containers that share the same network. When I ssh into one of the containers and make an HTTP call to the other, I get a 200 response: curl -i http://app-web.
I need to be able to call the app-web container via HTTPS: curl https://app-web, but that returns: Failed to connect to app-web port 443: Connection refused.
This is the docker-compose.yml file for the app-web. What am I missing?
version: "3.8"
networks:
local-proxy:
external: true
internal:
external: false
services:
web:
build:
context: ./docker/bin/php
container_name: app-web"
expose:
- "80"
- "443"
networks:
- internal
- local-proxy
As stated by @David Maze:
Your application isn't listening on port 443. Compose expose: does pretty much nothing at all, and you can delete that section of the file without changing anything about how the containers work.
You need to make sure that the app-web container is set up and actually listening on port 443.
For example, for Apache this may mean (see the sketch after this list):
Enabling the necessary modules, e.g. a2enmod headers ssl.
Setting up that domain to be able to handle/receive SSL connections.
Restarting your server to implement the changes.
More on that here: How To Create a Self-Signed SSL Certificate for Apache in Ubuntu 18.04
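A rough sketch of those steps inside a Debian/Ubuntu-based Apache container (the default-ssl site name and the apache2 service are assumptions about your image; adjust them to your setup):
# enable the SSL and headers modules
a2enmod ssl headers
# enable a virtual host that listens on port 443
a2ensite default-ssl
# reload Apache so it actually starts listening on 443
service apache2 reload
After that, curl -k https://app-web from the other container should at least get past the connection-refused stage (the -k is needed while the certificate is self-signed).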
This is for my local Docker development. I have two Docker hosts and I'm using Traefik's reverse proxy to pull them up in the browser. One of the hosts is an API which I need to communicate with via HTTPS calls. The container I'm trying to connect to has the following params:
version: "3.8"
networks:
local-proxy:
external: true
internal:
external: false
services:
web:
build:
context: ./docker/bin/php
container_name: "app-web"
expose:
- "80"
- "443"
networks:
- internal
- local-proxy
I'm able to connect to it via curl when the call is made over plain HTTP:
curl http://app-web (200 response)
I need to be able to connect via HTTPS, in order to keep everything the way it runs in production, but it keeps throwing Failed to connect to app-web port 443: Connection refused
Is it possible at all to connect via 443 port from one container to another?
Note: These containers are never deployed to production. They are just for local dev.
I am trying to connect MinIO with KeyCloak and I am following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO Client, and a third one for the KeyCloak server.
As you can see in the following snippet, the MinIO Client container is configured correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
It turns out all I had to do was change localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will keep looking for something more robust than hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to hardcode the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
Connection refused occurs when a port is not accessible on the hostname or IP we specified.
Try exposing the port using the --expose flag, along with the port number you wish to expose, when using the Docker CLI. Once it is exposed, you can access it on localhost.
I am attempting to connect to a REST endpoint of a JAX-RS Liferay portlet.
If I try to connect through Postman using http://localhost:8078/engine-rest/process-definition
it works, 200 OK.
I am attempting to connect to the same endpoint from within another Docker container that is part of the same Docker network. I have tried with localhost and I receive the error:
java.net.ConnectException: Connection refused (Connection refused)
I have also tried http://wasp-engine:8078 (wasp-engine is the Docker name of the container) and I still receive the same error.
Here are the two containers in my compose file:
wasp-engine:
  image: in/digicor-engine:test
  container_name: wasp-engine
  ports:
    - "8078:8080"
  depends_on:
    mysql:
      condition: service_healthy

wasp:
  image: in/wasp:local2
  container_name: Wasp
  volumes:
    - liferay-document-library:/opt/liferay/data
  environment:
    - camundaEndPoint=http://wasp-engine:8078
  ports:
    - "8079:8080"
  depends_on:
    mysql:
      condition: service_healthy
Both of them connect fine to MySQL, which is part of the same Docker network and is referenced via:
jdbc.default.url=jdbc:mysql://mysql/liferay_test
tl;dr
Use http://wasp-engine:8080
The why
In your docker-compose the
ports: - "8078:8080"
field on wasp-engine exposes port 8080 of the Docker container to your host computer on port 8078. This is what allows Postman to connect to the container over localhost. However, once inside the Docker container, localhost refers to the container itself, and this port forwarding no longer applies.
Using docker-compose you can use the name of a container to target that specific container. You mentioned you tried this with the URI http://wasp-engine:8078. When you access the container this way, the container's own port is used, not the port forwarded to the host machine. This means the Docker container should be targeted on port 8080.
Putting it all together, the final URI should be http://wasp-engine:8080.
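Applied to the compose file in the question (a sketch; only the environment entry changes), the wasp service would then point at the container-side port:
environment:
  - camundaEndPoint=http://wasp-engine:8080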
I checked many forum entries (including on Stack Overflow) but I still cannot figure out what the problem is with my docker-compose file.
When I start my application (content-app) I get the following exception:
Failed to obtain JDBC Connection; nested exception is java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=content-database)(port=3306)(type=master) : Connection refused (Connection refused)
My application is a Spring boot app that tries to connect to the database, the JDBC URL is
url: jdbc:mariadb://content-database:3306/contentdb?autoReconnect=true
The Spring Boot app works fine locally (when no Docker is used) and can connect to the local MariaDB.
So the content-app container doesn't see the content-database container. I read that if I specify a network and assign the containers to that network, they should be able to connect to each other.
When I connect to the running content-app container, I can telnet to content-database:
root#894628d7bdd9:/# telnet content-database 3306
Trying 172.28.0.3...
Connected to content-database.
Escape character is '^]'.
n
5.5.5-10.4.3-MariaDB-1:10.4.3+maria~bionip/4X#wW/�#_9<b[~)N.:ymysql_native_passwordConnection closed by foreign host.
My docker-compose yaml file:
version: '3.3'
networks:
  net_content:
services:
  content-database:
    image: content-database:latest
    build:
      context: .
      dockerfile: ./database/Dockerfile
    networks:
      - net_content
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
  content-redis:
    image: content-redis:latest
    build:
      context: .
      dockerfile: ./redis/Dockerfile
    networks:
      - net_content
  content-app:
    image: content-app:latest
    build:
      context: .
      dockerfile: ./content/Dockerfile
    networks:
      - net_content
    depends_on:
      - "content-database"
Any hint please?
Thanks!
I guess MariaDB is listening on port 3307 rather than the default 3306, which means your application has to connect to that port as well. I guess this is the case since you are mapping port 3307 of your container to "the outside".
Change the port in your connection string:
url: jdbc:mariadb://content-database:3307/contentdb?autoReconnect=true
You have to expose the port on which content-database is listening in the Dockerfile at ./database/Dockerfile.
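For completeness, a minimal sketch of that Dockerfile (the mariadb:10.4 base image is an assumption based on the handshake banner in the question; note that EXPOSE only documents the port, it does not publish it):
# ./database/Dockerfile
FROM mariadb:10.4
# document the port MariaDB listens on inside the container
EXPOSE 3306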