I am trying to connect to a cassandra container from a separate container (named main).
This is my docker-compose.yml
version: '3.2'
services:
  main:
    build:
      context: .
    image: main-container:latest
    depends_on:
      - cassandra
    links:
      - cassandra
    stdin_open: true
    tty: true
  cassandra:
    build:
      context: .
      dockerfile: Dockerfile-cassandra
    ports:
      - "9042:9042"
      - "9160:9160"
    image: "customer-core-cassandra:latest"
Once I run this using docker-compose up, I run this command:
docker-compose exec main cqlsh cassandra 9042
but I get this error:
Connection error: ('Unable to connect to any servers', {'172.18.0.2': error(111, "Tried connecting to [('172.18.0.2', 9042)]. Last error: Connection refused")})
I figured out the answer. By default, cassandra.yaml sets rpc_address to localhost. In that case Cassandra only listens for client requests on localhost and will not accept connections from anywhere else. To change this, I had to set rpc_address to my "cassandra" container's hostname so that my main container (and any other containers) can reach Cassandra at the cassandra container's IP address:
rpc_address: cassandra
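If it helps, here is a minimal sketch of how that override could be baked into Dockerfile-cassandra, assuming the official cassandra base image (where cassandra.yaml lives at /etc/cassandra/cassandra.yaml); the image tag is only illustrative:

FROM cassandra:3.11
# Replace the default "rpc_address: localhost" so Cassandra accepts
# client connections arriving on the container's network interface,
# which the other compose services reach via the "cassandra" hostname.
RUN sed -i 's/^rpc_address:.*/rpc_address: cassandra/' /etc/cassandra/cassandra.yaml

After rebuilding the image, docker-compose exec main cqlsh cassandra 9042 should connect.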
Related
How do you launch Postgres from Docker, using docker-compose?
My docker-compose.yml looks like:
version: "3.6"
services:
  db:
    container_name: db
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=test
    ports:
      - "5432:5432"
    command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
    tmpfs:
      - /var/lib/postgresql
  web:
    container_name: web
    build:
      context: ..
      dockerfile: test_tools/Dockerfile
    shm_size: '2gb'
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - db
This is a simple test environment to mimic a web server and a database server.
Yet when I build this, it fails with:
Creating db ... error
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint db (bdaebf844ee8ddd593b6bc75733d8aa6196112b62f7909be060017a9a33b3c34): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use
Why is my Postgres container trying to allocate a port on the host?
I do have Postgres running on port 5432 of the host, but why would this be interfering? These are just test containers that only need to talk to each other, and should not be accessible to the host, much less allocate host ports.
I've confirmed with docker ps -a that there are no other containers that might also be consuming port 5432.
ports:
  - "5432"
will start your Postgres, but publish it on a random (free) host port.
Alternatively, map Postgres to a different port on the host, for example
ports:
  - "15432:5432"
will make your db available on port 15432 on your host (the syntax is "host:container"). And since your test containers only need to talk to each other, you can drop the ports: section entirely; the web service can still reach the database at db:5432 over the Compose network, and nothing is bound on the host.
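For that last option, a minimal sketch of the db service from the question with no host port published at all:

  db:
    container_name: db
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=test
    # no "ports:" section: nothing is bound on the host, but containers
    # on the same Compose network can still connect to db:5432
    tmpfs:
      - /var/lib/postgresql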
I have a docker-compose file set up with 3 separate containers (Flask, Nginx and Solr).
After starting up, all 3 run successfully, but my Flask application can't connect to my Solr instance. When I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to Docker and have been through a few different forums looking at this issue, but nothing has worked so far. I have also tried creating a network, but I run into the same issue.
Here is my docker-compose.yml.
version: "2.7"
services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    container_name: nginx
    ports:
      - "80:80"
      - "8181:8181"
    volumes:
      - ./:/opt/ee1
      - ee1-logs-volume:/var/log/ee1
      - ./:/usr/local/websites/ee1
      - sockets-volume:/tmp
    depends_on:
      - flask
  flask:
    build:
      context: .
      dockerfile: Dockerfile-flask
    entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
    container_name: flask
    user: root
    restart: always
    volumes:
      - ./:/opt/ee1
      - ./ee1config.ini:/opt/ee1config.ini
      - ee1jobs-logs-volume:/var/log/ee1
      - ./:/usr/local/websites/ee1
      - sockets-volume:/tmp
    links:
      - solr
  solr:
    build:
      context: .
      dockerfile: Dockerfile-solr
    container_name: solr
    volumes:
      - data:/var/solr
    entrypoint:
      - bash
      - "-c"
      - "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
    ports:
      - "8983:8983"
volumes:
  sockets-volume: {}
  ee1-logs-volume: {}
  data:
Every Docker container is, network-wise, a separate host with its own IP.
Traffic to localhost or 127.0.0.1 never leaves that container.
So you need to find out the IP of the server container (solr) you actually want to talk to, then configure the client container (flask) accordingly. You can do this with e.g. docker inspect. Be aware that the IPs can change when a container restarts, so you will want to use something like DNS rather than raw IPs.
Since you use docker compose, each container for a service joins the same network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
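Concretely, from inside the flask container the request from the question would target the solr service name instead of localhost:

wget -S http://solr:8983/solr/CORE_NAME/select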
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/
I use Docker Compose to run Vapor, PostgreSQL and Nginx for a project. My docker-compose.yml looks like this:
version: "3.6"
services:
  vapor:
    build:
      context: ./vapor
    image: ${CURRENT_VAPOR_IMG}
    ports:
      - 8080:8080
    volumes:
      - ${HOST_ROOT}:${CONTAINER_ROOT}
    working_dir: ${CONTAINER_ROOT}
    tty: true
    entrypoint: bash
    networks:
      - x-net
  nginx:
    build:
      context: ./nginx
    image: ${CURRENT_NGINX_IMG}
    ports:
      - ${HOST_HTTP_PORT}:80
    volumes:
      - ${HOST_ROOT}:${CONTAINER_ROOT}
    networks:
      - x-net
  psql:
    image: ${CURRENT_DB_IMG}
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=xxx
      - POSTGRES_USER=xxx
      - POSTGRES_PASSWORD=pass
    volumes:
      - ~/x/x-db:/var/lib/postgresql/data
    networks:
      - x-net
networks:
  x-net:
    driver: bridge
After I start all the containers by running docker-compose up and enter the vapor container to build && run the project, it prints this error to the console:
NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "localhost", port: 5432, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:5432, error: connection reset (error set): Connection refused (errno: 61)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:5432, error: connection reset (error set): Connection refused (errno: 61))]))
When I instead run the Vapor project on my local machine and keep the psql container running, it works normally, e.g. it finishes the first migration with the models.
Are there any mistakes in my Docker configuration, or anything else?
To connect to a database inside another container, don't use localhost as the DB host; use the database container's service name instead. So in your case the host is psql. Also, your docker-compose as posted is not well formatted: psql and nginx need one more level of indentation, but maybe that is just the Stack Overflow formatting.
You cannot use localhost between containers in Docker Compose; the host for your DB is psql in this case.
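One way to wire that up is to pass the host to the app through the Compose file rather than hard-coding it. Here is a sketch of the vapor service, where DATABASE_HOST and DATABASE_PORT are hypothetical variable names that the Vapor code would have to read instead of defaulting to localhost:

  vapor:
    build:
      context: ./vapor
    image: ${CURRENT_VAPOR_IMG}
    environment:
      - DATABASE_HOST=psql   # the psql service name resolves on the x-net network
      - DATABASE_PORT=5432
    networks:
      - x-net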
I checked many forum entries (e.g. on Stack Overflow) but I still cannot figure out what the problem is with my docker-compose file.
When I start my application (content-app) I get the following exception:
Failed to obtain JDBC Connection; nested exception is java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=content-database)(port=3306)(type=master) : Connection refused (Connection refused)
My application is a Spring boot app that tries to connect to the database, the JDBC URL is
url: jdbc:mariadb://content-database:3306/contentdb?autoReconnect=true
The Spring Boot app works fine locally (when no Docker is used) and can connect to the local MariaDB.
So the content-app container doesn't see the content-database container. I read that if I specify a network and assign the containers to it, they should be able to connect to each other.
When I connect to the running content-app container, I can telnet to content-database:
root#894628d7bdd9:/# telnet content-database 3306
Trying 172.28.0.3...
Connected to content-database.
Escape character is '^]'.
n
5.5.5-10.4.3-MariaDB-1:10.4.3+maria~bionip/4X#wW/�#_9<b[~)N.:ymysql_native_passwordConnection closed by foreign host.
My docker-compose yaml file:
version: '3.3'
networks:
net_content:
services:
content-database:
image: content-database:latest
build:
context: .
dockerfile: ./database/Dockerfile
networks:
- net_content
restart: always
environment:
MYSQL_ROOT_PASSWORD: root
content-redis:
image: content-redis:latest
build:
context: .
dockerfile: ./redis/Dockerfile
networks:
- net_content
content-app:
image: content-app:latest
build:
context: .
dockerfile: ./content/Dockerfile
networks:
- net_content
depends_on:
- "content-database"
Any hint please?
Thanks!
I guess MariaDB is listening on port 3307 rather than the default 3306, which means your application has to connect to that port as well. I guess this is the case because you are mapping port 3307 of your container to "the outside".
Change the port in your connection string:
url: jdbc:mariadb://content-database:3307/contentdb?autoReconnect=true
You have to expose the port on which content-database is listening in the Dockerfile at ./database/Dockerfile
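As a sketch, the relevant part of ./database/Dockerfile might look like this (assuming a stock MariaDB base image; the tag is only illustrative):

FROM mariadb:10.4
# Document the port the database listens on; containers on the
# net_content network connect to content-database:3306
EXPOSE 3306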
I am running both of my programs inside Docker on localhost.
When I send a request from one container to another, I get a connection refused error.
One is running on port 8000 and the other on port 8001.
I run my image using the command docker run -p 8000:8000 service1, and the equivalent for the other one.
I am trying to connect to the service running on port 8000 from the one on 8001.
I am getting error like:
connect ECONNREFUSED 0.0.0.0:8000
You need to use Docker Compose with the network mode set to host:
network_mode: "host"
Check this sample docker-compose file:
version: '2.1'
services:
  # Governing microservices
  api-gateway:
    build: zuul-apigateway/
    depends_on:
      eureka-server:
        condition: service_healthy
    restart: always
    network_mode: "host"
    image: demo-zuul-service
    hostname: localhost
    ports:
      - 9085:9085
    healthcheck:
      test: "exit 0"
  eureka-server:
    build: eureka-server/
    restart: always
    network_mode: "host"
    image: demo-eureka-service
    hostname: localhost
    ports:
      - 9083:9083
    healthcheck:
      test: "exit 0"
Now both of these containers can communicate with each other, as they are on the host network.
Reference:
https://github.com/thoopalliamar/Juggler/blob/master/docker-compose.yml
This is a 13-microservice application whose services can communicate with each other.
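Applied to the two services from the question, a minimal sketch with host networking might look like the following (the image names service1 and service2 are assumptions; the question only names service1):

version: '2.1'
services:
  service1:
    image: service1
    network_mode: "host"   # shares the host's network stack; -p mappings are not needed
  service2:
    image: service2
    network_mode: "host"

With host networking (on Linux) both services bind directly on the host, so the service on 8001 can reach the other at localhost:8000.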