I have a docker compose file set up with 3 separate containers (Flask, Nginx and Solr).
After starting up, all 3 run successfully, but my Flask application can't connect to my Solr instance, and when I run:
wget -S http://localhost:8983/solr/CORE_NAME/select
I get the error "Connecting to localhost (localhost)|127.0.0.1|:8983... failed: Connection refused."
I am fairly new to Docker and have been around a few different forums looking at this issue, but nothing has worked so far. I have also tried creating a network, but I run into the same issue.
Here is my docker-compose.yml.
version: "2.7"
services:
nginx:
build:
context: .
dockerfile: Dockerfile-nginx
container_name: nginx
ports:
- "80:80"
- "8181:8181"
volumes:
- ./:/opt/ee1
- ee1-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
depends_on:
- flask
flask:
build:
context: .
dockerfile: Dockerfile-flask
entrypoint: ["/bin/bash", "./system/start-uwsgi-docker.bash"]
container_name: flask
user: root
restart: always
volumes:
- ./:/opt/ee1
- ./ee1config.ini:/opt/ee1config.ini
- ee1jobs-logs-volume:/var/log/ee1
- ./:/usr/local/websites/ee1
- sockets-volume:/tmp
links:
- solr
solr:
build:
context: .
dockerfile: Dockerfile-solr
container_name: solr
volumes:
- data:/var/solr
entrypoint:
- bash
- "-c"
- "precreate-core ee1_1; precreate-core ee1_2; exec solr -f"
ports:
- "8983:8983"
volumes:
sockets-volume: {}
ee1-logs-volume: {}
data:
Every Docker container is, network-wise, a separate host with its own IP.
Traffic to localhost or 127.0.0.1 will never leave that container.
So what you need to find out is the IP of the server container (solr) you actually want to talk to, then configure the client container (flask) accordingly. This can be done with e.g. docker inspect. Be aware that the IPs can change upon container restart, so you will want to use something like DNS rather than raw IPs.
Since you use docker compose, each container for a service joins the same network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
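Here your Solr service is named solr and the compose file precreates the core ee1_1, so from inside the flask container the request from the question becomes:

wget -S http://solr:8983/solr/ee1_1/select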
For more details check out
https://docs.docker.com/compose/networking/
https://docs.docker.com/network/
Related
I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st application by the id of its Docker container:
Connect to the 1st Node.js application service on tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term, as the container id changes every time I bring docker-compose up and down. Is there a standard way to do networking when you start multiple instances of the same container from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
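For example, from the rocket2 container (assuming curl is available in the image):

curl http://rocket1:30300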
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
I am trying to access a Docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, trying the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I use the internal IP of that container, like curl http://172.27.0.2:8123/, I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line, - "8124:8123", you're mapping port 8123 of the clickhouse container to port 8124 on localhost. That is what allows you to access clickhouse from the host at port 8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes as I have above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in @Billy Ferguson's answer, you can reach it via localhost from the host machine only because you defined a port mapping routing localhost:8124 to clickhouse:8123.
But from another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, though with this the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
    # ports: and links: are omitted here; both conflict with host networking
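With host networking, the django container shares the host's interfaces, so it reaches ClickHouse through the published host port rather than the container port:

curl http://localhost:8124/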
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
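For reference, that setting typically lives in /etc/clickhouse-server/config.xml (the standard ClickHouse layout; adjust if your image differs):

<listen_host>0.0.0.0</listen_host>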
I have created a docker-compose file with two services, Go and MySQL. It creates containers for Go and MySQL. Now I am running code which tries to connect to the MySQL database running in the other container, but I get an error.
docker-compose.yml
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
Error while connecting to mysql database
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with MySQL database
import (
    "log"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql" // registers gorm's mysql dialect (pulls in go-sql-driver/mysql)
)

func DB() *gorm.DB {
    db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
    if err != nil {
        log.Panic(err)
    }
    log.Println("Connection Established")
    return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
# copy to an absolute path so the chmod below finds the script
COPY ./wait-for-it.sh /wait-for-it.sh
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on is not a verification that MySQL is actually ready to receive connections. It will start the second container once the database container is running, regardless of whether the database is ready for connections, which can lead to exactly this issue: your application expects the database to be ready when it might not be.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use this command in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command that you would normally use to start your application:
version: "2"
services:
app:
container_name: golang
...
command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
...
I have removed some parts from the docker-compose file for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
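One such strategy, sketched here under the assumption that you can move to the 2.1 file format (which added condition support to depends_on), is to gate startup on a healthcheck instead of a wrapper script:

version: "2.1"
services:
  app:
    container_name: golang
    build: .
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql
    healthcheck:
      # passes once mysqld is accepting connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 5s
      retries: 10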
Another issue will arise after you solve the connection issue:
Setting MYSQL_USER to root will cause MySQL to fail with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because the root user already exists in the database and the image tries to create it again. If you need the root user itself, use only the MYSQL_ROOT_PASSWORD variable; otherwise change the value of MYSQL_USER so you can securely use that user in your application instead of root.
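For example (appuser and apppass are placeholder values, not anything from your setup):

environment:
  - MYSQL_ROOT_PASSWORD=root
  - MYSQL_DATABASE=testDB
  - MYSQL_USER=appuser
  - MYSQL_PASSWORD=apppass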
Update: In case you are getting not found even though the path is correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
First, if you are using the latest version of Docker Compose, you don't need the links argument in your app service. Quoting the Docker Compose documentation: "The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it." https://docs.docker.com/compose/compose-file/#links
I think the solution is to use the networks argument. This creates a Docker network and adds each service to it.
Try this
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
networks:
- my_network
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
networks:
- my_network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
networks:
my_network:
driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the MySQL port at all. If the containers run in the same network, they can reach all ports inside that network.
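In that case the mysql service could drop its ports: section entirely; the app still reaches it at mysql:3306 over my_network:

mysql:
  image: mysql
  container_name: mysql
  networks:
    - my_network
  # no ports: section; nothing is published to the host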
If my example doesn't work, try this:
Run docker-compose up, then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection, and then run ping mysql.
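For example, assuming the app image is Debian-based (adjust the package manager otherwise):

apt-get update && apt-get install -y iputils-ping
ping mysql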
I have a docker compose file that defines a service that runs my application and a service that the application depends on:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development this compose file publishes port 8080 so that I can access the running app from a browser.
In Jenkins, however, I can't publish that port, as two jobs running simultaneously would conflict trying to bind the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In Jenkins I run tests using docker-compose run frontend yarn test, which doesn't map ports and so isn't a problem.
The issue presents when I try to run end-to-end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It makes no sense to prevent something you are explicitly asking it to do: docker-compose starts things exactly as the docker-compose.yml file indicates.
Instead, I propose duplicating the frontend service using extends:
version: "2"
services:
frontend-base:
build:
context: .
volumes:
- "../.:/opt/app"
image: node
command: ['yarn', 'start']
frontend:
extends: frontend-base
links:
- redis
ports:
- "8080:8080"
frontend-test:
extends: frontend-base
links:
- redis
command: ['yarn', 'test']
redis:
image: redis
expose:
- "6379"
So use it like this:
docker-compose run frontend # in dev environment
docker-compose run frontend-test # in jenkins
Note that extends: is not available in version: "3" of the compose file format, though it has since returned in the Compose Specification.
To avoid binding a fixed host port, just write a single port (the container port alone) in the ports section; Docker then publishes it on a random free host port instead. (If you don't want the port published at all, use expose: instead of ports:.)
Instead of using this:
ports:
  - "8080:8080"
just use this (container port only):
ports:
  - "8080"
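With only the container port listed, simultaneous Jenkins jobs no longer collide, and you can look up the randomly assigned host port when you need it:

docker-compose port frontend 8080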
I'm using Docker Compose for a web application that I'm creating with ASP.NET Core, Postgres and Redis. I have everything set up in Compose to connect to Postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with Redis, I get an exception. After doing research it turns out this exception is a known issue, and the workaround is using the IP address of the machine instead of a hostname. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
Ok, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the id of your container and run docker inspect {container_id}; the output includes the IP address that the other running containers can use to reach it.
The reason I was confused was that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
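To pull just the address out of the inspect output, the standard format flag helps:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}

Note that within the compose network the other containers can also reach this service at the hostname redis, which sidesteps the changing address entirely.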