Vapor + PostgreSQL + Nginx build on Docker not operating properly - vapor

I use Docker Compose to run Vapor, PostgreSQL and Nginx for a project. My docker-compose.yml looks like this:
version: "3.6"
services:
  vapor:
    build:
      context: ./vapor
    image: ${CURRENT_VAPOR_IMG}
    ports:
      - 8080:8080
    volumes:
      - ${HOST_ROOT}:${CONTAINER_ROOT}
    working_dir: ${CONTAINER_ROOT}
    tty: true
    entrypoint: bash
    networks:
      - x-net
  nginx:
    build:
      context: ./nginx
    image: ${CURRENT_NGINX_IMG}
    ports:
      - ${HOST_HTTP_PORT}:80
    volumes:
      - ${HOST_ROOT}:${CONTAINER_ROOT}
    networks:
      - x-net
  psql:
    image: ${CURRENT_DB_IMG}
    ports:
      - 5432:5432
    environment:
      - POSTGRES_DB=xxx
      - POSTGRES_USER=xxx
      - POSTGRES_PASSWORD=pass
    volumes:
      - ~/x/x-db:/var/lib/postgresql/data
    networks:
      - x-net
networks:
  x-net:
    driver: bridge
After I start all the containers by running docker-compose up and enter the vapor container to build and run the project, it prints this error to the console:
NIO.ChannelError.connectFailed(NIO.NIOConnectionError(host: "localhost", port: 5432, dnsAError: nil, dnsAAAAError: nil, connectionErrors: [NIO.SingleConnectionFailure(target: [IPv6]localhost/::1:5432, error: connection reset (error set): Connection refused (errno: 61)), NIO.SingleConnectionFailure(target: [IPv4]localhost/127.0.0.1:5432, error: connection reset (error set): Connection refused (errno: 61))]))
When I instead run the Vapor project on the local machine, keeping the psql container running, it works normally; for example, it finishes the first migration with the models.
Is there a mistake in my Docker configuration, or anywhere else?

To connect to a database inside a container, don't use localhost as the db host; use your db container's name instead. So in your case the host is psql. Also, your docker-compose.yml as posted was not well formatted: psql and nginx need one more level of indentation, but maybe that's just Stack Overflow formatting.

You cannot use localhost in docker-compose; the host for your db is psql in this case.
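A minimal sketch of that change at the compose level, assuming the Vapor app reads its database host from an environment variable (DATABASE_HOST and DATABASE_PORT are hypothetical names here; use whatever your Vapor configuration actually reads):

  vapor:
    build:
      context: ./vapor
    environment:
      - DATABASE_HOST=psql   # the compose service name, not localhost
      - DATABASE_PORT=5432
    networks:
      - x-net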

Related

How to stop Docker container from allocating host ports?

How do you launch Postgres from Docker, using docker-compose?
My docker-compose.yml looks like:
version: "3.6"
services:
  db:
    container_name: db
    image: postgres:14-alpine
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
      - POSTGRES_DB=test
    ports:
      - "5432:5432"
    command: -c fsync=off -c synchronous_commit=off -c full_page_writes=off --max-connections=200 --shared-buffers=4GB --work-mem=20MB
    tmpfs:
      - /var/lib/postgresql
  web:
    container_name: web
    build:
      context: ..
      dockerfile: test_tools/Dockerfile
    shm_size: '2gb'
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - db
This is a simple test environment to mimic a web server and a database server.
Yet when I build this, it fails with:
Creating db ... error
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint db (bdaebf844ee8ddd593b6bc75733d8aa6196112b62f7909be060017a9a33b3c34): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use
Why is my Postgres container trying to allocate a port on the host?
I do have Postgres running on port 5432 of the host, but why would this be interfering? These are just test containers that only need to talk to each other, and should not be accessible to the host, much less allocate host ports.
I've confirmed with docker ps -a that there are no other containers that might also be consuming port 5432.
ports:
  - 5432

will start your Postgres, but publish it on a random (free) host port. Alternatively, map Postgres to a different host port, for example:

ports:
  - "15432:5432"

which will make your db available on port 15432 on your host.
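Since these test containers only need to talk to each other, the stricter option is to publish no ports at all; web can still reach db at db:5432 over the compose network. A sketch:

  db:
    image: postgres:14-alpine
    # no "ports:" section — nothing is bound on the host
    expose:
      - 5432   # optional; expose documents the port without publishing it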

How can I connect the webserver and database over a network in docker-compose?

I have a problem with networking in Docker. The docker-compose.yml includes the 2 services below:
webserver (frontend + backend)
database
I tried the bridge network and the default one, but neither works at all. The backend cannot connect to the database, showing the error "connection refused". Then I used docker exec -it to get into the webserver and pinged the database, which showed "timeout".
I cannot connect to the database with its IP address (I got the database IP address from docker exec and then hostname -i), but I can connect successfully using "localhost".
This is my docker-compose.yml:
version: '3.8'
services:
  postgres_server:
    container_name: postgres14-4_container
    image: postgres:14.4
    command: postgres -c 'max_connections=200'
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - '5222:5432'
    volumes:
      - db:/var/lib/postgresql14/data
    networks:
      - web_network
  webserver:
    container_name: frontend_backend_container
    image: webserver
    ports:
      - '9090:80'
      - '8081:8081'
    env_file:
      - backend_env
    depends_on:
      - postgres_server
    restart: always
    networks:
      - web_network
volumes:
  db:
    driver: local
networks:
  web_network:
    driver: bridge
To configure remote connections to postgres, you have to adjust pg_hba.conf. For example add:
# Remote access
host all all 0.0.0.0/0 trust
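Note that trust turns authentication off entirely, which is fine for a throwaway experiment but unsafe otherwise. A password-authenticated sketch of the same rule (scram-sha-256 is available on the postgres:14.4 image used above):

# Remote access, password-authenticated
host all all 0.0.0.0/0 scram-sha-256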
Where is your backend_env file?
I guess you have the host + port for connecting to the db in there.
You don't need to define anything special (like the bridge).
The webserver container should be able to access postgres_server via postgres_server:5432 (not localhost and not 5222).
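A sketch of what backend_env would then contain, assuming the backend reads these variable names (the names are hypothetical; the values come from the compose file above):

DATABASE_HOST=postgres_server
DATABASE_PORT=5432
DATABASE_USER=postgres
DATABASE_PASSWORD=postgres
# the default database, since POSTGRES_DB is not set
DATABASE_NAME=postgres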

Unable to connect to MySQL from a Docker container?

I have created a docker-compose file with two services, Go and MySQL; it creates a container for each. When I run code that tries to connect to the MySQL database running as a Docker container, I get an error.
docker-compose.yml
version: "2"
services:
  app:
    container_name: golang
    restart: always
    build: .
    ports:
      - "49160:8800"
    links:
      - "mysql"
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testDB
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    ports:
      - "3307:3306"
volumes:
  dbdata:
Error while connecting to the MySQL database:
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with the MySQL database:
func DB() *gorm.DB {
    db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
    if err != nil {
        log.Panic(err)
    }
    log.Println("Connection Established")
    return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh /wait-for-it.sh
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on is not a verification that MySQL is actually ready to receive connections. It starts the second container once the database container is running, regardless of whether the database is ready for connections, which can lead to exactly this kind of issue: your application expects the database to be ready when it might not be.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible (useful if your image is based on Alpine, for example); you can use wait-for-it instead if you have bash in your image.
All you have to do is add the script to your image through the Dockerfile, then use a command like the following in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command you would normally use to start your application:
version: "2"
services:
  app:
    container_name: golang
    ...
    command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
    links:
      - "mysql"
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    ...

I have removed some parts of the docker-compose file for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
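If you would rather handle the race in the application itself, a retry loop is another common strategy. A minimal sketch using the same gorm v1 API as the question (the attempt count and delay are arbitrary choices):

import (
    "log"
    "time"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql" // registers the mysql dialect
)

func DBWithRetry() *gorm.DB {
    dsn := "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local"
    for attempt := 1; attempt <= 10; attempt++ {
        db, err := gorm.Open("mysql", dsn)
        if err == nil {
            log.Println("Connection Established")
            return db
        }
        // MySQL may still be initializing inside its container; wait and retry
        log.Printf("attempt %d: %v", attempt, err)
        time.Sleep(3 * time.Second)
    }
    log.Panic("could not connect to MySQL after 10 attempts")
    return nil
}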
Another issue will arise once you solve the connection issue: setting MYSQL_USER to root will cause MySQL to fail with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because that user already exists in the database and the image tries to create it again. If you need to use the root user itself, set only MYSQL_ROOT_PASSWORD; otherwise change the value of MYSQL_USER so you can securely use that account in your application instead of the root user.
Update: In case you are getting "not found" even though the path is correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
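An alternative to wrapper scripts is a container healthcheck plus the long form of depends_on. A sketch (this condition syntax works in compose file format 2.1-2.4 and in the current Compose Specification, but not in the classic 3.x format):

version: "2.1"
services:
  app:
    build: .
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-proot"]
      interval: 5s
      timeout: 5s
      retries: 10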
First, if you are using a recent version of docker-compose, you don't need the links argument in your app service. Quoting the Docker Compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it..." (https://docs.docker.com/compose/compose-file/#links)
I think the solution is to use the networks argument. This creates a Docker network and adds each service to it.
Try this:
version: "2"
services:
  app:
    container_name: golang
    restart: always
    build: .
    ports:
      - "49160:8800"
    networks:
      - my_network
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    networks:
      - my_network
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testDB
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    ports:
      - "3307:3306"
volumes:
  dbdata:
networks:
  my_network:
    driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the MySQL port. If the containers run on the same network, they can reach all ports of the other containers on that network.
If my example doesn't work, try this:
Run docker-compose up, then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection, and then run ping mysql.
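ping is often missing from slim images. A sketch of the check, assuming a Debian-based image (on Alpine it would be apk add --no-cache iputils):

apt-get update && apt-get install -y iputils-ping
ping -c 3 mysql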

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused

I am trying to build my Airflow deployment using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
For Airflow I am building airflow webserver, airflow scheduler, airflow worker and airflow flower containers. The airflow.cfg file is used to configure Airflow,
where I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/.
My docker-compose file is as follows:
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno
111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your Airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker Compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to publish the 5672 and 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
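Concretely, the two airflow.cfg lines from the question would become the following sketch (user and password as defined for rabbit1 in the compose file):

broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/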
I solved this issue by installing the RabbitMQ server on my host system with the command sudo apt install rabbitmq-server.

Can't connect to postgres container

I define a postgres server in docker-compose.yml:
db:
  image: postgres:9.5
  expose:
    - 5432
Then, from another Docker container, I tried to connect to this postgres container, but it gives an error with this warning:
Is the server running on host "db" (172.22.0.2) and accepting
data-service_1 | TCP/IP connections on port 5432?
Why can't the container connect to the other one using the provided information (host="db" and port=5432)?
PS
Full docker-compose.yml:
version: "2"
services:
  data-service:
    build: .
    depends_on:
      - db
    ports:
      - "50051:50051"
  db:
    image: postgres:9.5
    depends_on:
      - data-volume
    environment:
      - POSTGRES_USER=cobrain
      - POSTGRES_PASSWORD=a
      - POSTGRES_DB=datasets
    ports:
      - "8000:5432"
    expose:
      - 5432
    volumes_from:
      - data-volume
      # - container:postgres9.5-data
    restart: always
  data-volume:
    image: busybox
    command: echo "I'm data container"
    volumes:
      - /var/lib/postgresql/data
Solution #1. Same file.
To be able to access the db container, you have to define your other containers in the same docker-compose.yml. When the containers are started, each container gets all the other containers mapped in its /etc/hosts.
Just do:
version: '2'
services:
  web:
    image: your/image
  db:
    image: postgres:9.5
If you do not wish to put your other containers into the same docker-compose.yml, there are other solutions:
Solution #2. IP
Run docker inspect <name of your db container> and look for the IPAddress field in the output. Use that IP address as the host to connect to.
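A sketch that pulls out just the address in one step (the template path matches the standard docker inspect output):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <name of your db container>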
Solution #3. Networks
Make your containers join the same network. For that, define a networks entry under each service, and declare the network once at the top level:
services:
  db:
    networks:
      - myNetwork

networks:
  myNetwork:
    driver: bridge
Don't forget to do the same for each service you are starting, replacing db with that service's name.
I usually go with the first solution during development. I use apache+php as one container and pgsql as another, with a separate DB for every project. I never run more than one docker-compose.yml setup at a time, so in that case defining both containers in one .yml config is perfect.
The depends_on is not correct. I would try other parameters, like links and environment:
version: "2"
services:
  data-service:
    build: .
    links:
      - db
    ports:
      - "50051:50051"
    volumes_from: ["db"]
    environment:
      DATABASE_HOST: db
  db:
    image: postgres:9.5
    environment:
      - POSTGRES_USER=cobrain
      - POSTGRES_PASSWORD=a
      - POSTGRES_DB=datasets
    ports:
      - "8000:5432"
    expose:
      - 5432
    #volumes_from:
    #  - data-volume
    #  - container:postgres9.5-data
    restart: always
  data-volume:
    image: busybox
    command: echo "I'm data container"
    volumes:
      - /var/lib/postgresql/data
This one works for me (not with Postgres, but with MySQL).
