docker-compose - Not able to connect to postgres db

I am trying to run the docker-compose command on a Jenkins slave, but it fails while running the command pytest tests/integration.
The command runs integration tests with Postgres as the backend.
My docker-compose.yml is
version: "3.4"
services:
test:
build:
context: ../..
dockerfile: Dockerfile
depends_on:
- postgres_db
environment:
PG_UNITTEST_DB: "postgresql://testuser:testpassword#postgres_db/testdb"
command: pytest tests/integration
postgres_db:
image: postgis/postgis
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: testpassword
POSTGRES_USER: testuser
POSTGRES_DB: testdb
And the error I am getting is
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "postgres_db" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
I tried exposing port 5432 in the postgres_db section of the docker-compose file, but it didn't help. The same code works fine locally. The command I run is
docker-compose -f tests/integration/docker-compose.yml up --build --exit-code-from test

Postgres takes a moment to start up before it can begin servicing requests. It is likely that the code in your test container is attempting to connect before Postgres is ready.
Your best option is probably to add some retry logic to your integration tests. E.g., add something to your setup method that loops until it is able to establish a successful database connection.
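A minimal sketch of that idea with psycopg2 (the DSN, attempt count, and delay are illustrative; wire it into whatever setup method or fixture your tests use):

import time

import psycopg2


def wait_for_postgres(dsn, attempts=30, delay=1.0):
    """Loop until Postgres accepts a connection, or give up."""
    for _ in range(attempts):
        try:
            psycopg2.connect(dsn).close()
            return
        except psycopg2.OperationalError:
            time.sleep(delay)
    raise RuntimeError("Postgres did not become ready in time")

# e.g. at the top of your setup method:
# wait_for_postgres("postgresql://testuser:testpassword@postgres_db/testdb")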

You need to define the startup order of the services so that your postgres container is up before the test container. For detailed info, you can refer to the docs: https://docs.docker.com/compose/startup-order/
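For example, one way to express this in compose is a healthcheck plus the long form of depends_on. This is a sketch, assuming a docker-compose version that supports the condition form (file format 2.1, or a newer docker compose implementing the Compose Spec); pg_isready ships in the Postgres images:

services:
  postgres_db:
    image: postgis/postgis
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
      interval: 5s
      timeout: 5s
      retries: 10
  test:
    depends_on:
      postgres_db:
        condition: service_healthy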

Related

Google Cloud Run health check fails with Docker-Compose

I am trying to upload my backend to Google Cloud Run. I'm using Docker-Compose with 2 components: a Golang Server and a Postgres DB.
When I run Docker-Compose locally, everything works great! When I upload to Gcloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Gcloud's health check fails, getting stuck on "Deploying... Revision deployment finished. Waiting for health check to begin." and then throws: "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information."
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml. But the command still fails. I'm out of ideas, what could be wrong here?
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind GCR provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data: # named volumes can be managed easier using docker-compose
and the api container is a Golang binary, which waits for a connection to be made with the Postgres DB before calling http.ListenAndServe(":8000", handler).

Can Docker image postgres:11.7-alpine talk to two servers at the same time?

I am experiencing a connection error when trying to connect to my Postgres instance, which is a postgres:11.7-alpine image inside a container.
My understanding:
1. I have a codebase.
2. I have a container with a postgres:11.7-alpine image running inside it, with port mapping 5432:5432.
3. I have a container with an image built from the codebase at point 1 above, with port mapping 8000:8000.
Inside the containers everything is running fine (meaning no errors, and Postgres is connected to point 3). I used docker-compose up --build.
When I try to start up my codebase (point 1), it gets a connection error. I suspect it is trying to connect to Postgres (point 2), but the Postgres inside the container is already connected to my replica codebase (point 3).
How to replicate:
docker-compose up --build
Result: everything runs fine. Then I start up my codebase (point 1) and it gets a connection error.
Expected behaviour:
docker-compose up --build
Result: everything runs fine. Then I start up my codebase (point 1) and it is also able to connect to the Postgres instance within the Docker container.
version: '3.6'
services:
  # App Backend PostgreSQL
  postgres:
    container_name: sportsAppApiDb
    image: postgres:11.7-alpine
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      POSTGRES_URL: postgres://admin:password@localhost:5432/sportsappapi
      POSTGRES_DB: sportsappapi
      POSTGRES_HOST: postgres
    ports:
      - "5432:5432"
  # App Backend
  sports-app-api:
    container_name: sportsAppApi
    build: ./
    volumes:
      - ./:/usr/src/sports-app-api
    command: sbt run
    working_dir: /usr/src/sports-app-api
    ports:
      - "8000:8000"
    environment:
      POSTGRES_URI: postgres://admin:password@postgres:5432/sportsappapi
Okay, so you want to connect to Postgres from a docker container as well as from an app run from the IDE. Since you are binding the Postgres port from the container to the host machine, another app can connect through port 5432 of the host machine.
The connection string for the codebase in the IDE would be
postgres://admin:password@0.0.0.0:5432/sportsappapi
But this will only work while docker-compose up --build is up and running.
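For a quick sanity check from the host (assuming the psql client is installed on the host machine):

psql postgres://admin:password@0.0.0.0:5432/sportsappapi -c 'SELECT 1;'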

Test the connection between docker containers within docker-composed environment

We are using docker-compose to set up the services for our app:
version: "3"
services:
db:
container_name: db
image: postgres:11.1
environment:
POSTGRES_USER: xxx
POSTGRES_PASSWORD: xxx
POSTGRES_DB: xxx
PGPASSWORD: xxx
volumes:
- pgdata:/var/lib/postgresql/data
- ./data/dbdump:/dbdump
networks:
- zenet
ports:
- "5432:5432"
# The React web application
web:
container_name: web
build:
context: .
dockerfile: devenv/web/Dockerfile
volumes:
- ./src/client-app:/usr/local/abc
- /usr/local/abc/node_modules
networks:
- zenet
ports:
- "3000:3000"
command: npm run startindocker
# The Django Rest Framework API
api:
container_name: api
build:
context: .
dockerfile: devenv/api/Dockerfile
environment:
DJANGO_SETTINGS_MODULE: abc.settings.dev
PYTHONSTARTUP: /root/pythonstartup.sh
PYTHONIOENCODING: UTF-8
volumes:
- .:/usr/local/borrow-a-boat
- ./devenv/api/pythonrc.py:/root/pythonstartup.sh
networks:
- zenet
depends_on:
- "db"
ports:
- "9000:9000"
command:
python3 /usr/local/borrow-a-boat/src/django/abc/manage.py runserver 0.0.0.0:9000
tty: true
volumes:
pgdata:
customboatdata:
networks:
zenet:
(sensitive info has been replaced)
My colleagues have the setup running fine. I set up the app, and the volumes & containers are up & running. I can hit the api service at port 9000 from the browser and confirm that the db is populated. However, my web service is unable to get the data from the api. How can I confirm that the above assertion is correct, and that web really cannot communicate with the api service?
And how can I fix this and get web to receive the data from api? Apologies for the newbie question.
EDIT:
When I run ping api from within the web container using docker exec -it [containerID] /bin/sh, I am receiving a response in the form of:
64 bytes from 172.18.0.4: seq=139 ttl=64 time=0.084 ms
So, clearly, my assertion is incorrect. Why is the web service unable to get a response from the api service? When I load the web app in the browser, I do not get any log output in the api's terminal showing it being hit.
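Since ping only proves network reachability, a quick way to test HTTP specifically (assuming wget is available in the web image; the URL path here is illustrative) would be:

docker exec -it web sh -c "wget -qO- http://api:9000/"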
EDIT-2:
As per @runwuf's question & my response, clearly the 'web' is able to communicate with the 'api' service. So, something else is wrong. Here are the steps we follow to set up the stack on our systems. I use a Linux Mint 19.2 OS, while the team uses Macs. The commands are:
docker kill $(docker_container_names)
docker rm -v $(docker_container_names)
docker volume rm abc_pgdata
docker image rm abc_api
docker image rm abc_web
docker-compose build
docker-compose up -d db api web
ssh abc@abc.com 'pg_dump abc | gzip' | gunzip | docker-compose run --rm db psql --host db --username abc
docker-compose run --rm db psql --host db --username abc -c "update core_photo set image_base = 'sample.jpg'"
docker-compose run --rm db psql --host db --username abc -c "update core_experienceimage set image_base = 'sample.jpg'"
In the end, it was a case of an env variable not being accessible within the web service. All it took was reading the console logs in the browser, which showed the undefined variable.
The lesson for me: when it comes to problem solving, no matter how new the technology, don't forget to use the tools you are familiar with.

Where do I specify to Postgres within Docker that it should run and accept connections?

I'm trying to dockerize an existing Rails app that uses PostgreSQL 9.5 as its database. After a successful docker-compose build I can run the docker-compose up command and see the connection, but when I navigate to localhost I get the following error:
PG::ConnectionBad
could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is what is in my docker-compose.yml
version: '2'
services:
  db:
    image: postgres:9.5
    restart: always
    volumes:
      - ./tmp/db:/var/lib/postgresql/9.5/main
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: hardware_development
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
From what I've seen, I need to specify something somewhere in my Dockerfile or the docker-compose.yml, but I either don't see a change or end up back at the same error.
I've been able to follow Docker's own docs to use Docker Compose to create a new Rails app with Postgres, where I see the "yay you're on rails!" page, but with my own code I can't see anything. Running the app outside of Docker shows me the test page as well, so it's not the code within my Rails app or the Postgres environment outside of Docker.
Your db docker-compose entry isn't exposing any ports. It needs to expose 5432. Add a ports line for that just like you have for web.
Edit: also, I don't know why you added restart: always to your database container, but I wouldn't recommend that for Rails or pretty much anything.
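A sketch of what that suggestion looks like, using the same ports syntax the web service already uses:

db:
  image: postgres:9.5
  ports:
    - "5432:5432"
  # ...rest of the db service unchanged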

Unable to connect mysql from docker container?

I have created a docker-compose file that has two services, Go and MySQL, and creates a container for each. Now I am running code which tries to connect to the MySQL database running as a docker container, but I get an error.
docker-compose.yml
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
Error while connecting to mysql database
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with the MySQL database:
import (
	"log"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql" // registers the mysql dialect used by gorm.Open("mysql", ...)
)

// DB opens a GORM connection to the mysql service by its compose service name.
func DB() *gorm.DB {
	db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
	if err != nil {
		log.Panic(err)
	}
	log.Println("Connection Established")
	return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh .
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on is not a verification that MySQL is actually ready to receive connections. It will start the second container once the database container is running, regardless of whether it was ready for connections or not, which can lead to exactly this kind of issue: your application expects the database to be ready, which might not be true.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use this command in docker-compose.yml for the service that you want to have wait for the database.
What comes after -- is the command that you would normally use to start your application:
version: "2"
services:
app:
container_name: golang
...
command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
links:
- "mysql"
depends_on:
- "mysql"
mysql:
image: mysql
...
I have removed some parts from the docker-compose for easier readability.
Modify the go run myapplication part to match the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
Another issue that will arise after you solve the connection issue is the following:
Setting MYSQL_USER to root will cause a failure in MySQL with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because that user already exists in the database, and MySQL tries to create it again. If you need to use the root user itself, use only the MYSQL_ROOT_PASSWORD variable; otherwise change the value of MYSQL_USER so you can securely use it in your application instead of the root user.
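For example, a sketch of the environment section with a dedicated non-root user (the user and password values here are illustrative):

environment:
  - MYSQL_ROOT_PASSWORD=root
  - MYSQL_DATABASE=testDB
  - MYSQL_USER=appuser
  - MYSQL_PASSWORD=apppass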
Update: in case you are getting not found even though the path was correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
First, if you are using the latest version of docker compose, you don't need the links argument in your app service. I quote the docker compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it..." (https://docs.docker.com/compose/compose-file/#links)
I think the solution is to use the networks argument. This creates a docker network and adds each service to it.
Try this:
version: "2"
services:
app:
container_name: golang
restart: always
build: .
ports:
- "49160:8800"
networks:
- my_network
depends_on:
- "mysql"
mysql:
image: mysql
container_name: mysql
volumes:
- dbdata:/var/lib/mysql
restart: always
networks:
- my_network
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=testDB
- MYSQL_USER=root
- MYSQL_PASSWORD=root
ports:
- "3307:3306"
volumes:
dbdata:
networks:
my_network:
driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to expose the MySQL port. If the containers run in the same network, they can reach all ports inside that network.
If my example doesn't work, try this:
Run docker compose, then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection, then run ping mysql.
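For example, assuming the app image is Debian-based like the official golang image (Alpine images would use apk instead):

apt-get update && apt-get install -y iputils-ping
ping -c 3 mysql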
