GitLab CI: access DB service from Docker

I am trying to spin up a Postgres service and access it from within a docker container. This is my .gitlab-ci.yml:
image: docker:dind
stages:
  - build
services:
  - docker:dind
  - postgres:11-alpine
variables:
  POSTGRES_HOST_AUTH_METHOD: trust
  POSTGRES_DB: gitlabci
build_and_test:
  when: manual
  stage: build
  script:
    - docker run --rm postgres psql postgresql://postgres@postgres/gitlabci -c "SELECT 1;"
However, when I run this job I get an error:
psql: error: could not translate host name "postgres" to address: Name or service not known
How do I resolve the service hostname from within a docker container?

You can simplify your setup to:
build_and_test:
  when: manual
  stage: build
  services:
    - name: postgres:11-alpine
      alias: postgres
  variables:
    POSTGRES_HOST_AUTH_METHOD: trust
    POSTGRES_DB: gitlabci
    POSTGRES_USER: postgres
  script:
    - apk add postgresql-client
    - psql -h postgres -U postgres -d gitlabci -c "select 1;"
I assume your setup does not work because the job container and the containers you start through docker-in-docker are on different networks. Also, since you are using a non-default postgres image, its auto-generated hostname could be different. You can set an explicit alias:
services:
  - name: postgres:11-alpine
    alias: postgres
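With the alias in place, the job script can reach the database by hostname. As a quick check, a sketch using the same trust-auth setup as above:
psql postgresql://postgres@postgres/gitlabci -c "SELECT 1;"
Note this runs in the job container itself; containers you start with docker run inside docker-in-docker talk to the dind daemon and sit on its own network, so they will not resolve the alias (see the FF_NETWORK_PER_BUILD answer further down).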

Related

On a drone build: how to get files from a service container?

I am having a hard time getting this to work: I have a postgresql container, for testing, that comes with SSL certificates. In my current CI (bash) scripts I do
docker cp postgres:/TLS .
and I can execute my program, which will use those certificates to connect to postgresql. In my .drone.yaml I have, right now:
kind: pipeline
name: default
type: docker
steps:
  - name: Run pylint and coverage tests
    image: my_registry/ci_test
    commands:
      - scripts/run_pylint.sh
      - scripts/run_backend_tests_coverage.sh
      # - docker cp postgres:/TLS .
      - find ./TLS -name "*.key" | xargs chmod 600
services:
  - name: postgres
    image: my_registry/postgresql
    environment:
      POSTGRES_USER: postgres_admin
      POSTGRES_PASSWORD: postgres_admin_password
      POSTGRES_DB: myDB
      POSTGRES_HOST: 127.0.0.1
      POSTGRES_FLASK_USER: postgres_user
      POSTGRES_FLASK_PASSWORD: postgres_password
After having checked the documentation, I have no idea what to execute where I currently have docker cp postgres:/TLS . in my scripts.
Is this possible at all?

Running integration tests in GitHub Actions: issues connecting to postgres

I have some integration tests that, in order to run successfully, require a running postgres database, set up via docker-compose, and my go app running from main.go. Here is my docker-compose:
version: "3.9"
services:
postgres:
image: postgres:12.5
user: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: my-db
ports:
- "5432:5432"
volumes:
- data:/var/lib/postgresql/data
- ./initdb:/docker-entrypoint-initdb.d
networks:
default:
driver: bridge
volumes:
data:
driver: local
and my GitHub Actions workflow is as follows:
jobs:
  unit:
    name: Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12.5
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: my-db
        ports:
          - 5432:5432
    env:
      GOMODCACHE: "${{ github.workspace }}/.go/mod/cache"
      TEST_RACE: true
    steps:
      - name: Initiate Database
        run: psql -f initdb/init.sql postgresql://postgres:password@localhost:5432/my-db
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
      - name: Authenticate with GCP
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: ${{ secrets.GCP_ACTIONS_SECRET }}
      - name: Configure Docker
        run: |
          gcloud auth configure-docker "europe-docker.pkg.dev,gcr.io,eu.gcr.io"
      - name: Set up Docker BuildX
        uses: docker/setup-buildx-action@v1
      - name: Start App
        run: |
          VERSION=latest make images
          docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-db?sslmode=disable' --name='app' image/app
      - name: Tests
        env:
          POSTGRES_DB_URL: //postgres:password@localhost:5432/my-db?sslmode=disable
          GOMODCACHE: ${{ github.workspace }}/.go/pkg/mod
        run: |
          make test-integration
          docker stop app
My tests run just fine locally, firing up the stack with docker-compose up and running the app from main.go. However, in GitHub Actions I am getting the following error:
failed to connect to `host=/tmp user=nonroot database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory
What am I missing? Thanks
I think this code has more than one problem.
Problem one: I don't see you run docker-compose up anywhere, so I would assume that Postgres is not running.
Problem two: this line:
docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-app?sslmode=disable' --name='app' image/app
You point the Postgres host at localhost, which works on your local machine, because there localhost is your local computer. But since you use docker run, this is not running on your local machine; it runs in a docker container, where localhost points to the inside of that container.
Possible solution for both:
As you are already using docker-compose, I suggest you also add your test web server there.
Change your docker-compose file to:
Change your docker-compose file to:
version: "3.9"
services:
webapp:
build: image/app
environment:
POSTGRES_DB_URL='//postgres:password#postgres:5432/my-app?sslmode=disable'
ports:
- "3000:3000"
depends_on:
- "postgres"
postgres:
image: postgres:12.5
user: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: my-app
ports:
- "5432:5432"
volumes:
- data:/var/lib/postgresql/data
- ./initdb:/docker-entrypoint-initdb.d
networks:
default:
driver: bridge
volumes:
data:
driver: local
If you now run docker-compose up, both services will be available and it should work. Though I am not a GitHub Actions expert, so I might have missed something. At least like this you can run your tests locally the same way as in CI, which I always see as a big plus.
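If you want to keep CI close to that local flow, a minimal sketch of a workflow step that brings the stack up before the tests (assuming the compose file sits at the repository root):
- name: Start services
  run: |
    docker-compose up -d --build
    docker-compose ps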
What you are missing is setting up the actual Postgres client inside the GitHub Actions runner (that is why there is no psql tool to be found).
Set it up as a step:
- name: Install PostgreSQL client
  run: |
    sudo apt-get update # sudo: GitHub-hosted runners run as a non-root user
    sudo apt-get install --yes postgresql-client
Apart from that, if you run everything through docker-compose you will need to wait for postgres to be up and running (healthy & accepting connections).
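One way to encode that wait in compose itself is a healthcheck on the db plus the long form of depends_on; a sketch, assuming a compose version that supports condition: service_healthy:
services:
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:9.5-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U root"]
      interval: 2s
      timeout: 5s
      retries: 15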
Consider the following docker-compose:
version: '3.1'
services:
  api:
    build: .
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - RUN_UP_MIGRATION=true
      - PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable
    command: ./entry
  db:
    image: postgres:9.5-alpine
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
There are a couple of things you need to notice. First of all, in the environment section of the api we have PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable, which is the connection string to the db being passed as an env variable. Notice the host is host.docker.internal.
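One caveat: on Linux, host.docker.internal does not resolve inside containers by default. With Docker 20.10+ you can map it to the host gateway yourself; a sketch:
services:
  api:
    extra_hosts:
      - "host.docker.internal:host-gateway"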
Besides that we have command: ./entry in the api section. The entry file contains the following #!/bin/ash script:
#!/bin/ash
NOT_READY=1
while [ $NOT_READY -gt 0 ]  # loop that waits till postgres is ready to accept connections
do
    pg_isready --dbname=gotstockapi --host=host.docker.internal --port=5432 --username=gotstock_user
    NOT_READY=$?
    sleep 1
done;
./gotstock-api &  # actually executes the build of the api (in the background, so the tests below can run)
sleep 10
go test -v ./it  # executes the integration-tests
And finally, in order for the psql client to work in the above script, the Dockerfile of the api looks like this:
# syntax=docker/dockerfile:1
FROM golang:1.19-alpine3.15
RUN apk add build-base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN apk add postgresql-client
RUN go build -o gotstock-api
EXPOSE 8080
Notice RUN apk add postgresql-client, which installs the client (and with it pg_isready).
Happy hacking! =)

GitLab CI can't establish connection on the port that docker is running on

I spent a whole day to get to this point and am still struggling; the error says "Failed to connect to localhost port 9000 after...".
I have a Node.js app which uses Postgres as its DB, and I was able to connect the two. The app runs very well in attached mode, but when I run it in detached mode and curl it, I get the error. I even put in a long sleep to make sure the container has enough time to start, but it still failed to connect to the port.
The main line is how I run docker and get Postgres as a service. I have checked the health of the service. I am not sure if this is a firewall or networking issue, i.e. the interfaces here.
- docker run -d -e POSTGRES_HOST=$POSTGRES_PORT_5432_TCP_ADDR -p 9000:9000 $DOCKER_TEST_IMAGE_API
image: docker:19.03.12
stages:
  - build
  - test
variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  DOCKER_TEST_IMAGE_API: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  DOCKER_RELEASE_IMAGE_API: $CI_REGISTRY_IMAGE
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
api-component-api:
  stage: build
  services:
    - docker:19.03.12-dind
  variables:
    PORT: '9000'
  script:
    - docker build --pull -t $DOCKER_TEST_IMAGE_API api/.
    - docker push $DOCKER_TEST_IMAGE_API

api-component-tests:
  stage: test
  services:
    - name: 'postgres:11.9'
      alias: postgres
    - name: 'docker:19.03.12-dind'
      alias: docker
  variables:
    # POSTGRES service
    POSTGRES_USER: 'postgres'
    POSTGRES_PASSWORD: 'password'
    POSTGRES_DB: 'postgres'
    POSTGRES_HOST: 'postgres'
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - env | grep POSTGRES_PORT_5432_TCP_ADDR
    - docker run -d -e POSTGRES_HOST=$POSTGRES_PORT_5432_TCP_ADDR -p 9000:9000 $DOCKER_TEST_IMAGE_API
    - sleep 60
    - docker ps -a
    - docker network ls
    - curl -X GET "http://localhost:9000/rooms/1000ef5c-1657-46b2-bb36-c74080e00c01"
    - cd end-to-end-tests
    - yarn install
    - yarn test
Digest: sha256:f8d84da7264faf570184929a441e448d680ccbfd297bcd0aef0d7f455c360614
Status: Downloaded newer image for registry.gitlab.com/.../simple-room-booking:main
f24ef88e36e16beb7f32acb03f7cda5775742b6639232a9692e4ef494fe22e93
$ sleep 60
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f24ef88e36e1 registry.gitlab.com/.../simple-room-booking:main "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:9000->9000/tcp lucid_yalow
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
c7d6ee26e52e bridge bridge local
6760c4e13b56 host host local
3bb3bbf3cd42 none null local
$ curl -X GET "http://localhost:9000/rooms/1000ef5c-1657-46b2-bb36-c74080e00c01"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to localhost port 9000 after 6 ms: Connection refused
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 7
I do not have much experience with GitLab, and I spent two days on this. The problem was the networking between the Docker containers.
Source: https://docs.gitlab.com/ee/ci/services/#using-services-with-docker-run-docker-in-docker-side-by-side
variables:
  FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking
This works after some refactoring, but the main piece was this feature flag.
api-component-tests:
  stage: test
  services:
    - name: 'postgres:11.9'
      alias: postgres
    - name: 'docker:19.03.12-dind'
      alias: docker
    - name: $DOCKER_TEST_IMAGE_API
      alias: api
  variables:
    # POSTGRES service
    POSTGRES_USER: 'postgres'
    POSTGRES_PASSWORD: 'password'
    POSTGRES_DB: 'postgres'
    POSTGRES_HOST: 'postgres'
    POSTGRES_HOST_AUTH_METHOD: trust
    # API service
    POSTGRES_DSN: 'postgresql://postgres/postgres?sslmode=disable&user=postgres&password=password'
    FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking
  script:
    - apk --update add postgresql-client
    - apk add nodejs yarn curl
    - sleep 10
    - curl -X GET "http://api:9000/rooms/1000ef5c-1657-46b2-bb36-c74080e00c01"
    - cd end-to-end-tests
    - export apiBaseUrl='http://api:9000'
    - yarn install
    - yarn test
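If the fixed sleep 10 ever proves flaky, the postgresql-client installed above ships pg_isready, so you can poll for readiness instead of sleeping; a sketch:
script:
  - apk --update add postgresql-client
  - until pg_isready -h postgres -p 5432 -U postgres; do sleep 1; done # wait for the database instead of a fixed sleep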
GitLab pipeline + docker: (7) Failed to connect to localhost port 9000: Connection refused
I had the same issue. Please see the last answer; it worked very nicely for me.

Docker unable to connect to postgres, but the same command works fine when run from the container's bash

In the Dockerfile I'm building, I have the command:
CMD ["/app/database/updateLocalDocker.sh"]
The shell script should connect to the postgres service using Liquibase, but it fails with a "connection refused" error.
When I comment out the above CMD and run the same script directly in the container via docker exec -t -i f42c4bbcd95d /bin/bash, it works fine.
The URL I'm trying to connect to is: jdbc:postgresql://localhost:5432/service_x
I have a feeling that it's related either to the service not being started yet or to a network issue, when the CMD executes during the docker-compose build stage.
Any guidance would be much appreciated.
docker-compose.yml:
version: "3.8"
services:
db:
image: local.db
build:
context: .
ports:
- 15432:5432
environment:
POSTGRES_PASSWORD: password
networks:
- a
networks:
a:
name: a
external: true
To access your database from the host, you need to use port 15432 instead of 5432.
services:
  db:
    image: local.db
    build:
      context: .
    ports:
      - 15432:5432 # <--- Here
    environment:
      POSTGRES_PASSWORD: password
    networks:
      - a
The first port is the host port and the second is the port used inside the container.
You can also reach it from other containers using the container name and the internal port.
Docker port mapping documentation:
https://docs.docker.com/config/containers/container-networking/
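To make that concrete with the compose file above (a sketch; it assumes local.db is based on the official postgres image, so the default user and database are both postgres):
# from the host, via the published port:
psql "postgresql://postgres:password@localhost:15432/postgres" -c "SELECT 1;"
# from another container attached to the same "a" network, via service name and container port:
psql "postgresql://postgres:password@db:5432/postgres" -c "SELECT 1;"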
Instead of putting the command in the Dockerfile, you can directly put the command in the docker-compose file and remove CMD ["/app/database/updateLocalDocker.sh"].
docker-compose.yml
version: "3.8"
services:
db:
image: local.db
build:
context: .
command: sh -c "<Enter-your-command>"
ports:
- 15432:5432
environment:
POSTGRES_PASSWORD: password
networks:
- a
networks:
a:
name: a
external: true
If you have one command, execute:
command: <command>
If you have more than one command, they should be separated by &&.
Syntax:
sh -c "<command-1> && <command-2> && <command-3>"

How to use GitLab CI to test and deploy my PHP application?

I have the below docker-compose.yml:
version: "2"
services:
api:
build:
context: .
dockerfile: ./build/dev/Dockerfile
container_name: "project-api"
volumes:
# 1. mount your workdir path
- .:/app
depends_on:
- mongodb
links:
- mongodb
- mysql
nginx:
image: nginx:1.10.3
container_name: "project-nginx"
ports:
- 80:80
restart: always
volumes:
- ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/app
links:
- api
depends_on:
- api
mongodb:
container_name: "project-mongodb"
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
ports:
- "27018:27017"
command: mongod --smallfiles --logpath=/dev/null # --quiet
mysql:
container_name: "gamestore-mysql"
image: mysql:5.7.23
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: project_test
MYSQL_USER: user
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
And the below .gitlab-ci.yml:
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d
    - docker-compose exec -T api ls -la
    - docker-compose exec -T api composer install
    - docker-compose exec -T api php core/init --env=Development --overwrite=y
    - docker-compose exec -T api vendor/bin/codecept -c core/common run
    - docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, because I think GitLab can't work with services started by docker-compose.
The error says that mysql refused the connection.
I need this connection because my tests, written with Codeception, exercise my models and API actions.
I want to test every branch whenever anyone pushes to it; if the tests pass on develop, deploy to the test server, and on master, deploy to production.
What is the best way to run my tests in GitLab CI/CD and then deploy to my servers?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main image, in which your commands will run, and the other containers become services.
Sadly, CI services in GitLab cannot have files mounted into them; you have to be able to configure them with env variables, or you need to create your own image with the files baked in (you can do that in a CI stage).
I would suggest not using nginx, and using PHP's built-in server for tests. If that's not possible (you have a specific nginx config), you will need to build your own nginx image with the files copied inside it.
Also for PHP (the api service in docker-compose.yaml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
So the result should be something like:
test:
  stage: test
  image: custom-php-image # build from ./build/dev/Dockerfile
  services:
    - name: mysql:5.7.23
      alias: gamestore-mysql
    - name: mongo:latest
      alias: project-mongodb
      command: mongod --smallfiles --logpath=/dev/null
  variables:
    MYSQL_DATABASE: project_test
    MYSQL_USER: user
    MYSQL_PASSWORD: user
    MYSQL_ROOT_PASSWORD: root
    MONGO_DATA_DIR: /data/db
    MONGO_LOG_DIR: /dev/null
  script:
    - ls -la
    - composer install
    - php core/init --env=Development --overwrite=y
    - php -S localhost:8000 & # you will probably need to configure the built-in php server here
    - vendor/bin/codecept -c core/common run
    - vendor/bin/codecept -c core/rest run
I don't know your app, so you will probably have to make some tweaks.
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php
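For the deploy half of the question, branch-filtered deploy jobs are the usual pattern; a minimal sketch (assuming a deploy stage is declared under stages:, and with hypothetical deploy scripts as placeholders):
deploy_test:
  stage: deploy
  only:
    - develop
  script:
    - ./deploy.sh test-server # hypothetical script: replace with your own deploy command

deploy_production:
  stage: deploy
  only:
    - master
  script:
    - ./deploy.sh production # hypothetical script: replace with your own deploy command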
