I am having a hard time getting this to work: I have a PostgreSQL container, for testing, that comes with SSL certificates. In my current CI (bash) scripts I do
docker cp postgres:/TLS .
and I can execute my program, which uses those certificates to connect to PostgreSQL. In my .drone.yaml I currently have:
kind: pipeline
name: default
type: docker

steps:
- name: Run pylint and coverage tests
  image: my_registry/ci_test
  commands:
  - scripts/run_pylint.sh
  - scripts/run_backend_tests_coverage.sh
  # - docker cp postgres:/TLS .
  - find ./TLS -name "*.key" | xargs chmod 600

services:
- name: postgres
  image: my_registry/postgresql
  environment:
    POSTGRES_USER: postgres_admin
    POSTGRES_PASSWORD: postgres_admin_password
    POSTGRES_DB: myDB
    POSTGRES_HOST: 127.0.0.1
    POSTGRES_FLASK_USER: postgres_user
    POSTGRES_FLASK_PASSWORD: postgres_password
After checking the documentation, I have no idea what to execute in place of the docker cp postgres:/TLS . command.
Is this possible at all?
Related
I have some integration tests that, in order to run successfully, require a running postgres database, set up via docker-compose, and my go app running from main.go. Here is my docker-compose:
version: "3.9"
services:
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-db
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d
networks:
  default:
    driver: bridge
volumes:
  data:
    driver: local
and my GitHub Actions workflow is as follows:
jobs:
  unit:
    name: Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12.5
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: my-db
        ports:
          - 5432:5432
    env:
      GOMODCACHE: "${{ github.workspace }}/.go/mod/cache"
      TEST_RACE: true
    steps:
      - name: Initiate Database
        run: psql -f initdb/init.sql postgresql://postgres:password@localhost:5432/my-db
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
      - name: Authenticate with GCP
        id: auth
        uses: "google-github-actions/auth@v0"
        with:
          credentials_json: ${{ secrets.GCP_ACTIONS_SECRET }}
      - name: Configure Docker
        run: |
          gcloud auth configure-docker "europe-docker.pkg.dev,gcr.io,eu.gcr.io"
      - name: Set up Docker BuildX
        uses: docker/setup-buildx-action@v1
      - name: Start App
        run: |
          VERSION=latest make images
          docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-db?sslmode=disable' --name='app' image/app
      - name: Tests
        env:
          POSTGRES_DB_URL: //postgres:password@localhost:5432/my-db?sslmode=disable
          GOMODCACHE: ${{ github.workspace }}/.go/pkg/mod
        run: |
          make test-integration
          docker stop app
My tests run just fine locally when I fire off docker-compose up and run the app from main.go. However, in GitHub Actions I am getting the following error:
failed to connect to `host=/tmp user=nonroot database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory)
What am I missing? Thanks
I think this code has more than one problem.
Problem one:
In your code I don't see you running docker-compose up, so I would assume that Postgres is not running.
Problem two:
is in this line: docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password#localhost:5432/my-app?sslmode=disable' --name='app' image/app
You point the Postgres host to localhost, which works on your local machine, because there localhost is your local computer. But since you start the app with docker run, it is not running on your local machine: it runs inside a Docker container, where localhost points to the container itself.
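To make the host-name point concrete, here is a minimal, hypothetical Go sketch (the helper and its values are illustrative, not from the original app): inside the compose network you pass the service name as the host, whereas localhost would resolve to the app's own container.

```go
package main

import "fmt"

// pgURL assembles a Postgres connection URL. Hypothetical helper, not
// part of the original app; it only illustrates where the host name
// goes. Inside a compose network the service name ("postgres") resolves
// to the database container, while "localhost" resolves to whichever
// container the app itself runs in.
func pgURL(user, pass, host string, port int, db string) string {
	return fmt.Sprintf("postgres://%s:%s@%s:%d/%s?sslmode=disable",
		user, pass, host, port, db)
}

func main() {
	// Inside the compose network: use the service name as the host.
	fmt.Println(pgURL("postgres", "password", "postgres", 5432, "my-app"))
}
```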
Possible solution for both
As you are already using docker-compose, I suggest also adding your test web server there.
Change your docker-compose file to:
version: "3.9"
services:
  webapp:
    build: image/app
    environment:
      POSTGRES_DB_URL: "//postgres:password@postgres:5432/my-app?sslmode=disable"
    ports:
      - "3000:3000"
    depends_on:
      - "postgres"
  postgres:
    image: postgres:12.5
    user: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: my-app
    ports:
      - "5432:5432"
    volumes:
      - data:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d
networks:
  default:
    driver: bridge
volumes:
  data:
    driver: local
If you now run docker-compose up, both services will be available and it should work. I am not a GitHub Actions expert, though, so I might have missed something. At least this way you can run your tests locally the same way as in CI, which I always see as a big plus.
What you are missing is installing the PostgreSQL client on the GitHub Actions runner (that is why there is no psql tool to be found).
Set it up as a step.
- name: Install PostgreSQL client
  run: |
    sudo apt-get update
    sudo apt-get install --yes postgresql-client
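If psql still races the service container's startup, an extra wait step can help; this is a sketch, assuming the postgres service is published on localhost:5432 as in the job above (pg_isready ships with postgresql-client):

```yaml
- name: Wait for PostgreSQL
  run: |
    # Block until the service container accepts connections.
    until pg_isready -h localhost -p 5432 -U postgres; do sleep 1; done
```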
Apart from that, if you run everything through docker-compose you will need to wait for postgres to be up and running (healthy & accepting connections).
Consider the following docker-compose:
version: '3.1'

services:
  api:
    build: .
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - RUN_UP_MIGRATION=true
      - PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable
    command: ./entry
  db:
    image: postgres:9.5-alpine
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
There are a couple of things you need to notice. First of all, in the environment section of the api we have PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable, which is the connection string to the db being passed as an env variable. Notice the host is host.docker.internal.
Besides that we have command: ./entry in the api section. The entry file contains the following #!/bin/ash script:
#!/bin/ash
NOT_READY=1
while [ $NOT_READY -gt 0 ] # <- loop that waits till postgres is ready to accept connections
do
pg_isready --dbname=gotstockapi --host=host.docker.internal --port=5432 --username=gotstock_user
NOT_READY=$?
sleep 1
done;
./gotstock-api & # <- runs the api binary, backgrounded so the tests below can execute
sleep 10
go test -v ./it # <- executes the integration-tests
And finally, in order for the psql client tools to work in the above script, the Dockerfile of the api looks like this:
# syntax=docker/dockerfile:1
FROM golang:1.19-alpine3.15
RUN apk add build-base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN apk add postgresql-client
RUN go build -o gotstock-api
EXPOSE 8080
Notice RUN apk add postgresql-client which installs the client.
Happy hacking! =)
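As an alternative to the hand-rolled wait loop, compose itself can gate startup on the database being ready; here is a hedged sketch (service names mirror the example above, interval values are arbitrary, and the depends_on condition form requires a compose version that supports it):

```yaml
services:
  db:
    image: postgres:9.5-alpine
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    healthcheck:
      # Same readiness probe as the entry script, run by the engine.
      test: ["CMD-SHELL", "pg_isready -U root"]
      interval: 2s
      timeout: 3s
      retries: 15
  api:
    build: .
    depends_on:
      db:
        condition: service_healthy
```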
I am trying to spin up a Postgres service and access it from within a docker container. This is my .gitlab-ci.yml:
image: docker:dind

stages:
  - build

services:
  - docker:dind
  - postgres:11-alpine

variables:
  POSTGRES_HOST_AUTH_METHOD: trust
  POSTGRES_DB: gitlabci

build_and_test:
  when: manual
  stage: build
  script:
    - docker run --rm postgres psql postgresql://postgres@postgres/gitlabci -c "SELECT 1;"
however, when I run this job I get an error:
psql: error: could not translate host name "postgres" to address: Name or service not known
How do I specify hostname from within a docker container?
You can simplify your setup to
build_and_test:
  when: manual
  stage: build
  services:
    - name: postgres:11-alpine
      alias: postgres
  variables:
    POSTGRES_HOST_AUTH_METHOD: trust
    POSTGRES_DB: gitlabci
    POSTGRES_USER: postgres
  script:
    - apk add postgresql-client
    - psql -h postgres -U postgres -d gitlabci -c "select 1;"
I assume your setup does not work because of different networks. Also, since you are using a non-default postgres image, its auto-generated name could be different.
You can do
services:
  - name: postgres:11-alpine
    alias: postgres
I am attempting to run e2e tests in the gitlab ci that use a React frontend, Java Spring backend and PostgreSQL.
The relevant pieces of the .gitlab-ci -config are as follows:
variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  FF_NETWORK_PER_BUILD: 1

docker-backend-build:
  image: docker:latest
  services:
    - docker:dind
  stage: package
  dependencies:
    - backend-build
  script:
    - docker build -t registry.gitlab.com/repo-name .
    - docker tag registry.gitlab.com/repo-name $IMAGE_NAME
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push $IMAGE_NAME

end-to-end-test:
  stage: integration-test
  image: node:latest
  services:
    - name: postgres:9.6
    - name: $IMAGE_NAME
      alias: backend
  variables:
    DB_USERNAME: postgres
    DB_PASSWORD: postgres
    JDBC_CONNECTION_STRING: 'jdbc:postgresql://postgres:5432/database?stringtype=unspecified'
  dependencies:
    - frontend-build
  script:
    - cd frontend
    - yarn start:ci & ./node_modules/wait-on/bin/wait-on http://backend:9070/api/health http://localhost:3000
    - yarn run cy:run
  artifacts:
    when: always
    paths:
      - frontend/cypress/videos/*.mp4
      - frontend/cypress/screenshots/**/*.png
    expire_in: 1 day
The Dockerfile for the backend is as follows:
FROM tomcat:latest
ADD backend/target/server.war /usr/local/tomcat/webapps/
RUN sed -i 's/port="8080"/port="9070"/' /usr/local/tomcat/conf/server.xml
EXPOSE 9070
CMD ["catalina.sh", "run"]
The server.war is created on an earlier stage in the CI-pipeline.
The server.war is set to listen on port 9070, and the Dockerfile successfully changes the Tomcat port to 9070 as well. The Tomcat instance is able to connect to the postgres instance via postgres:5432 because of the FF_NETWORK_PER_BUILD flag, but for some reason the script hangs on the wait-on http://backend:9070/api/health command forever. It cannot connect to backend:9070 even though the server is up and running (and the health endpoint exists). The server doesn't register any incoming connection attempts.
What could I be doing wrong? I also tried to connect to http://localhost:9070/api/health but that didn't work either.
The answer for me was simply changing the Dockerfile as follows:
- ADD backend/target/server.war /usr/local/tomcat/webapps/
+ ADD backend/target/server.war /usr/local/tomcat/webapps/ROOT.war
because without that, the WAR was deployed under the /server context path, so the server was actually listening at http://backend:9070/server/api/health. Silly me.
I have the docker-compose.yml below:
version: "2"
services:
  api:
    build:
      context: .
      dockerfile: ./build/dev/Dockerfile
    container_name: "project-api"
    volumes:
      # 1. mount your workdir path
      - .:/app
    depends_on:
      - mongodb
    links:
      - mongodb
      - mysql
  nginx:
    image: nginx:1.10.3
    container_name: "project-nginx"
    ports:
      - 80:80
    restart: always
    volumes:
      - ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
      - .:/app
    links:
      - api
    depends_on:
      - api
  mongodb:
    container_name: "project-mongodb"
    image: mongo:latest
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    ports:
      - "27018:27017"
    command: mongod --smallfiles --logpath=/dev/null # --quiet
  mysql:
    container_name: "gamestore-mysql"
    image: mysql:5.7.23
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: project_test
      MYSQL_USER: user
      MYSQL_PASSWORD: user
      MYSQL_ROOT_PASSWORD: root
And below is my .gitlab-ci.yml:
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d
    - docker-compose exec -T api ls -la
    - docker-compose exec -T api composer install
    - docker-compose exec -T api php core/init --env=Development --overwrite=y
    - docker-compose exec -T api vendor/bin/codecept -c core/common run
    - docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, I think because GitLab CI can't work with services started by docker-compose.
The error says that mysql refuses the connection.
I need this connection because my tests, written with Codeception, exercise my models and API actions.
I want to test every branch on each push; if the tests pass, deploy develop to the test server and master to production.
What is the best way to run my tests in GitLab CI/CD and then deploy from there to my servers?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main image, in which your commands will be run, and run the other containers as services.
Sadly, GitLab CI services cannot have files mounted into them; you have to be able to configure them with env variables, or you need to create your own image with the files baked in (you can do that in a CI stage).
I would suggest not using nginx and using PHP's built-in server for tests. If that's not possible (you have a specific nginx config), you will need to build your own nginx image with the files copied into it.
Also for PHP (the api service in docker-compose.yml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
So the result should be something like:
test:
  stage: test
  image: custom-php-image # built from ./build/dev/Dockerfile
  services:
    - name: mysql:5.7.23
      alias: gamestore-mysql
    - name: mongo:latest
      alias: project-mongodb
      command: ["mongod", "--smallfiles", "--logpath=/dev/null"]
  variables:
    MYSQL_DATABASE: project_test
    MYSQL_USER: user
    MYSQL_PASSWORD: user
    MYSQL_ROOT_PASSWORD: root
    MONGO_DATA_DIR: /data/db
    MONGO_LOG_DIR: /dev/null
  script:
    - ls -la
    - composer install
    - php core/init --env=Development --overwrite=y
    - php -S localhost:8000 & # You probably need to configure the built-in php server here; backgrounded so the tests can run
    - vendor/bin/codecept -c core/common run
    - vendor/bin/codecept -c core/rest run
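One more hedged tweak: the job script can race the mysql service coming up, so a wait loop at the top of the script may help. This assumes mysqladmin is available in the image (it comes with the mysql client package), which is not guaranteed by the original config:

```yaml
script:
  # Block until the mysql service (alias gamestore-mysql) accepts connections.
  - until mysqladmin ping -h gamestore-mysql --silent; do sleep 1; done
```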
I don't know your app, so you will probably have to make some tweaks.
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php
I'm having an issue with my travis-ci before_script while trying to connect to my docker postgres container:
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
I've seen this problem raised but never fully addressed around SO and GitHub issues, and I'm not clear whether it is specific to Docker or Travis. One linked question (below) works around it by using 5433 as the host Postgres port, but I'd like to know for sure what is going on before I jump into anything.
my travis.yml:
sudo: required

services:
  - docker

env:
  DOCKER_COMPOSE_VERSION: 1.7.1
  DOCKER_VERSION: 1.11.1-0~trusty

before_install:
  # list docker-engine versions
  - apt-cache madison docker-engine
  # upgrade docker-engine to specific version
  - sudo apt-get -o Dpkg::Options::="--force-confnew" install -y docker-engine=${DOCKER_VERSION}
  # upgrade docker-compose
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin

before_script:
  - echo "Before Script:"
  - docker-compose -f docker-compose.ci.yml build
  - docker-compose -f docker-compose.ci.yml run app rake db:setup
  - docker-compose -f docker-compose.ci.yml run app /bin/sh

script:
  - echo "Running Specs:"
  - rake spec
my docker-compose.yml for ci:
postgres:
  image: postgres:9.4.5
  environment:
    POSTGRES_USER: web
    POSTGRES_PASSWORD: yourpassword
  expose:
    - '5432' # added this as an attempt to open the port
  ports:
    - '5432:5432'
  volumes:
    - web-postgres:/var/lib/postgresql/data
redis:
  image: redis:3.0.5
  ports:
    - '6379:6379'
  volumes:
    - web-redis:/var/lib/redis/data
web:
  build: .
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
  ports:
    - '8000:8000'
  # env_file: # setting these directly in the environment
  #   - .docker.env # (they work fine locally)
sidekiq:
  build: .
  command: bundle exec sidekiq -C code/config/sidekiq.yml
  links:
    - postgres
    - redis
  volumes:
    - ./code:/app
Docker & Postgres: Failed to bind tcp 0.0.0.0:5432 address already in use
How to get Docker host IP on Travis CI?
It seems that the Postgres service is enabled by default in Travis CI.
So you could:
Try to disable the Postgres service in your Travis config. See How to stop services on Travis CI running by default?. See also https://docs.travis-ci.com/user/database-setup/#PostgreSQL .
Or
Map your postgres container to another host port (!= 5432). Like -p 5455:5432.
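For the second option, the change in the ci compose file would look something like this (5433 is an arbitrary free host port; any port other than 5432 works):

```yaml
postgres:
  image: postgres:9.4.5
  ports:
    - '5433:5432' # different host port avoids clashing with Travis' own Postgres
```

Anything connecting from the Travis host (rather than over container links) would then need to use port 5433; linked containers still reach the database on 5432.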
It could also be useful to check if the service is already running: Check If a Particular Service Is Running on Ubuntu
Do you use Travis' Postgres?
services:
  - postgresql
It would be easier if you provided your travis.yml.