I am trying to use GitHub Actions to fire up a Postgres container for my tests. I have a script called build.sh that is invoked when npm run build runs via GitHub Actions. This script calls restore-schema.sh (shown below).
The issue is that when restore-schema.sh is run, I keep getting Error: no such container: postgres. GitHub Actions names the container some arbitrary string. Is there a way I can run docker exec on an image, or somehow name the Postgres container that GitHub Actions creates? I've looked through both sets of documentation to no avail.
How should I go about this? I noticed that in the Docker run ps screenshot, the command shown is docker-entrypoint.sh. Should I use this instead? Do I specify the Dockerfile inside .github/workflows/?
I've tried to include as much relevant information as possible - comment if you need any other information.
Screenshots from GitHub Actions
Initialize containers
Docker run ps <- docker ps showing name postgres
Run npm run build --if-present <- WHERE THE ISSUE IS OCCURRING
build.sh
#!/bin/sh
# Import core db schema
./.deploy/postgres/restore-schema.sh
.deploy/postgres/restore-schema.sh
#!/bin/sh
docker exec -it postgres psql \
    --username postgres \
    --password dev \
    coredb < .deploy/postgres/db-schema.sql
.github/workflows/test-api-gateway.yml
name: API Gateway CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master, develop ]

jobs:
  build:
    runs-on: ubuntu-latest

    services: # Service containers to run with `container-job`
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_DB: coredb
          POSTGRES_PASSWORD: dev
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    strategy:
      matrix:
        node-version: [14.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: docker ps
      - run: chmod +x build.sh .deploy/postgres/restore-schema.sh
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
Try the --name option
options: >-
  --health-cmd pg_isready
  --health-interval 10s
  --health-timeout 5s
  --health-retries 5
  --name postgres
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
jobs.<job_id>.services.options: Additional Docker container resource options. For a list of options, see "docker create options."
Another solution I've seen is to use the last created container:
docker exec -it $(docker ps --latest --quiet) bash
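With the service named via --name postgres, the original restore script can target the container directly. A sketch of a revised restore-schema.sh under that assumption (note that psql's --password flag does not take a value - it only forces a prompt - so the password is supplied via the PGPASSWORD environment variable instead, and -t is dropped because there is no TTY in CI):
#!/bin/sh
# Hypothetical revision of .deploy/postgres/restore-schema.sh.
# -i keeps stdin open so the SQL file can be piped in; no -t, since CI has no TTY.
docker exec -i -e PGPASSWORD=dev postgres \
    psql --username postgres coredb < .deploy/postgres/db-schema.sql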
Related
I want to automatically create an image using GitHub Actions and upload it after testing.
However, when the build is executed inside the container, I am unable to connect to the newly launched container (the docker run itself probably succeeded).
Of course, it works when I try to do the same thing without containers (directly on the machine).
Why does the connection fail when I try it inside a Docker container?
jobs:
  build_and_test:
    runs-on: my-runner
    container:
      image: my-image:tag
      credentials:
        username: ${{ secrets.ID }}
        password: ${{ secrets.PW }}
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      options: --user root
    steps:
      - uses: actions/checkout@v2
      - name: build_image
        run: |
          ...
          # build image
          ...
      - name: bvt_test
        run: |
          container=$(docker run --rm -itd -p ${port}:1234 ${new_image})
          ...
          # test, but the connection failed
Thanks.
When trying to wait for the mysql docker container, I'm met with: Problem with dial: dial tcp 127.0.0.1:3306: connect: connection refused. Sleeping 1s
# This config is equivalent to both the '.circleci/extended/orb-free.yml' and the base '.circleci/config.yml'
version: 2.1

# Orbs are reusable packages of CircleCI configuration that you may share across projects,
# enabling you to create encapsulated, parameterized commands, jobs, and executors
# that can be used across multiple projects.
# See: https://circleci.com/docs/2.0/orb-intro/
orbs:
  node: circleci/node@5.0.1

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  version: 2
  node: # This is the name of the workflow, feel free to change it to better match your workflow.
    # Inside the workflow, you define the jobs you want to run.
    jobs:
      - build_and_test:
          # This is the node version to use for the `cimg/node` tag
          # Relevant tags can be found on the CircleCI Developer Hub
          # https://circleci.com/developer/images/image/cimg/node
          # If you are using yarn, change the line below from "npm" to "yarn"
          filters:
            branches:
              only:
                - master

executors:
  node:
    docker:
      - image: cimg/node:16.14.2

jobs:
  build_and_test:
    executor: node
    docker:
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/v$DOCKERIZE_VERSION/dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz
      - run:
          name: Wait for db
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 10s
I do see that the container is installed under the spin-up environment step, so I believe it should be running:
Starting container cimg/mysql:8.0
cimg/mysql:8.0:
using image cimg/mysql@sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
pull stats: Image was already available so the image was not pulled
time to create container: 21ms
image is cached as cimg/mysql:8.0, but refreshing...
8.0: Pulling from cimg/mysql
Digest: sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
Status: Image is up to date for cimg/mysql:8.0
Time to upload agent and config: 369.899813ms
Time to start containers: 407.510271ms
However, nothing I've looked into so far has pointed me toward a solution.
Look over here - your job should be defined like the following. To test the SQL container you can just use nc -vz localhost 3306, but the MySQL container takes time to initialize, so wait about two minutes before testing it.
jobs:
  build_and_test:
    docker:
      # Primary container image where all steps run.
      - image: cimg/node:16.14.2
      # Secondary container image on common network.
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run: sleep 120 && nc -vz localhost 3306
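If you'd rather not hard-code a two-minute sleep, the same nc check can be wrapped in a polling loop that stops as soon as the port opens - a sketch, not part of the original answer:
- run:
    name: Wait for MySQL
    command: |
      # Poll for up to ~2 minutes; exit 0 as soon as the port accepts connections.
      for i in $(seq 1 60); do
        nc -z localhost 3306 && exit 0
        sleep 2
      done
      echo "MySQL did not become reachable in time" >&2
      exit 1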
I have the following GitHub Actions pipeline:
name: Elixir CI

on:
  push:
    branches:
      - '*'
  pull_request:
    branches:
      - '*'

jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_PORT: 5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v2
      - name: Docker Setup Buildx
        uses: docker/setup-buildx-action@v1.6.0
        with:
          install: true
      - name: building image
        env:
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
        run: |
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
I have a single build step for an Elixir app: the Dockerfile is a multi-stage one; the first stage runs the tests and builds the production app, and the second copies the application folder/tar.
DATABASE_HOST is the variable my Elixir app looks for to connect to the test environment.
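For illustration, a simplified sketch of the shape such a Dockerfile.ci might take (the stage names, paths, and base images below are made up for the example, not taken from the actual file):
# Hypothetical multi-stage Dockerfile.ci: stage 1 runs tests and builds a release,
# stage 2 copies the built release into a slim runtime image.
FROM elixir:1.12 AS build
ARG DATABASE_HOST
ENV DATABASE_HOST=${DATABASE_HOST}
WORKDIR /app
RUN mix local.hex --force && mix local.rebar --force
COPY . .
RUN mix deps.get && mix test && MIX_ENV=prod mix release

FROM debian:bullseye-slim
COPY --from=build /app/_build/prod/rel/backend /opt/backend
CMD ["/opt/backend/bin/backend", "start"]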
I need to run tests against Postgres, so I spawn a service container with it. I have executed the build both in a container and outside of it, but I always get the following error:
...
#19 195.9 14:10:58.624 [error] GenServer #PID<0.9316.0> terminating
#19 195.9 ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
#19 195.9 (db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
#19 195.9 (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
#19 195.9 (stdlib 3.14.2.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
#19 195.9 Last message: nil
...
So apparently postgres:5432 is not reachable - am I missing something?
I think the problem is in DATABASE_HOST: postgres.
The service container publishes port 5432 to the host, so for docker build you should use the host's IP address to reach that Postgres service, like this:
- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    DATABASE_HOST=$(ifconfig -a eth0|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print $2}'|tr -d "addr:")
    docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
The above first uses ifconfig to get the virtual machine's IP (the Docker host's IP), then passes it to docker build so the build container can reach Postgres.
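If ifconfig isn't available on the runner (it has largely been superseded by ip), the same address can be obtained in other ways - a sketch, assuming the default interface is still eth0:
# First address reported by the host:
DATABASE_HOST=$(hostname -I | awk '{print $1}')
# Or with the ip command:
DATABASE_HOST=$(ip -4 addr show eth0 | grep -oP '(?<=inet )[\d.]+' | head -n1)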
I am using GitHub Actions to trigger the build of my Dockerfile and upload the container to GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean Droplet and execute a script to pull and install the new image from GHCR. This workflow was fine while I was only building a single container in the project. Now I am using Docker Compose, as I need NGINX alongside my API. I would like to keep the containers on a single Droplet, as the project is not demanding in resources at the moment.
What is the right way to automate deployment with GitHub Actions and Docker Compose to DigitalOcean on a single VM?
My currently known options are:
Skip building containers on GHCR and fetch the repo via SSH to build on the remote from source by executing a production compose file
Build each container on GHCR, then copy the production compose file to the remote to pull & install from GHCR
If you know more options that may be cleaner or more efficient, please let me know!
Unfortunately, the only reference I have found is a docker-compose with GitHub Actions for CI question.
GitHub Action for single Container
name: Github Container Registry to DigitalOcean Droplet

on:
  # Trigger the workflow via push on main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"

jobs:
  # Builds a Docker Image and pushes it to Github Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend

    steps:
      # Checkout the Repository
      - name: Checking out the repository
        uses: actions/checkout@v2

      # Setting up Docker Builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1

      # Set a GitHub access token with "write:packages & read:packages" scope for GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in Beta make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}

      # Push to Github Container Registry
      - name: Pushing Image to Github Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  # Connect to the existing Droplet via SSH and (re)install and run the image
  # ! Ensure you have created the Droplet preconfigured with Docker
  # ! Ensure you have added your SSH key to the Droplet
  # ! - it's easier to add the SSH keys before creating the droplet
  deploy_to_digital_ocean_dropplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry

    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker Containers
            docker kill $(docker ps -q)
            # Free up space
            docker system prune -a
            # Login to Github Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            # Pull the Docker Image
            docker pull ghcr.io/${{ github.repository }}:latest
            # Run a new container from the new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
Current Docker-Compose
version: "3"

services:
  api:
    build:
      context: ./backend/api
    networks:
      api-network:
        aliases:
          - api-net

  nginx:
    build:
      context: ./backend/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api

networks:
  api-network:
Thought I'd post this as an answer instead of a comment since it was cleaner.
Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101
name: Server Build & Push

on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2

      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env

      - name: Build image
        run: docker compose -f docker-compose.prod.yml build

      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}

      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600

      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push

      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh
In the final step, the directory in my case just contains a .env file and my prod compose file, but these things could also be rsynced/copied/automated as another step in this workflow before actually running things.
My init.sh simply contains:
docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth
The --with-registry-auth part is important, since my docker-compose has image: entries that point at containers in DigitalOcean's container registry. So on my server, I'd already logged in once when I first set up the directory.
With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't use .env) and, with the registry already authenticated, pulls the relevant images and restarts things as needed!
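For context, a minimal sketch of what such a docker-compose.prod.yml might look like - the registry path and service names below are hypothetical placeholders, not taken from my actual file:
version: "3.8"
services:
  api:
    # docker compose build builds this image; docker compose push pushes this tag.
    image: registry.digitalocean.com/my-registry/api:latest
    build:
      context: ./server
  nginx:
    image: registry.digitalocean.com/my-registry/nginx:latest
    build:
      context: ./nginx
    ports:
      - "80:80"
    depends_on:
      - api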
This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.
When I run CircleCI, the first few tests fail due to Elasticsearch not being fully set up yet.
Usually I would use the dockerize library to wait for Elasticsearch to be finished, however this does not seem to detect Elasticsearch. Any ideas why? The dockerize command simply times out. However, the Elasticsearch container seems to be running because if I do not wait, eventually Elasticsearch starts to work with the tests.
Here is my CircleCI config file
version: 2
jobs:
  build:
    parallelism: 3
    working_directory: ~/export-opportunities
    docker:
      - image: circleci/ruby:2.5.5-node
        environment:
          BUNDLE_JOBS: 3
          BUNDLE_RETRY: 3
          BUNDLE_PATH: vendor/bundle
          PGHOST: localhost
          PGUSER: user
          RAILS_ENV: test
      - image: circleci/postgres:latest
        environment:
          POSTGRES_USER: user
          POSTGRES_DB: circle_test
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      - image: circleci/redis:4.0.9
        environment:
          REDIS_URL: "redis://localhost:6379/"
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
        environment:
          cluster.name: elasticsearch
          xpack.security.enabled: false
          transport.host: localhost
          network.host: 127.0.0.1
          http.port: 9200
          discovery.type: single-node
    branches:
      only: chore/XOT-597-circleci
    steps:
      - checkout # check out the code in the project directory
      # restore bundle cache
      - restore_cache:
          keys:
            - exops-{{ checksum "Gemfile.lock" }}
      - run:
          name: Bundle Install
          command: bundle check || bundle install
      # store bundle cache
      - save_cache:
          key: exops-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      # Database setup
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && sudo tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: |
            bundle exec rake db:create
            bundle exec rake db:migrate
      # Redis setup
      - run:
          name: Wait for Redis
          command: dockerize -wait tcp://localhost:6379 -timeout 1m
      # DOES NOT WORK:
      - run:
          name: Wait for Elasticsearch
          command: dockerize -wait http://localhost:9200 -timeout 2m
      # Run rspec in parallel
      - run: |
          echo Running test ...
          bundle exec rspec --profile 10 \
            --format RspecJunitFormatter \
            --out test_results/rspec.xml \
            --format progress \
            $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
      # Save test results for timing analysis
      - store_test_results:
          path: test_results
Note I've also tried dockerize -wait tcp://localhost:9200 -timeout 2m, and dockerize -wait http://127.0.0.1:9200 -timeout 2m, and dockerize -wait tcp://127.0.0.1:9200 -timeout 2m to no effect.
I tried adding sleep 10 and sleep 100; however, the issue persisted.
The issue was that the tests were running before the index was created. Index creation was triggered by the first test to run, but took a few seconds, so the first few tests always failed.
My solution was to add the following code, which builds the index if it is not present, to rails_helper.rb, which runs when the Rails environment starts. In most other environments the indices already exist, so it does not slow down other processes.
# Build initial indices if not present, e.g. CircleCI
[Opportunity, Subscription].each do |model|
  unless model.__elasticsearch__.index_exists? index: model.__elasticsearch__.index_name
    model.__elasticsearch__.create_index!(force: true)
    sleep 2
  end
end
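For the startup-time half of the problem (as opposed to index creation), Elasticsearch's own cluster health API can also serve as a readiness gate. A sketch of an alternative wait step, assuming the cluster listens on localhost:9200 as configured above:
- run:
    name: Wait for Elasticsearch
    command: |
      # Retry while Elasticsearch boots, then block until cluster status is at least yellow.
      curl --retry 30 --retry-delay 5 --retry-connrefused -sf \
        "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=120s"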