I have the following GitHub Actions pipeline:
name: Elixir CI
on:
  push:
    branches:
      - '*'
  pull_request:
    branches:
      - '*'
jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_PORT: 5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - name: Docker Setup Buildx
        uses: docker/setup-buildx-action@v1.6.0
        with:
          install: true
      - name: building image
        env:
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
        run: |
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
I have a single build step for an Elixir app. The Dockerfile is a multistage one: the first stage runs the tests and builds the production app, and the second copies the built application folder/tar.
DATABASE_HOST is the variable my Elixir app reads to connect to the test database.
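For reference, a minimal Dockerfile.ci sketch of the shape described — the stage names, base images, and mix tasks here are illustrative, not my exact file:

# Stage 1: run the tests, then build the production release
FROM elixir:1.12-alpine AS build
ARG DATABASE_HOST
ENV DATABASE_HOST=${DATABASE_HOST}
WORKDIR /app
COPY . .
RUN mix local.hex --force && mix local.rebar --force && mix deps.get
RUN MIX_ENV=test mix test
RUN MIX_ENV=prod mix release

# Stage 2: copy only the built application folder
FROM alpine:3.14
RUN apk add --no-cache libstdc++ ncurses-libs openssl
WORKDIR /app
COPY --from=build /app/_build/prod/rel/backend ./
CMD ["bin/backend", "start"]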
I need to run tests against Postgres, so I spawn a service container for it. I have executed the build both in a container and outside of it, but I always get the following error:
...
#19 195.9 14:10:58.624 [error] GenServer #PID<0.9316.0> terminating
#19 195.9 ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
#19 195.9 (db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
#19 195.9 (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
#19 195.9 (stdlib 3.14.2.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
#19 195.9 Last message: nil
...
So apparently postgres:5432 is not reachable. Am I missing something?
I think the problem is in DATABASE_HOST: postgres.
The service container publishes port 5432 to the host, but docker build does not run on the network where the hostname postgres resolves, so the build should use the host's IP address to reach that Postgres service, like this:
- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    DATABASE_HOST=$(ifconfig -a eth0 | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}' | tr -d "addr:")
    docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
The above first uses ifconfig to get the virtual machine's IP (the Docker host's IP), then passes it to docker build so the build container can reach Postgres.
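Alternatively — a hedged variant, not from the original answer — the gateway of Docker's default bridge network also points back at the host, so it can be asked from the daemon instead of parsing ifconfig output:

- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    # The gateway of Docker's default bridge network (usually 172.17.0.1)
    # is the host as seen from build containers.
    DATABASE_HOST=$(docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}')
    docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .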
Related
I want to automatically create an image using GitHub Actions and upload it after testing.
However, when this build is executed inside a container, I am unable to connect to the newly launched container (the launch itself probably succeeded).
Of course it works when I try to do the same thing without using containers (directly on the machine).
Why does the connection fail when I run this inside a Docker container?
jobs:
  build_and_test:
    runs-on: my-runner
    container:
      image: my-image:tag
      credentials:
        username: ${{ secrets.ID }}
        password: ${{ secrets.PW }}
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      options: --user root
    steps:
      - uses: actions/checkout@v2
      - name: build_image
        run: |
          ...
          # build image
          ...
      - name: bvt_test
        run: |
          container=$(docker run --rm -itd -p ${port}:1234 ${new_image})
          ...
          # test, but the connection failed
Thanks.
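A note on why localhost fails here: with /var/run/docker.sock mounted, docker run talks to the host's Docker daemon, so -p publishes the port on the host, not inside the job container. A hedged sketch that reaches the new container by its bridge IP instead (the /health endpoint is purely hypothetical):

- name: bvt_test
  run: |
    container=$(docker run --rm -itd -p ${port}:1234 ${new_image})
    # Ask the host daemon for the container's bridge-network IP...
    target=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${container})
    # ...and test against that IP directly instead of localhost.
    curl -fsS "http://${target}:1234/health"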
When trying to wait for the MySQL Docker container, I'm met with: Problem with dial: dial tcp 127.0.0.1:3306: connect: connection refused. Sleeping 1s
# This config is equivalent to both the '.circleci/extended/orb-free.yml' and the base '.circleci/config.yml'
version: 2.1
# Orbs are reusable packages of CircleCI configuration that you may share across projects, enabling you to create encapsulated, parameterized commands, jobs, and executors that can be used across multiple projects.
# See: https://circleci.com/docs/2.0/orb-intro/
orbs:
  node: circleci/node@5.0.1
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  version: 2
  node: # This is the name of the workflow; feel free to change it to better match your workflow.
    # Inside the workflow, you define the jobs you want to run.
    jobs:
      - build_and_test:
          # This is the node version to use for the `cimg/node` tag
          # Relevant tags can be found on the CircleCI Developer Hub
          # https://circleci.com/developer/images/image/cimg/node
          # If you are using yarn, change the line below from "npm" to "yarn"
          filters:
            branches:
              only:
                - master
executors:
  node:
    docker:
      - image: cimg/node:16.14.2
jobs:
  build_and_test:
    executor: node
    docker:
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/v$DOCKERIZE_VERSION/dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz
      - run:
          name: Wait for db
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 10s
I do see that the container is started under the spin-up environment step, so I believe it should be running:
Starting container cimg/mysql:8.0
cimg/mysql:8.0:
using image cimg/mysql@sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
pull stats: Image was already available so the image was not pulled
time to create container: 21ms
image is cached as cimg/mysql:8.0, but refreshing...
8.0: Pulling from cimg/mysql
Digest: sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
Status: Image is up to date for cimg/mysql:8.0
Time to upload agent and config: 369.899813ms
Time to start containers: 407.510271ms
However, nothing I've looked into so far has pointed me toward a solution.
Look over here: your job should be defined like the following. The MySQL image must be listed as a secondary container under the same docker: key as the primary container so they share a network. To test the SQL container you can just use nc -vz localhost 3306, but the MySQL container takes time to initialize, so wait about two minutes before testing:
jobs:
  build_and_test:
    docker:
      # Primary container image where all steps run.
      - image: cimg/node:16.14.2
      # Secondary container image on a common network.
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run: sleep 120 && nc -vz localhost 3306
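If the fixed two-minute sleep feels wasteful, a hedged alternative is to poll the port until it opens (this assumes nc is available in the primary image, which the step above already relies on):

- run:
    name: Wait for MySQL
    command: |
      # Poll every 2s, giving up after ~2 minutes.
      for i in $(seq 1 60); do
        nc -z localhost 3306 && exit 0
        sleep 2
      done
      echo "MySQL did not become ready in time"
      exit 1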
I built a Spring Boot project and I want to deploy it to minikube using GitLab CI/CD. I'm able to deploy the application by applying deployment.yml directly from my local machine.
But I get the following error when I try to deploy it from GitLab.
Error
$ kubectl apply -f deployment.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-management
spec:
  # the target number of Pods
  replicas: 2
  selector:
    matchLabels:
      app: user-management
  template:
    metadata:
      labels:
        app: user-management
    spec:
      containers:
        - name: user-management7
          image: registry.gitlab.com/PROFILE_NAME/user-management
          imagePullPolicy: Always
          ports:
            - containerPort: 8082
      imagePullSecrets:
        - name: registry.gitlab.com
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
  - mysql:8
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - test
  - deploy-tb
  - deploy-prod
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/PROFILE_NAME/user-management .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/PROFILE_NAME/user-management
test:
  image: maven:3-jdk-8
  services:
    - mysql:8
  script:
    - "mvn clean test"
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
deploy-tb:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [ "" ]
  stage: deploy-tb
  script:
    - kubectl apply -f deployment.yml
  environment:
    name: prod
    url: registry.gitlab.com/PROFILE_NAME/user-management
I don't know what I'm missing here.
According to the GitLab documentation, you first need to install the GitLab Agent for Kubernetes.
These are the steps to install the Agent in your cluster:
Define a configuration repository.
Register an agent with GitLab.
Install the agent into the cluster.
Note: On self-managed GitLab instances, a GitLab administrator needs to set up the GitLab Agent Server (KAS).
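Once the agent is registered, the deploy job selects the agent's context before running kubectl; with no cluster credentials at all, kubectl falls back to localhost:8080, which is exactly the error above. A hedged sketch — the context path PROFILE_NAME/user-management:my-agent is a placeholder for <config-project-path>:<agent-name>, not a known value:

deploy-tb:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [ "" ]
  stage: deploy-tb
  script:
    # Select the kubecontext that the GitLab agent makes available to the job.
    - kubectl config use-context PROFILE_NAME/user-management:my-agent
    - kubectl apply -f deployment.yml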
I am using GitHub Actions to trigger the build of my Dockerfile and upload the container to GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean Droplet and execute a script to pull and install the new image from GHCR. This workflow was fine while I was only building a single container in the project. Now I am using Docker Compose, as I need NGINX alongside my API. I would like to keep the containers on a single Droplet, as the project is not demanding in resources at the moment.
What is the right way to automate deployment with GitHub Actions and Docker Compose to DigitalOcean on a single VM?
My currently known options are:
Skip building containers on GHCR and fetch the repo via SSH on the remote, building from source there by executing a production compose file
Build each container on GHCR, copy the production compose file to the remote, and pull & install from GHCR
If you know more options that may be cleaner or more efficient, please let me know!
Unfortunately, all I have found for reference is a question about docker-compose with GitHub Actions for CI.
GitHub Action for single Container
name: Github Container Registry to DigitalOcean Droplet
on:
  # Trigger the workflow via push on the main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"
jobs:
  # Builds a Docker image and pushes it to GitHub Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Checkout the repository
      - name: Checking out the repository
        uses: actions/checkout@v2
      # Setting up Docker Builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      # Set a GitHub access token with the "write:packages & read:packages" scope for GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in beta, make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      # Push to GitHub Container Registry
      - name: Pushing Image to GitHub Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
  # Connect to the existing Droplet via SSH, then (re)install and run the image
  # ! Ensure you have created the Droplet from the preconfigured Docker image
  # ! Ensure you have added your SSH key to the Droplet
  # !   - it is easier to add the SSH keys before creating the Droplet
  deploy_to_digital_ocean_droplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker containers
            docker kill $(docker ps -q)
            # Free up space
            docker system prune -a
            # Login to GitHub Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            # Pull the Docker image
            docker pull ghcr.io/${{ github.repository }}:latest
            # Run a new container from the new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
Current Docker-Compose
version: "3"
services:
api:
build:
context: ./backend/api
networks:
api-network:
aliases:
- api-net
nginx:
build:
context: ./backend/nginx
ports:
- "80:80"
- "443:443"
networks:
api-network:
aliases:
- nginx-net
depends_on:
- api
networks:
api-network:
Thought I'd post this as an answer instead of a comment since it was cleaner.
Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101
name: Server Build & Push
on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env
      - name: Build image
        run: docker compose -f docker-compose.prod.yml build
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600
      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push
      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh
In the final step, the directory in my case just contains a .env file and my prod compose file, but these things could also be rsync'd/copied/automated as another step in this workflow before actually running things.
My init.sh simply contains:
docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth
The --with-registry-auth part is important since my docker-compose file has image: entries that point at DigitalOcean's container registry, and on my server I'd already logged in once when I first set up the directory.
With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't read .env), and with the registry already authenticated it pulls the relevant images and restarts things as needed!
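For context, a hedged sketch of the prod compose file this flow assumes — the registry path registry.digitalocean.com/my-registry and the service layout are placeholders, not the actual file:

version: "3.8"
services:
  api:
    build:
      context: ./backend/api
    # image: is what lets `docker compose push` and `docker stack deploy`
    # address the registry copy of this service.
    image: registry.digitalocean.com/my-registry/api:latest
  nginx:
    build:
      context: ./backend/nginx
    image: registry.digitalocean.com/my-registry/nginx:latest
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - api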
This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.
I am trying to use GitHub Actions to fire up a Postgres container for my tests. I have a script called build.sh that gets called when npm run build is invoked via GitHub Actions. This script calls upon restore-schema.sh (shown below).
The issue is that when restore-schema.sh runs, I keep getting Error: no such container: postgres. GitHub Actions names the container some arbitrary string. Is there a way I can run docker exec against the image, or somehow name the Postgres container that GitHub Actions creates? I've looked through both documentations to no avail.
How should I go about this? I noticed that in the Docker run ps screenshot, it shows command docker-entrypoint.sh. Should I use this instead? Do I specify the Dockerfile inside .github/workflows/?
I tried to include as much relevant information as possible - comment if you need any other information please.
Screenshots from GitHub Actions:
Initialize containers
Docker run ps <- docker ps showing the name postgres
Run npm run build --if-present <- WHERE THE ISSUE IS OCCURRING
build.sh
#!/bin/sh
# Import core db schema
./.deploy/postgres/restore-schema.sh
.deploy/postgres/restore-schema.sh
#!/bin/sh
# Note: psql has no --password <value> flag (it only forces a prompt), so the
# password goes in via PGPASSWORD; use -i without -t so stdin stays available
# for the redirect below.
docker exec -i -e PGPASSWORD=dev postgres psql \
  --username postgres \
  coredb < .deploy/postgres/db-schema.sql
.github/workflows/test-api-gateway.yml
name: API Gateway CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master, develop ]
jobs:
  build:
    runs-on: ubuntu-latest
    services: # Service containers to run with `container-job`
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_DB: coredb
          POSTGRES_PASSWORD: dev
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: docker ps
      - run: chmod +x build.sh .deploy/postgres/restore-schema.sh
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
Try the --name option:
options: >-
  --health-cmd pg_isready
  --health-interval 10s
  --health-timeout 5s
  --health-retries 5
  --name postgres
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
jobs.<job_id>.services.options: Additional Docker container resource options. For a list of options, see "docker create options."
Another solution I've seen is to use the last created container:
docker exec -it $(docker ps --latest --quiet) bash
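Since the service also publishes 5432 to the runner, a hedged alternative is to skip docker exec entirely and restore over TCP (this assumes the psql client is installed on the runner):

#!/bin/sh
# Restore the schema through the published port instead of docker exec.
PGPASSWORD=dev psql \
  --host localhost \
  --port 5432 \
  --username postgres \
  coredb < .deploy/postgres/db-schema.sql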