How to pass env vars to a container built with GitHub Actions? - docker

I'm creating a CI/CD process using Docker, Heroku and GitHub Actions, but I've got an issue with env variables.
Right now when I run heroku logs I see MongoServerError: bad auth : Authentication failed., and I think the problem is in passing envs to my container from GitHub Actions, because in code I simply read process.env.MONGODB_PASS.
In docker-compose.yml I use envs from an .env file, but GitHub Actions can't use this file because I put it into .gitignore...
Here is my config:
.github/workflows/main.yml
name: Deploy

on:
  push:
    branches:
      - develop

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
          heroku_app_name: "---"
          heroku_email: "---"
          usedocker: true
docker-compose.yml
version: '3.7'

networks:
  proxy:
    external: true

services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "---"]
    networks:
      - proxy
  worker:
    container_name: ---
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
Can someone tell me how to resolve this issue? Thanks for any help!
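Since the .env file is git-ignored, one common pattern is to recreate it inside the workflow from repository secrets before the deploy step. A sketch (MONGODB_PASS is an assumed secret name; any variables your code reads would go here):

```yaml
# Sketch: rebuild the git-ignored .env from GitHub repository secrets
# so docker-compose's env_file entry finds it in CI.
- name: Create .env from secrets
  run: |
    echo "MONGODB_PASS=${{ secrets.MONGODB_PASS }}" >> .env
```

Alternatively, since usedocker: true builds from your Dockerfile on Heroku, runtime variables like MONGODB_PASS can be set as Heroku config vars instead of being baked into the image.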

Related

docker compose networks inside GitHub actions

So, I'm attempting to replicate a flow for setting up my Docker stack, which works well locally, inside a GitHub Action, to perform some testing under conditions as close to production / real-world scenarios as possible.
However, I'm running into the following issue in the GitHub Actions workflow, which causes a failure:
psycopg2.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution
Essentially, what works in terms of networks locally doesn't work inside the GitHub Action. The only apparent difference is the version of Docker used within the GitHub Action.
This is my workflow/ci.yml file:
name: perseus/ci

on:
  pull_request:
    branches:
      - main
    paths-ignore:
      - '__pycache__'
      - '.pytest_cache'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    name: CI/CD Build & Test w/pytest
    strategy:
      matrix:
        os: [ ubuntu-latest ]
    runs-on: ${{ matrix.os }}
    env:
      PROJECT_NAME: "Perseus FastAPI"
      FIRST_SUPERUSER_EMAIL: ${{ secrets.FIRST_SUPERUSER_EMAIL }}
      FIRST_SUPERUSER_PASSWORD: ${{ secrets.FIRST_SUPERUSER_PASSWORD }}
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
      POSTGRES_SERVER: "db"
      POSTGRES_PORT: "5432"
      POSTGRES_DB: "postgres"
      SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
      SERVER_NAME: "perseus"
      SERVER_HOST: "https://perseus.observerly.com"
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Environment File
        run: |
          # append with >>; a single > would truncate .env on every line
          touch .env
          echo PROJECT_NAME=${PROJECT_NAME} >> .env
          echo FIRST_SUPERUSER_EMAIL=${FIRST_SUPERUSER_EMAIL} >> .env
          echo FIRST_SUPERUSER_PASSWORD=${FIRST_SUPERUSER_PASSWORD} >> .env
          echo POSTGRES_USER=${POSTGRES_USER} >> .env
          echo POSTGRES_PASSWORD=${POSTGRES_PASSWORD} >> .env
          echo POSTGRES_SERVER=${POSTGRES_SERVER} >> .env
          echo POSTGRES_PORT=${POSTGRES_PORT} >> .env
          echo POSTGRES_DB=${POSTGRES_DB} >> .env
          echo SENTRY_DSN=${SENTRY_DSN} >> .env
          echo SERVER_NAME=${SERVER_NAME} >> .env
          echo SERVER_HOST=${SERVER_HOST} >> .env
          cat .env
      - name: Docker Compose Build
        run: docker compose -f local.yml build --build-arg INSTALL_DEV="true"
      - name: Docker Compose Up
        run: docker compose -f local.yml up -d
      - name: Alembic Upgrade Head (Run Migrations)
        run: docker compose -f local.yml exec api alembic upgrade head
      - name: Seed Body (Stars, Galaxies etc) Data
        run: docker compose -f local.yml exec api ./scripts/init_db_seed.sh
Essentially, all steps work up until the api service needs to talk to the db service over the network, e.g. db:5432.
My local.yml file is as follows:
version: '3.8'

services:
  traefik:
    image: traefik:latest
    container_name: traefik_proxy
    restart: always
    security_opt:
      - no-new-privileges:true
    command:
      ## API Settings - https://docs.traefik.io/operations/api/, endpoints - https://docs.traefik.io/operations/api/#endpoints ##
      - --api.insecure=true # <== Enabling insecure api, NOT RECOMMENDED FOR PRODUCTION
      - --api.dashboard=true # <== Enabling the dashboard to view services, middlewares, routers, etc...
      - --api.debug=true # <== Enabling additional endpoints for debugging and profiling
      ## Log Settings (options: ERROR, DEBUG, PANIC, FATAL, WARN, INFO) - https://docs.traefik.io/observability/logs/ ##
      - --log.level=ERROR # <== Setting the level of the logs from traefik
      ## Provider Settings - https://docs.traefik.io/providers/docker/#provider-configuration ##
    labels:
      # Enable traefik on itself to view dashboard and assign subdomain to view it
      - traefik.enable=false
      # Setting the domain for the dashboard
      - traefik.http.routers.api.rule=Host(`traefik.docker.localhost`)
      # Enabling the api to be a service to access
      - traefik.http.routers.api.service=api@internal
    ports:
      # HTTP
      - 80:80
      # HTTPS / SSL port
      - 443:443
    volumes:
      # Volume for docker admin
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Map the static configuration into the container
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      # Map the configuration into the container
      - ./traefik/config.yml:/etc/traefik/config.yml:ro
      # Map the certificates into the container
      - ./certs:/etc/certs:ro
    networks:
      - web
  api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 5000 --reload --workers 1 --ssl-keyfile "./certs/local-key.pem" --ssl-certfile "./certs/local-cert.pem" --ssl-cert-reqs 1
    container_name: perseus_api
    restart: always
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
    links:
      - db:db
    env_file:
      - .env
    labels:
      # The following labels define the behavior and rules of the traefik proxy for this container
      # For more information, see: https://docs.traefik.io/providers/docker/#exposedbydefault
      # Enable this container to be mapped by traefik:
      - traefik.enable=true
      # URL to reach this container:
      - traefik.http.routers.web.rule=Host(`perseus.docker.localhost`)
      # URL to reach this container for secure traffic:
      - traefik.http.routers.websecured.rule=Host(`perseus.docker.localhost`)
      # Defining entrypoint for https:
      - traefik.http.routers.websecured.entrypoints=websecured
    networks:
      - web
      - api
  db:
    image: postgres:14-alpine
    container_name: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - ./scripts/init_pgtrgm_extension.sql:/docker-entrypoint-initdb.d/init_pgtrgm_extension.sql
    ports:
      - 5432:5432
    env_file:
      - .env
    networks:
      - api

volumes:
  postgres_data:

networks:
  web:
    name: web
  api:
    name: api
    driver: bridge
Are there any networking or GitHub Actions tips that could help me get over this seemingly small hurdle?
I've done as much research as I can into the problem, but I can't seem to see what the solution could be...
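One thing worth double-checking in that workflow is the Setup Environment File step: a list of repeated echo redirections is easy to get wrong, and a stray `>` instead of `>>` silently truncates the file so only the last variable survives. A heredoc writes the whole file in one go. A sketch (only two of the variables shown):

```shell
# Sketch: write .env in a single redirection (heredoc) rather than one
# echo per variable; a stray ">" in a list of ">>" appends would
# silently truncate the file each time.
POSTGRES_USER="postgres"
POSTGRES_SERVER="db"
cat > .env <<EOF
POSTGRES_USER=${POSTGRES_USER}
POSTGRES_SERVER=${POSTGRES_SERVER}
EOF
cat .env
```

If the file only ever contains its last line, compose never sees POSTGRES_SERVER=db (or anything else), which can make CI behave very differently from a local run where the full .env exists.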

Gitlab CI shell runner fails with docker-compose up

I'm trying to start multiple docker containers with a WSL shell runner. After running
compose_job:
  tags:
    - wsl
  stage: compose
  script:
    - cd /pathToComposeFile
    - docker-compose up
  dependencies:
    - pull_job
the runner exited with the following error:
$ docker-compose up
docker: invalid reference format: repository name must be lowercase.
The docker-compose.yml is:
version: '3'
services:
  cron:
    build: cron/.
    container_name: cron
    image: cron_image
    ports:
      - 6040:6040
The referenced images are all written in lowercase, and the same command exited as expected
when run manually. I already checked that docker-compose is accessible and that the docker-compose.yml is readable. How can I resolve this issue? Thank you in advance!
I think service_name, container_name and env must be lowercase.
Something like:
version: '3'
services:
  perihubapi:
    build:
      context: api/.
      args:
        EXTERNAL: ${external}
        FASERVICES: ${faservices}
    container_name: perihubapi
    image: peri_hub_api_image
    ports:
      - 6020:6020
    networks:
      - backend
    volumes:
      - peridigm_volume:/app/peridigmJobs
      - paraView_volume:/app/paraView
      - secrets:/app/certs
  perihubgui:
    build: gui/.
    container_name: perihubgui
    image: peri_hub_gui_image
    ports:
      - 6010:6010
    networks:
      - backend
    volumes:
      - secrets:/app/certs
  peridigm:
    build:
      context: peridigm/.
      args:
        GITLAB_USER: ${gitlab_user}
        GITLAB_TOKEN: ${gitlab_token}
        PERIDOX: ${peridox}
    container_name: peridigm
    image: peridigm_image
    ports:
      - 6030:6030
    networks:
      - backend
    volumes:
      - peridigm_volume:/app/peridigmJobs
  paraview:
    build:
      context: paraview/.
    container_name: paraview
    image: paraview_image
    volumes:
      - paraView_volume:/app/paraView
  cron:
    build: cron/.
    container_name: cron
    image: cron_image
    ports:
      - 6040:6040
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  peridigm_volume:
  paraView_volume:
  secrets:
    external: true
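If a name comes from a CI variable rather than a literal, it can be normalised before docker-compose interpolates it. A sketch (IMAGE_NAME is a made-up stand-in for whatever variable the runner supplies):

```shell
# Sketch: force a possibly-mixed-case name to lowercase, since Docker
# rejects image references containing uppercase characters.
IMAGE_NAME="Cron_Image"
LOWER_NAME=$(printf '%s' "$IMAGE_NAME" | tr '[:upper:]' '[:lower:]')
echo "$LOWER_NAME"   # cron_image
```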

Docker-compose toolbox secrets files not mounting properly

I am trying to compose a stack using secrets.
For development, I use local files in docker/secrets/FILE_NAME.
I had this working on Windows 10, but I'm struggling to get it to work under Win7 Toolbox.
I get an error:
Cannot create container for service db:
invalid volume specification:
'C:\project\docker\secrets\DB_USERNAME:/run/secrets/db-username:ro'
I tried setting COMPOSE_CONVERT_WINDOWS_PATH, but unfortunately this does not change anything; I get the same output with true or false.
Setting absolute paths did not help either.
Docker Compose version 1.16.1, build 6d1ac219
Docker version 17.10.0-ce, build f4ffd25
My docker-compose.yml
version: '3.3'
services:
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
      - POSTGRES_USER_FILE=/run/secrets/db-username
      - POSTGRES_DB_FILE=/run/secrets/db-name
    secrets:
      - db-username
      - db-password
      - db-name
  web:
    build:
      context: ./docker/machines/django/
      args:
        buildno: 1
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./Server:/Server
    ports:
      - "8000:8000"
    depends_on:
      - db
    secrets:
      - db-username
      - db-password
      - db-name
secrets:
  db-username:
    file: ./docker/secrets/DB_USERNAME
  db-password:
    file: ./docker/secrets/DB_PASSWORD
  db-name:
    file: ./docker/secrets/DB_NAME
volumes:
  db-data:
    driver: "local"
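Note that the environment variable docker-compose actually reads is COMPOSE_CONVERT_WINDOWS_PATHS (plural); with it set to 1, compose performs roughly the conversion below itself. A hand-rolled sketch of that path rewrite, using the path from the error message:

```shell
# Sketch: rewrite a Windows-style path into the /c/... form that Docker
# Toolbox's VirtualBox shares can actually mount.
WIN_PATH='C:\project\docker\secrets\DB_USERNAME'
DRIVE=$(printf '%s' "$WIN_PATH" | cut -c1 | tr '[:upper:]' '[:lower:]')   # "C" -> "c"
REST=$(printf '%s' "$WIN_PATH" | cut -c3- | tr '\\' '/')                  # drop "C:", flip slashes
UNIX_PATH="/$DRIVE$REST"
echo "$UNIX_PATH"   # /c/project/docker/secrets/DB_USERNAME
```

Also bear in mind that Toolbox's default VirtualBox share only covers C:\Users, so paths outside it won't mount regardless of formatting.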

Docker essentials: Why is `docker-compose pull` not enough? Why do I need `git pull` too?

I have a problem where push (on dev) and pull (on production) via docker-compose is not enough, and old content is served by my Nginx web service. When I also push and pull via Git, everything works. Why?
Oops, I found it out. My docker-compose.yml looks like this:
version: "2.1"
services:
  …
  web:
    build:
      args:
        HTTP_PROXY: ${HTTP_PROXY}
        HTTPS_PROXY: ${HTTPS_PROXY}
        NO_PROXY: ${NO_PROXY}
      context: web
    depends_on:
      - db
    env_file:
      - .env
    image: user/repo
    ports:
      - "80:80"
    restart: always
    volumes:
      - ./status.txt:/opt/status.txt
      - ./web/sites-available/default:/etc/nginx/sites-available/default
      - ./web/www:/var/www/html
      - ./dump:/opt/dump
As you can see, my web folder is taken from my local ./web/www folder and not from the image… :D
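A common way to keep this from biting in production (a sketch; docker-compose.prod.yml is a name I made up) is a second compose file without the development bind mounts, so Nginx serves what was baked into the image:

```yaml
# docker-compose.prod.yml (hypothetical): no ./web/www bind mount, so the
# content copied into user/repo at build time is what actually gets served.
version: "2.1"
services:
  web:
    image: user/repo
    ports:
      - "80:80"
    restart: always
```

On the production host you would then run docker-compose -f docker-compose.prod.yml pull followed by docker-compose -f docker-compose.prod.yml up -d.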

How to write an Ansible playbook with Docker-compose

Please help: I have the docker-compose file below, and I want to write an Ansible playbook that runs it on localhost and on a remote target.
version: '2.0'
services:
  weather-backend:
    build: ./backend
    volumes: # map backend dir and package inside container
      - './backend/:/usr/src/'
      - './backend/package.json:/usr/src/package.json'
    #ports:
    #  - "9000:9000" # expose backend port - Host:container
    ports:
      - "9000:9000"
    command: npm start
  weather-frontend:
    build: ./frontend
    depends_on:
      - weather-backend
    volumes:
      - './frontend/:/usr/src/'
      - '/usr/src/node_modules'
    ports:
      - "8000:8000" # expose ports - Host:container
    environment:
      NODE_ENV: "development"
Since Ansible 2.1, you can have a look at the docker_compose module, which can read a docker-compose.yml directly.
Playbook task:
- name: run the services defined in my_project's docker-compose.yml
  docker_compose:
    project_src: /path/to/my_project
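Wrapped in a minimal playbook, targeting both localhost and a remote group might look like this (the webservers group name is a placeholder for your own inventory):

```yaml
# Sketch: run the compose project on localhost and a hypothetical
# "webservers" inventory group; docker-compose.yml is expected to live
# in /path/to/my_project on each target.
- hosts: localhost:webservers
  tasks:
    - name: run the services defined in docker-compose.yml
      docker_compose:
        project_src: /path/to/my_project
        state: present
```

The targets still need Docker, docker-compose and the Python docker SDK installed for the module to work.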
