Get a service container name in workflow step - docker

My general question is: how do I get the name of a running service container in a GitHub workflow?
I have a Keycloak container set up as a service, and I want to import a realm by executing a script inside the Keycloak container. Here is a snippet of my workflow:
name: Test Workflow
on:
  push:
    branches-ignore:
      - main
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    services:
      keycloak:
        image: quay.io/keycloak/keycloak:12.0.4
        env:
          KEYCLOAK_USER: "admin"
          KEYCLOAK_PASSWORD: "admin"
          JAVA_OPTS_APPEND: "-Dkeycloak.profile.feature.upload_scripts=enabled"
        ports:
          - "8091:8080"
        volumes:
          - "/workspace/src/main/resources/keycloak:/src/main/resources/keycloak/"
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Java
        uses: actions/setup-java@v1
        with:
          java-version: 14
      - name: List running containers
        run: docker ps -a
      - name: Setup Keycloak realm
        run: |
          docker exec -it keycloak sh -c
          "/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin &&
          /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=testrealm -s enabled=true &&
          /opt/jboss/keycloak/bin/kcadm.sh create partialImport -r testrealm -s ifResourceExists=SKIP -o -f /src/main/resources/keycloak/realm.json"
      - name: Gradle Test
        run: ./gradlew test
[...]
To connect to a running container, I need its name. The service name keycloak doesn't work; the GitHub Actions log shows this list of running containers:
Run docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fdb7e1e05296 quay.io/keycloak/keycloak:12.0.4 "/opt/jboss/tools/do…" 55 seconds ago Up 47 seconds 8443/tcp, 0.0.0.0:8091->8080/tcp 594297e586cd4bdab13cc8fa63b8954d_quayiokeycloakkeycloak1104_1ac754
Is there a way to connect to a running container by a predictable container name?

Two options:
You set the --name in the service object options:
jobs:
  test:
    name: Test
    ...
    services:
      keycloak:
        ...
        options: --name keycloak --hostname keycloak
See the docker create documentation for the possible options, and the workflow syntax documentation.
According to this example, the key of your service object can be used as the hostname. But this only seems to be relevant when the job itself runs inside a container.
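For example, with the --name option above in place, the exec step from the question can address the container directly by that name (a minimal sketch; note the -t flag is dropped, since workflow steps allocate no TTY):

    services:
      keycloak:
        image: quay.io/keycloak/keycloak:12.0.4
        options: --name keycloak
        ...
    steps:
      - name: Setup Keycloak realm
        # "keycloak" now resolves as a container name for docker exec.
        run: >
          docker exec keycloak sh -c
          "/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin"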

Related

Github Action with selenium and docker

I am currently working on a CI pipeline for a project and just started setting up a GitHub Action for running integration tests, but I can't get it to work.
My action looks like this:
name: Integration Tests
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      selenium:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
        options: --shm-size="2g"
    steps:
      - uses: actions/checkout@v2
      - name: Get IP Address
        run: echo "##[set-output name=ip;]$(ifconfig eth0 | grep 'inet [0-9\.]* ' -o | sed 's/[^0-9\.]//g')"
        id: ip_addr
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install Python dependencies
        uses: py-actions/py-dependency-install@v2
        with:
          path: /path/to/requirements
      - name: Run Tests
        run: python3 main.py --backend http://localhost:8080/ --frontend http://${{ steps.ip_addr.outputs.ip }}:4200 --selenium http://localhost:4444/wd/hub
main.py starts two Docker containers (which expose their corresponding ports) and runs a suite of Selenium tests. It works on my local machine against a container I run with docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome:latest. It takes the URL for the backend and the one for the frontend, which is used by Selenium.
I think I have to use the IP address of the runner so selenium can access the site (since it runs locally, localhost doesn't work), but it fails with the following error:
File "main.py", line 57, in <module>
testcase.run()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_cases/t1.py", line 14, in run
self.web_driver.accept_cookies()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 64, in accept_cookies
self.wait_and_click(By.ID, 'confirmCookies')
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 30, in wait_and_click
WebDriverWait(self.driver, 10).until(
File "/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
It seems the page just never loads, but I don't know how to fix this. I guess it could also be that I messed up the networking somehow?
I think the service container is referenced by its service name instead of localhost, i.e. in your example:
--selenium http://localhost:4444/wd/hub
would be:
--selenium http://selenium:4444/wd/hub
as you defined it:
services:
  selenium:
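Applied to the workflow above, only the Selenium URL in the Run Tests step changes (a sketch of that one step):

- name: Run Tests
  run: python3 main.py --backend http://localhost:8080/ --frontend http://${{ steps.ip_addr.outputs.ip }}:4200 --selenium http://selenium:4444/wd/hub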

docker run dynamodb-local on Github Actions Workflow hanging

I'm currently working on a small CI/CD project that runs a series of tests on GitHub Actions using dynamodb-local whenever I update my code, and then packages and deploys if the tests are successful.
I have the following workflow:
name: backend_actions
on:
  workflow_dispatch:
  push:
    paths:
      - 'backend/*'
    branches:
      - master
jobs:
  test-locally:
    runs-on: ubuntu-latest
    outputs:
      test-result: ${{ steps.run-tests.outputs.result }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Setup local DynamoDB
        run: docker run -p 8000:8000 amazon/dynamodb-local
      - name: Create table
        run: aws dynamodb create-table --cli-input-json file://backend/src/test/make_table.json --endpoint-url http://localhost:8000
      - name: start local API Gateway
        run: sam local start-api --env-vars backend/env.json
      - id: run-tests
        name: Run tests
        run: |
          python backend/src/test_dynamoDB_lambda.py
          echo "::set-output name=result::$?"
  update_backend:
    needs: test-locally
    if: ${{ needs.test-locally.outputs.test-result == '0' }}
    runs-on: ubuntu-latest
    steps:
      - name: Package and deploy
        run: |
          aws cloudformation package --s3-bucket cloud-resume-bucket \
            --template-file backend/template.yaml --output-template-file backend/gen/template-gen.yaml
          aws cloudformation deploy --template-file backend/gen/template-gen.yaml --stack-name cloud-formation-resume \
            --capabilities CAPABILITY_IAM
When I try running the workflow in GitHub Actions, it gets to the 'Setup local DynamoDB' step, outputs the text below, and then hangs.
Run docker run -p 8000:8000 amazon/dynamodb-local
Unable to find image 'amazon/dynamodb-local:latest' locally
latest: Pulling from amazon/dynamodb-local
2cbe74538cb5: Pulling fs layer
137077f50205: Pulling fs layer
58932e640a40: Pulling fs layer
58932e640a40: Verifying Checksum
58932e640a40: Download complete
2cbe74538cb5: Verifying Checksum
2cbe74538cb5: Download complete
137077f50205: Verifying Checksum
137077f50205: Download complete
2cbe74538cb5: Pull complete
137077f50205: Pull complete
58932e640a40: Pull complete
Digest: sha256:bdd26570dc0e0ae49e1ea9d49ff662a6a1afe9121dd25793dc40d02802e7e806
Status: Downloaded newer image for amazon/dynamodb-local:latest
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: true
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
Seems like it can find the Docker image and download it fine, but it stops while initializing? This is my first time working with GitHub Actions and Docker, so I'm not really sure why it hangs on GitHub Actions but not on my own computer; any help would be appreciated!
When you run the command docker run -p 8000:8000 amazon/dynamodb-local, the process never exits, so the GitHub run step doesn't actually know when to move on to the next step; it just hangs there forever.
What I did in my project is simply background it by adding & after the command:
- name: Setup local DynamoDB
  run: docker run -p 8000:8000 amazon/dynamodb-local &
GitHub will start the Docker container and move on to the next run step, and when all the steps are done it'll just kill the container as part of normal cleanup. Because of this, you don't need to worry about shutting it down at the end.
The difficulty with this approach is that it takes several seconds for DynamoDB-local to start up, but your next step relies on it and will likely throw ECONNREFUSED errors.
What I've done in my project is to have the next run step execute a script that attempts to list tables, retrying with a short delay until it gets back a response.
The bash command is simply (you would need to put this in a while+try/catch loop):
aws dynamodb list-tables --endpoint-url http://localhost:8000
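As a workflow step, that retry loop could look like this (a sketch; it assumes the AWS CLI and credentials are already configured, as in the workflow above):

- name: Wait for DynamoDB-local
  run: |
    # Poll until DynamoDB-local answers; give up after roughly 30 seconds.
    for i in $(seq 1 30); do
      aws dynamodb list-tables --endpoint-url http://localhost:8000 && exit 0
      echo "Waiting for DynamoDB-local..."
      sleep 1
    done
    echo "DynamoDB-local never became ready" && exit 1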
As a guide, this is (roughly) what I do in JavaScript, using the aws-sdk on Node.js 16:
// wait-for-dynamodb.js
import timers from 'timers/promises'
import AWS from 'aws-sdk'

// Region/credentials come from the environment; the endpoint must point
// at the local container (assumed to be on localhost:8000 as above).
const dynamodb = new AWS.DynamoDB({ endpoint: 'http://localhost:8000' })

const waitForDynamoDbToStart = async () => {
  try {
    await dynamodb.listTables().promise()
  } catch (error) {
    console.log('Waiting for Docker container to start...')
    await timers.setTimeout(500)
    return waitForDynamoDbToStart()
  }
}

const start = Date.now()
waitForDynamoDbToStart()
  .then(() => {
    console.log(`DynamoDB-local started after ${Date.now() - start}ms.`)
    process.exit(0)
  })
  .catch(error => {
    console.log('Error starting DynamoDB-local!', error)
    process.exit(1)
  })
Then I simply have that in the run steps:
- name: Setup local DynamoDB
  run: docker run -p 8000:8000 amazon/dynamodb-local &
- name: Wait for it to boot up
  run: node ./wait-for-dynamodb.js
# now you're guaranteed to have DynamoDB-local running

Does dockerized GitHub Actions support network options for docker run parameters

I am using self-hosted GitHub runners for VPN access to some software, and I am trying to use a dockerized GitHub action on the self-hosted runners. I am having issues because I need to specify the --network host flag when GitHub Actions runs docker run. Is there a way to have the GitHub action use the network of the host?
As far as I know, it is not possible: the option is not available on steps, only on jobs. The only other way is for you to create a composite action and run docker run ... directly in it. Here is one that I wrote for my own workflow. It's slightly more complicated, but it allows you to automatically pass environment variables from the runner to the Docker container based on a variable name prefix:
name: Docker start container
description: Start a detached container
inputs:
  image:
    description: The image to use
    required: true
  name:
    description: The container name
    required: true
  options:
    description: Additional options to pass to docker run
    required: false
    default: ''
  command:
    description: The command to run
    required: false
    default: ''
  env_pattern:
    description: The environment variable pattern to pass to the container
    required: false
    default: ''
outputs:
  cid:
    description: Container ID
    value: ${{ steps.info.outputs.cid }}
runs:
  using: composite
  steps:
    - name: Run
      shell: bash
      run: >
        variables='';
        for i in $(env | grep '${{ inputs.env_pattern }}' | awk -F '=' '{print $1}'); do
        variables="--env ${i} ${variables}";
        done;
        docker run -d
        --name ${{ inputs.name }}
        --network host
        --cidfile ${{ inputs.name }}.cid
        ${variables}
        ${{ inputs.options }}
        ${{ inputs.image }}
        ${{ inputs.command }}
    - name: Info
      id: info
      shell: bash
      run: echo "::set-output name=cid::$(cat ${{ inputs.name }}.cid)"
and to use it:
- name: Start app container
  uses: ./.github/actions/docker-start-container
  with:
    image: myapp/myapp:latest
    name: myapp
    env_pattern: 'MYAPP_'
    options: --entrypoint entrypoint.sh
    command: >
      --check
      -v
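Since the action exposes the container ID as its cid output, later steps can use it, for example to stop the container during cleanup (a sketch; the step id start-app is hypothetical):

- name: Start app container
  id: start-app
  uses: ./.github/actions/docker-start-container
  with:
    image: myapp/myapp:latest
    name: myapp
- name: Stop app container
  if: always()
  run: docker stop ${{ steps.start-app.outputs.cid }}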

How can I wait for the container to be healthy in GitHub action?

I am using GitHub Actions to do some automated testing, and my application runs in Docker.
name: Docker Image CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image
        run: docker-compose build
      - name: up mysql and apache container runs
        run: docker-compose up -d
      - name: install dependencies
        run: docker exec myapp php composer.phar install
      - name: show running container
        run: docker ps
      - name: run unit test
        run: docker exec myapp ./vendor/bin/phpunit
At the 'show running container' step, I can see that all the containers are running, but for MySQL the status is (health: starting). Thus, my unit test cases all fail, as they require a connection to MySQL. So may I know if there is a way to start the unit tests only when the MySQL container's status is healthy?
I would like to offer a solution. Not a smart one, but it requires minimal configuration and is ready to go: just use the GitHub Action for Sleeping.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Sleep for 30 seconds
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
Assumption: your MySQL server will be up and running within 30s.
You can use thegabriele97/dockercompose-health-action:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check services healthiness
        uses: thegabriele97/dockercompose-health-action@main
        with:
          timeout: '60'
          workdir: 'src'
As the documentation states:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
If you can't implement this at the moment, you can write a simple script that indefinitely retries a simple statement against the database. Once the script succeeds, you exit the loop and start your unit tests. Check the documentation link I've provided; you'll find an example of such a script there (wait-for-it.sh).
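Such a script could look like this as a workflow step placed before the unit tests (a sketch; the container name mysql and the credentials are hypothetical, so match them to your docker-compose.yml):

- name: wait for mysql
  run: |
    # Retry a trivial statement until the server accepts connections.
    for i in $(seq 1 30); do
      docker exec mysql mysql -uroot -proot -e 'SELECT 1' && exit 0
      echo "Waiting for MySQL..."
      sleep 2
    done
    echo "MySQL never became ready" && exit 1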
My approach was to use:
in my docker-compose.yml file:
healthcheck:
  test: curl --fail http://localhost/ping || exit 1
  interval: 2s
  retries: 10
  start_period: 10s
  timeout: 10s
in my GitHub Actions workflow:
- name: Wait for healthchecks
  run: timeout 60s sh -c 'until docker ps | grep <CONTAINER_NAME> | grep -q healthy; do echo "Waiting for container to be healthy..."; sleep 2; done'
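An equivalent check reads the health state directly from docker inspect instead of grepping docker ps output (a sketch; <CONTAINER_NAME> as above):

- name: Wait for healthchecks
  run: timeout 60s sh -c 'until [ "$(docker inspect -f "{{.State.Health.Status}}" <CONTAINER_NAME>)" = "healthy" ]; do echo "Waiting for container to be healthy..."; sleep 2; done'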
As stated in the documentation:
On Linux and macOS runners, use the sleep command:
- name: Sleep for 30 seconds
  run: sleep 30s
  shell: bash
On Windows runners, use the Start-Sleep command:
- name: Sleep for 30 seconds
  run: Start-Sleep -s 30
  shell: powershell

GitLab CI, connect from docker dind to Elastic Search service

I have tests which run in a Docker container. For this I use the docker-dind service. My .gitlab-ci.yml:
image: "docker:17"
variables:
DOCKER_DRIVER: overlay2
services:
- docker:dind
- name: docker.elastic.co/elasticsearch/elasticsearch:5.5.2
alias: elasticsearch
command: [ "bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node" ]
stages:
- test
before_script:
- apk --update add py2-pip python3 bash zip ansible openssh git docker-py curl
- pip3 install docker-compose
- docker info
- docker-compose --version
# Login to registry.gitlab.com
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
test:
script:
- curl "http://elasticsearch:9200" # this works
- docker-compose docker-compose.test.yml build --pull
- docker-compose docker-compose.test.yml run app
stage: test
My tests use ES, and for this I added the ES service, but I can't connect to the ES cluster from the container where my tests run.
On the machine with the runner, while CI is running, I have:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f5d9a64bbe8e 1ee5a99eba5f "gitlab-runner-build" 1 second ago Up Less than a second runner-b7dcaf73-project-2199705-concurrent-1-predefined-0
c85c49d35946 ca27036dd5e7 "bin/elasticsearch -…" 16 seconds ago Up 15 seconds 9200/tcp, 9300/tcp runner-b7dcaf73-project-2199705-concurrent-1-docker.elastic.co__elasticsearch__elasticsearch-1
57472d0300ad 85e924caedbd "dockerd-entrypoint.…" 17 seconds ago Up 16 seconds 2375/tcp runner-b7dcaf73-project-2199705-concurrent-1-docker-0
f5d9a64bbe8e is the container with the runner; I can enter this container, run curl "http://elasticsearch:9200", and it works.
57472d0300ad is the dind container, right? I can enter this container, but curl "http://elasticsearch:9200" doesn't work there. Inside it, docker ps shows:
/ # docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
252f0588a41d backend_app "make _inside_docker…" 6 seconds ago Up 5 seconds 8080/tcp backend_app_run_116d12907320
cd0ebb2f1d2d postgres:9.6 "docker-entrypoint.s…" 7 seconds ago Up 6 seconds 5432/tcp backend_postgresql_1
How can I connect from my container with the tests (252f0588a41d) to the container with ES?
Thanks.
