I'm currently working on a small CI/CD project that will run a series of tests on GitHub Actions using dynamodb-local whenever I update my code, and then package and deploy if the tests are successful.
I have the following workflow:
name: backend_actions
on:
  workflow_dispatch:
  push:
    paths:
      - 'backend/*'
    branches:
      - master
jobs:
  test-locally:
    runs-on: ubuntu-latest
    outputs:
      test-result: ${{ steps.run-tests.outputs.result }}
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - uses: aws-actions/setup-sam@v1
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Setup local DynamoDB
        run: docker run -p 8000:8000 amazon/dynamodb-local
      - name: Create table
        run: aws dynamodb create-table --cli-input-json file://backend/src/test/make_table.json --endpoint-url http://localhost:8000
      - name: start local API Gateway
        run: sam local start-api --env-vars backend/env.json
      - id: run-tests
        name: Run tests
        run: |
          python backend/src/test_dynamoDB_lambda.py
          echo "::set-output name=result::$?"
  update_backend:
    needs: test-locally
    if: ${{ needs.test-locally.outputs.test-result == '0' }}
    runs-on: ubuntu-latest
    steps:
      - name: Package and deploy
        run: |
          aws cloudformation package --s3-bucket cloud-resume-bucket \
            --template-file backend/template.yaml --output-template-file backend/gen/template-gen.yaml
          aws cloudformation deploy --template-file backend/gen/template-gen.yaml --stack-name cloud-formation-resume \
            --capabilities CAPABILITY_IAM
When I run the workflow in GitHub Actions, it gets to the 'Setup local DynamoDB' step, outputs the text below, and then hangs.
Run docker run -p 8000:8000 amazon/dynamodb-local
Unable to find image 'amazon/dynamodb-local:latest' locally
latest: Pulling from amazon/dynamodb-local
2cbe74538cb5: Pulling fs layer
137077f50205: Pulling fs layer
58932e640a40: Pulling fs layer
58932e640a40: Verifying Checksum
58932e640a40: Download complete
2cbe74538cb5: Verifying Checksum
2cbe74538cb5: Download complete
137077f50205: Verifying Checksum
137077f50205: Download complete
2cbe74538cb5: Pull complete
137077f50205: Pull complete
58932e640a40: Pull complete
Digest: sha256:bdd26570dc0e0ae49e1ea9d49ff662a6a1afe9121dd25793dc40d02802e7e806
Status: Downloaded newer image for amazon/dynamodb-local:latest
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: true
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
It seems to find and download the Docker image fine, but then stops during initialization. This is my first time working with GitHub Actions and Docker, so I'm not sure why it hangs on GitHub Actions but not when I run it on my own computer. Any help would be appreciated!
When you run the command docker run -p 8000:8000 amazon/dynamodb-local, the process never exits, so the GitHub Actions run step never knows when to move on to the next step; it just hangs there forever.
What I did in my project is simply to background it by adding & after the command:
- name: Setup local DynamoDB
  run: docker run -p 8000:8000 amazon/dynamodb-local &
GitHub Actions will start the Docker container and move on to the next run step, and when all the steps are done it will kill the container as part of normal cleanup. Because of this, you don't need to worry about shutting it down at the end.
The difficulty with this approach is that DynamoDB-local takes several seconds to start up, while the next step relies on it immediately and will likely fail with ECONNREFUSED errors.
What I've done in my project is to have the next run step execute a script that attempts to list tables, retrying with a short delay until it gets back a response.
The bash command is simply the following (you would need to wrap it in a retry loop, as sketched below):
aws dynamodb list-tables --endpoint-url http://localhost:8000
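For example, a retry loop wrapped in a workflow step might look like the sketch below (the 30-attempt limit and 2-second delay are arbitrary values, not part of the original setup):
- name: Wait for local DynamoDB
  run: |
    # retry listing tables until DynamoDB-local responds, or give up after 30 tries
    for i in $(seq 1 30); do
      if aws dynamodb list-tables --endpoint-url http://localhost:8000 > /dev/null 2>&1; then
        echo "DynamoDB-local is up."; exit 0
      fi
      echo "Waiting for DynamoDB-local..."; sleep 2
    done
    echo "DynamoDB-local did not start in time."; exit 1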
As a guide, this is (roughly) what I do in JavaScript, using the aws-sdk and Node.js 16:
// wait-for-dynamodb.js
import timers from 'timers/promises'
import AWS from 'aws-sdk'

// point the client at DynamoDB-local rather than the default AWS endpoint
const dynamodb = new AWS.DynamoDB({ endpoint: 'http://localhost:8000' })

const waitForDynamoDbToStart = async () => {
    try {
        await dynamodb.listTables().promise()
    } catch (error) {
        console.log('Waiting for Docker container to start...')
        await timers.setTimeout(500)
        return waitForDynamoDbToStart()
    }
}

const start = Date.now()
waitForDynamoDbToStart()
    .then(() => {
        console.log(`DynamoDB-local started after ${Date.now() - start}ms.`)
        process.exit(0)
    })
    .catch(error => {
        console.log('Error starting DynamoDB-local!', error)
        process.exit(1)
    })
Then I simply have that in the run steps:
- name: Setup local DynamoDB
  run: docker run -p 8000:8000 amazon/dynamodb-local &
- name: Wait for it to boot up
  run: node ./wait-for-dynamodb.js
  # now you're guaranteed to have DynamoDB-local running
Related
I am currently working on a CI pipeline for a project and just started setting up a GitHub Action for running integration tests, but I can't get it to work.
My action looks like this:
name: Integration Tests
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      selenium:
        image: selenium/standalone-chrome:latest
        ports:
          - 4444:4444
        options: --shm-size="2g"
    steps:
      - uses: actions/checkout@v2
      - name: Get IP Address
        run: echo "##[set-output name=ip;]$(ifconfig eth0 | grep 'inet [0-9\.]* ' -o | sed 's/[^0-9\.]//g')"
        id: ip_addr
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install Python dependencies
        uses: py-actions/py-dependency-install@v2
        with:
          path: /path/to/requirements
      - name: Run Tests
        run: python3 main.py --backend http://localhost:8080/ --frontend http://${{ steps.ip_addr.outputs.ip }}:4200 --selenium http://localhost:4444/wd/hub
main.py starts two Docker containers (that expose their corresponding ports) and runs a suite of Selenium tests. It works on my local machine with a container I run using docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome:latest. It takes the URL for the backend and the one for the frontend, which is used by Selenium.
I think I have to use the IP address of the runner so Selenium can access the site (since it runs locally, localhost doesn't work), but it fails with the following error:
File "main.py", line 57, in <module>
testcase.run()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_cases/t1.py", line 14, in run
self.web_driver.accept_cookies()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 64, in accept_cookies
self.wait_and_click(By.ID, 'confirmCookies')
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 30, in wait_and_click
WebDriverWait(self.driver, 10).until(
File "/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
It seems the page just never loads, but I don't know how to fix this. I guess it could also be that I messed up the networking somehow?
I think the service container is referenced by the service name instead of localhost, i.e. in your example:
--selenium http://localhost:4444/wd/hub
would be:
--selenium http://selenium:4444/wd/hub
as you defined it:
services:
  selenium:
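Applied to the Run Tests step in the question, that would look roughly like this (the same command with only the Selenium hostname swapped; a sketch of the suggestion, not a verified workflow):
- name: Run Tests
  run: python3 main.py --backend http://localhost:8080/ --frontend http://${{ steps.ip_addr.outputs.ip }}:4200 --selenium http://selenium:4444/wd/hub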
I'm fairly new to GitHub Actions and Redis.
I'm running this CI on GitHub Actions (code below)
name: sanity check
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        redis-version: [6]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: "14"
      - uses: supercharge/redis-github-action@1.2.0 # sets up Redis
        with:
          redis-version: ${{ matrix.redis-version }}
      - run: node -v
      - run: yarn -v
      # - run: redis-cli ping
      - run: yarn install
      - run: yarn test --detectOpenHandles
so that I can perform integration tests with Redis, but this CI doesn't exit (I'm running tests with Jest)
Is it because I'm not using Docker? What do I need to do to make sure this test exits? Locally, it runs fine (though I start a Redis server manually). Do I need Docker to make this work well? Any links for how to run Docker with Redis on GitHub Actions if that's the problem?
PS: If you need extra information about this, please let me know
You probably don't need this Redis action, and you don't need anything Docker-related (although if you want, you can run Redis using Docker).
Just install redis-server, and if you want the Redis CLI, also redis-tools.
Here is a sample GitHub Action that installs and pings the redis server:
name: Redis test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install redis
        run: sudo apt-get install -y redis-tools redis-server
      - name: Verify that redis is up
        run: redis-cli ping
If you prefer using the action, here is a working workflow:
name: Redis test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup redis
        uses: supercharge/redis-github-action@1.2.0
        with:
          redis-version: 6
      - name: Install redis cli # so we can test the server
        run: sudo apt-get install -y redis-tools
      - name: Verify that redis is up
        run: redis-cli ping
Finally, if your GitHub Action did not exit, it could have been a problem related to one of the recent GitHub Actions outages on May 20, May 18 or May 16.
If it's none of the above, the problem is probably not redis related and you might want to reduce the number of "moving parts" until you see the faulty one.
The problem of Jest not exiting was probably that I was using the real Redis Node.js client in my tests.
I switched it to this:
import { createNodeRedisClient } from "handy-redis";
import { createClient } from "redis-mock";

const cache =
  process.env.NODE_ENV === "production"
    ? createNodeRedisClient({
        url: process.env.REDIS_URL,
      })
    : createClient();

export { cache };
and I wasn't getting the error anymore.
My general question is: how do I get a running service's container name in a GitHub workflow?
I have a Keycloak container set up as a service, and I want to import a realm by executing a script inside the Keycloak container. Here is a snippet of my workflow:
name: Test Workflow
on:
  push:
    branches-ignore:
      - main
jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    services:
      keycloak:
        image: quay.io/keycloak/keycloak:12.0.4
        env:
          KEYCLOAK_USER: "admin"
          KEYCLOAK_PASSWORD: "admin"
          JAVA_OPTS_APPEND: "-Dkeycloak.profile.feature.upload_scripts=enabled"
        ports:
          - "8091:8080"
        volumes:
          - "/workspace/src/main/resources/keycloak:/src/main/resources/keycloak/"
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Java
        uses: actions/setup-java@v1
        with:
          java-version: 14
      - name: List running containers
        run: docker ps -a
      - name: Setup Keycloak realm
        run: |
          docker exec -it keycloak sh -c
          "/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin &&
          /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=testrealm -s enabled=true &&
          /opt/jboss/keycloak/bin/kcadm.sh create partialImport -r testrealm -s ifResourceExists=SKIP -o -f /src/main/resources/keycloak/realm.json"
      - name: Gradle Test
        run: ./gradlew test
[...]
To connect to a running container, I need its name. The service name keycloak doesn't work; in the GitHub Actions logs I see the following list of running containers:
Run docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fdb7e1e05296 quay.io/keycloak/keycloak:12.0.4 "/opt/jboss/tools/do…" 55 seconds ago Up 47 seconds 8443/tcp, 0.0.0.0:8091->8080/tcp 594297e586cd4bdab13cc8fa63b8954d_quayiokeycloakkeycloak1104_1ac754
Is there a way to connect to a running service container by a predictable container name?
Two options:
1. You set the --name in the service object's options:
jobs:
  test:
    name: Test
    ...
    services:
      keycloak:
        ...
        options: --name keycloak --hostname keycloak
See the possible docker create options and the workflow syntax documentation.
2. According to this example, the key of your service object can be used as the hostname, but this seems to be relevant only when running jobs from within containers.
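For illustration, with --name keycloak set as in the first option, the question's exec step can address the container by that name (a sketch only; the -it flags are dropped because workflow steps are non-interactive, and only the first kcadm.sh call is shown):
- name: Setup Keycloak realm
  run: |
    # address the service container by the fixed name set via --name
    docker exec keycloak sh -c "/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin"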
I am using GitHub Actions to run some automated tests, and my application is built with Docker.
name: Docker Image CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image
        run: docker-compose build
      - name: up mysql and apache container runs
        run: docker-compose up -d
      - name: install dependencies
        run: docker exec myapp php composer.phar install
      - name: show running container
        run: docker ps
      - name: run unit test
        run: docker exec myapp ./vendor/bin/phpunit
At the 'show running container' step, I can see that all the containers are running, but the MySQL container's status is (health: starting). Thus, my unit tests all fail because they require a connection to MySQL. Is there a way to start the unit tests only once the MySQL container's status is healthy?
I would like to offer a solution. It is not a smart one, but it requires minimal configuration and is ready to go: just use a GitHub Action for sleeping.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Sleep for 30 seconds
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
Assumption: your MySQL server will be up and running within 30 seconds.
You can use thegabriele97/dockercompose-health-action
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check services healthiness
        uses: thegabriele97/dockercompose-health-action@main
        with:
          timeout: '60'
          workdir: 'src'
As the documentation states:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
If you can't implement this at the moment, you can write a simple script that repeatedly tries a simple statement on the database. Once the script succeeds, you exit the loop and start your unit tests. Check the documentation link I've provided; you'll find there an example of such a script (wait-for-it.sh). A rough sketch of that idea as a workflow step is shown below.
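As an illustration only (the container name my_mysql_container and the 60-attempt limit are made-up placeholders, and this assumes the MySQL service defines a Docker healthcheck, which the (health: starting) status suggests it does), such a step could poll the container's health status:
- name: Wait for MySQL to become healthy
  run: |
    # poll the container's Docker health status until it reports healthy
    for i in $(seq 1 60); do
      status=$(docker inspect --format '{{.State.Health.Status}}' my_mysql_container)
      if [ "$status" = "healthy" ]; then
        echo "MySQL is healthy."; exit 0
      fi
      echo "MySQL status: $status, waiting..."; sleep 2
    done
    echo "MySQL never became healthy."; exit 1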
My approach was to use the following.
In my docker-compose.yml file:
healthcheck:
  test: curl --fail http://localhost/ping || exit 1
  interval: 2s
  retries: 10
  start_period: 10s
  timeout: 10s
In my GitHub Actions workflow:
- name: Wait for healthchecks
  run: timeout 60s sh -c 'until docker ps | grep <CONTAINER_NAME> | grep -q healthy; do echo "Waiting for container to be healthy..."; sleep 2; done'
As stated in the documentation:
On Linux and macOS runners, use the sleep command:
- name: Sleep for 30 seconds
  run: sleep 30s
  shell: bash
On Windows runners, use the Start-Sleep command:
- name: Sleep for 30 seconds
  run: Start-Sleep -s 30
  shell: powershell
Coming from this issue
I am using GitHub Actions for a Gradle project with the following steps:
name: Java CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Set up JDK 13
        uses: actions/setup-java@v1
        with:
          java-version: 13
      - run: ./gradlew bootJar
      - name: Login to GitHub registry
        run: docker login docker.pkg.github.com -u xxxxx -p xxxxx
      - name: Build the Docker image
        run: docker build . -t docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_REF
      - name: Push the image to GitHub
        run: docker push docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_REF
At the last step I get this error:
The push refers to repository
[docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.refs/heads/master]
3aad04996f8f: Preparing
77cae8ab23bf: Preparing
error parsing HTTP 404 response body: invalid character 'p' after top-level value:
"404 page not found\n"
Actually, I was using the wrong environment variable to tag my images.
I used $GITHUB_REF when I should have used $GITHUB_SHA.
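For illustration, the corrected build and push steps would then look something like this (the same commands from the workflow above, with only the variable swapped):
- name: Build the Docker image
  run: docker build . -t docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_SHA
- name: Push the image to GitHub
  run: docker push docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_SHA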