I'm fairly new to GitHub Actions and Redis.
I'm running this CI on GitHub Actions (code below)
name: sanity check
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
tests:
runs-on: ubuntu-latest
strategy:
matrix:
redis-version: [6]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-node@v2
with:
node-version: "14"
- uses: supercharge/redis-github-action@1.2.0 # sets up Redis
with:
redis-version: ${{ matrix.redis-version }}
- run: node -v
- run: yarn -v
# - run: redis-cli ping
- run: yarn install
- run: yarn test --detectOpenHandles
so that I can perform integration tests with Redis, but this CI doesn't exit (I'm running the tests with Jest).
Is it because I'm not using Docker? What do I need to do to make sure this test exits? Locally it runs fine (though I start a Redis server manually). Do I need Docker to make this work well? Any links on how to run Docker with Redis on GitHub Actions, if that's the problem?
PS: If you need extra information about this, please let me know
You probably don't need this Redis action, and you don't need anything Docker-related (although you can run Redis using Docker if you want).
Just install redis-server, and if you want the Redis CLI, also redis-tools.
Here is a sample GitHub Actions workflow that installs and pings the Redis server:
name: Redis test
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install redis
run: sudo apt-get install -y redis-tools redis-server
- name: Verify that redis is up
run: redis-cli ping
If you prefer using the action, here is a working workflow:
name: Redis test
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup redis
uses: supercharge/redis-github-action@1.2.0
with:
redis-version: 6
- name: Install redis cli # so we can test the server
run: sudo apt-get install -y redis-tools
- name: Verify that redis is up
run: redis-cli ping
Finally, if your GitHub Action did not exit, it could have been a problem related to one of the recent GitHub Actions outages on May 20, May 18 or May 16.
If it's none of the above, the problem is probably not Redis-related, and you might want to reduce the number of "moving parts" until you find the faulty one.
The problem of Jest not exiting was probably because I was using the real Redis Node.js client in my tests.
I switched it to this
import { createNodeRedisClient } from "handy-redis";
import { createClient } from "redis-mock";
// Use the real client only in production; everywhere else (e.g. under Jest)
// use the in-memory mock, which keeps no TCP connection open.
const cache =
process.env.NODE_ENV === "production"
? createNodeRedisClient({
url: process.env.REDIS_URL,
})
: createClient();
export { cache };
and I wasn't getting the error anymore. (The real client keeps its connection open after the tests finish, which Jest reports as an open handle; calling the client's quit() method in an afterAll hook would be another way to let Jest exit.)
I am currently working on a CI pipeline for a project and just started setting up a GitHub Action for running integration tests, but I can't get it to work.
My action looks like this:
name: Integration Tests
on:
push:
branches:
- main
workflow_dispatch:
jobs:
integration-tests:
runs-on: ubuntu-latest
services:
selenium:
image: selenium/standalone-chrome:latest
ports:
- 4444:4444
options: --shm-size="2g"
steps:
- uses: actions/checkout@v2
- name: Get IP Address
run: echo "##[set-output name=ip;]$(ifconfig eth0 | grep 'inet [0-9\.]* ' -o | sed 's/[^0-9\.]//g')"
id: ip_addr
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: '3.8'
- name: Install Python dependencies
uses: py-actions/py-dependency-install@v2
with:
  path: /path/to/requirements
- name: Run Tests
run: python3 main.py --backend http://localhost:8080/ --frontend http://${{ steps.ip_addr.outputs.ip }}:4200 --selenium http://localhost:4444/wd/hub
main.py starts two docker containers (that expose their corresponding ports) and runs a suite of selenium tests. It works on my local machine with a container I run using docker run -d -p 4444:4444 --shm-size="2g" selenium/standalone-chrome:latest. It takes the url for the backend and the one for the frontend, which gets used by selenium.
I think I have to use the IP address of the runner so selenium can access the site (since it runs locally, localhost doesn't work), but it fails with the following error:
File "main.py", line 57, in <module>
testcase.run()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_cases/t1.py", line 14, in run
self.web_driver.accept_cookies()
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 64, in accept_cookies
self.wait_and_click(By.ID, 'confirmCookies')
File "/home/runner/work/PROJECT_NAME/QualityAssurance/integration_tests/test_utils/parkview_webdriver.py", line 30, in wait_and_click
WebDriverWait(self.driver, 10).until(
File "/opt/hostedtoolcache/Python/3.8.11/x64/lib/python3.8/site-packages/selenium/webdriver/support/wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
It seems the page just never loads, but I don't know how to fix this. I guess it could also be that I messed up the networking somehow?
I think the service container is referenced by the service name instead of localhost, i.e. in your example:
--selenium http://localhost:4444/wd/hub
would be:
--selenium http://selenium:4444/wd/hub
as you defined it:
services:
selenium:
I'm currently working on a small CI/CD project that runs a series of tests on GitHub Actions using dynamodb-local whenever I update my code, and then packages and deploys if the tests are successful.
I have the following workflow:
name: backend_actions
on:
workflow_dispatch:
push:
paths:
- 'backend/*'
branches:
- master
jobs:
test-locally:
runs-on: ubuntu-latest
outputs:
test-result: ${{ steps.run-tests.outputs.result }}
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: '3.9'
- uses: aws-actions/setup-sam@v1
- uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-west-2
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Setup local DynamoDB
run: docker run -p 8000:8000 amazon/dynamodb-local
- name: Create table
run: aws dynamodb create-table --cli-input-json file://backend/src/test/make_table.json --endpoint-url http://localhost:8000
- name: start local API Gateway
run: sam local start-api --env-vars backend/env.json
- id: run-tests
name: Run tests
run: |
python backend/src/test_dynamoDB_lambda.py
echo "::set-output name=result::$?"
update_backend:
needs: test-locally
if: ${{ needs.test-locally.outputs.test-result == '0' }}
runs-on: ubuntu-latest
steps:
- name: Package and deploy
run: |
aws cloudformation package --s3-bucket cloud-resume-bucket \
--template-file backend/template.yaml --output-template-file backend/gen/template-gen.yaml
aws cloudformation deploy --template-file backend/gen/template-gen.yaml --stack-name cloud-formation-resume \
--capabilities CAPABILITY_IAM
When I try running the workflow in GitHub Actions, it gets to the 'Setup local DynamoDB' step, outputs the text below, and then hangs.
Run docker run -p 8000:8000 amazon/dynamodb-local
Unable to find image 'amazon/dynamodb-local:latest' locally
latest: Pulling from amazon/dynamodb-local
2cbe74538cb5: Pulling fs layer
137077f50205: Pulling fs layer
58932e640a40: Pulling fs layer
58932e640a40: Verifying Checksum
58932e640a40: Download complete
2cbe74538cb5: Verifying Checksum
2cbe74538cb5: Download complete
137077f50205: Verifying Checksum
137077f50205: Download complete
2cbe74538cb5: Pull complete
137077f50205: Pull complete
58932e640a40: Pull complete
Digest: sha256:bdd26570dc0e0ae49e1ea9d49ff662a6a1afe9121dd25793dc40d02802e7e806
Status: Downloaded newer image for amazon/dynamodb-local:latest
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: true
DbPath: null
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
It seems it can find the Docker image and download it fine, but stops upon initializing? This is my first time working with GitHub Actions and Docker, so I'm not really sure why it hangs on GitHub Actions but not when I run it on my own computer. Any help would be appreciated!
When you run the command docker run -p 8000:8000 amazon/dynamodb-local, the process never exits, so the GitHub run step doesn't know when to move on to the next step; it just hangs there forever. (The same is true of sam local start-api later in your workflow.)
What I did in my project is simply background it, by using the & after the command:
- name: Setup local DynamoDB
run: docker run -p 8000:8000 amazon/dynamodb-local &
GitHub Actions will start the Docker container and move on to the next run step, and when all the steps are done it kills the container as part of normal cleanup, so you don't need to worry about shutting it down at the end.
The difficulty with this approach is that DynamoDB-local takes several seconds to start up, while your next step relies on it and will likely throw ECONNREFUSED errors.
What I've done in my project is have the next run step execute a script that attempts to list tables, retrying with a short delay until it gets back a response.
The bash command is simply (you would need to put it in a retry loop, since bash has no try/catch; see the sketch after the command):
aws dynamodb list-tables --endpoint-url http://localhost:8000
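A minimal sketch of that loop in bash, assuming the same endpoint and capping the wait at roughly 30 seconds (adjust to taste):
# Retry listing tables until DynamoDB-local answers, for at most ~30s.
timeout 30s bash -c '
until aws dynamodb list-tables --endpoint-url http://localhost:8000 >/dev/null 2>&1; do
  echo "Waiting for DynamoDB-local..."
  sleep 1
done
'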
As a guide, this is (roughly) what I do in JavaScript, using aws-sdk v2 and Node.js 16:
// wait-for-dynamodb.js
import timers from 'timers/promises'
import AWS from 'aws-sdk'
// Point the client at DynamoDB-local; credentials and region come from the environment.
const dynamodb = new AWS.DynamoDB({ endpoint: 'http://localhost:8000' })
const waitForDynamoDbToStart = async () => {
try {
await dynamodb.listTables().promise()
} catch (error) {
console.log('Waiting for Docker container to start...')
await timers.setTimeout(500)
return waitForDynamoDbToStart()
}
}
const start = Date.now()
waitForDynamoDbToStart()
.then(() => {
console.log(`DynamoDB-local started after ${Date.now() - start}ms.`)
process.exit(0)
})
.catch(error => {
console.log('Error starting DynamoDB-local!', error)
process.exit(1)
})
Then I simply have that in the run steps:
- name: Setup local DynamoDB
run: docker run -p 8000:8000 amazon/dynamodb-local &
- name: Wait for it to boot up
run: node ./wait-for-dynamodb.js
# now you're guaranteed to have DynamoDB-local running
I am using GitHub Actions to run some automated tests, and my application was developed with Docker.
name: Docker Image CI
on:
push:
branches: [ master]
pull_request:
branches: [ master]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Build the Docker image
run: docker-compose build
- name: up mysql and apache container runs
run: docker-compose up -d
- name: install dependencies
run: docker exec myapp php composer.phar install
- name: show running container
run: docker ps
- name: run unit test
run: docker exec myapp ./vendor/bin/phpunit
At the step 'show running container', I can see that all the containers are running, but for MySQL the status is (health: starting). Thus my unit tests all failed, as they require a connection to MySQL. So may I know if there is a way to start the unit tests only when the MySQL container's status is healthy?
I would like to offer a solution. It's not a smart one, but it requires minimal configuration and is ready to go: just use the GitHub Action for sleeping.
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Sleep for 30 seconds
uses: jakejarvis/wait-action@master
with:
time: '30s'
Assumption: your MySQL server will be up and running within 30s.
You can use thegabriele97/dockercompose-health-action
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Check services healthiness
uses: thegabriele97/dockercompose-health-action@main
with:
timeout: '60'
workdir: 'src'
As the documentation states:
To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
If you can't implement this at the moment, you can write a simple script that indefinitely retries a simple statement against the database. Once the script succeeds, you exit the loop and start your unit tests. Check the documentation link I've provided; you'll find an example of such a script there (wait-for-it.sh). A minimal sketch of such a loop follows.
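For illustration, a minimal bash sketch (it assumes your MySQL service runs in a container named mysql and that the server answers an unauthenticated ping; adjust the container name and credentials to your docker-compose setup):
# Retry until the MySQL server inside the container accepts connections.
until docker exec mysql mysqladmin ping -h 127.0.0.1 --silent; do
  echo "Waiting for MySQL to be ready..."
  sleep 2
done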
My approach was to use:
in my docker-compose.yml file:
healthcheck:
test: curl --fail http://localhost/ping || exit 1
interval: 2s
retries: 10
start_period: 10s
timeout: 10s
in my Github Actions workflow:
- name: Wait for healthchecks
run: timeout 60s sh -c 'until docker ps | grep <CONTAINER_NAME> | grep -q healthy; do echo "Waiting for container to be healthy..."; sleep 2; done'
As stated in the documentation:
On Linux and macOS runners, use the sleep command:
- name: Sleep for 30 seconds
run: sleep 30s
shell: bash
On Windows runners, use the Start-Sleep command:
- name: Sleep for 30 seconds
run: Start-Sleep -s 30
shell: powershell
Coming from this issue
I am using GitHub Actions for a Gradle project with these steps:
name: Java CI
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Set up JDK 13
uses: actions/setup-java@v1
with:
java-version: 13
- run: ./gradlew bootJar
- name: Login to GitHub registry
run: docker login docker.pkg.github.com -u xxxxx -p xxxxx
- name: Build the Docker image
run: docker build . -t docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_REF
- name: Push the image to github
run: docker push docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_REF
At the last step I get this error:
The push refers to repository
[docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.refs/heads/master]
3aad04996f8f: Preparing
77cae8ab23bf: Preparing
error parsing HTTP 404 response body: invalid character 'p' after top-level value:
"404 page not found\n"
Actually, I was using the wrong environment variable to tag my images:
I used $GITHUB_REF when I should have used $GITHUB_SHA.
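Since $GITHUB_REF expands to refs/heads/master, its slashes end up in the image path and break the push. A sketch of the corrected commands, keeping the same naming scheme:
docker build . -t docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_SHA
docker push docker.pkg.github.com/sulimanlab/realtime-chat/realtimechat-snapshot-0.$GITHUB_SHA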
I am trying to write a custom GitHub Action that runs some commands in a Docker container but lets the user select which Docker image they run in (i.e. so I can run the same build instructions across different versions of the runtime environment).
My gut instinct was to have my .github/actions/main/action.yml file as
name: 'Docker container command execution'
inputs:
dockerfile:
default: Dockerfile_r_latest
runs:
using: 'docker'
image: '${{ inputs.dockerfile }}'
args:
- /scripts/commands.sh
However this errors with:
##[error](Line: 7, Col: 10): Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.dockerfile
Any help would be appreciated!
File References
My .github/workflow/build_and_test.yml file is:
name: Test Package
on:
[push, pull_request]
jobs:
R_latest:
name: Test on latest
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
name: Checkout project
- uses: ./.github/actions/main
name: Build and test
with:
dockerfile: Dockerfile_r_latest
And my Dockerfile .github/actions/main/Dockerfile_r_latest is:
FROM rocker/verse:latest
ADD scripts /scripts
ENTRYPOINT [ "bash", "-c" ]
Interesting approach! I'm not sure it's possible to use expressions in the image field of the action metadata. I would guess that the only field that can take expressions instead of hardcoded strings is args, so that inputs can be passed to the image.
For reference this is the args section of the action.yml metadata.
https://help.github.com/en/articles/metadata-syntax-for-github-actions#args
I think there are other ways to achieve what you want. Have you tried the jobs.<job_id>.container syntax? It lets you specify an image that the steps of a job will run in. It does require publishing the image to a public repository, though, so take care not to include any secrets.
For example, if you published your image to Docker Hub at gowerc/r-latest your workflow might look something like this:
name: Test Package
on:
[push, pull_request]
jobs:
R_latest:
name: Test on latest
runs-on: ubuntu-latest
container: gowerc/r-latest
steps:
- uses: actions/checkout@master
name: Checkout project
- name: Build and test
run: ./scripts/commands.sh
ref: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
Alternatively, you can also specify your image at the step level with uses. You could then pass a command via args to execute your script.
name: my workflow
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Check container
uses: docker://alpine:3.8
with:
args: /bin/sh -c "cat /etc/alpine-release"
ref: https://help.github.com/en/github/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#example-using-a-docker-hub-action
In addition to @peterevans' answer, I would add a third option where you use a simple docker run command and pass any env vars you have defined.
That helped solve three things:
Reuse a custom Docker image built within the steps, for testing actions. It seems this isn't possible with uses, because the runner tries to pull that image in a setup phase before any steps of the job run, when the image doesn't exist yet.
The image can also be stored in a private Docker registry.
You can use a variable for the Docker image.
My workflow looks like this:
name: Build-Test-Push
on:
push:
branches:
- master
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
ECR_REPOSITORY: myproject/myimage
IMAGE_TAG: ${{ github.sha }}
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- name: Checking out
uses: actions/checkout@v2
with:
ref: master
- name: Login to AWS ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v1
- name: Build
run: |
docker pull $ECR_REGISTRY/$ECR_REPOSITORY || true
docker build . -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -t $ECR_REGISTRY/$ECR_REPOSITORY:latest
- name: Test
run: |
docker run $ECR_REGISTRY/$ECR_REPOSITORY:latest /bin/bash -c "make test"
- name: Push
run: |
docker push $ECR_REGISTRY/$ECR_REPOSITORY
Here is another approach. The Docker image to use is passed to a cibuild shell script that takes care of pulling the right image.
GitHub workflow file:
name: 'GH Actions CI'
on:
push:
branches: ['*master', '*0.[0-9]?.x']
pull_request:
# The branches below must be a subset of the branches above
branches: ['*master', '*0.[0-9]?.x']
jobs:
build:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
include:
- FROM: 'ubuntu:focal'
- FROM: 'ubuntu:bionic'
- FROM: 'ubuntu:xenial'
- FROM: 'debian:buster'
- FROM: 'debian:stretch'
- FROM: 'opensuse/leap'
- FROM: 'fedora:33'
- FROM: 'fedora:32'
- FROM: 'centos:8'
steps:
- name: Checkout repository
uses: actions/checkout@v2
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.
fetch-depth: 2
# If this run was triggered by a pull request event, then checkout
# the head of the pull request instead of the merge commit.
- run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
- name: Run CI
env:
FROM: ${{ matrix.FROM }}
run: script/cibuild
Bash script script/cibuild:
#!/bin/bash
set -e
docker run --name my-docker-container $FROM script/custom-script.sh
docker cp my-docker-container:/usr/src/my-workdir/my-outputs .
docker rm my-docker-container
echo "cibuild Done!"
Put your custom commands in script/custom-script.sh.
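For reference, a hypothetical minimal script/custom-script.sh consistent with the docker cp in cibuild (its body is a placeholder; replace it with your real build and test commands):
#!/bin/bash
# Hypothetical placeholder: runs inside the container started by cibuild.
set -e
mkdir -p /usr/src/my-workdir/my-outputs
# ... your real build/test commands go here ...
echo "ok" > /usr/src/my-workdir/my-outputs/result.txt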