I have this code at .github/workflows/main.yaml
# .github/workflows/main.yaml
name: CI Workflow
on: [push]
jobs:
  rspec-job:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    container:
      image: I-stucked-here
      volumes:
        - /vendor/bundle
    steps:
      - code omitted for brevity
The main idea of this job is to run all steps inside a container, not directly on the Linux host.
Under the same repository, I have a public Docker image named ruby-rimy-2.6.3. Since it's hosted on GitHub Packages rather than publicly on Docker Hub, I can't find a way to programmatically authenticate myself to the GitHub Packages registry.
I did try a different syntax (see the code below), but it didn't work.
# .github/workflows/main.yaml
name: CI Workflow
on: [push]
jobs:
  rspec-job:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    container:
      image: docker://docker.pkg.github.com/zulhfreelancer/rimy/ruby-rimy-2.6.3:latest
      volumes:
        - /vendor/bundle
    steps:
      - code omitted for brevity
From the docs, GitHub says the GITHUB_TOKEN is available while the job is running. How do I use this GITHUB_TOKEN environment variable to run something like docker login for that container: section, so that the job is able to pull the image?
Using a GitHub personal access token is not an option for me, because that repository is just my experiment repository before I apply the same thing to my GitHub organization. I don't want to put my personal token under my organization's repository environment variables/secrets, since that would simply expose my personal token to my co-workers.
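For reference, registry credentials can also be supplied directly on the container: block via jobs.<job_id>.container.credentials (the same field used in a later example in this thread). A minimal sketch, assuming the credentials field is available on your runner version and GITHUB_TOKEN has permission to read the package:
jobs:
  rspec-job:
    runs-on: ubuntu-latest
    container:
      image: docker.pkg.github.com/zulhfreelancer/rimy/ruby-rimy-2.6.3:latest
      credentials:
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
      volumes:
        - /vendor/bundle
    steps:
      - run: echo "steps omitted for brevity"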
You do not need to use the container instruction to run tests in a container.
The GitHub Actions host comes with docker and docker-compose installed. The way I do it is to have a docker-compose.yml in my repository, which includes a "service" that runs the tests. Then your workflow only needs to do a docker login and run the docker-compose run test command.
Note that the beauty of this approach is that your tests are executed exactly the same way on your own machine and on the CI machine, with the same exact steps.
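For example, a minimal docker-compose.yml for this could look roughly like the following (the service name test, the mount path, and the rspec command are assumptions; adapt them to your project):
# docker-compose.yml (sketch, not from the original answer)
version: "3.8"
services:
  test:
    image: docker.pkg.github.com/zulhfreelancer/rimy/ruby-rimy-2.6.3:latest
    working_dir: /app
    volumes:
      - .:/app              # mount the checked-out repository into the container
    command: bundle exec rspec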
Something along these lines:
name: Test
on:
  pull_request:
  push: { branches: master }
jobs:
  test:
    name: Run test suite
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Docker login
        run: echo ${GITHUB_TOKEN} | docker login -u ${GITHUB_ACTOR} --password-stdin docker.pkg.github.com
      - name: Build docker images
        run: docker-compose build
      - name: Run tests
        run: docker-compose run test
I am doing the same with DockerHub, with great ease and success.
Of course, if you do not want to use docker-compose, you can still use any normal docker run ... commands after you log in properly in the login step.
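For example, after the login step such a step might look like this (the mount path and the rspec command are assumptions):
      - name: Run tests without docker-compose
        run: docker run --rm -v "$PWD":/app -w /app docker.pkg.github.com/zulhfreelancer/rimy/ruby-rimy-2.6.3:latest bundle exec rspec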
I am not sure that the docker login command will work as-is; see these for a deeper discussion:
https://github.com/actions/starter-workflows/issues/66
https://github.community/t5/GitHub-Actions/Github-Actions-Docker-login/td-p/29852/page/2
Related
I'm building a simple CI/CD workflow with GitHub Actions. The workflow starts by running all unit tests. When the unit tests are successful, a Docker image gets built and uploaded to Docker Hub. I have a few environment variables that need to be set in order to run the tests and also to run the Docker container.
This is currently the only way I can get it to work:
name: Deploy to Linode k8s Cluster Workflow
env:
  REGISTRY: "abc"
  IMAGE_NAME: "defg"
  DO_POSTGRESQL_URL: ${{ secrets.DO_POSTGRESQL_URL }}
  DO_POSTGRESQL_USER: ${{ secrets.DO_POSTGRESQL_USER }}
  DO_POSTGRESQL_PASS: ${{ secrets.DO_POSTGRESQL_PASS }}
  # ... and more
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    environment: test
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      - name: Setup Java
        uses: actions/setup-java@v3
        with:
          java-version: 18
          distribution: temurin
          cache: gradle
      - name: Setup Gradle
        uses: gradle/gradle-build-action@v2
      - name: Execute Gradle build
        run: ./gradlew build --scan
      - name: Login to Docker Hub
        uses: docker/login-action@v2.1.0
        with:
          # Username used to log against the Docker registry
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          # Password or personal access token used to log against the Docker registry
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push Docker images
        uses: docker/build-push-action@v4.0.0
        with:
          context: .
          # because two jar files get generated (...-SNAPSHOT-plain.jar and ...-SNAPSHOT.jar) only ...-SNAPSHOT.jar is needed
          build-args: JAR_FILE=build/libs/*SNAPSHOT.jar, DO_POSTGRESQL_URL=$DO_POSTGRESQL_URL #,... and so on
          push: true
          tags: abc/defg:latest
      # Linode deployment here
But imagine I have multiple workflow files; then I have to add the environment variables in multiple different places. Is there a clever way to solve this problem, so that I only have to define the env variables once?
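One pattern that might help (a sketch under assumptions, not from the original question; the file names and the reduced step list are made up) is a reusable workflow, where the env block lives in a single called workflow and the other workflows just invoke it:
# .github/workflows/shared-build.yml -- the one place the env vars are defined
name: Shared build
on:
  workflow_call:                      # makes this workflow callable from other workflows
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      REGISTRY: "abc"
      IMAGE_NAME: "defg"
      DO_POSTGRESQL_URL: ${{ secrets.DO_POSTGRESQL_URL }}
      # ... and more
    steps:
      - uses: actions/checkout@v3
      # ... the build/test/push steps from above
# .github/workflows/deploy.yml -- one of the callers
name: Deploy
on:
  push:
    branches: [ "master" ]
jobs:
  call-build:
    uses: ./.github/workflows/shared-build.yml
    secrets: inherit                  # pass the caller's secrets through to the reusable workflow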
I'm trying to understand the difference between two practices when using a GitHub runner: copying code from a Dockerfile and recreating the environment on the runner, or using containers.
Imagine that we have a Dockerfile with some app:
FROM ubuntu
...
It's not important what the app is, the important part is that it runs on Ubuntu like our workflow.
Is it better to use this Dockerfile to create an image, push it to a Docker registry, and then use it like this:
on: push
jobs:
  first-job:
    runs-on: ubuntu-latest
    container: container:latest
    steps:
      - name: checks-out repo
        uses: actions/checkout@v3
      - name: Greetings
        run: echo Hello World
Or is it better to rewrite the workflow so you don't need an image, and instead recreate this environment yourself:
on: push
jobs:
  first-job:
    runs-on: ubuntu-latest
    steps:
      - name: recreate container
        run: |
          ....
      - name: checks-out repo
        uses: actions/checkout@v3
      - name: Greetings
        run: echo Hello World
I can ask this question in another way. I need to set up LaTeX, but on github.com/actions there is nothing ready-made to do so (no uses: actions/set-up-latex...). But an image already exists in the registry: https://hub.docker.com/r/blang/latex.
Is it preferable to use this image, or to recreate the environment on the runner (VM)?
I would appreciate it if you could explain this concept explicitly, with details.
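For comparison, using that prebuilt image with the container: approach from the first snippet might look roughly like this (the image tag, the .tex file name, and the build command are assumptions):
on: push
jobs:
  build-latex:
    runs-on: ubuntu-latest
    container:
      image: blang/latex:ubuntu       # tag is an assumption; check the tags available for this image
    steps:
      - name: checks-out repo
        uses: actions/checkout@v3
      - name: Build PDF
        run: pdflatex main.tex        # main.tex is a placeholder file name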
Guys, I need your help to run docker-compose build in a GitHub Action. I have a docker-compose file and I can't understand how to build and deploy it in the correct way, other than just copying the docker-compose file over SSH and running scripts there.
There's docker/build-push-action@v2, but it doesn't work with docker-compose.yml.
This strongly depends on where you want to push your images. But, for instance, if you use Azure ACR you can use this action:
on: [push]
name: AzureCLISample
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: 2.0.72
          inlineScript: |
            az acr login --name <acrName>
            docker-compose up
            docker-compose push
And then just build and push your images. But this is only an example; if you use ECR it would be similar, I guess.
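For instance, an ECR variant might look roughly like this (a sketch assuming the aws-actions/configure-aws-credentials and aws-actions/amazon-ecr-login actions; the region is made up, and your compose file must tag images with your ECR registry):
steps:
  - uses: actions/checkout@v2
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v2
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: us-east-1           # region is an assumption
  - name: Login to Amazon ECR
    uses: aws-actions/amazon-ecr-login@v1
  - name: Build and push images
    run: |
      docker-compose build
      docker-compose push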
For DigitalOcean it would be like this:
steps:
  - uses: actions/checkout@v2
  - name: Build image
    run: docker-compose up
  - name: Install doctl # install the doctl on the runner
    uses: digitalocean/action-doctl@v2
    with:
      token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
  - name: push image to digitalocean
    run: |
      doctl registry login
      docker-compose push
You can find more details about this here
I'm trying to build and push a Docker image in GitHub Actions.
In the YAML file I have other steps as well, which work fine. But when I try to build a Docker image, the GitHub Action fails. The error is:
Invalid workflow file
The workflow is not valid. Job package depends on unknown job test.
I have a YAML extension installed in VS Code and it shows no errors related to indentation. If I remove the Docker build snippet (the part after the args command), the test action runs successfully.
The GitHub Actions error doesn't describe the reason for the failure clearly enough for me to debug it.
name: Netlify workflow
on:
  push:
    branches: [master, develop]
  pull_request:
    branches: [master, develop]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [10.x, 12.x]
    steps:
      - name: Setup node
        uses: actions/setup-node@v1
        with:
          node-version: ${{matrix.node}}
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup cache
        uses: actions/cache@v1
        with:
          path: ~/.npm
          key: ${{runner.os}}-modules-${{hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{runner.os}}-modules-
            ${{runner.os}}-
      - name: Install
        run: npm ci
      - name: Lint
        run: npm run lint
      - name: Build
        run: npm run build
      - name: Deploy
        uses: netlify/actions/cli@master
        env:
          NETLIFY_SITE_ID: ${{secrets.NETLIFY_SITE_ID}}
          NETLIFY_AUTH_TOKEN: ${{secrets.NETLIFY_AUTH_TOKEN}}
        with:
          args: deploy --dir=build --prod
  package:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Build docker image
        run: docker builder build -t dockerHubUsername/repoName:latest .
      - name: Login to docker hub
        run: docker login --username ${{ secrets.DOCKER_USERNAME }} --password ${{ secrets.DOCKER_PASSWORD }}
      - name: Push docker image to docker hub
        run: docker push dockerHubUsername/repoName:latest
The jobs map in a GitHub Workflow, per jobs.<job_id>, is a map where:
The key job_id is a string and its value is a map of the job's
configuration data.
Stripping all of the other content out of the YAML to focus on that map:
jobs:
  build:
    # ...
  package:
    # ...
At the top level, two jobs have been defined, with the IDs build and package. Now let's look at some of the content of those jobs:
jobs:
  build:
    runs-on: ubuntu-latest
    # ...
  package:
    runs-on: ubuntu-latest
    needs: test
    # ...
Per job.<job_id>.needs, the needs configuration:
Identifies any jobs that must complete successfully before this job
will run. It can be a string or array of strings.
Although it's not stated explicitly, the example shows that the jobs are identified by their IDs, so it needs to be a string or array of strings corresponding with defined job IDs.
Here we've said that, to run the job with ID package, it "needs" the job with ID test to have successfully completed. The ID of the only other job we've defined is build, though, hence the error:
Job package depends on unknown job test.
//  ^~~~~~~ ^~~~~~~~~~             ^~~~
//  job_id  "needs"                job_id
Given that you have only two jobs and likely do want the second to depend on the first, you either need to:
Rename the build job to test; or
Change the dependency to needs: build.
Either way, the two IDs need to correspond for this to be a semantically valid workflow (even though it's already syntactically valid YAML). An alternative would be to remove the dependency entirely, by deleting the needs: test line, although then build and package would be run in parallel (workers permitting).
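For instance, taking the second option, the relevant part of the workflow becomes:
jobs:
  build:
    runs-on: ubuntu-latest
    # ...
  package:
    runs-on: ubuntu-latest
    needs: build   # now refers to a job ID that actually exists
    # ...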
I wanted to run Django test cases inside a container.
I am able to pull the private image from Docker Hub, but when I run the test command, it fails.
Has anyone tried running test cases inside the container?
jobs:
  test:
    container:
      image: abcd
      credentials:
        username: "<username>"
        password: "<password>"
    steps:
      - uses: actions/checkout@v2
      - name: Display Python version
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements/dev.txt
      - name: run test
        run: |
          python3 manage.py test
In my experience, I found out that using GitHub's container instruction causes more confusion than simply running whatever you want on the runner itself, as if you are running it on your own machine.
A big majority of the tests I am running on GitHub actions are running in containers, and some require private DockerHub images.
I always do this:
Create a docker-compose.yml for development use, so I can test things locally.
Usually in CI, you might want slightly different things in your docker-compose (for example, no volume mappings). If this is the case, I create another docker-compose.yml in a .ci subfolder.
My docker-compose.yml contains a test service, that runs whatever test (or test suite) I want.
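For illustration, such a .ci/docker-compose.yml with a test service might look roughly like this (the build context, Dockerfile name, and test command are assumptions; the command matches the Django tests from the question):
# .ci/docker-compose.yml (sketch)
version: "3.8"
services:
  test:
    build:
      context: ..                     # paths are relative to the compose file, so .. is the repo root
      dockerfile: Dockerfile          # assumed Dockerfile name and location
    command: python3 manage.py test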
Here is a sample GitHub actions file I am using:
name: Test
on:
  pull_request:
  push: { branches: master }
jobs:
  test:
    name: Run test suite
    runs-on: ubuntu-latest
    env:
      COMPOSE_FILE: .ci/docker-compose.yml
      DOCKER_USER: ${{ secrets.DOCKER_USER }}
      DOCKER_PASS: ${{ secrets.DOCKER_PASS }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Login to DockerHub
        run: docker login -u $DOCKER_USER -p $DOCKER_PASS
      - name: Build docker images
        run: docker-compose build
      - name: Run tests
        run: docker-compose run test
Of course, this entails setting up the two mentioned secrets, but other than that, I found this method to be:
Reliable
Portable (I switched from Travis CI with the same approach easily)
Compatible with dev environment
Easy to understand and reproduce both locally and in CI