I have an action.yml file and a Dockerfile in repository A like this:
Repository A
action.yml
[...]
runs:
  using: 'docker'
  image: 'Dockerfile'
[...]
Dockerfile
FROM hello-world
# do something
Now in repository B, I use the action and the Dockerfile:
Repository B
.github/workflows/hello.yml
name: hello
on:
  push:
    branches:
      - master
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: username/repoa@master
My point in using the Dockerfile in repository A is, among other things:
that the GitHub Actions log is not cluttered with preparation steps that have no relation to the action in repository B
that the image only builds when username/repoa is updated, not when username/repob is updated, because the Dockerfile didn't change.
However, in practice GitHub happily rebuilds the Dockerfile every time I commit to username/repob and clutters the GitHub Actions logs with it.
How can I tell GitHub to only build the Dockerfile when repository A is updated and keep the build out of the action logs?
GitHub workflows create a new machine to run each job. This means that when GitHub Actions builds your image, you need to store that image in a registry and use the stored image in another step.
You need to extend your action.yml to build and store the image, and create another action2.yml that says what to do with your container.
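For the consuming side, a minimal sketch (the ghcr.io path is an assumption; repository A would need to publish the image first): point action.yml at the prebuilt image so that repository B pulls it instead of rebuilding it, which also keeps the build steps out of repository B's logs:
# action.yml in repository A; the ghcr.io path is an assumed example of
# an image built from repository A's Dockerfile and pushed beforehand
runs:
  using: 'docker'
  image: 'docker://ghcr.io/username/repoa:latest'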
Related
How does one pull an image from a GitHub Action, specifically one that requires authentication?
steps:
  - name: Pull Docker Image
    uses: docker/???
    image: image_host.com/image:latest
^^^ This is wrong and I am not sure what the right syntax is.
I then want to run a command inside of the action:
- name: Run test
  run: |
    node index.js # (index.js is inside of the container)
In order to use a GitHub workflow with a Docker container, you need a workflow runner that has Docker installed on its system, such as ubuntu-latest. Then use the container directive to pick a container.
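A minimal sketch under those constraints, reusing the image from the question; the secret names are assumptions, and jobs.<job_id>.container accepts a credentials block for registries that require authentication:
jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: image_host.com/image:latest
      credentials:
        # secret names are assumptions; set them in the repository settings
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
    steps:
      - name: Run test
        run: node index.js  # runs inside the container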
I would like to run my CI on a Docker image. How should I write my .github/workflows/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml (Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work, but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w /srv foobar make
Side question: where can I find the documentation about this? All I found was how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: {host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
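A minimal sketch of the step-level variant, using an arbitrary public image as an illustration; a step with uses: docker:// runs only that single step inside the given container:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # this one step executes inside the alpine container
      - name: Run one step in a container
        uses: docker://alpine:3.19
        with:
          args: echo hello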
I am reposting my answer to another question, in order to be sure to find it while Googling it.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise creating a custom build-and-publish-docker.yml workflow following the GitHub documentation: Publishing Docker images.
Assuming your repository is public, you should be able to automatically upload your image to ghcr.io without any configuration. Alternatively, you can publish the image to Docker Hub.
Once your image is built and published (based on the on event of the workflow created above, which can also be triggered manually), you just need to update your main.yml workflow so it uses the custom Docker image. Again, here is a pretty good documentation page about the container option: Running jobs in a container.
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the action to build the Docker image
lint.yml: the action using the built Docker image
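A minimal sketch of what such a docker.yml might look like, assuming publication to ghcr.io with the docker/login-action and docker/build-push-action actions; the triggers and tag are illustrative:
name: docker
on:
  push:
    branches: [ master ]
    paths: [ Dockerfile ]
  workflow_dispatch:  # allows manual triggering
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest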
Our project uses a multi-stage CI setup where the first stage checks for modification of files like package-lock.json and Gemfile.lock, compiles all these dependencies and then pushes them to the GitLab container registry.
Using --cache-from in docker build, based on the current mainline branch, this is quite fast, and the Docker layering mechanism helps to prevent repeating steps.
Subsequent stages and jobs then use the Docker image pushed in the first stage as their image:
Abbreviated configuration for readability:
stages:
  - create_builder_image
  - test

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    # do stuff in the context of the image built in the first stage
Unfortunately, when working on longer-running feature branches, we now see that the image in the second stage is sometimes outdated: the job does not pull the latest version from the registry before starting, which makes subsequent jobs complain about missing dependencies.
Is there anything I can do to force it to always pull the latest image for each job?
As already written in the comments, I would not use $CI_COMMIT_REF_SLUG for tagging, simply because it is not guaranteed that all pipelines run in the same order, and this alone can create issues, including the one you are currently experiencing.
I recommend using $CI_COMMIT_SHA instead, as it is bound to the pipeline. I would also rely on previous builds for caching, and I will shortly outline my approach here:
stages:
  - create_builder_image
  - test
  - deploy

Create Builder Image:
  stage: create_builder_image
  script:
    # pick a cache source in the current shell: the branch image if it
    # exists in the registry, otherwise the latest mainline image
    - if docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG; then export DOCKER_CACHE_TAG=$CI_COMMIT_REF_SLUG; elif docker pull $GITLAB_IMAGE/builder:latest; then export DOCKER_CACHE_TAG=latest; fi
    - docker build --cache-from $GITLAB_IMAGE/builder:$DOCKER_CACHE_TAG ...
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
  stage: test
  script:
    # do stuff in the context of the image built in the first stage

Push image: # pushing the image for the current branch ref, as I know it is a working image and it can then be used for caching by others
  image: docker:20
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
    - docker tag $GITLAB_IMAGE/builder:$CI_COMMIT_SHA $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
I know it might add extra build steps, but this way you can ensure that you always have the image that belongs to the pipeline. You can still use caching and layering from Docker, and as a bonus, the image is not pushed to the branch tag if the tests fail.
Furthermore, you can also create a step before building the image where you figure out whether you need a new image at all, as sketched below.
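A minimal sketch of such a gate, using GitLab's rules:changes; the file list is illustrative, and you would still need a fallback tag for later stages when the build job is skipped:
Create Builder Image:
  stage: create_builder_image
  rules:
    # skip the image build entirely when nothing that affects it changed
    - changes:
        - Dockerfile
        - package-lock.json
        - Gemfile.lock
  script:
    - docker build -t $GITLAB_IMAGE/builder:$CI_COMMIT_SHA .
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA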
I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to make the image referenced in bitbucket-pipelines.yml be built from the DockerFile in the repository?
I have been able to upload the Docker image to my Docker Hub account and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, or whether building it like this will increase the build time.
Thanks in advance!
If you want to build your image during the pipeline, you need the same steps as if the image were built on your machine:
Build the image: docker build -t $APP_NAME:$VERSION .
Push it to your registry (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME:$VERSION .
        - docker push $APP_NAME:$VERSION
Remember that every step in your pipeline runs in a Docker container, which allows you to do whatever you want. The docker service gives you an out-of-the-box Docker client. After the push, you can use the image in another step; you just need to specify the image for that step, as sketched below.
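A minimal sketch of that two-step flow, reusing the variable names from the thread; the login command and the second step's script are assumptions about your setup, and $APP_NAME/$VERSION must be repository variables so the image name resolves:
pipelines:
  default:
    - step:
        name: Build and push image
        services:
          - docker
        script:
          - docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD
          - docker build -t $APP_NAME:$VERSION .
          - docker push $APP_NAME:$VERSION
    - step:
        name: Build project inside the image
        image:
          name: $APP_NAME:$VERSION
          username: $DOCKER_HUB_USERNAME
          password: $DOCKER_HUB_PASSWORD
        script:
          - ./configure.sh && ./compile.sh  # build scripts from the repository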
Is it possible to pass a Docker image built in an earlier job to a later job in CircleCI?
For example:
jobs:
  build:
    steps:
      - checkout
      # build image
  deploy:
    steps:
      # deploy the earlier image
I can't see how I can access the image without rebuilding it.
Each job can run on a different host, so to share the image you need to push it to a registry from the job that builds it.
To reference the image that was pushed, you'll need an identifier that is known ahead of time. A good example of this is the CIRCLE_SHA1 environment variable. You can use this variable as the image tag:
jobs:
  build:
    machine: true
    steps:
      ...
      - run: |
          docker build -t repo/app:$CIRCLE_SHA1 .
          docker push repo/app:$CIRCLE_SHA1
  test:
    docker:
      - image: repo/app:$CIRCLE_SHA1
    steps:
      ...
I believe you can achieve this by persisting the image to a workspace and then attaching the workspace when you want to deploy it. See CircleCI's workspace documentation here: https://circleci.com/docs/workspaces
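A minimal sketch of that approach, with an illustrative repo/app:latest tag: docker save turns the image into a tarball that the workspace can carry between jobs, and docker load restores it in the later job:
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: |
          docker build -t repo/app:latest .
          # serialize the image so it can be persisted as a plain file
          docker save repo/app:latest -o image.tar
      - persist_to_workspace:
          root: .
          paths:
            - image.tar
  deploy:
    machine: true
    steps:
      - attach_workspace:
          at: .
      - run: |
          # restore the image built in the earlier job
          docker load -i image.tar
          docker push repo/app:latest  # assumes a prior docker login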