How can I run a command in (i.e. `docker run`) an image that exists only in the local build cache in Docker (in GitHub Actions) and was not pushed to a registry?
The full example is here:
https://github.com/geoHeil/containerfun
Dockerfile:
FROM ubuntu:latest as builder
RUN echo hello >> asdf.txt
FROM builder as app
RUN cat asdf.txt
ci.yaml GHA workflow:
name: ci
on:
  push:
    branches:
      - main
jobs:
  testing:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: cache base builder
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ghcr.io/geoheil/containerfun-cache:latest
          target: builder
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: build app image (not pushed)
        run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest .
      - name: run some command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest ls /
      - name: run some other command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
When using the pushed image:
docker run ghcr.io/geoheil/containerfun-cache:latest cat /asdf.txt
it works just fine. But the non-pushed (cache-only) image:
docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
fails with:
Unable to find image 'ghcr.io/geoheil/containerfun-app:latest' locally
Why does it fail? Shouldn't the image at least reside in the local build cache?
edit
Obviously:
- name: fooasdf
  #run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest --build-arg BUILDKIT_INLINE_CACHE=1 .
  uses: docker/build-push-action@v3
  with:
    push: true
    tags: ghcr.io/geoheil/containerfun-app:latest
    target: app
    cache-from: ghcr.io/geoheil/containerfun-cache:latest
is a potential workaround, and docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt now works fine. However:
- this uses the registry, and not type=gha, as the cache
- it requires pushing the internal builder image to the registry (which I do not want)
- it requires pulling the image from the registry in the run step. I would expect to be able to simply run the already existing local image (which was built in the step before).
It's failing because you built the image with buildx, so it is available only inside the buildx (BuildKit) context. If you need to use the image in the docker context, you have to load it into the Docker engine by adding the --load flag when you build with buildx.
See https://docs.docker.com/engine/reference/commandline/buildx_build/#load
So change the step to something like this
- name: build app image (not pushed)
run: docker buildx build --cache-from type=gha --target app --load --tag ghcr.io/geoheil/containerfun-app:latest .
NOTE: The --load parameter doesn't support multi-arch builds at the moment.
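If you would rather keep using docker/build-push-action for this step as well, the action has a load input that does the same thing as --load; a minimal sketch (the step name is illustrative, the tag and target match the workflow above):

```yaml
- name: build app image (not pushed, but loaded into the Docker engine)
  uses: docker/build-push-action@v3
  with:
    push: false
    load: true    # equivalent to buildx's --load: makes the image visible to `docker run`
    tags: ghcr.io/geoheil/containerfun-app:latest
    target: app
    cache-from: type=gha
```

The same limitation applies here: load can only export a single-platform image.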
Related
I am trying to get Docker image layer caching working for my GitHub environment. One quirk of my build system is that I (indirectly) call docker build twice. Unfortunately, when I set up gha caching, this doesn't significantly speed up my build, because the second call to docker build pulls from the gha cache instead of the local cache. It looks like you can only push to a single cache, so I cannot use a local and a remote cache. Are there any workarounds to this issue?
GitHub Actions file:
name: Build
on:
  push:
    branches: [ "**" ]
jobs:
  build_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Needed for docker layer caching
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      # Needed for docker layer caching
      - name: Expose GitHub Runtime
        uses: crazy-max/ghaction-github-runtime@v2
      - name: Quality Checks
        run: make run-checks-docker # <- This does a docker build
      - name: Deploy
        run: make deploy # <- This does a docker build
Makefile:
run-checks: docker-build
	docker run $(DOCKER_IMAGE_NAME) pytest

deploy: docker-build
	docker run \
		$(DOCKER_IMAGE_NAME) \
		poetry run python -m my.code.here

docker-build:
	docker buildx create --use --driver=docker-container ; \
	docker buildx build \
		--cache-to type=gha \
		--cache-from type=gha \
		--load \
		--tag $(DOCKER_IMAGE_NAME) \
		--file build.Dockerfile .
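One thing worth checking (a sketch, assuming the builder name local-builder is otherwise unused): the docker-build target above creates a fresh docker-container builder on every invocation, so the second build starts with an empty local BuildKit cache and has to fall back to the gha cache. Creating the builder once and reusing it lets the second build hit the first build's local cache:

```make
docker-build:
	# Create the builder only if it does not exist yet, then reuse it,
	# so its local BuildKit cache survives between the two builds.
	docker buildx inspect local-builder >/dev/null 2>&1 || \
		docker buildx create --name local-builder --driver=docker-container
	docker buildx use local-builder
	docker buildx build \
		--cache-to type=gha \
		--cache-from type=gha \
		--load \
		--tag $(DOCKER_IMAGE_NAME) \
		--file build.Dockerfile .
```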
We have a Java application which uses Maven, Docker and GitHub Actions.
The below snippet is from our Dockerfile.
FROM maven:3.6.3-jdk-8-openj9 AS builder
RUN mkdir /app
WORKDIR /app
ADD . .
RUN mvn clean install
And then we have a deploy.yml for GitHub Actions. The issue is that on GitHub Actions, Maven always downloads the dependencies, then creates a jar, and finally the Docker image is created.
Using the tutorial below, I have tried to implement caching in GitHub Actions.
https://evilmartians.com/chronicles/build-images-on-github-actions-with-docker-layer-caching
The key for the cache in my case is calculated as below:
key: ${{ runner.os }}-buildx-${{ hashFiles('pom.xml') }}
I also made the following changes in the Dockerfile.
FROM maven:3.6.3-jdk-8-openj9 AS builder
RUN mkdir /app
WORKDIR /app
ADD . .
RUN mvn clean dependency:copy-dependencies
ADD . .
RUN mvn install
Still, I do not see any significant reduction in build time. What I am trying to achieve is to have the Maven dependency download as a separate layer in the Docker image, and to cache that layer so it can be re-used in later builds of the final image. If anyone can shed some light on this issue.
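For reference, the layering described here is usually sketched like this (an illustrative sketch, not the Dockerfile from the question; it assumes a single-module project with one pom.xml at the root):

```dockerfile
FROM maven:3.6.3-jdk-8-openj9 AS builder
WORKDIR /app
# Copy only the pom first: this layer, and the dependency download below,
# are reused from cache as long as pom.xml does not change.
COPY pom.xml .
RUN mvn -B dependency:go-offline
# Copy the sources afterwards; only the layers from here on are rebuilt
# when the code changes.
COPY src ./src
RUN mvn -B install
```

The key point is that ADD . . before the dependency step invalidates the cache on every code change, which is why the attempted split above does not help.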
Use BuildKit; it is already part of every Docker Engine (19.03 and later, for sure).
A very nice Medium post: Introducing buildkit.
This is a nice example which you can use: docker cache ci (although with Python).
Regarding the CI environment, GitHub Actions has the fantastic build-push-action.
Example:
name: ci
on:
  push:
    branches:
      - "master"
jobs:
  docker:
    runs-on: ubuntu-20.04
    steps:
      # Check out code
      - name: Checkout
        uses: actions/checkout@v2
      # This is a separate action that sets up the buildx runner
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      # So now you can use Actions' own caching!
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # And make it available for the builds
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: false
          tags: user/app:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
I have a GitHub workflow that builds a Docker image, installs dependencies using requirements.txt, and pushes it to AWS ECR. When I check it locally everything works fine, but when the GitHub workflow runs, it is not able to access the requirements.txt file and shows the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Below is my simple Dockerfile:
FROM amazon/aws-lambda-python:3.9
COPY . ${LAMBDA_TASK_ROOT}
RUN pip3 install scipy
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "api.handler" ]
Here is the CI/CD YAML file:
name: Deploy to ECR
on:
  push:
    branches: [ metrics_handling ]
jobs:
  build:
    name: Build Image
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Build, Tag, and Push image to Amazon ECR
        id: tag
        run: |
          aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${accountid}.dkr.ecr.${region}.amazonaws.com
          docker rmi --force ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
          docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
          docker push ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
        env:
          accountid: ${{ secrets.ACCOUNTID }}
          region: ${{ secrets.REGION }}
          ecr_repository: ${{ secrets.ECR_REPOSITORY }}
Below is the structure of my directory. The requirements.txt file is inside the API directory, together with all the related code that is needed to build and run the image.
Based upon the question's comments, the Python requirements.txt file is located in the API directory. This command specifies the Dockerfile via a path in the API directory, but builds the container with the current directory as the build context:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
The correct approach is to build the container in the API directory:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest API --no-cache
Notice the change from . to API and the removal of the Dockerfile location -f API/Dockerfile (when the context is API, API/Dockerfile is the default Dockerfile path).
I have a YAML file in GitHub Actions and I have successfully built a Docker image in it. I want to push it to Docker Hub, but I am getting the error below:
Run docker push ***/vampi_docker:latest
docker push ***/vampi_docker:latest
shell: /usr/bin/bash -e {0}
An image does not exist locally with the tag: ***/vampi_docker
The push refers to repository [docker.io/***/vampi_docker]
Error: Process completed with exit code 1.
Here is the yml file
name: vampi_docker
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker login
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USER }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
          repository: test/vampi_docker:latest
          tags: latest, ${{ secrets.DOCKER_TOKEN }}
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Build the Vampi Docker image
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          docker build . --file Dockerfile --tag vampi_docker:latest
      - name: List images
        run: docker images
      - name: Docker Push
        run: docker push ${{ secrets.DOCKER_USER }}/vampi_docker:latest
Please let me know where I am wrong and what I have missed.
Based on the error shown, change this:
docker build . --file Dockerfile --tag vampi_docker:latest
to:
docker build . --file Dockerfile --tag test/vampi_docker:latest
And run again.
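The underlying rule is that the tag you build must match the repository path you push. Since the push step uses the DOCKER_USER secret, a sketch that keeps build and push consistent (assuming that secret holds the Docker Hub namespace being pushed to):

```yaml
- name: Build the Vampi Docker image
  run: docker build . --file Dockerfile --tag ${{ secrets.DOCKER_USER }}/vampi_docker:latest
- name: Docker Push
  run: docker push ${{ secrets.DOCKER_USER }}/vampi_docker:latest
```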
I want to run some NPM scripts, create a Docker image, and publish it to Docker Hub.
I get this error when trying to generate the image. It seems the second job doesn't see the build directory.
COPY failed: file not found in build context or excluded by .dockerignore: stat build/: file does not exist
Dockerfile
FROM httpd:2.4-alpine
COPY ./build/ /usr/local/apache2/htdocs/myapp/
EXPOSE 80
This is my workflow:
name: CD
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: App build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    needs: build
    steps:
      - uses: actions/checkout@v2
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp .
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest
Project structure
|   Dockerfile
|   package.json
|   README.md
|   webpack.config.js
+---.github
|   \---workflows
|           deploy.yml
+---build
+---src
Update: I changed my workflow to ls the whole GITHUB_WORKSPACE.
The build dir is actually missing (the other files are there). Yet the build process (the first job) ends without errors, and if I ls -R in the first job, the build dir is there. It is only missing in the second job.
It seems the state of the workspace at the end of the first job is not available to the second job.
It seems that for this you need actions/upload-artifact and actions/download-artifact:
name: CD
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: App build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
      - name: LS
        run: ls -R
      - name: Temporarily save webpack artifact
        uses: actions/upload-artifact@v2
        with:
          name: webpack-artifact
          path: build
          retention-days: 1
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    needs: build
    steps:
      ## Build and deploy Docker images to DockerHub
      - uses: actions/checkout@v2
      - name: Retrieve built package
        uses: actions/download-artifact@v2
        with:
          name: webpack-artifact
          path: build
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp ./
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest
Two jobs in GitHub Actions run on two separate machines, so the second job cannot see the first one's workspace. The solution is to put them into one job.
name: CD
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp .
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest