Cannot push Docker image to Docker Hub from GitHub Actions

I have a YAML file in GitHub Actions. I have successfully built the Docker image in it, and I want to push it to Docker Hub, but I am getting the error below:
Run docker push ***/vampi_docker:latest
docker push ***/vampi_docker:latest
shell: /usr/bin/bash -e {0}
The push refers to repository [docker.io/***/vampi_docker]
An image does not exist locally with the tag: ***/vampi_docker
Error: Process completed with exit code 1.
Here is the YAML file:
name: vampi_docker
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker login
        env:
          DOCKER_USER: ${{secrets.DOCKER_USER}}
          DOCKER_PASSWORD: ${{secrets.DOCKER_PASSWORD}}
          repository: test/vampi_docker:latest
          tags: latest, ${{ secrets.DOCKER_TOKEN }}
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Build the Vampi Docker image
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          docker build . --file Dockerfile --tag vampi_docker:latest
      - name: List images
        run: docker images
      - name: Docker Push
        run: docker push ${{secrets.DOCKER_USER}}/vampi_docker:latest
Please let me know where I'm wrong and what I missed.

Based on the error shown, the image is built with the tag vampi_docker:latest but pushed as ***/vampi_docker:latest, so no local image matches the name being pushed.
Change this:
docker build . --file Dockerfile --tag vampi_docker:latest
to:
docker build . --file Dockerfile --tag ${{secrets.DOCKER_USER}}/vampi_docker:latest
so the build tag matches the name used in the Docker Push step, and run again.
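As a side note, the same login-build-push flow can be written with the official Docker actions, which keep the tag in one place. This is a minimal sketch, assuming the same DOCKER_USER and DOCKER_PASSWORD secrets; the action versions shown are only examples:
- uses: actions/checkout@v2
- name: Login to Docker Hub
  uses: docker/login-action@v2
  with:
    username: ${{ secrets.DOCKER_USER }}
    password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push the Vampi Docker image
  uses: docker/build-push-action@v3
  with:
    context: .
    file: Dockerfile
    push: true
    # the tag must match the repository being pushed to
    tags: ${{ secrets.DOCKER_USER }}/vampi_docker:latest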

Related

Using a local and remote cache for Docker image layers in GitHub Actions

I am trying to get Docker image-layer caching working for my GitHub environment.
One quirk of my build system is that I (indirectly) call docker build twice.
Unfortunately, when I set up GHA caching, this doesn't significantly speed up my
build, because the second call to docker build pulls from the GHA cache
instead of the local cache.
It looks like you can only push to a single cache,
so I cannot use a local and a remote cache. Are there any workarounds to this issue?
GitHub Actions file:
name: Build
on:
  push:
    branches: [ "**" ]
jobs:
  build_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Needed for docker layer caching
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      # Needed for docker layer caching
      - name: Expose GitHub Runtime
        uses: crazy-max/ghaction-github-runtime@v2
      - name: Quality Checks
        run: make run-checks-docker # <- This does a docker build
      - name: Deploy
        run: make deploy # <- This does a docker build
Makefile:
run-checks: docker-build
	docker run $(DOCKER_IMAGE_NAME) pytest

deploy: docker-build
	docker run \
		$(DOCKER_IMAGE_NAME) \
		poetry run python -m my.code.here

docker-build:
	docker buildx create --use --driver=docker-container ; \
	docker buildx build \
		--cache-to type=gha \
		--cache-from type=gha \
		--load \
		--tag $(DOCKER_IMAGE_NAME) \
		--file build.Dockerfile .
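One possible workaround, sketched here and untested: each call to docker buildx create --use spins up a fresh builder with an empty local cache, so reusing a single named builder across both make invocations lets the second build hit the first build's local BuildKit cache before falling back to GHA. The builder name mybuilder is illustrative:

docker-build:
	# Reuse one builder across invocations; create it only if it does not exist yet
	docker buildx create --name mybuilder --driver=docker-container --use || docker buildx use mybuilder ; \
	docker buildx build \
		--cache-to type=gha \
		--cache-from type=gha \
		--load \
		--tag $(DOCKER_IMAGE_NAME) \
		--file build.Dockerfile .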

docker run image from (gha or local) cache

How can I execute a command (i.e., docker run) in an image which is only part of the (local) build cache in Docker (in GHA) and was not pushed to a registry?
The full example is here:
https://github.com/geoHeil/containerfun
Dockerfile:
FROM ubuntu:latest as builder
RUN echo hello >> asdf.txt
FROM builder as app
RUN cat asdf.txt
ci.yaml GHA workflow:
name: ci
on:
  push:
    branches:
      - main
jobs:
  testing:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: cache base builder
        uses: docker/build-push-action@v3
        with:
          push: True
          tags: ghcr.io/geoheil/containerfun-cache:latest
          target: builder
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: build app image (not pushed)
        run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest .
      - name: run some command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest ls /
      - name: run some other command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
When using the pushed image:
docker run ghcr.io/geoheil/containerfun-cache:latest cat /asdf.txt
it works just fine. But running the non-pushed (cache-only) image:
docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
fails with:
Unable to find image 'ghcr.io/geoheil/containerfun-app:latest' locally
Why does it fail? Shouldn't the image at least reside in the local build cache?
Edit: obviously, the following
- name: fooasdf
  #run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest --build-arg BUILDKIT_INLINE_CACHE=1 .
  uses: docker/build-push-action@v3
  with:
    push: True
    tags: ghcr.io/geoheil/containerfun-app:latest
    target: app
    cache-from: ghcr.io/geoheil/containerfun-cache:latest
is a potential workaround, and docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt now works fine. However:
- this uses the registry, not type=gha, as the cache
- it requires pushing the internal builder image to the registry (which I do not want)
- it requires pulling the image from the registry in the run step. I would expect to be able to simply run the already existing local image (which was built in the step before).
It's failing because the image was built with buildx, so it is available only inside the buildx (BuildKit) context. If you need to use the image in the regular docker context, you have to load it into the Docker engine by passing --load when you build with buildx.
See https://docs.docker.com/engine/reference/commandline/buildx_build/#load
So change the step to something like this:
- name: build app image (not pushed)
  run: docker buildx build --cache-from type=gha --target app --load --tag ghcr.io/geoheil/containerfun-app:latest .
NOTE: the --load parameter doesn't support multi-arch builds at the moment.
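If you prefer to stay with docker/build-push-action, it exposes the same behavior as a load input. A minimal sketch, assuming the workflow above:
- name: build app image (not pushed)
  uses: docker/build-push-action@v3
  with:
    push: false
    load: true
    target: app
    tags: ghcr.io/geoheil/containerfun-app:latest
    cache-from: type=gha
With load: true, the built image is exported into the runner's Docker engine, so a later docker run step can find it locally.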

GitHub Actions fails to build a docker image for web app

I have a few queries around this.
My docker build is failing due to a path issue. It works when I execute it on my local PC. What should the path be here?
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v3
      - name: Node
        uses: actions/setup-node@v3
        with:
          node-version: 14.15
      - name: Dependencies
        run: npm install --legacy-peer-deps
      - name: Build
        run: npm run build-prod
      - name: Docker Login
        run: docker login -u $USERNAME -p $PWD
      - name: Build
        run: docker build . --tag $REPO:latest
      - name: Docker Push
        run: docker push $REPO:latest
Error:
> [3/3] COPY /dist/app-name /usr/share/nginx/html:
------
error: failed to solve: failed to compute cache key: failed to walk /var/lib/docker/tmp/buildkit-mount67345738657/dist: lstat /var/lib/docker/tmp/buildkit-mount67345738657/dist: no such file or directory
Dockerfile
FROM nginx:1.17.1-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY /dist/app-name /usr/share/nginx/html
How do I get the tag version when a new release is created, so the build runs with two tags: latest and the {tag} created for the release? This is to keep backups of old tags in Docker. What changes would the workflow above need to produce the two builds?
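A common pattern, sketched here with the $REPO variable from the workflow above: derive the tag from GITHUB_REF, which GitHub sets to refs/tags/<tag> for tag-triggered builds, and apply both tags to the same build:
- name: Build
  run: |
    TAG=${GITHUB_REF#refs/tags/}  # e.g. refs/tags/v1.2.3 -> v1.2.3
    docker build . --tag $REPO:latest --tag $REPO:$TAG
- name: Docker Push
  run: |
    TAG=${GITHUB_REF#refs/tags/}
    docker push $REPO:latest
    docker push $REPO:$TAG
Both tags point at the same image ID, so the second push uploads only the manifest, not the layers again.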

GitHub workflow not getting the requirements.txt file while building docker image

I have a GitHub workflow that builds the Docker image, installs dependencies using requirements.txt, and pushes to AWS ECR. When I check it locally everything works fine, but when the GitHub workflow runs it is not able to access the requirements.txt file and shows the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Below is my simple Dockerfile:
FROM amazon/aws-lambda-python:3.9
COPY . ${LAMBDA_TASK_ROOT}
RUN pip3 install scipy
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "api.handler" ]
Here is the CI/CD YAML file:
name: Deploy to ECR
on:
  push:
    branches: [ metrics_handling ]
jobs:
  build:
    name: Build Image
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Build, Tag, and Push image to Amazon ECR
        id: tag
        run: |
          aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${accountid}.dkr.ecr.${region}.amazonaws.com
          docker rmi --force ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
          docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
          docker push ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
        env:
          accountid: ${{ secrets.ACCOUNTID }}
          region: ${{ secrets.REGION }}
          ecr_repository: ${{ secrets.ECR_REPOSITORY }}
The requirements.txt file is inside the API directory, with all the related code that is needed to build and run the image.
Based upon the question's comments, the Python requirements.txt file is located in the API directory. This command specifies the Dockerfile using a path in the API directory, but builds the container with the current directory as the build context:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
The correct approach is to build the container with the API directory as the build context:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest API --no-cache
Notice the change of the build context from . to API and the removal of the explicit Dockerfile location -f API/Dockerfile: docker build looks for a file named Dockerfile at the root of the build context by default, and COPY paths in the Dockerfile are resolved relative to that context.
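For completeness, a sketch of the full run block with only the build command changed; everything else from the question's workflow stays as-is:
run: |
  aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${accountid}.dkr.ecr.${region}.amazonaws.com
  docker rmi --force ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
  docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest API --no-cache
  docker push ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest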

Docker build fails in GH Actions

I want to run some NPM scripts, create a Docker image, and publish it to Docker Hub.
I get this error when trying to generate the image; it seems the second job doesn't see the build directory.
COPY failed: file not found in build context or excluded by .dockerignore: stat build/: file does not exist
Dockerfile
FROM httpd:2.4-alpine
COPY ./build/ /usr/local/apache2/htdocs/myapp/
EXPOSE 80
This is my workflow:
name: CD
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: App build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    needs: build
    steps:
      - uses: actions/checkout@v2
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp .
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest
Project structure
| Dockerfile
| package.json
| README.md
| webpack.config.js
+---.github
| \---workflows
| deploy.yml
+---build
+---src
Update: I changed my workflow to ls the whole GITHUB_WORKSPACE.
The build dir is actually missing (the other files are there). Yet the build process (the first job) ends without errors, and if I ls -R in the first job the build dir is there. It is missing in the second job.
It seems the state of the workspace at the end of the first job is not available to the second job.
It seems that for this you need actions/upload-artifact and actions/download-artifact:
name: CD
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: App build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
      - name: LS
        run: ls -R
      - name: Temporarily save webpack artifact
        uses: actions/upload-artifact@v2
        with:
          name: webpack-artifact
          path: build
          retention-days: 1
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    needs: build
    steps:
      ## Build and deploy Docker images to DockerHub
      - uses: actions/checkout@v2
      - name: Retrieve built package
        uses: actions/download-artifact@v2
        with:
          name: webpack-artifact
          path: build
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp ./
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest
Two jobs in GitHub Actions run on two separate machines, so the second job cannot see the first one's workspace. The solution is to put the steps into one job:
name: CD
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    name: Docker image in DockerHub repository
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Npm install
        run: npm install
      - name: Npm build
        run: npm run build
      - name: LS
        run: ls -R
      - name: Login to dockerhub
        run: docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build Docker image
        run: docker build -f ./Dockerfile -t myaccount/myapp .
      - name: Push Docker image to DockerHub
        run: docker push myaccount/myapp:latest
