Run docker build - Can't find Dockerfile - docker

I am trying to use GitHub Actions to build my project with Docker. I have looked at this question: How do I specify the dockerfile location in my github action?
What I want is for the action to use the Dockerfile that is in my project.
My repository is structured like this:
API
Docker
Service
Data
Test
However, my Dockerfile is in the API folder.
I have tried this:
name: Docker Image CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image
        run: docker build ./API/ --file Dockerfile --tag my-image-name:$(date +%s)
and I have also tried:
docker build . --file Dockerfile --tag my-image-name:$(date +%s)

Try this with "./" in front of the Dockerfile name:
docker build . --file ./Dockerfile --tag my-image-name:$(date +%s)
UPDATE:
After looking at the documentation on docker.com, I found a solution using Docker Compose. Just use:
docker-compose up
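
For reference, a plain docker build can also work here by pointing both the build context and the -f flag at the API folder (a sketch based on the layout above; the image name is just the example from the question):

docker build ./API --file ./API/Dockerfile --tag my-image-name:$(date +%s)

The --file path is resolved relative to the directory the command runs in, not relative to the build context, which is why --file Dockerfile alone fails when the file actually sits in API/.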

Related

docker run image from (gha or local) cache

How can I execute a command (i.e. run) an image which is only part of the (local) build cache in docker (in GHA) and was not pushed to a registry?
The full example here:
https://github.com/geoHeil/containerfun
Dockerfile:
FROM ubuntu:latest as builder
RUN echo hello >> asdf.txt
FROM builder as app
RUN cat asdf.txt
ci.yaml GHA workflow:
name: ci

on:
  push:
    branches:
      - main

jobs:
  testing:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: cache base builder
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ghcr.io/geoheil/containerfun-cache:latest
          target: builder
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: build app image (not pushed)
        run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest .
      - name: run some command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest ls /
      - name: run some other command in image
        run: docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
When using the pushed image:
docker run ghcr.io/geoheil/containerfun-cache:latest cat /asdf.txt
it works just fine. When using the non-pushed (cache-only) image:
docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt
it fails with:
Unable to find image 'ghcr.io/geoheil/containerfun-app:latest' locally
Why does it fail? Shouldn't the image at least reside in the local build cache?
edit
Obviously, the following:
      - name: fooasdf
        #run: docker buildx build --cache-from type=gha --target app --tag ghcr.io/geoheil/containerfun-app:latest --build-arg BUILDKIT_INLINE_CACHE=1 .
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ghcr.io/geoheil/containerfun-app:latest
          target: app
          cache-from: ghcr.io/geoheil/containerfun-cache:latest
is a potential workaround, and docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt now works fine. However:
- it uses the registry, not type=gha, as the cache
- it requires pushing the internal builder image to the registry (which I do not want)
- it requires pulling the image from the registry in the run step; I would expect to be able to simply run the already existing local image (which was built in the step before)
It's failing because you built the image with buildx, so it is available only inside the buildx context. If you want to use the image in the regular docker context, you have to load it into the docker side by passing --load when you build with buildx.
See https://docs.docker.com/engine/reference/commandline/buildx_build/#load
So change the step to something like this
      - name: build app image (not pushed)
        run: docker buildx build --cache-from type=gha --target app --load --tag ghcr.io/geoheil/containerfun-app:latest .
NOTE: The --load parameter doesn't support multi-arch builds at the moment.
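
If you prefer to stay with docker/build-push-action rather than a raw buildx call, the same behaviour is available through the action's load input. A sketch, reusing the tags from the workflow above:

      - name: build app image (not pushed, loaded into docker)
        uses: docker/build-push-action@v3
        with:
          load: true   # like buildx --load: makes the image visible to docker run
          target: app
          tags: ghcr.io/geoheil/containerfun-app:latest
          cache-from: type=gha

After this step, docker run ghcr.io/geoheil/containerfun-app:latest cat /asdf.txt should find the image locally.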

Create nginx docker image from npm web app with GitHub Actions

I'm trying to create a docker image using GitHub Actions from a static web application built with npm. However, when the image is built, the dist folder is not copied into it as expected.
This is the dockerfile:
FROM nginx:1.21.6-alpine
COPY dist /usr/share/nginx/html
And this is the action:
name: Deploy

on:
  push:
    tags:
      - v*

jobs:
  build-homolog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup
        uses: actions/setup-node@v3
        with:
          node-version: '16'
      - name: Build
        env:
          NODE_ENV: homolog
        run: npm install; npm run build; docker build -t my-image:1.0.0 .
The result is a working nginx, but without content; it just shows its default page. When I run the npm build and the docker build locally on my machine, it works as expected. I think there is a problem with the directory structure on the GitHub Actions machine, but I can't seem to understand it.
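
One way to narrow this down (an assumption about the cause, not a confirmed fix) is to list the build output right before the docker build, so the logs show whether dist exists on the runner and what it contains, for example in case NODE_ENV=homolog makes the build write to a different output directory:

      - name: Build
        env:
          NODE_ENV: homolog
        run: |
          npm install
          npm run build
          ls -la dist        # verify the build output actually exists here
          docker build -t my-image:1.0.0 .

If the ls fails or the folder is empty, the COPY dist instruction has nothing to pick up, which would explain the default nginx page.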

Github workflow not getting the requirements.txt file while building docker image

I have a github workflow that builds the docker image, installs dependencies using requirements.txt, and pushes to AWS ECR. When I check it locally everything works fine, but when the github workflow runs it cannot access the requirements.txt file and shows the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Below is my simple Dockerfile:
FROM amazon/aws-lambda-python:3.9
COPY . ${LAMBDA_TASK_ROOT}
RUN pip3 install scipy
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "api.handler" ]
Here is the CI/CD yaml file:
name: Deploy to ECR

on:
  push:
    branches: [ metrics_handling ]

jobs:
  build:
    name: Build Image
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Build, Tag, and Push image to Amazon ECR
        id: tag
        run: |
          aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${accountid}.dkr.ecr.${region}.amazonaws.com
          docker rmi --force ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
          docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
          docker push ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
        env:
          accountid: ${{ secrets.ACCOUNTID }}
          region: ${{ secrets.REGION }}
          ecr_repository: ${{ secrets.ECR_REPOSITORY }}
The requirements.txt file is inside the API directory, along with all the related code needed to build and run the image.
Based upon the question's comments, the Python requirements.txt file is located in the API directory. This command specifies the Dockerfile using a path in the API directory, but builds the container from the current directory:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
The correct approach is to build the container in the API directory:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest API --no-cache
Notice the change from . to API and the removal of the -f API/Dockerfile option. By default Docker looks for a file named Dockerfile at the root of the build context, and COPY . now copies the contents of API, so requirements.txt ends up where pip expects it.
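
Alternatively, if the build context has to stay at the repository root (for example because files outside API/ are needed in the image), you can keep -f API/Dockerfile and instead make the COPY path explicit in the Dockerfile. A sketch under that assumption:

FROM amazon/aws-lambda-python:3.9
# COPY paths are resolved relative to the build context (the repo root here),
# so copy the API directory's contents into the task root explicitly.
COPY API/ ${LAMBDA_TASK_ROOT}
RUN pip3 install scipy
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "api.handler" ]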

Cannot push docker image into docker hub from Github Action

I have a yaml file in GitHub Actions, and I have successfully built a docker image in it. I want to push it to Docker Hub, but I am getting the error below:
Run docker push ***/vampi_docker:latest
docker push ***/vampi_docker:latest
shell: /usr/bin/bash -e {0}
An image does not exist locally with the tag: ***/vampi_docker
The push refers to repository [docker.io/***/vampi_docker]
Error: Process completed with exit code 1.
Here is the yml file
name: vampi_docker

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker login
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USER }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
          repository: test/vampi_docker:latest
          tags: latest, ${{ secrets.DOCKER_TOKEN }}
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: Build the Vampi Docker image
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          docker build . --file Dockerfile --tag vampi_docker:latest
      - name: List images
        run: docker images
      - name: Docker Push
        run: docker push ${{ secrets.DOCKER_USER }}/vampi_docker:latest
Please let me know where I am wrong and what I missed.
Based on the error shown, change this:
docker build . --file Dockerfile --tag vampi_docker:latest
to:
docker build . --file Dockerfile --tag test/vampi_docker:latest
And run again.
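
More generally, the tag you build must exactly match the tag you push. Since the push step uses ${{ secrets.DOCKER_USER }}/vampi_docker:latest, a consistent build step (a sketch; it assumes DOCKER_USER is the Docker Hub namespace you intend to push to) would be:

      - name: Build the Vampi Docker image
        run: docker build . --file Dockerfile --tag ${{ secrets.DOCKER_USER }}/vampi_docker:latest

With matching tags, docker push no longer reports that the image does not exist locally.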

github actions: run multiple jobs in the same docker

I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The GitHub Actions yml file is shown below. There are 2 jobs: job0 builds a docker image with Dockerfile0 and job1 builds one with Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push

jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build a docker image in the first job, and then have multiple later jobs execute commands inside the image built by that first job? That way I wouldn't have to keep multiple Dockerfiles and could save some image build time.
It would be better to build my docker image locally from the Dockerfile, so I hope to avoid pulling a container from Docker Hub.
runs-for-docker-actions looks relevant, but I have trouble finding an example that deploys the action locally (without publishing).
It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
- Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
- In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
This Dockerfile can be executed with any argument which will be passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
jobname:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Build the image
run: docker build --tag job .
- name: Test 1
run: docker run --rm -it job arg1
- name: Test 2
run: docker run --rm -it job arg2
If you insist on separating these into different jobs, as far as I understand it, your easiest option would still be to rebuild the image (still using a single Dockerfile), since sharing a docker image built in one job with another job is a more complicated task that I would recommend avoiding.
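
For completeness, the more complicated route alluded to above would be saving the image in one job and loading it in another via an artifact. A minimal sketch, assuming the image tag job from the example above:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker build --tag job .
      - run: docker save job -o job.tar          # serialize the image to a tarball
      - uses: actions/upload-artifact@v3
        with:
          name: image
          path: job.tar
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: image
      - run: docker load -i job.tar              # restore the image in this job
      - run: docker run --rm job arg1

This works, but uploading and downloading the image tarball can easily be slower than simply rebuilding from a shared Dockerfile, which is why rebuilding is the recommendation here.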
