GitHub Action ignoring Docker container's ENTRYPOINT - docker

I am working on a Flask application and setting up a GitHub pipeline. My Dockerfile has an entrypoint that runs a couple of commands to upgrade the DB and start gunicorn.
This works perfectly fine when running locally, but when deploying through GitHub Actions it just ignores the entrypoint and does not run those commands.
Here is my Dockerfile -
FROM python:3.10-slim
WORKDIR /opt/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /opt/app/requirements.txt
RUN chmod +x /opt/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /opt/app/
RUN chmod +x /opt/app/docker-entrypoint.sh
ENTRYPOINT [ "/opt/app/docker-entrypoint.sh" ]
Docker entrypoint script (docker-entrypoint.sh):
#! /bin/sh
echo "*********Upgrading database************"
flask db upgrade
echo "**************Statring gunicorn server***************"
gunicorn --bind 0.0.0.0:5000 wsgi:app
echo "************* started gunicorn server****************"
Here is my GitHub Actions workflow -
name: CI
on:
  push:
    branches: [master]
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
  deploy:
    needs: build_and_push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@v0.1.3
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: 22
      - name: Start containers
        run: docker-compose up -d --build
      - name: Check running containers
        run: docker ps
I am very new to Dockerfiles, writing shell commands, and GitHub Actions, so please suggest if there is any better approach.
Thanks in advance!

Instead of using ENTRYPOINT, use CMD in your Dockerfile. A CMD can be overridden by the command supplied at run time (for example in docker-compose.yml or on docker run), whereas an ENTRYPOINT cannot be overridden that way.
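A minimal sketch of that change, assuming the same script path as in the Dockerfile above:

# Run the startup script as the default command rather than as the entrypoint.
# Unlike an ENTRYPOINT, this is replaced by any command supplied at run time
# (for example a command: entry in docker-compose.yml).
CMD ["/opt/app/docker-entrypoint.sh"]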

Related

Docker in GitHub Actions: Error response from daemon: Container [container_id] is not running

Locally, I've set Docker to mount the application path: in Docker Desktop I enabled File Sharing (Docker Desktop > Settings > Resources > File Sharing) so Docker can mount my apps. But I can't find how to do the same with GitHub Actions, so I just pulled my updated code into the repository. The relevant docker-compose service is below:
web:
  container_name: oe-web
  build:
    context: ./
    dockerfile: Dockerfile
  depends_on:
    - db
  ports:
    - 8000:8000
  working_dir: /app
  volumes:
    - ./:/app
Workflow:
name: Docker Image CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run docker-compose
        run: docker-compose up -d
      - name: Sleep for 20s
        uses: juliangruber/sleep-action@v1
        with:
          time: 10s
      - name: database migration with docker
        run: docker exec oe-web php artisan migrate
      - name: database seed with docker
        run: docker exec oe-web php artisan db:seed
and the GitHub Action returns an error at the docker exec step:
Run docker exec oe-web php artisan migrate
Error response from daemon: Container 27479cda84fb7f7c393bceeedbb2e2cf5ecd086917390728ac635748ac4411df is not running
Error: Process completed with exit code 1.
you can visit my pull request here:
https://github.com/dhanyn10/open-ecommerce/pull/189
[UPDATED]
Error log:
Run chmod -R 777 ./
  chmod -R 777 ./
  docker-compose ps
  docker-compose logs
  shell: /usr/bin/bash -e {0}

   Name                    Command                  State                    Ports
---------------------------------------------------------------------------------------------------
oe-adminer   entrypoint.sh php -S [::]: ...   Up         0.0.0.0:8080->8080/tcp,:::8080->8080/tcp
oe-db        docker-entrypoint.sh --def ...   Up         3306/tcp, 33060/tcp
oe-web       docker-php-entrypoint /bin ...   Exit 255
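Since oe-web has already exited (Exit 255) by the time docker exec runs, a diagnostic step along these lines can fail early and print the container's logs instead of the bare "not running" error; the step name and loop below are an illustrative sketch, not part of the original workflow:

- name: Wait for oe-web to be running
  run: |
    # Poll the container state for up to 60 seconds before giving up.
    for i in $(seq 1 30); do
      state=$(docker inspect -f '{{.State.Status}}' oe-web 2>/dev/null || echo "missing")
      [ "$state" = "running" ] && exit 0
      sleep 2
    done
    echo "oe-web never reached the running state:"
    docker-compose logs oe-web
    exit 1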

SSL configuration for DigitalOcean docker image deployed using GitHub Actions

I have deployed the app to a DigitalOcean (DO) droplet using the Docker container registry. The issue I have now is integrating a Let's Encrypt (SSL) certificate for that Docker image. I don't have any experience with this, but I tried some tutorials like this SSL one. However, I can't get https:// working for my domain and subdomain (https://dev.example.com).
Folder structure of my app is below (only the important paths are included):
root
+-- .github/workflows
    -- main.yml
+-- app
    -- .next
    -- node_modules
    -- public
    -- src
    -- Dockerfile
+-- nginx-conf
    -- nginx.conf
+-- server
    -- node_modules
    -- src
    -- Dockerfile
nginx-conf is the directory I added to set up SSL via a reverse proxy; that file includes this server block.
server {
    listen 8000 default_server;
    listen [::]:8000 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name example.com dev.example.com;

    location / {
        root /var/www/html;
        try_files $uri /index.tsx;
    }

    location /api/ {
        proxy_pass http://server/api/;
    }
}
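For reference, a sketch of what the HTTPS counterpart of this server block usually looks like once certbot has issued certificates for the domains; the certificate paths are the certbot defaults and the upstream names are assumptions, not taken from the actual setup:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com dev.example.com;

    # Certbot's default certificate locations (assumed).
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Assumed upstream: the Next.js app container listening on port 3000.
        proxy_pass http://app:3000/;
    }

    location /api/ {
        proxy_pass http://server:8000/api/;
    }
}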
app/Dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY . .
RUN npm install
RUN npm run build
EXPOSE 3000
ENV NODE_ENV=test
CMD [ "npm", "start" ]
server/Dockerfile
# Create build
FROM node:alpine as build
WORKDIR /server
COPY package*.json ./
COPY tsconfig.json ./
RUN npm i -g typescript ts-node
RUN npm ci
COPY . .
RUN npm run build
# Run Build
FROM node:alpine as prod
WORKDIR /server
COPY --from=build /server/package*.json ./
COPY --from=build /server/build ./
RUN npm ci --only=production
EXPOSE 8000
CMD [ "npm", "run", "prod" ]
The main.yml file builds the server and client images, then deploys the containers by SSHing into the DO droplet (only sample code is included):
jobs:
  buildClient:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: app
    steps:
      - uses: actions/checkout@v2
      - name: Build client docker image
        run: |
          docker login -u ${{ secrets }} -p ${{ secrets }} registry.digitalocean.com
          docker build -t registry.digitalocean.com/....... .
          docker push registry.digitalocean.com/..........
  deployContainers:
    name: Deploy containers to DO Droplet
    runs-on: ubuntu-latest
    needs: [buildClient, buildServer]
    steps:
      - name: SSH into Droplet
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets }}
          username: ${{ secrets }}
          key: ${{ secrets.SSH }}
          passphrase: ${{ secrets }}
          script: |
            docker login -u ${{ secrets }} -p ${{ secrets }} registry.digitalocean.com
            docker stop app
            docker rm app
            docker image rm registry.digitalocean.com/.......
            docker pull registry.digitalocean.com/..........
            docker run --name app -p 80:3000 -d registry.digitalocean.com/.........
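Note that the deploy script above publishes the app container directly on port 80 and does not start nginx, so the nginx-conf server block is not in the request path yet. Below is a sketch of how an nginx container could sit in front of the app on the droplet and terminate TLS; the network name, mount paths, and the assumption that certificates were issued on the droplet with certbot are illustrative only:

# Assumed sketch: put the app and nginx on a shared user-defined network so
# nginx can reach the app by container name.
docker network create web || true
docker run --name app --network web -d registry.digitalocean.com/.........
# Run nginx with the repo's nginx-conf and the host's Let's Encrypt directory mounted.
docker run --name proxy --network web -d \
  -p 80:80 -p 443:443 \
  -v /root/nginx-conf:/etc/nginx/conf.d:ro \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  nginx:alpine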

COPY failed: no source files were specified with GitHub Actions

Error Description
While running git push (which triggers the workflow), I am getting the following error: COPY failed: no source files were specified
Dockerfile
The Dockerfile is as follows:
# Use node:14 as the build stage
FROM node:14 AS build
# Set the working directory to /app
WORKDIR /app
# Copy the package*.json files
COPY package*.json ./
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install
# Copy the TypeScript config file
COPY tsconfig.json ./
# Copy the public directory
COPY public public/
# Copy the src directory
COPY src src/
# Run the build script
RUN pnpm run build
# Pull nginx
FROM nginx:alpine
# Copy the built output into nginx
COPY --from=build /app/build/ /usr/share/nginx/html
# Expose port 9567
EXPOSE 9567
# Run nginx
CMD ["nginx", "-g", "daemon off;"]
GitHub Actions YAML
dev.yml (GitHub Actions) is shown below:
# This is a basic workflow to help you get started with Actions
name: Deploy Web De
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [main]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "deploy-web-dev"
  deploy-web-dev:
    environment: development
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 6
      - name: Install dependencies
        run: pnpm install
      - name: Build web dev
        run: pnpm run build
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Reset dockerignore
        run: |
          echo "*" > .dockerignore
          echo "!dist" >> .dockerignore
      - name: Build and push images
        env:
          COMMIT_SHA_TAG: development-${{ github.sha }}
          LATEST_DEV_TAG: dev-latest
          PRIVATE_DOCKERHUB_REGISTRY: ${{ secrets.PRIVATE_DOCKERHUB_REGISTRY }}
          PRIVATE_DOCKERHUB_USERNAME: ${{ secrets.PRIVATE_DOCKERHUB_USERNAME }}
          PRIVATE_DOCKERHUB_PASSWORD: ${{ secrets.PRIVATE_DOCKERHUB_PASSWORD }}
        run: |
          docker build . -t cloud-music:$COMMIT_SHA_TAG -t cloud-music:$LATEST_DEV_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$COMMIT_SHA_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$LATEST_DEV_TAG
          docker push cloud-music:$COMMIT_SHA_TAG
          docker push cloud-music:$LATEST_DEV_TAG
          docker login -u $PRIVATE_DOCKERHUB_USERNAME -p $PRIVATE_DOCKERHUB_PASSWORD $PRIVATE_DOCKERHUB_REGISTRY
          docker push $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$COMMIT_SHA_TAG
          docker push $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$LATEST_DEV_TAG
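One detail worth noting: the "Reset dockerignore" step leaves only dist in the build context, while the multi-stage Dockerfile above copies package*.json, tsconfig.json, public/ and src/. If that Dockerfile is the one being built here, the nearly empty build context (3.584 kB in the log below) suggests the ignore rules are excluding the files those COPY instructions need, and the ignore file would have to re-include them. A sketch of such entries (the exact list is an assumption derived from the COPY lines above):

*
!package*.json
!tsconfig.json
!public
!src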
Jobs Log
The jobs log is shown below.
Error Position
The error occurs at line 23 of the log:
 1  Run docker build . -t cloud-music:$COMMIT_SHA_TAG -t cloud-music:$LATEST_DEV_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$COMMIT_SHA_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$LATEST_DEV_TAG
 2    docker build . -t cloud-music:$COMMIT_SHA_TAG -t cloud-music:$LATEST_DEV_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$COMMIT_SHA_TAG -t $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$LATEST_DEV_TAG
 3    docker push cloud-music:$COMMIT_SHA_TAG
 4    docker push cloud-music:$LATEST_DEV_TAG
 5
 6    docker login -u $PRIVATE_DOCKERHUB_USERNAME -p $PRIVATE_DOCKERHUB_PASSWORD $PRIVATE_DOCKERHUB_REGISTRY
 7    docker push $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$COMMIT_SHA_TAG
 8    docker push $PRIVATE_DOCKERHUB_REGISTRY/cloud-music:$LATEST_DEV_TAG
 9  shell: /usr/bin/bash -e {0}
10  env:
11    PNPM_HOME: /home/runner/setup-pnpm/node_modules/.bin
12    COMMIT_SHA_TAG: development-6ba24b062419ef744d2642e2f9eee97dabb9a63e
13    LATEST_DEV_TAG: dev-latest
14    PRIVATE_DOCKERHUB_REGISTRY: ***
15    PRIVATE_DOCKERHUB_USERNAME: ***
16    PRIVATE_DOCKERHUB_PASSWORD: ***
17  Sending build context to Docker daemon 3.584kB
18
19  Step 1/12 : FROM node:14 AS build
20   ---> 903c2c873ea4
21  Step 2/12 : WORKDIR /app
22   ---> Running in f80bdf0901cf
23  COPY failed: no source files were specified
24  Removing intermediate container f80bdf0901cf
25   ---> 3221d5124e85
26  Step 3/12 : COPY package*.json ./
27  Error: Process completed with exit code 1.
Project Structure
Here is my project structure.
(screenshot of the project structure omitted)
Please help me in solving this error.
Thanks in advance!!
I hope you might have solved this issue already; anyhow, my reply is for the reference of others. I faced the same problem and solved it by making the following change in workflow.yml: you need to cd into the directory that contains the Dockerfile and the files it copies before running docker build.
Workflow at the time of the error:
run: docker build . --file Dockerfile --tag nodejs:$(date +%s)
Fix:
run: |
  cd app
  docker build . --file Dockerfile --tag nodejs:$(date +%s)
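Equivalently, the context path can be passed to docker build directly instead of cd-ing into it; a sketch, assuming the same app/ layout as in the fix above:

# Build with app/ as the context; the -f path is resolved from the workflow's
# working directory.
run: docker build app --file app/Dockerfile --tag nodejs:$(date +%s)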

GitHub workflow not getting the requirements.txt file while building docker image

I have a GitHub workflow that builds the Docker image, installs dependencies from requirements.txt, and pushes to AWS ECR. When I check it locally everything works fine, but when the GitHub workflow runs it cannot access the requirements.txt file and shows the following error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Below is my simple Dockerfile:
FROM amazon/aws-lambda-python:3.9
COPY . ${LAMBDA_TASK_ROOT}
RUN pip3 install scipy
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "api.handler" ]
Here is the CI/CD YAML file:
name: Deploy to ECR
on:
  push:
    branches: [ metrics_handling ]
jobs:
  build:
    name: Build Image
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Build, Tag, and Push image to Amazon ECR
        id: tag
        run: |
          aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${accountid}.dkr.ecr.${region}.amazonaws.com
          docker rmi --force ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
          docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
          docker push ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest
        env:
          accountid: ${{ secrets.ACCOUNTID }}
          region: ${{ secrets.REGION }}
          ecr_repository: ${{ secrets.ECR_REPOSITORY }}
Below is the structure of my directory: the requirements.txt file is inside the API directory, along with all the code needed to build and run the image.
Based upon the question's comments, the Python requirements.txt file is located in the API directory. This command specifies the Dockerfile using a path in the API directory, but builds the container with the current directory as the build context:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile . --no-cache
The correct approach is to build the container in the API directory:
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest API --no-cache
Notice the change from . to API and the removal of the Dockerfile location flag -f API/Dockerfile; when the build context is API, the Dockerfile at its root is found automatically.
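If keeping the -f flag is preferred (for example to make the Dockerfile location explicit), the equivalent is to point the context at API while leaving the flag in place; a sketch of that variant:

# Context is API, so requirements.txt is inside the context; -f stays explicit.
docker build --tag ${accountid}.dkr.ecr.${region}.amazonaws.com/${ecr_repository}:latest -f API/Dockerfile API --no-cache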

Dockerfile go build command not getting cached in GitHub Actions

I am using the GitHub Action actions/cache@v2 to cache the Docker layers. Following is the build.yml file:
name: Build
on:
  push:
    branches:
      - '**'
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Get Docker Tags
        id: getDockerTag
        run: |
          echo ::set-output name=image_tag::${{ github.sha }}
          echo "Setting image tag as :: ${{ github.sha }}"
      # Set up buildx runner
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ hashFiles('**/Dockerfile') }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Build docker image 1
        uses: docker/build-push-action@v2
        with:
          push: false
          tags: go-docker-caching:${{ steps.getDockerTag.outputs.image_tag }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache,mode=max
Dockerfile:
FROM golang:latest as builder
# create a working directory
WORKDIR /main
COPY go.mod go.sum ./
RUN ls -a
# Download dependencies
RUN go mod tidy
RUN go mod download
COPY . .
## Build binary
#RUN CGO_ENABLED=0 GOARCH=amd64 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o gin_test
RUN go build -o main
# use a minimal alpine image for deployment
FROM alpine:latest
# add ca-certificates in case you need them
RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
# set working directory
WORKDIR /root
# copy the binary from builder
COPY --from=builder /main .
RUN touch .main.yml
# Specify the PORT
EXPOSE 8080:8080
# run the binary
CMD ["./main"]
Locally, all Docker layers are cached, but on GitHub Actions the following steps are not cached, which makes the docker build take around 3 minutes to download the modules even when nothing has changed:
RUN go mod download
RUN go build -o main
How can I make sure all the steps are cached?
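A commonly used pattern with docker/build-push-action and a local buildx cache is to write the new cache to a separate directory and rotate it after the build, so the cache restored by actions/cache is not overwritten mid-build and does not grow without bound. A sketch adapted to the step names above (an assumption, not verified against this repository):

      - name: Build docker image 1
        uses: docker/build-push-action@v2
        with:
          push: false
          tags: go-docker-caching:${{ steps.getDockerTag.outputs.image_tag }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max
      # Replace the restored cache directory with the freshly written one so the
      # next run restores an up-to-date cache.
      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache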
