I'm trying to finish my first GitHub Action with CI/CD and a Heroku deploy, and I get this error.
Error image:
This is my public repo:
https://github.com/jovicon/the_empire_strikes_back_challenge
Everything is up to date in the "development" branch.
This is my test job (full file):
Note: when I comment out the Pylint step, everything works fine.
test:
  name: Test Docker Image
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout master
      uses: actions/checkout@v1
    - name: Log in to GitHub Packages
      run: echo ${GITHUB_TOKEN} | docker login -u ${GITHUB_ACTOR} --password-stdin docker.pkg.github.com
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    - name: Pull image
      run: |
        docker pull ${{ env.IMAGE }}:latest || true
    - name: Build image
      run: |
        docker build \
          --cache-from ${{ env.IMAGE }}:latest \
          --tag ${{ env.IMAGE }}:latest \
          --file ./backend/Dockerfile.prod \
          "./backend"
    - name: Run container
      run: |
        docker run \
          -d \
          --name fastapi-tdd \
          -e PORT=8765 \
          -e ENVIRONMENT=dev \
          -e DATABASE_TEST_URL=sqlite://sqlite.db \
          -p 5003:8765 \
          ${{ env.IMAGE }}:latest
    - name: Pytest
      run: docker exec fastapi-tdd python -m pytest .
    - name: Pylint
      run: docker exec fastapi-tdd python -m pylint app/
    - name: Black
      run: docker exec fastapi-tdd python -m black . --check
    - name: isort
      run: docker exec fastapi-tdd /bin/sh -c "python -m isort ./*/*.py --check-only"
Here is my Dockerfile.prod too:
# pull official base image
FROM python:3.8.3-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV ENVIRONMENT prod
ENV TESTING 0
# install system dependencies
RUN apt-get update \
    && apt-get -y install netcat gcc postgresql \
    && apt-get clean
# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./dev-requirements.txt .
RUN pip install -r requirements.txt
RUN pip install -r dev-requirements.txt
# add app
COPY . .
RUN chmod 755 $HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run gunicorn
CMD gunicorn --bind 0.0.0.0:$PORT app.main:app -k uvicorn.workers.UvicornWorker
You're setting the $HOME directory permissions to 755 while still running as the default user. chown -R app:app $APP_HOME targets only $APP_HOME, which is just a subdirectory of $HOME.
As a consequence, the app user doesn't have write permission to $HOME, and pylint can't create the directory /home/app/.pylint.d.
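One way to fix it (a sketch, assuming you are happy for the app user to own its whole home directory rather than only $APP_HOME) is to chown $HOME itself before switching users:

# give the app user ownership of the whole home directory (not just
# $APP_HOME) so pylint can create /home/app/.pylint.d
RUN chown -R app:app $HOME
# change to the app user
USER app

Alternatively, pointing pylint at a writable location (for example, setting PYLINTHOME to a directory under $APP_HOME) avoids changing ownership of $HOME at all.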
I have created this Docker image:
FROM ubuntu:22.04 as Base
# Get default packages and some useful tools.
RUN apt-get -qq update && apt-get install -y \
    curl \
    clang \
    build-essential \
    llvm \
    git
# Install Rust using its installer
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
# Add .cargo/bin to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Check cargo is visible
RUN cargo --help
# Update Rust
RUN rustup update stable
# Add WebAssembly target
RUN rustup target add wasm32-unknown-unknown
FROM ubuntu:22.04 as Final
COPY --from=Base /root/.cargo /root/.cargo
COPY --from=Base /root/.rustup /root/.rustup
ENV PATH="/root/.cargo/bin:${PATH}"
When I run it locally:
docker image build -t my_account/cargo:0.1 . -f "Dockerfile rust.v1"
docker container run -it --name cargo-test my_account/cargo:0.1 /bin/bash
and execute the command cargo in the bash console, the command runs and I have the proper output.
The goal is to use this image in a GitHub Action, but there the cargo command (in the try_cargo job) returns an error:
This is the GitHub action YAML:
name: Rust
on:
  push:
    branches: [ "main" ]
env:
  CARGO_TERM_COLOR: always
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build
        run: cargo build --verbose
      - name: Run tests
        run: cargo test --verbose
  try_cargo:
    name: Try to use cargo in custom container
    runs-on: ubuntu-latest
    container:
      image: my_account/cargo:0.1
    steps:
      - uses: actions/checkout@v3
      - name: publish package
        run: |
          echo $(ls / -af)
          echo ---
          echo cargo
          cargo
          echo ---
          echo ls /root/.cargo
          echo $(ls /root/.cargo)
          echo ---
          echo ls /root/.rustup
          echo $(ls /root/.rustup)
          echo ---
          #rustup update stable # not working... cannot download...
          rustup target add wasm32-unknown-unknown
          echo ---
What confuses me the most is the different behaviour when I use a container running the same image.
Obviously I'm missing something...
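A quick way to compare the two environments (a diagnostic sketch, not a fix) is to print the user, HOME, and PATH both locally and inside the try_cargo step, since the runner starts job containers differently from a plain docker run:

# run these both in a local `docker run` shell and as a step in try_cargo
id                      # which user the commands run as
echo "HOME=$HOME"       # GitHub Actions sets a different HOME for job containers
echo "PATH=$PATH"       # check whether /root/.cargo/bin is still on PATH
which cargo || echo "cargo not found on PATH"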
I have a react/django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact. Build_Image runs docker build to build the actual Docker image, and relies on the Node step because it copies the built files into the image.
However, the image build takes a long time if package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the docker build job into multiple parts, so that I can install the apt and pip packages in the Dockerfile while build_node is still running, then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image
services:
  - docker:18.03-dind
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http
build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
    gcc \
    " \
    && RUN_DEPS=" \
    ffmpeg \
    postgresql-client \
    nginx \
    dumb-init \
    " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the gitlab docs on stages and on building docker images with gitlab-ci.
If you have multiple jobs defined within a stage, they will run in parallel. For example, the following pipeline would build the Node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle
build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry
build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry
bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
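As a rough sketch of the bundle stage (hypothetical names: it assumes the base image with the apt/pip layers is pushed as $CI_REGISTRY_IMAGE:base and the Node job exposes its build output as the http artifact, as in your current config), the final Dockerfile can shrink to copying the prebuilt frontend onto the base image:

# Dockerfile.bundle (hypothetical) -- the heavy apt/pip layers already live
# in the base image built by the build-base-image job
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
# copy the frontend artifact produced by the build-node job
COPY http/ /app/http/

The bundle-node-in-image job would then run something like docker build -f Dockerfile.bundle --build-arg BASE_IMAGE=$CI_REGISTRY_IMAGE:base . after downloading the http artifact, so only this thin final layer is rebuilt while the frontend is still compiling.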
We have a project on Bitbucket, jb_common, with the address bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no " > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
RUN export GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR.
I have already added repository variables on Bitbucket with the SSH private and public keys:
https://i.stack.imgur.com/URAsV.png
On my local machine, the Docker image builds successfully using the command:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
https://i.stack.imgur.com/FZuNo.png
But on Bitbucket I get this error:
go: bitbucket.org/compaany/jb_utils#v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user these SSH keys belong to has admin access to both private repos.
While debugging the problem, I added a step (echo $SSH_PRV_KEY) to bitbucket-pipelines.yml to assert that the variables are forwarded into the container on Bitbucket. The result:
https://i.stack.imgur.com/FjRof.png
RESOLVED!
Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
Copy the output into the Bitbucket repository variable for your key.
And I edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
When I build my Dockerfile locally and push, my application runs correctly. However, when I build through GitHub Actions, I get an error that 'flask' is not installed.
It seems that the pip install step does nothing in GitHub Actions - it just shows:
Step 8/13 : RUN pip install --trusted-host pypi.python.org -r /app/requirements.txt
---> Running in 6b0816c1bdc8
Removing intermediate container 6b0816c1bdc8
However, on my local machine I get the full pip install output.
Is there something I am missing with GitHub Actions?
Dockerfile:
FROM python:3.8-alpine
WORKDIR /app
ARG DB_PASSWORD
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
ADD ./requirements.txt /app
ADD ./src /app
RUN cat /app/requirements.txt
RUN pip install -r /app/requirements.txt
ENV DEBUG=false
ENV FLASE_DEBUG=false
ENV TESTING=false
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
Action Step:
- name: Build docker image and push to ECR
  run: /bin/bash $GITHUB_WORKSPACE/scripts/build_and_push.sh
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
    AWS_DEFAULT_REGION: "eu-west-1"
Build Script:
pipenv run pip freeze > requirements.txt
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin {{ ECR Address}}
docker build -t {{ image name }} .
I am trying to build a jenkins docker image from official jenkins git repo:
https://github.com/jenkinsci/docker.
But when I try to run a container from the image using docker run -it -dP jenkins, it exits immediately, and when I check the docker logs, I get the following error:
: invalid option
I read that the error could be because the PID of tini is not 1. I looked at the docs and saw that doing either of the following should solve the issue:
Passing the -s argument to Tini (tini -s -- ...)
Setting the environment variable TINI_SUBREAPER (e.g. export TINI_SUBREAPER=).
But it did not solve anything.
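For reference, those two suggestions correspond to something like this in a Dockerfile (a sketch of the documented options; the file below already sets TINI_SUBREAPER):

# option 1: pass -s to tini so it registers itself as a subreaper
ENTRYPOINT ["/bin/tini", "-s", "--", "/usr/local/bin/jenkins.sh"]
# option 2: set the TINI_SUBREAPER environment variable instead
ENV TINI_SUBREAPER=
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]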
The following is the exact copy of the Dockerfile being built with docker build -t jenkins .:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
ENV TINI_SUBREAPER=
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
    && useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.14.0
ENV TINI_SHA 6c41ec7d33e857d4779f14d9c74924cab0c7973485d2972419a3b7c7620ff5fd
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static-amd64 -o /bin/tini && chmod +x /bin/tini \
    && echo "$TINI_SHA /bin/tini" | sha256sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.60.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=34fde424dde0e050738f5ad1e316d54f741c237bd380bd663a07f96147bb1390
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -k -o /usr/share/jenkins/jenkins.war \
    && echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
The problem was with the Docker version. My Docker version was old. I'm not sure which instruction was not supported, but the new Docker built the Dockerfile.
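If you run into the same symptom, a quick sanity check (a sketch; the convenience script is only one of several ways to upgrade) is to compare the engine version before rebuilding:

# check the installed Docker version
docker --version
# upgrade via Docker's convenience script (one option; on managed hosts
# prefer your distro's package manager)
curl -fsSL https://get.docker.com | sh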