We have a project on Bitbucket, jb_common, at bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no" > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
# use ENV (not `RUN export`) so the value persists for the go get step below
ENV GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
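A side note on this pattern: every RUN creates a layer, so the private key written above survives in the image history even after the final rm -rf /root/.ssh/. A minimal sketch of a BuildKit alternative that never writes the key into a layer (assumes Docker 18.09+; the go get line stands in for the module fetch above):

# syntax=docker/dockerfile:1
FROM golang
WORKDIR /app
COPY . .
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
ENV GOPRIVATE=bitbucket.org/company/
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/"
RUN mkdir -p /root/.ssh && ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
# the key is only mounted for this single step and never stored in the image
RUN --mount=type=ssh go get bitbucket.org/company/jb_utils

Built with DOCKER_BUILDKIT=1 docker build --ssh default=$HOME/.ssh/id_rsa . (or --ssh default to forward a running ssh-agent).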
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10

pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR. I have already added repository variables on Bitbucket with the SSH private and public keys:
https://i.stack.imgur.com/URAsV.png
On my local machine the Docker image builds successfully with:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
https://i.stack.imgur.com/FZuNo.png
But on Bitbucket I get this error:
go: bitbucket.org/company/jb_utils@v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user that owns these SSH keys has admin access to both private repos.
While debugging I added a step to bitbucket-pipelines.yml (echo $SSH_PRV_KEY) to check that the variables are forwarded into the container on Bitbucket. The result:
https://i.stack.imgur.com/FjRof.png
RESOLVED!
Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
Copy the output into the Bitbucket repository variable.
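As a quick sanity check (a sketch; the key path is illustrative), you can confirm the encoded value decodes back to the original key:

base64 -w 0 < ~/docker_key/id_rsa > key.b64              # one line, safe for pipeline variables
base64 --decode < key.b64 | diff - ~/docker_key/id_rsa && echo "round trip OK"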
And I edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10

pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
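An alternative sketch (not what I ended up using): pass the base64 value straight through as the build arg and decode it inside the Dockerfile, so the raw key never lands on the runner's filesystem:

# in bitbucket-pipelines.yml:
#   - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$SSH_PRV_KEY" .
# in the Dockerfile (the Debian-based golang image ships GNU base64):
RUN echo "$ssh_prv_key" | base64 -d > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa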
Related
My main goal is to build and push a Docker image to AWS ECR. I'm trying to build my Dockerfile from a CircleCI config.yml file, but is there a better solution to build and push to ECR?
I checked the docs (https://circleci.com/docs/2.0/ecs-ecr/#build-and-push-the-docker-image-to-aws-ecr) but couldn't get it working; I'm too much of a rookie for that, I guess. Can I build and push in one step, like in the documentation?
My CircleCI config.yml:
version: 2.1
orbs:
  node: circleci/node@4.1.0
  aws-cli: circleci/aws-cli@1.3.1
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  build-app:
    docker:
      - image: "cimg/base:stable"
    steps:
      - checkout
      - setup_remote_docker:
          version: 19.03.13
          docker_layer_caching: true
      - run:
          name: App Build
          command: |
            docker build -t sampleapp:latest .
      - deploy:
          name: push application docker image
          command: |
login="$(aws ecr get-login)" \
${login} \
aws ecr describe-repositories --repository-names sampleapp --region eu-west-1 || aws ecr create-repository --repository-name sampleapp --region eu-west-1 \
docker tag 1.0.0:latest "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0" \
docker push "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0"
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - build-app
My Dockerfile:
FROM node:carbon
# install git
RUN apt-get update && \
    apt-get -y install git
# clone into /app; a plain `cd` inside RUN does not persist across
# instructions, so set the working directory explicitly
RUN git clone https://github.com/GermaVinsmoke/bmi-calculator.git /app
WORKDIR /app
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
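To build and push in one job, here is a minimal sketch using plain aws/docker commands (the repo name, region, and $AWS_ECR_LOGIN_URL are assumptions carried over from the config above; the circleci/aws-ecr orb also ships a prebuilt build-and-push job, so check your orb version's docs for its exact name and parameters):

jobs:
  build-and-push:
    docker:
      - image: "cimg/base:stable"
    steps:
      - checkout
      - setup_remote_docker:
          version: 19.03.13
      - aws-cli/setup   # from the aws-cli orb declared above
      - run:
          name: Build and push to ECR
          command: |
            aws ecr describe-repositories --repository-names sampleapp --region eu-west-1 || \
              aws ecr create-repository --repository-name sampleapp --region eu-west-1
            # get-login-password is the AWS CLI v2 replacement for the deprecated get-login
            aws ecr get-login-password --region eu-west-1 | \
              docker login --username AWS --password-stdin "$AWS_ECR_LOGIN_URL"
            docker build -t "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0" .
            docker push "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0"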
I have a React/Django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact. Build_Image runs docker build to build the actual Docker image, and depends on the node step because it copies the built files into the image.
However, the build process for the image takes a long time whenever package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the Docker build job into multiple parts, so that I can, say, install the apt and pip packages from the Dockerfile while build_node is still running, then finish the Docker build once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image

services:
  - docker:18.03-dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http

build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
&& BUILD_DEPS=" \
gcc \
" \
&& RUN_DEPS=" \
ffmpeg \
postgresql-client \
nginx \
dumb-init \
" \
&& apt-get update && apt-get install -y $BUILD_DEPS \
&& pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
&& apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
&& rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you define multiple jobs within a stage, they run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle

build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry

build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry

bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
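To make that skeleton concrete, a rough sketch against the pipeline above (Dockerfile.base and the image names are hypothetical): the base image bakes in the slow apt/pip layers so it can build while the node job runs, and the bundle job only copies artifacts on top:

build-base-image:
  stage: build
  image: docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Dockerfile.base contains only the slow-moving apt/pip dependency layers
    - docker build -f Dockerfile.base -t $CI_REGISTRY_IMAGE/base:latest .
    - docker push $CI_REGISTRY_IMAGE/base:latest

bundle-node-in-image:
  stage: bundle
  image: docker
  needs: ["build-node", "build-base-image"]  # needs also downloads build-node's artifacts
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # the final Dockerfile starts FROM $CI_REGISTRY_IMAGE/base:latest and only
    # copies in the prebuilt node artifacts
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest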
I use buildx to build a multiplatform Docker image in GitLab CI. But the CI fails while building the image, because it tries to copy xattrs and fails:
> [linux/arm/v7 2/4] RUN set -xe && apk add --no-cache ca-certificates ffmpeg openssl aria2 youtube-dl:
------
Dockerfile:8
--------------------
7 |
8 | >>> RUN set -xe \
9 | >>> && apk add --no-cache ca-certificates \
10 | >>> ffmpeg \
11 | >>> openssl \
12 | >>> aria2 \
13 | >>> youtube-dl
14 |
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c set -xe && apk add --no-cache ca-certificates ffmpeg openssl aria2 youtube-dl]: failed to copy xattrs: failed to set xattr "security.selinux" on /tmp/buildkit-qemu-emulator371955051/dev/.buildkit_qemu_emulator: operation not supported
https://gitlab.com/Lukas1818/docker-youtube-dl-cron/-/jobs/1176558386#L181
I am using the following CI config:
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/

docker-build:
  # Use the docker image with buildx for multiplatform build.
  image: lukas1818/docker-with-buildx:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker buildx create --use
    - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag "$CI_REGISTRY_IMAGE${tag}" .
  # Run this job in a branch where a Dockerfile exists
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
https://gitlab.com/Lukas1818/docker-youtube-dl-cron/-/blob/d12adf7779f7df71de6e9b46aa342e9ff41d5dfb/.gitlab-ci.yml
Dockerfile:
#
# Dockerfile for youtube-dl
#
FROM alpine
MAINTAINER kev <noreply@easypi.pro>
RUN set -xe \
&& apk add --no-cache ca-certificates \
ffmpeg \
openssl \
aria2 \
youtube-dl
# Try to run it so we know it works
RUN youtube-dl --version
WORKDIR /data
ENTRYPOINT ["youtube-dl"]
CMD ["--help"]
On my local machine, building with sudo docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 . works without any issue.
Running the following command before docker buildx create --use fixes the problem:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
See https://github.com/docker/buildx/issues/584#issuecomment-827122004
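In the job above, that means adding one line at the top of the script (a sketch of the fix in place):

script:
  # ... the branch/tag detection block stays unchanged ...
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  - docker buildx create --use
  - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag "$CI_REGISTRY_IMAGE${tag}" .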
I'm trying to finish my first GitHub Actions workflow with CI/CD and a Heroku deploy, and I get this error.
Error image:
This is my public repo:
https://github.com/jovicon/the_empire_strikes_back_challenge
Everything is updated in the "development" branch.
This is my test job (full file):
Note: when I comment out the Pylint step, everything works fine.
test:
  name: Test Docker Image
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout master
      uses: actions/checkout@v1
    - name: Log in to GitHub Packages
      run: echo ${GITHUB_TOKEN} | docker login -u ${GITHUB_ACTOR} --password-stdin docker.pkg.github.com
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    - name: Pull image
      run: |
        docker pull ${{ env.IMAGE }}:latest || true
    - name: Build image
      run: |
        docker build \
          --cache-from ${{ env.IMAGE }}:latest \
          --tag ${{ env.IMAGE }}:latest \
          --file ./backend/Dockerfile.prod \
          "./backend"
    - name: Run container
      run: |
        docker run \
          -d \
          --name fastapi-tdd \
          -e PORT=8765 \
          -e ENVIRONMENT=dev \
          -e DATABASE_TEST_URL=sqlite://sqlite.db \
          -p 5003:8765 \
          ${{ env.IMAGE }}:latest
    - name: Pytest
      run: docker exec fastapi-tdd python -m pytest .
    - name: Pylint
      run: docker exec fastapi-tdd python -m pylint app/
    - name: Black
      run: docker exec fastapi-tdd python -m black . --check
    - name: isort
      run: docker exec fastapi-tdd /bin/sh -c "python -m isort ./*/*.py --check-only"
Here is my Dockerfile.prod too:
# pull official base image
FROM python:3.8.3-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV ENVIRONMENT prod
ENV TESTING 0
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./dev-requirements.txt .
RUN pip install -r requirements.txt
RUN pip install -r dev-requirements.txt
# add app
COPY . .
RUN chmod 755 $HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run gunicorn
CMD gunicorn --bind 0.0.0.0:$PORT app.main:app -k uvicorn.workers.UvicornWorker
You're setting the $HOME directory permissions to 755, but it is still owned by the default user (root); chown -R app:app $APP_HOME targets only $APP_HOME, which is just a subdirectory of $HOME.
As a consequence, the app user doesn't have write permission on $HOME, and pylint can't create the directory /home/app/.pylint.d.
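Two possible fixes, sketched against the Dockerfile above (either one should do):

# option 1: give the app user ownership of its whole home directory
RUN chown -R app:app $HOME

# option 2: point pylint's data directory somewhere the app user can already write
# (PYLINTHOME is the environment variable pylint consults for this)
ENV PYLINTHOME=$APP_HOME/.pylint.d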
Secret
I added a secret to drone.io using:
drone org secret add --image=* --conceal --skip-verify=true octocat SSH_KEY @/home/me/.ssh/id_rsa
Dockerfile
Because npm install needs access to private repositories, I declare an ARG in my Dockerfile to receive my private SSH key:
FROM node:latest
ARG SSH_KEY
ENV SSH_KEY=$SSH_KEY
RUN mkdir /root/.ssh && \
echo $SSH_KEY | cut -d "\"" -f 2 > /root/.ssh/id_rsa && \
chmod 0600 /root/.ssh/id_rsa && \
eval `ssh-agent -s` && \
ssh-add /root/.ssh/id_rsa && \
echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN mkdir /app
WORKDIR /app
COPY . /app
EXPOSE 3001
CMD ["npm", "start"]
.drone.yml
And finally, in my .drone.yml pipeline, in the plugins/docker step, I use build_args to inject the SSH key:
pipeline:
  test:
    image: node:latest
    commands:
      - mkdir /root/.ssh && echo "$SSH_KEY" > /root/.ssh/id_rsa && chmod 0600 /root/.ssh/id_rsa
      - eval `ssh-agent -s` && ssh-add /root/.ssh/id_rsa
      - echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
      - npm install
      - npm test

  docker:
    image: plugins/docker
    repo: octocat/bar
    tags: latest
    build_args:
      - SSH_KEY=${SSH_KEY}
My questions:
Is this the correct way to inject my SSH key into the Dockerfile from the Drone pipeline?
The build_args are printed in the frontend logs, and so is the SSH_KEY; how do I avoid this?
The build args pass my SSH_KEY with quotes around it ("SSH_KEY"), so I have to strip the quotes in my Dockerfile (by piping the string through cut) before echoing it to /root/.ssh/id_rsa. Is there any way to not get these quotes?
Many thanks!!
[EDIT] Thanks to Adrian for suggesting a better way: remove npm install from the Dockerfile, since node_modules can be shared through a volume between the pipeline steps. A sketch of that variant follows.
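Under that suggestion (a sketch, assuming the test step's npm install runs first, as in the pipeline above), the Dockerfile no longer needs the key at all:

FROM node:latest
RUN mkdir /app
WORKDIR /app
# node_modules was populated by the `test` step; the workspace (including
# node_modules) is shared between pipeline steps, so no SSH key is needed here
COPY . /app
EXPOSE 3001
CMD ["npm", "start"]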