My main goal is to build a Docker image and push it to AWS ECR. I'm trying to build my Dockerfile from the CircleCI config.yml, but is there a better way to build and push to ECR?
I checked the docs (https://circleci.com/docs/2.0/ecs-ecr/#build-and-push-the-docker-image-to-aws-ecr) but couldn't get it working; I'm too much of a rookie for that, I guess. Can I build and push in one step, like the example in the docs?
My CircleCI config.yml:
version: 2.1
orbs:
  node: circleci/node@4.1.0
  aws-cli: circleci/aws-cli@1.3.1
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  build-app:
    docker:
      - image: "cimg/base:stable"
    steps:
      - checkout
      - setup_remote_docker:
          version: 19.03.13
          docker_layer_caching: true
      - run:
          name: App Build
          command: |
            docker build -t sampleapp:latest .
      - deploy:
          name: push application docker image
          command: |
            login="$(aws ecr get-login)"
            ${login}
            aws ecr describe-repositories --repository-names sampleapp --region eu-west-1 || aws ecr create-repository --repository-name sampleapp --region eu-west-1
            docker tag sampleapp:latest "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0"
            docker push "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0"
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - build-app
My Dockerfile:
FROM node:carbon
# install git
RUN apt-get update && \
    apt-get -y install git
RUN git clone https://github.com/GermaVinsmoke/bmi-calculator.git
# cd inside a RUN does not persist; use WORKDIR so npm install runs in the clone
WORKDIR /bmi-calculator
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
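A note on the deploy step above: aws ecr get-login is deprecated and was removed in AWS CLI v2; the replacement is aws ecr get-login-password piped into docker login. A minimal sketch of a single build-and-push command block, assuming AWS_ECR_LOGIN_URL holds the registry URL (e.g. <account-id>.dkr.ecr.eu-west-1.amazonaws.com):

# log in to ECR (AWS CLI v2)
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin "${AWS_ECR_LOGIN_URL}"
# create the repository on first use
aws ecr describe-repositories --repository-names sampleapp --region eu-west-1 || aws ecr create-repository --repository-name sampleapp --region eu-west-1
# build straight to the fully qualified tag, then push
docker build -t "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0" .
docker push "${AWS_ECR_LOGIN_URL}/sampleapp:sampleapp-1.0.0"

Alternatively, since the config already imports the aws-ecr orb, its build-and-push job (build_and_push_image in the 8.x line) wraps login, repository creation, build, and push in a single workflow entry; parameter names differ between orb major versions, so check the orb docs for the version you pin.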
Related question:
I have a React/Django app that's dockerized. There are two stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact; Build_Image runs docker build to produce the actual Docker image, and depends on the node step because it copies the built files into the image.
However, the image build takes a long time whenever package dependencies (apt or pip) have changed, because it has to reinstall everything.
Is there a way to split the Docker build into multiple parts, so that the apt and pip packages install while build_node is still running, and the build finishes once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image
services:
  - docker:18.03-dind
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http
build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
        gcc \
    " \
    && RUN_DEPS=" \
        ffmpeg \
        postgresql-client \
        nginx \
        dumb-init \
    " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you have multiple jobs defined within a stage, they run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle
build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry
build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry
bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing, pulling, starting, and stopping might not actually save you time, depending on your image size relative to build time, but it will do what you're asking for.
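Concretely, the bundle job can then reuse the pushed base image as a build cache, so the apt/pip layers are rebuilt only when dependencies change. A minimal sketch, assuming the base image was pushed as $CI_REGISTRY_IMAGE/base:latest (a hypothetical tag) and the final Dockerfile starts FROM that image (docker:dind service omitted for brevity):

bundle-node-in-image:
  stage: bundle
  image: docker
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # pull the prebuilt base so its apt/pip layers are available as cache
    - docker pull $CI_REGISTRY_IMAGE/base:latest
    - docker build --cache-from $CI_REGISTRY_IMAGE/base:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA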
We have a project on Bitbucket, jb_common, at bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no" > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
# ENV instead of RUN export, so GOPRIVATE persists into the later go commands
ENV GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR. I have already added repository variables on Bitbucket with the SSH private and public keys (screenshot: https://i.stack.imgur.com/URAsV.png).
On my local machine the Docker image builds successfully with:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
(screenshot: https://i.stack.imgur.com/FZuNo.png)
But on Bitbucket I get this error:
go: bitbucket.org/company/jb_utils@v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user who owns these SSH keys has admin access to both private repos.
While debugging, I added an echo $SSH_PRV_KEY step to bitbucket-pipelines.yml to confirm that the variable is forwarded into the container on Bitbucket (screenshot: https://i.stack.imgur.com/FjRof.png).
RESOLVED!!!
Pipelines does not currently support line breaks in environment variables, so base-64 encode the private key by running:
base64 -w 0 < private_key
Copy the output into the corresponding Bitbucket repository variable.
Then I edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
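To sanity-check the round trip before the docker build step, a quick sketch (prints only the key's BEGIN header line, never the whole key):

# locally: encode without line wrapping (-w 0 is GNU coreutils)
base64 -w 0 < ~/.ssh/id_rsa
# in the pipeline: confirm the decoded variable looks like a key
echo $SSH_PRV_KEY | base64 --decode | head -1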
I use buildx to build a multiplatform Docker image in GitLab CI. But the CI fails while building the image, because it tries to copy xattrs and cannot:
> [linux/arm/v7 2/4] RUN set -xe && apk add --no-cache ca-certificates ffmpeg openssl aria2 youtube-dl:
------
Dockerfile:8
--------------------
7 |
8 | >>> RUN set -xe \
9 | >>> && apk add --no-cache ca-certificates \
10 | >>> ffmpeg \
11 | >>> openssl \
12 | >>> aria2 \
13 | >>> youtube-dl
14 |
--------------------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/dev/.buildkit_qemu_emulator /bin/sh -c set -xe && apk add --no-cache ca-certificates ffmpeg openssl aria2 youtube-dl]: failed to copy xattrs: failed to set xattr "security.selinux" on /tmp/buildkit-qemu-emulator371955051/dev/.buildkit_qemu_emulator: operation not supported
https://gitlab.com/Lukas1818/docker-youtube-dl-cron/-/jobs/1176558386#L181
I am using the following CI config:
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2375/
docker-build:
  # Use the docker image with buildx for multiplatform build.
  image: lukas1818/docker-with-buildx:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag=""
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = 'latest'"
      else
        tag=":$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker buildx create --use
    - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag "$CI_REGISTRY_IMAGE${tag}" .
  # Run this job in a branch where a Dockerfile exists
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile
https://gitlab.com/Lukas1818/docker-youtube-dl-cron/-/blob/d12adf7779f7df71de6e9b46aa342e9ff41d5dfb/.gitlab-ci.yml
Dockerfile:
#
# Dockerfile for youtube-dl
#
FROM alpine
MAINTAINER kev <noreply@easypi.pro>
RUN set -xe \
    && apk add --no-cache ca-certificates \
        ffmpeg \
        openssl \
        aria2 \
        youtube-dl
# Try to run it so we know it works
RUN youtube-dl --version
WORKDIR /data
ENTRYPOINT ["youtube-dl"]
CMD ["--help"]
On my local machine, building with sudo docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 . works without any issue.
Running the following command before docker buildx create --use fixes the problem:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
see: https://github.com/docker/buildx/issues/584#issuecomment-827122004
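Applied to the CI config above, that means adding the reset step at the top of the script section (branch-tag logic omitted), roughly:

script:
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  - docker buildx create --use
  - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag "$CI_REGISTRY_IMAGE${tag}" .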
Here's my .gitlab-ci.yml:
stages:
  - containerize
  - compile
build_image:
  image: docker
  stage: containerize
  script:
    - docker build -t compiler_image_v0 .
compile:
  image: compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
The build_image job runs correctly; the created image is listed by the docker images command on the machine with the runners. But the second job fails with this error:
ERROR: Job failed: Error response from daemon: pull access denied for compiler_image_v0, repository does not exist or may require 'docker login' (executor_docker.go:168:1s)
What's going on?
This is my Dockerfile:
FROM ubuntu:18.04
WORKDIR /app
# Ubuntu packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils subversion g++ make cmake unzip
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install libgtk2.*common libpango-1* libasound2* xserver-xorg
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install cpio
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install bash
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install autoconf automake perl m4
# Intel Fortran compiler
RUN mkdir /intel
COPY parallel_studio_xe_2018_3_pro_for_docker.zip /intel
RUN cd /intel && unzip /intel/parallel_studio_xe_2018_3_pro_for_docker.zip
RUN cd /intel/parallel_studio_xe_2018_3_pro_for_docker && ./install.sh --silent=custom_silent.cfg
RUN rm -rf /intel
The compile stage tries to pull the image compiler_image_v0. That image only exists temporarily, inside the Docker-in-Docker container of the containerize stage. Your GitLab repository has a container registry, so you can push the built image in the containerize stage and then pull it in the compile stage. Also, you should use the fully qualified name of your private GitLab registry image; a bare name is resolved against Docker Hub by default.
You can change your .gitlab-ci.yml to log in, build with the fully qualified tag, and push:
stages:
  - containerize
  - compile
build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    - docker build -t registry.gitlab.com/group-name/repo-name:compiler_image_v0 .
    - docker push registry.gitlab.com/group-name/repo-name:compiler_image_v0
compile:
  image: registry.gitlab.com/group-name/repo-name:compiler_image_v0
  stage: compile
  script:
    - make
  artifacts:
    when: on_success
    paths:
      - output/
    expire_in: 1 day
This would overwrite the image on each build, but you could add some versioning.
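For example, tagging by commit so each pipeline gets its own compiler image (the $CI_REGISTRY_IMAGE/compiler path is an assumed layout; GitLab expands variables in image: references):

build_image:
  image: docker
  stage: containerize
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    # one tag per commit, so parallel pipelines don't clobber each other
    - docker build -t $CI_REGISTRY_IMAGE/compiler:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE/compiler:$CI_COMMIT_SHORT_SHA
compile:
  image: $CI_REGISTRY_IMAGE/compiler:$CI_COMMIT_SHORT_SHA
  stage: compile
  script:
    - make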
I am having difficulties enabling Docker for the build job. This is what the GitLab CI config file looks like:
image: docker:latest
services:
  - docker:dind
stages:
  - build
build:
  image: java:8
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
And here is the output from the job:
gitlab-ci-multi-runner 1.3.2 (0323456)
Using Docker executor with image java:8 ...
Pulling docker image docker:dind ...
Starting service docker:dind ...
Waiting for services to be up and running...
Pulling docker image java:8 ...
Running on runner-30dcea4b-project-1408237-concurrent-0 via runner-30dcea4b-machine-1470340415-c2bbfc45-digital-ocean-4gb...
Cloning repository...
Cloning into '/builds/.../...'...
Checking out 9ba87ff0 as master...
$ docker info
/bin/bash: line 42: docker: command not found
ERROR: Build failed: exit code 1
Any clues why docker is not found?
After a few days of struggling, I came up with the following setup:
image: gitlab/dind
stages:
  - test
  - build
before_script:
  - echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
  - apt-get update
  - apt-get install -y curl
  - apt-get install -y software-properties-common python-software-properties
  - add-apt-repository -y ppa:webupd8team/java
  - apt-get update
  - apt-get install -y oracle-java8-installer
  - rm -rf /var/lib/apt/lists/*
  - rm -rf /var/cache/oracle-jdk8-installer
  - apt-get update -yqq
  - apt-get install apt-transport-https -yqq
  - echo "deb http://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
  - apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
  - apt-get update -yqq
  - apt-get install sbt -yqq
  - sbt sbt-version
test:
  stage: test
  script:
    - sbt scalastyle && sbt test:scalastyle
    - sbt clean coverage test coverageReport
build:
  stage: build
  script:
    - docker info
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com/...
    - sbt server/docker:publish
It has Docker (note the gitlab/dind image), Java, and sbt. Now I can push to the GitLab registry from the sbt docker plugin.
The docker info command runs inside a java:8-based container, which does not have the Docker CLI installed or available.
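So either run the job in an image that ships the Docker client, or install the client into the java:8 image. A minimal sketch of the first option, following the dind pattern already used at the top of the config (you would then still need Java/sbt in that image some other way, e.g. a custom image):

build:
  image: docker:latest   # this image contains the docker CLI
  services:
    - docker:dind
  stage: build
  script:
    - docker info        # works now, because the CLI exists in this image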