CircleCI 2.0 Build on Dockerfile - docker

I have a Node.JS application that I'd like to build and test using CircleCI and Amazon ECR. The documentation is not clear on how to build an image from a Dockerfile in a repository. I've looked here: https://circleci.com/docs/2.0/building-docker-images/ and here https://circleci.com/blog/multi-stage-docker-builds/ but it's not clear what I put under the executor. Here's what I've got so far:
version: 2
jobs:
  build:
    docker:
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
CircleCI fails with the following error:
* The job has no executor type specified. The job should have one of the following keys specified: "machine", "docker", "macos"
The base image is taken from the Dockerfile. I'm using CircleCI's built-in AWS integration, so I don't think I need to add aws_auth. What do I need to put under the executor to get this running?

Build this with a Docker-in-Docker config:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 gcc \
              libffi-dev python-dev \
              linux-headers \
              musl-dev \
              libressl-dev \
              make
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76 \
              ansible==2.4.2.0
      - run:
          name: Save Vault Password to File
          command: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
      - run:
          name: Decrypt .env
          command: |
            ansible-vault decrypt .circleci/envs --vault-password-file .vault-pass.txt
      - run:
          name: Move .env
          command: rm -f .env && mv .circleci/envs .env
      - restore_cache:
          keys:
            - v1-{{ .Branch }}
          paths:
            - /caches/app.tar
      - run:
          name: Load Docker image layer cache
          command: |
            set +o pipefail
            docker load -i /caches/app.tar | true
      - run:
          name: Build application Docker image
          command: |
            docker build --cache-from=app -t app .
      - run:
          name: Save Docker image layer cache
          command: |
            mkdir -p /caches
            docker save -o /caches/app.tar app
      - save_cache:
          key: v1-{{ .Branch }}-{{ epoch }}
          paths:
            - /caches/app.tar
      - deploy:
          name: Push application Docker image
          command: |
            if [ "${CIRCLE_BRANCH}" == "master" ]; then
              login="$(aws ecr get-login --region $ECR_REGION)"
              ${login}
              docker tag app "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
              docker push "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
            fi

First of all, you need to specify a Docker image for your build to run in. This should work:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .

Related

error during connect: lookup thedockerhost on *: no such host

I'm new to building Docker images in GitLab CI and keep getting an "error during connect" error.
I set up my Docker image in GitLab so that it is built and pushed to AWS.
Dockerfile
FROM python:3-alpine
RUN apk add --update git bash curl unzip zip openssl make
ENV TERRAFORM_VERSION="0.12.28"
RUN curl https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip > terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
    unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /bin && \
    rm -f terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN pip install awscli boto3
ENTRYPOINT ["terraform"]
.gitlab-ci.yml
variables:
  DOCKER_REGISTRY: *.dkr.ecr.eu-west-2.amazonaws.com
  AWS_DEFAULT_REGION: eu-west-2
  APP_NAME: mytestbuild
  DOCKER_HOST: tcp://thedockerhost:2375/

# publish script
publish:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:$CI_PIPELINE_IID
When I push the file up to GitLab and the script begins to run, it fails with this error:
error during connect: Post
"http://thedockerhost:2375/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&t=854124157125.dkr.ecr.eu-west-2.amazonaws.com%2Fmytestbuild%3A20&target=&ulimits=null&version=1":
dial tcp: lookup thedockerhost on 172.20.0.10:53: no such host
I've tried a few things to sort it out; most of what I found relates to using the docker:latest image, though I also found that amazon/aws-cli should work. None of it has worked, and I'd appreciate the help.

How to deploy dockerized Django+uWSGI+Nginx app to Google App Engine using CircleCI

I have developed a dockerized Django web app using docker-compose. It runs fine on my local machine.
The problem is that when I define a CI pipeline, specifically CircleCI (I don't know how it works with any other alternative), to upload it to GCloud App Engine, the workflow completes fine, but visiting the URL returns nothing (500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline, I have no clue how the app.yaml file interacts with it or what the steps in .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost

  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1

workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master

jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk  ## based in Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json | \
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml GCloud App Engine:
runtime: python39
#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi

handlers:
  - url: /static
    static_dir: static/
  - url: /.*
    script: auto
Here is a link that could help you, with an example of an app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
  # Matches requests to /images/... to files in static/images/...
  - url: /images
    static_dir: static/images

  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/@1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2

jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .

  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml

workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Here is additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html

Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running? Gitlab CI/CD

I have a GitLab repository and am trying to add a CI/CD pipeline to it.
Here is the .yml file:
stages:
  - development-db-migrations
  - development

step-development-db-migrations:
  stage: development-db-migrations
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  before_script:
    - apt-get update -y
    - apt-get upgrade -y
    - apt-get dist-upgrade -y
    - apt-get -y autoremove
    - apt-get clean
    - apt-get -y install zip
    - dotnet tool install --global dotnet-ef
    - export PATH="$PATH:/root/.dotnet/tools"
    - sed -i "s/DB_CONNECTION/$DB_CONNECTION_DEV/g" src/COROI.Web.Host/appsettings.json
  script:
    - echo db migrations started
    - cd src/COROI.EntityFrameworkCore
    - dotnet ef database update
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main

step-deploy-development:
  stage: development
  image: docker:stable
  services:
    - docker:18.09.7-dind
  before_script:
    - export DOCKER_HOST="tcp://localhost:2375"
    - docker info
    - export DYNAMIC_ENV_VAR=DEVELOPMENT
    - apk update
    - apk upgrade
    - apk add util-linux pciutils usbutils coreutils binutils findutils grep
    - apk add python3 python3-dev python3 py3-pip
    - pip install awscli
  script:
    - echo setting up env $DYNAMIC_ENV_VAR
    - $(aws ecr get-login --no-include-email --region eu-west-2)
    - docker build --build-arg ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT_DEV} --build-arg DB_CONNECTION=${DB_CONNECTION_DEV} --build-arg CORS_ORIGINS=${CORS_ORIGINS_DEV} --build-arg SERVER_ROOT_ADDRESS=${SERVER_ROOT_ADDRESS_DEV} -f src/COROI.Web.Host/Dockerfile -t $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA .
    - docker push $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA
    - cd deployment
    - sed -i -e "s/TAG/$CI_COMMIT_SHA/g" ecs_task_dev.json
    - aws ecs register-task-definition --region $ECS_REGION --cli-input-json file://ecs_task_dev.json >> temp.json
    - REV=`grep '"revision"' temp.json | awk '{print $2}'`
    - aws ecs update-service --cluster $ECS_DEV_CLUSTER --service $ECS_DEV_SERVICE --task-definition $ECS_DEV_TASK --region $ECS_REGION
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main
at this step
step-deploy-development:
I got this error
ERROR: Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
after
- export DOCKER_HOST="tcp://localhost:2375"
- docker info
Where is my problem and how can I fix it?
By default, the Docker client connects to the local Docker daemon via a Unix socket.
In the deploy job there is this entry, which sets the Docker host env variable before building the image:
before_script:
  - export DOCKER_HOST="tcp://localhost:2375"
To specify a remote Docker host, there are env variables we can use to tell the Docker client which Docker daemon it should connect to.
These env vars are DOCKER_HOST and DOCKER_PORT; if they are defined, the Docker client connects to the daemon they point at instead of the local socket.
Read this guide https://linuxhandbook.com/docker-remote-access/ for further info.
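For example, with GitLab's Docker executor and a docker:dind service, the daemon is usually reachable under the hostname docker rather than localhost. A minimal, hypothetical sketch of that setup (job name and image tag are placeholders; TLS disabled so the daemon listens on plain tcp port 2375) could look like this:
step-deploy-development:
  stage: development
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/   # point the client at the dind service, not the local socket
    DOCKER_TLS_CERTDIR: ""            # run the dind daemon without TLS on port 2375
  script:
    - docker info                     # should now reach the dind daemon
    - docker build -t myimage:latest .
If the runner uses the Kubernetes executor instead, services share the job's network namespace and tcp://localhost:2375 can work, so the right DOCKER_HOST value depends on how the runner is configured.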

Is it possible to copy a file from docker container to gitlab repository by gitlab-ci.yml

I created a Docker image with automated tests that generates an XML report file after the test run. I want to copy this file to the repository because the pipeline needs it to show the test results:
My gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxxx" -p "yyyy" docker.io
  script:
    - docker run --name authContainer "xxxx/dockerImage:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts/test-result.xml .
  artifacts:
    when: always
    paths:
      - test-result.xml
    reports:
      junit:
        - test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
Your .gitlab-ci file looks fine. You can have the XML report as an artifact and GitLab will populate the results from it. Below is the script that I've used and with which I could see the results.
script:
  - pytest -o junit_family=xunit2 --junitxml=report.xml --cov=. --cov-report html
  - coverage report
coverage: '/^TOTAL.+?(\d+\%)$/'
artifacts:
  paths:
    - coverage
  reports:
    junit: report.xml
  when: always

cypress ci missing libgtk-x11-2.0.so.0

I am running Cypress with CircleCI. It works when using the orb, but this config does not. I am trying to start both my client server and my Node server. It seems like I am missing a package in the Docker container or something.
I am willing to change back to the Cypress orb, but I am not sure how to set it up so that both servers are running before cypress/run.
> If you are using Docker, we provide containers with all required dependencies installed.
----------
/home/circleci/.cache/Cypress/3.1.5/Cypress/Cypress: error while loading shared libraries: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
----------
Platform: linux (Debian - 8.11)
Cypress Version: 3.1.5
Here are the steps:
docker:
  # specify the version you desire here
  - image: circleci/node:10.8.0
  - image: circleci/postgres:9.6
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: dnb
  - image: redis
  - image: cypress/base:10
    environment:
      TERM: xterm

steps:
  - checkout
  - restore_cache:
      keys:
        - v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
        - v1-deps-{{ .Branch }}
        - v1-deps
  - run:
      name: Install Dependencies
      command: npm install
  - save_cache:
      key: v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
      # cache NPM modules and the folder with the Cypress binary
      paths:
        - ~/.npm
        - ~/.cache
  # - run:
  #     name: Run test
  #     command: npm test -- --coverage --forceExit --detectOpenHandles --maxWorkers=10
  #     no_output_timeout: 3m
  # - run:
  #     name: Send codecov coverage report
  #     command: bash <(curl -s https://codecov.io/bash) -f coverage/lcov.info -t
  - run:
      name: run client server
      command: npm start
      background: true
  - run:
      name: Pull server
      command: cd && git clone ....git && ls
  - run:
      name: run node server
      command: cd && cd ..i && npm install && npm run dev:prepare && npm start
      background: true
  - run: npm run cypress:run
You aren't actually executing Cypress in the cypress/base:10 Docker image.
See the CircleCI docs for multiple images:
> In a multi-image configuration job, all steps are executed in the container created by the first image listed.
You should try this instead:
docker:
  # specify the version you desire here
  - image: cypress/base:10
    environment:
      TERM: xterm
  - image: circleci/postgres:9.6
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: dnb
  - image: redis