I am running Cypress on CircleCI. It works when using the orb, but my current config does not. I am trying to start both my client server and my node server. It seems like I am missing a package in the Docker container or something.
I am willing to switch back to the Cypress orb, but I am not sure how to set it up so that both servers are running before cypress/run starts.
> If you are using Docker, we provide containers with all required dependencies installed.
----------
/home/circleci/.cache/Cypress/3.1.5/Cypress/Cypress: error while loading shared libraries: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
----------
Platform: linux (Debian - 8.11)
Cypress Version: 3.1.5
Here are the steps:
docker:
  # specify the version you desire here
  - image: circleci/node:10.8.0
  - image: circleci/postgres:9.6
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: dnb
  - image: redis
  - image: cypress/base:10
    environment:
      TERM: xterm

steps:
  - checkout
  - restore_cache:
      keys:
        - v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
        - v1-deps-{{ .Branch }}
        - v1-deps
  - run:
      name: Install Dependencies
      command: npm install
  - save_cache:
      key: v1-deps-{{ .Branch }}-{{ checksum "package.json" }}
      # cache NPM modules and the folder with the Cypress binary
      paths:
        - ~/.npm
        - ~/.cache
  # - run:
  #     name: Run test
  #     command: npm test -- --coverage --forceExit --detectOpenHandles --maxWorkers=10
  #     no_output_timeout: 3m
  # - run:
  #     name: Send codecov coverage report
  #     command: bash <(curl -s https://codecov.io/bash) -f coverage/lcov.info -t
  - run:
      name: run client server
      command: npm start
      background: true
  - run:
      name: Pull server
      command: cd && git clone ....git && ls
  - run:
      name: run node server
      command: cd && cd ..i && npm install && npm run dev:prepare && npm start
      background: true
  - run: npm run cypress:run
You aren't actually executing Cypress in the cypress/base:10 Docker image.
See the CircleCI docs for multiple images:
In a multi-image configuration job, all steps are executed in the container created by the first image listed.
You should try this instead:
docker:
  # specify the version you desire here
  - image: cypress/base:10
    environment:
      TERM: xterm
  - image: circleci/postgres:9.6
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: dnb
  - image: redis
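If you would rather switch back to the Cypress orb, its cypress/run job exposes start and wait-on parameters for bringing a server up before the tests run. A minimal sketch (the port and the start command are assumptions about your project; a second server cloned from another repository would still need custom steps):

version: 2.1
orbs:
  cypress: cypress-io/cypress@1
workflows:
  build:
    jobs:
      - cypress/run:
          # command that starts your server(s) in the background
          start: npm start
          # URL the orb polls until the server responds
          wait-on: 'http://localhost:3000'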
Related
I have developed a dockerized Django web app using docker-compose. It runs fine locally.
The point is that when I define a CI pipeline, specifically CircleCI (I don't know how it works with other alternatives), to upload it to GCloud App Engine, the workflow runs fine, but visiting the URL returns nothing (500 error).
The code I have and run locally is the following. When I set up the CircleCI pipeline I have no clue how the app.yaml file interacts with it, or what the steps in the .circleci/config.yml should be in order to run docker-compose. Any idea or resource I might use?
My Dockerfile:
FROM python:3.9-alpine
ENV PATH="/scripts:${PATH}"
COPY ./requirements.txt /requirements.txt
RUN apk add --update --no-cache --virtual .tmp gcc libc-dev linux-headers
RUN pip install -r /requirements.txt
RUN apk del .tmp
RUN mkdir -p /app
COPY ./app /app
WORKDIR /app
COPY ./scripts /scripts
#this allows for execute permission in all files inside /scripts/
RUN chmod +x /scripts/*
RUN mkdir -p /vol/web/media
RUN mkdir -p /vol/web/static
RUN adduser -D user
RUN chown -R user:user /vol
RUN chmod -R 755 /vol/web
USER user
CMD ["entrypoint.sh"]
My docker-compose file:
version: '3.9'

services:
  app:
    build:
      context: .
    volumes:
      - static_data:/vol/web
    environment:
      - SECRET_KEY=samplesecret123
      - ALLOWED_HOSTS=127.0.0.1,localhost

  proxy:
    build:
      context: ./proxy
    volumes:
      - static_data:/vol/static
    ports:
      - "8080:8080"
    depends_on:
      - app

volumes:
  static_data:
Nginx Dockerfile:
FROM nginxinc/nginx-unprivileged:1-alpine
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./uwsgi_params /etc/nginx/uwsgi_params
USER root
RUN mkdir -p /vol/static
RUN chmod 755 /vol/static
USER nginx
Nginx default.conf
server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location / {
        uwsgi_pass app:8000;
        include /etc/nginx/uwsgi_params;
    }
}
entrypoint.sh
#!/bin/sh
set -e
python manage.py collectstatic --no-input
uwsgi --socket :8000 --master --enable-threads --module app.wsgi
.circleci/config.yml
version: 2.1
workflows:
  version: 2
  build_and_deploy_workflow:
    jobs:
      - build_and_deploy_job:
          filters:
            branches:
              only:
                - master
jobs:
  build_and_deploy_job:
    docker:
      - image: google/cloud-sdk  # based on Debian
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          name: Install requirements.txt
          command: |
            apt install -y python-pip
            python3 -m pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            apt-get install -y sudo
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker
      - run:
          name: 'Collect static'
          command: |
            docker-compose -f docker-compose-deploy.yml up --build
            # docker-compose build
            # docker-compose run --rm app
            # docker-compose run --rm app sh -c "python manage.py collectstatic"
      - run:
          name: 'Deploy to app engine'
          command: |
            echo ${GCLOUD_SERVICE_KEY} > /tmp/sa_key.json
            gcloud auth activate-service-account --key-file=/tmp/sa_key.json
            rm /tmp/sa_key.json
            gcloud config set project [projectname]
            gcloud config set compute/region [region]
            gcloud app deploy app.yaml
app.yaml GCloud App Engine:
runtime: python39

#entrypoint: gunicorn -b :$PORT --chdir app/ app.wsgi:application
#entrypoint: gunicorn -b :$PORT app:wsgi
entrypoint: uwsgi --socket :8000 --master --enable-threads --module app.wsgi

handlers:
  - url: /static
    static_dir: static/
  - url: /.*
    script: auto
Here is a link that could help you, with an example of an app.yaml file for a Python 3 application:
https://cloud.google.com/appengine/docs/standard/python3/config/appref
Code example:
runtime: python39  # or another supported version

instance_class: F2

env_variables:
  BUCKET_NAME: "example-gcs-bucket"

handlers:
  # Matches requests to /images/... to files in static/images/...
  - url: /images
    static_dir: static/images
  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
For Python 3, the app.yaml is required to contain at least a runtime: python39 entry.
For a brief overview, see defining runtime settings:
https://cloud.google.com/appengine/docs/standard/python3/configuring-your-app-with-app-yaml
To deploy to Google App Engine with CircleCI, I found this article that may help you with your main issue:
https://medium.com/@1555398769574/deploy-to-google-app-engine-with-circleci-or-github-actions-cb1bab15ca80
Code example:
.circleci/config.yaml
version: 2
jobs:
  build:
    working_directory: ~/workspace
    docker:
      - image: circleci/php:7.2-stretch-node-browsers
    steps:
      - checkout
      - run: |
          cp .env.example .env &&
          php artisan key:generate
      - persist_to_workspace:
          root: .
          paths:
            - .
  deploy:
    working_directory: ~/workspace
    docker:
      - image: google/cloud-sdk
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Service Account Key
          command: echo ${GCLOUD_SERVICE_KEY} > ${HOME}/gcloud-service-key.json
      - run:
          name: Set gcloud command
          command: |
            gcloud auth activate-service-account --key-file=${HOME}/gcloud-service-key.json
            gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
      - run:
          name: deploy to Google App Engine
          command: |
            gcloud app deploy app.yaml
workflows:
  version: 2
  build:
    jobs:
      - build
      - deploy:
          context: gcp
          requires:
            - build
          filters:
            branches:
              only: master
Adding additional documentation on how to create a CI/CD pipeline for Google App Engine with CircleCI 2.0:
https://runzhuoli.me/2018/12/21/ci-cd-gcp-gae-circleci.html
For the stage "deploy" I need a proxy. But stage "test" does not work from the point, on where the Karma test is starting. Is there a way, where I can define: Use proxy settings for stage "Deploy" but not for "test"?
I tried to exclude the IP, Karma is using, from proxy but the Ip is changing every time.
variables:
  http_proxy: "$CODE_PROXY"
  https_proxy: "$CODE_PROXY"
  no_proxy: "127.0.0.1,localhost"

stages:
  - test
  - deploy

test:
  stage: test
  image: node:erbium
  services:
    - selenium/standalone-chrome:3.141.59
  script:
    - npm ci
    - npm run lint
    - npm run lint:sass
    - npm run lint:editorconfig
    - npm run test -- --progress=false --code-coverage
    - npm run e2e -- --host=$(hostname -i)
    - npm run build:prod -- --progress=false
  coverage: '/Statements\s*:\s*(\d+\.?\d+)\%/'
  artifacts:
    expire_in: 3h
    paths:
      - dist/
    reports:
      junit: dist/reports/app-name/test-*.xml
      cobertura: dist/coverage/app-name/cobertura-coverage.xml
  tags:
    - DOCKER

deploy:
  stage: deploy
  image: python:latest
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
Two ways. In both, remove the global variables: block and attach the proxy variables to the deploy job only, so test no longer inherits them.
Mixin variables
.proxy-variables: &proxy-variables
  http_proxy: "$CODE_PROXY"
  https_proxy: "$CODE_PROXY"
  no_proxy: "127.0.0.1,localhost"

deploy:
  stage: deploy
  image: python:latest
  variables:
    <<: *proxy-variables
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
(Note: variables is a mapping, so the anchor has to be pulled in with a YAML merge key, <<: *proxy-variables, not as a list item.)
Extend job template
.proxied-job:
  variables:
    http_proxy: "$CODE_PROXY"
    https_proxy: "$CODE_PROXY"
    no_proxy: "127.0.0.1,localhost"

deploy:
  extends: .proxied-job
  stage: deploy
  image: python:latest
  script:
    - pip install awscli
    - aws s3 rm s3://$S3_BUCKET_NAME --recursive
    - aws s3 cp ./dist/app-name s3://$S3_BUCKET_NAME/ --recursive
  only:
    - master
I'm trying to set up continuous deployment on CircleCI.
I've successfully run my build script, which creates a build folder in the root directory. When I run the sync command locally against S3, it works fine, but on CircleCI I can't get the path to the build folder.
I've tried ./build, adding working_directory: ~/circleci-docs to the deploy job, and printing the working directory in a test run, which was /home/circleci/project, so I tried /home/circleci/project/build manually, and that didn't work either.
This is my CircleCI config.yml file:
executors:
  node-executor:
    docker:
      - image: circleci/node:10.8
  python-executor:
    docker:
      - image: circleci/python:3.7
jobs:
  build:
    executor: node-executor
    steps:
      - checkout
      - run:
          name: Run build script
          command: |
            curl -o- -L https://yarnpkg.com/install.sh | bash
            yarn install --production=false
            yarn build
  deploy:
    executor: python-executor
    steps:
      - checkout
      - run:
          name: Install awscli
          command: sudo pip install awscli
      - run:
          name: Deploy to S3
          command: aws s3 sync build s3://{MY_BUCKET}
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
The error message was:
The user-provided path build does not exist.
Exited with code 255
I got it to work!
In the build job I used persist_to_workspace, and in the deploy job attach_workspace (both go under steps):
- persist_to_workspace:
    root: ~/
    paths:
      - project/build

- attach_workspace:
    at: ~/
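For context, here is how those steps slot into the jobs above (a sketch; it assumes the default ~/project working directory of the circleci images, so the build output lands in ~/project/build):

jobs:
  build:
    executor: node-executor
    steps:
      - checkout
      - run:
          name: Run build script
          command: |
            yarn install --production=false
            yarn build
      # persist the build output so later jobs in the workflow can see it
      - persist_to_workspace:
          root: ~/
          paths:
            - project/build
  deploy:
    executor: python-executor
    steps:
      - checkout
      # restores ~/project/build exactly as the build job left it
      - attach_workspace:
          at: ~/
      - run:
          name: Deploy to S3
          command: aws s3 sync build s3://{MY_BUCKET}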
I have a Node.js application that I'd like to build and test using CircleCI and Amazon ECR. The documentation is not clear on how to build an image from a Dockerfile in a repository. I've looked here: https://circleci.com/docs/2.0/building-docker-images/ and here https://circleci.com/blog/multi-stage-docker-builds/ but it's not clear what I put under the executor. Here's what I've got so far:
version: 2
jobs:
  build:
    docker:
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
CircleCI fails with the following error:
* The job has no executor type specified. The job should have one of the following keys specified: "machine", "docker", "macos"
The base image is taken from the Dockerfile. I'm using CircleCI's built-in AWS integration, so I don't think I need to add aws_auth. What do I need to put under the executor to get this running?
Build this with a Docker-in-Docker config:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:17.05.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache \
              py-pip=9.0.0-r1 gcc \
              libffi-dev python-dev \
              linux-headers \
              musl-dev \
              libressl-dev \
              make
            pip install \
              docker-compose==1.12.0 \
              awscli==1.11.76 \
              ansible==2.4.2.0
      - run:
          name: Save Vault Password to File
          command: echo $ANSIBLE_VAULT_PASS > .vault-pass.txt
      - run:
          name: Decrypt .env
          command: |
            ansible-vault decrypt .circleci/envs --vault-password-file .vault-pass.txt
      - run:
          name: Move .env
          command: rm -f .env && mv .circleci/envs .env
      - restore_cache:
          keys:
            - v1-{{ .Branch }}
          paths:
            - /caches/app.tar
      - run:
          name: Load Docker image layer cache
          command: |
            set +o pipefail
            docker load -i /caches/app.tar | true
      - run:
          name: Build application Docker image
          command: |
            docker build --cache-from=app -t app .
      - run:
          name: Save Docker image layer cache
          command: |
            mkdir -p /caches
            docker save -o /caches/app.tar app
      - save_cache:
          key: v1-{{ .Branch }}-{{ epoch }}
          paths:
            - /caches/app.tar
      - deploy:
          name: Push application Docker image
          command: |
            if [ "${CIRCLE_BRANCH}" == "master" ]; then
              login="$(aws ecr get-login --region $ECR_REGION)"
              ${login}
              docker tag app "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
              docker push "${ECR_ENDPOINT}:${CIRCLE_SHA1}"
            fi
You need to specify a Docker image for your build to run in. This should work:
version: 2
jobs:
  build:
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker:
          version: 17.05.0-ce
      # build the image
      - run: docker build -t $ECR_REPO:0.1 .
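Once the build works, pushing the image to ECR is typically one more run step appended to the steps above. A sketch, assuming the awscli is installed in the job and that $AWS_ACCOUNT_ID, $AWS_DEFAULT_REGION, and $ECR_REPO are set as project environment variables (those names are illustrative, not from your setup):

      - run:
          name: Push image to ECR
          command: |
            # awscli v2 (and late v1) login helper; older setups used the
            # deprecated `aws ecr get-login` shown in the other answer
            aws ecr get-login-password --region $AWS_DEFAULT_REGION \
              | docker login --username AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"
            docker tag "$ECR_REPO:0.1" "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$ECR_REPO:0.1"
            docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$ECR_REPO:0.1"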
I have created a project with Angular CLI. I want to do CI using CircleCI. The project is uploaded to Bitbucket and is correctly picked up by CircleCI, but the build fails. Following is the config.yml (I picked CircleCI's sample.yml and changed it by adding ng test). I assume that the package.json created by Angular CLI earlier would install the Angular CLI.
version: 2
jobs:
  build:
    #working_directory: ~/mern-starter
    # The primary container is an instance of the first image listed. Your build commands run in this container.
    docker:
      - image: circleci/node:7.10.0
      # The secondary container is an instance of the second listed image, which is run in a common network where ports exposed on the primary container are available on localhost.
      #- image: mongo:3.4.4
    steps:
      - checkout
      - run:
          name: Update npm
          command: 'sudo npm install -g npm@latest'
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install npm wee
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
  test:
    docker:
      - image: circleci/node:7.10.0
      #- image: mongo:3.4.4
    steps:
      - checkout
      - run:
          name: Test
          command: ng test
      #- run:
      #    name: Generate code coverage
      #    command: './node_modules/.bin/nyc report --reporter=text-lcov'
      #- store_artifacts:
      #    path: test-results.xml
      #    prefix: tests
      #- store_artifacts:
      #    path: coverage
      #    prefix: coverage
workflows:
  version: 2
  build_and_test:
    jobs:
      - build
      - test:
          requires:
            - build
          filters:
            branches:
              only: dev
Error
#!/bin/bash -eo pipefail
npm install
module.js:472
throw err;
^
Error: Cannot find module 'process-nextick-args'
at Function.Module._resolveFilename (module.js:470:15)
at Function.Module._load (module.js:418:25)
at Module.require (module.js:498:17)
at require (internal/module.js:20:19)
at Object.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/readable-stream/lib/_stream_readable.js:26:23)
at Module._compile (mod
I see the following line after the npm install step, so I suppose process-nextick-args is already installed.
process-nextick-args#1.0.7 node_modules/npm/node_modules/npm-registry-client/node_modules/concat-stream/node_modules/readable-stream/node_modules/process-nextick-arg
The following configuration worked for me. I used CircleCI 2.0. I am still refining it and might change the answer in the future.
version: 2
jobs:
  build:
    working_directory: ~/angularcli
    # The primary container is an instance of the first image listed. Your build commands run in this container.
    docker:
      - image: circleci/node:6-browsers
        environment:
          CHROME_BIN: "/usr/bin/google-chrome"
    steps:
      - checkout
      - run:
          name: Install node_modules with npm
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run:
          name: Install angularcli
          command: sudo npm install -g @angular/cli@latest
      - run:
          name: Run unit tests with karma
          command: ng test
      - store_test_results:
          path: test-results.xml
In addition to the script above, set the singleRun flag to true in karma.conf.js (singleRun: true) so that Karma exits after running all the test cases. Without this flag, Karma runs in continuous (watch) mode, ng test never ends, and the build fails after a timeout.