How to pass "yes" when pipeline asks an answer? - docker

I'm trying to deploy a Laravel app to my server. Here is my gitlab-ci.yml file:
image: edbizarro/gitlab-ci-pipeline-php:7.3
stages:
  - build
  - deploy
# Variables
variables:
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: username
  MYSQL_PASSWORD: password
  MYSQL_DATABASE: database
  DB_HOST: localhost:3306
  PUBLIC_URL: https://example.com
build:
  stage: build
  tags:
    - my-tag
    - another-tag
  script:
    - echo "Building deploy package"
    - echo "composer install"
    - composer install --optimize-autoloader --no-dev
    - mysql --version
    - echo "Migrating database"
    - php artisan migrate:fresh --seed
    - echo "Dumping mysql database"
    - mysqldump --host="${DB_HOST}" --user="${MYSQL_USER}" --password="${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" > db.sql
    - php artisan config:cache
    - php artisan route:cache
    - php artisan view:cache
    - echo "Build successful"
  artifacts:
    expire_in: 1 hour
    paths:
      - build
deploy_production:
  stage: deploy
  tags:
    - my-tag
    - another-tag
  script:
    - echo "Current Directory:"
    - pwd
    - ls
    - echo "Deploying to server"
    - sudo cp -rv build/* /var/www/vhosts/example.com/
    - echo "Deployed"
  environment:
    name: production
    url: https://example.com
  only:
    - master
But it gives this error:
...
$ mysql --version
mysql Ver 15.1 Distrib 10.3.27-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
$ echo "Migrating database"
Migrating database
$ php artisan migrate:fresh --seed
**************************************
* Application In Production! *
**************************************
Do you really wish to run this command? (yes/no) [no]:
> Command Canceled!
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
I'm using docker on gitlab-runner.
How can I handle this error?

php artisan has a --no-interaction flag you can use so it doesn't wait for a "yes" from an interactive user, e.g.:
php artisan migrate:fresh --seed --no-interaction
If the command is still cancelled in production (the prompt just falls back to its default answer, [no]), pass --force instead; that is the flag Laravel's migrate commands check to skip the confirmation:
php artisan migrate:fresh --seed --force
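Applied to the build job above, only the migrate line needs to change; a minimal sketch (everything else in the job stays as it is):
build:
  stage: build
  script:
    - composer install --optimize-autoloader --no-dev
    - php artisan migrate:fresh --seed --force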

Related

Bitbucket pipelines: How to find the directories/paths to cache apt-get installed packages?

I need help caching packages in my Bitbucket pipeline that were installed via apt-get.
For packages installed by other means you can usually find the install paths documented online, but I'm not sure which directory (or directories) to cache for apt-get installed packages.
For example I have the following command in my pipeline script:
apt-get update && apt-get install -y curl unzip git
I defined a cache directory in definitions like so:
caches:
  apt-cache: /var/cache/apt
However, it's only caching 164 bytes and I don't think it's caching all of the packages that are actually installed.
Is there a way to find where these packages are installed so I can cache them?
Here is my full pipeline script below:
image: php:8.2-fpm
definitions:
  # set the paths for where the packages are installed that we are caching
  # these paths are used to download the packages from the cache to speed up deploys
  caches:
    install-php-extensions: /usr/local/bin/
    phpunit: web-app/vendor/bin/
    composer: /usr/local/bin
    # directory where apt package cache is
    apt-cache: /var/cache/apt
    php-extensions: /usr/lib/php/
    sonar: ~/.sonar
  steps:
    - step: &testing
        name: Test
        caches:
          - install-php-extensions
          - phpunit
          - composer
          - apt-cache
          - php-extensions
        services:
          - docker
        script:
          # Install apt packages
          - apt-get update && apt-get install -y curl unzip git
          # xdebug is needed to run the code coverage later on and to generate the code coverage report
          - pecl install xdebug-3.2.0 && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)" > /usr/local/etc/php/conf.d/xdebug.ini
          # Install php extensions, set permissions to execute, required for snowflake, pdo, etc
          # The PDO installation is required later so the composer install doesn't fail with an undefined constant
          - curl -sSLf -o /usr/local/bin/install-php-extensions https://github.com/mlocati/docker-php-extension-installer/releases/download/1.5.49/install-php-extensions
          - chmod +x /usr/local/bin/install-php-extensions
          - install-php-extensions bcmath odbc pdo_odbc soap
          # Install phpunit dependencies and run the phpunit tests with code coverage
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - cd web-app
          - composer require phpunit/phpunit --dev
          - XDEBUG_MODE=coverage vendor/bin/phpunit --testdox -d memory_limit=-1 --log-junit test-results/test-execution-results.xml --cache-result --coverage-cache=./coverage/cache --coverage-clover=phpunit-coverage.xml tests/Unit
        artifacts:
          - test-results/test-execution-results.xml
    - step: &sonarqube
        name: Sonarqube coverage report
        caches:
          - sonar
        script:
          - cd web-app
          - pipe: sonarsource/sonarqube-scan:1.0.0
            variables:
              SONAR_HOST_URL: ${SONAR_HOST_URL} # Get the value from the repository/workspace variable.
              SONAR_TOKEN: ${SONAR_TOKEN}
              DEBUG: "true"
    - step: &deploy
        name: Deploy
        caches:
          - docker
          - apt-cache
        services:
          - docker
        script:
          # Install apt packages
          - apt-get update && apt-get install -y unzip awscli
          - TAG=${BITBUCKET_COMMIT}
          # Set aws credentials
          - aws configure set aws_access_key_id "${AWS_ACCESS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET_KEY}"
          - aws configure set region "${AWS_REGION}"
          # Get credentials for laravel from secrets manager and
          # write to .env file
          - aws secretsmanager get-secret-value --secret-id ${ENV_SECRET_ID} --query SecretString --output text >> .env
          # Write odbc snowflake definition for connecting to database
          - aws secretsmanager get-secret-value --secret-id ${SNOWFLAKE_SECRET_ID} --query SecretString --output text > ./docker/php/snowflake/odbc.ini
          # Authenticate bitbucket-deployment user
          - aws ecr get-login-password --region us-west-2 | docker login -u AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com
          # Build/deploy nginx image
          - NGINX_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/nginx"
          - docker build -f Dockerfile-nginx -t $NGINX_IMAGE .
          # Push the :latest image
          - docker push $NGINX_IMAGE:latest
          # Tag and push the image with the bitbucket commit
          - docker tag $NGINX_IMAGE $NGINX_IMAGE:${BITBUCKET_COMMIT}
          - docker push $NGINX_IMAGE:${BITBUCKET_COMMIT}
          # Build/deploy php image
          - PHP_IMAGE="${AWS_ACCOUNT_ID}.dkr.ecr.us-west-2.amazonaws.com/nova/php-app"
          - docker build -f Dockerfile-php -t $PHP_IMAGE .
          # Push the :latest image
          - docker push $PHP_IMAGE:latest
          # Tag and push the image with the bitbucket commit
          - docker tag $PHP_IMAGE $PHP_IMAGE:${BITBUCKET_COMMIT}
          - docker push $PHP_IMAGE:${BITBUCKET_COMMIT}
          # Start ecs migration task
          - aws ecs run-task --cluster nova-api-cluster --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=['${PUBLIC_SUBNET_A}','${PUBLIC_SUBNET_B}'],securityGroups=['${SECURITY_GROUP}'],assignPublicIp=ENABLED}" --task-definition nova-api-migration-task
          # Force new ecs task deployment
          - aws ecs update-service --cluster ${CLUSTER_NAME} --service ${SERVICE_NAME} --region ${AWS_REGION} --force-new-deployment
    - step: &auto_merge_down
        name: Auto Merge Down
        image: atlassian/default-image:3
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true
pipelines:
  branches:
    dev:
      - step:
          <<: *testing
      - step:
          <<: *deploy
          deployment: Dev
    stage:
      - step:
          <<: *testing
      - step:
          <<: *deploy
          deployment: Staging
    prod:
      - step:
          <<: *testing
      - step:
          <<: *sonarqube
      - step:
          <<: *deploy
          deployment: Production
      - step:
          <<: *auto_merge_down
Found another answer on the community here https://community.atlassian.com/t5/Bitbucket-questions/Any-way-to-cache-apt-get-install-y-zip-in-bitbucket-pipelines/qaq-p/622876, thanks @Chase Han.
Basically, you run the following command in your pipeline script (or in a local Docker container that matches the image used in the pipeline):
which <package-name-here>
e.g.:
which git
This outputs the path where the package lives, e.g.:
/usr/bin/git
Then you just need to add a cache definition for the directory that contains that package, e.g.:
caches:
      #/usr/bin located packages like git, curl, etc
      usr-bin: /usr/bin
And then you can use that cache definition in your steps, for example:
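A sketch of how the testing step above could pull in that cache (only the relevant keys are shown; usr-bin is just the definition declared in the previous snippet):
definitions:
  caches:
    # /usr/bin located packages like git, curl, etc
    usr-bin: /usr/bin
  steps:
    - step: &testing
        name: Test
        caches:
          - usr-bin
          - composer
        script:
          - apt-get update && apt-get install -y curl unzip git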

Problems storing video/screenshots when testing Rails app with Cypress/CircleCI

I am running Cypress to test a Rails app on CircleCI. The tests run on CircleCI with the following config, but no video or screenshot assets appear in the CircleCI artifacts when tests fail.
version: 2.1
orbs:
  ruby: circleci/ruby@1.0.6
  node: circleci/node@3.0.1
jobs:
  build:
    docker:
      - image: cimg/ruby:2.7.2-node
    steps:
      - checkout # pull down our git code.
      - ruby/install-deps # use the ruby orb to install dependencies
      - node/install-packages:
          pkg-manager: yarn
  test:
    parallelism: 3
    docker:
      - image: cimg/ruby:2.7.2-node # this is our primary docker image, where step commands run.
      - image: circleci/postgres:12.3
        environment: # add POSTGRES environment variables.
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
          POSTGRES_DB: testdb
    environment:
      BUNDLE_JOBS: "3"
      BUNDLE_RETRY: "3"
      PGHOST: 127.0.0.1
      PGUSER: user
      PGPASSWORD: password
      RAILS_ENV: test
    steps:
      - checkout
      - ruby/install-deps
      - node/install-packages:
          pkg-manager: yarn
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: bundle exec rails db:schema:load --trace
      - run:
          name: Load test data
          command: bundle exec rails db:seed --trace
      - run:
          name: Run Rails Server
          background: true
          command: CYPRESS=1 bundle exec rails s -p 5017
      - run:
          name: Wait for server
          command: |
            until $(curl --retry 10 --output /dev/null --silent --head --fail http://127.0.0.1:5017); do
              printf '.'
              sleep 5
            done
      - run: sudo apt-get update
      - run: sudo apt-get install libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2 libxtst6 xauth xvfb
      - run: yarn cypress install --force
      - run:
          yarn: true
          command: yarn run cypress run
      - store_test_results:
          path: test-reports/
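One likely reason nothing shows up is that the config only calls store_test_results, which uploads JUnit-style report files, not videos or screenshots. A sketch of the extra steps that could be appended to the test job, assuming Cypress writes to its default cypress/videos and cypress/screenshots directories under the working directory:
      - store_artifacts:
          path: cypress/videos
      - store_artifacts:
          path: cypress/screenshots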

CircleCI AWSCLI SyntaxError: Non-ASCII character '\xc3' error

I am working on a Ruby on Rails project where AWSCLI is used.
Recently, when I push my code to Staging, it tries to import newkeys, PrivateKey, and PublicKey from rsa.key and throws an error.
The error message is as follows:
SyntaxError: Non-ASCII character '\xc3' in file /opt/circleci/.pyenv/versions/2.7.12/lib/python2.7/site-packages/rsa/key.py on line 1, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
My circleci/config.yml file is as follows:
orbs:
  aws-ecr: circleci/aws-ecr@7.0.0
version: 2.1
jobs:
  build:
    working_directory: ~/mfds
    docker:
      - image: circleci/ruby:2.3.4-node
        environment:
          RAILS_ENV: citest
          LANG: C.UTF-8
      - image: snowhork/mysql_jp:5.7
        environment:
          MYSQL_ROOT_PASSWORD: ***
          MYSQL_DATABASE: myapp
          MYSQL_USER: user
          MYSQL_PASSWORD: ***
          MYSQL_ROOT_HOST: "%"
    steps:
      - checkout
      - restore_cache:
          key: gemfiles-{{ checksum "mfds/Gemfile.lock" }}
      # Bundle install dependencies
      - run: cd mfds/ && bundle install --path vendor/bundle
      # Store bundle cache
      - save_cache:
          key: gemfiles-{{ checksum "mfds/Gemfile.lock" }}
          paths:
            - /home/circleci/mfds/mfds/vendor/bundle
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:3306 -timeout 1m
      # Database setup
      - run:
          name: db setting
          command: |
            cd mfds
            bundle exec rake db:create
            bundle exec rake db:ridgepole:apply[citest]
      - run:
          name: rspec
          command: |
            cd mfds
            bundle exec rspec
  staging_deploy:
    machine: true
    steps:
      - checkout
      - run:
          name: asset compile
          command: |
            cd mfds
            npm install
            ./node_modules/.bin/webpack --optimize-minimize
      - run:
          name: gem install
          command: |
            gem install aws-sdk-ecs
            pip install futures
            pip install --upgrade awscli
      - run:
          name: push image
          command: |
            export AWS_ACCESS_KEY_ID=$XX_AWS_ACCESS_KEY_ID
            export AWS_SECRET_ACCESS_KEY=$XX_AWS_SECRET_ACCESS_KEY
            bin/deployv2/docker-push staging
      - run:
          name: update service
          command: |
            export AWS_ACCESS_KEY_ID=$XX_AWS_ACCESS_KEY_ID
            export AWS_SECRET_ACCESS_KEY=$XX_AWS_SECRET_ACCESS_KEY
            bin/deployv2/update-task staging
....
I don't understand how to resolve this problem.
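The traceback shows the rsa package being loaded under Python 2.7; newer rsa releases are Python-3-only and contain non-ASCII source, which is exactly what produces this SyntaxError. One possible workaround, an assumption rather than a verified fix for this project, is to pin the last Python-2-compatible rsa release (4.0) after upgrading awscli in the gem install step, or to move the deploy job onto a Python 3 image:
      - run:
          name: gem install
          command: |
            gem install aws-sdk-ecs
            pip install futures
            pip install --upgrade awscli
            pip install 'rsa==4.0'   # assumption: force the last rsa release that still supports Python 2.7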

Circle CI - Can't connect to Redis or memcached using Docker Compose, but I can do so on my local machine

I'm developing a Node.js program that connects to both Redis and memcached. I am testing my Node.js program with Jest, and before running the test I run docker-compose up. My Node.js program connects to the Docker Redis and memcached Docker containers fine, and my tests pass fine on my local machine.
However, I want the tests to run on Circle CI so that every time I git push, the CI environment will verify the program is buildable and that tests are passing.
When I try to do the same on Circle CI, it seems that the Docker containers spin up fine, however the tests aren't able to connect to the Redis or memcached servers in the containers, despite it working fine on my local PC.
My config.yml for Circle CI:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install Docker Compose
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - restore_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
          paths:
            - /home/circleci/.npm
      - run:
          name: Ensure Test Parity
          command: |
            chmod +x ./validateTestCases.sh
            ./validateTestCases.sh
      - run:
          name: Run Tests
          command: npm test
My docker-compose.yml:
services:
  redis:
    image: redis
    container_name: redis-container
    ports:
      - 6379:6379
  memcached:
    image: memcached
    container_name: memcached-container
    ports:
      - 11211:11211
My build failing test log in Circle CI:
#!/bin/bash -eo pipefail
npm test
> easy-cache@1.0.0 test
> jest
FAIL memcached/memcached.test.js
● Test suite failed to run
Error: connect ECONNREFUSED 127.0.0.1:11211
FAIL redis/redis.test.js
● Test suite failed to run
Timeout - Async callback was not invoked within the 5000 ms timeout specified by jest.setTimeout.Error: Timeout - Async callback was not invoked within the 5000 ms timeout specified by jest.setTimeout.
at mapper (node_modules/jest-jasmine2/build/queueRunner.js:27:45)
Test Suites: 2 failed, 2 total
Tests: 0 total
Snapshots: 0 total
Time: 36.183 s
Ran all test suites.
npm ERR! code 1
npm ERR! path /home/circleci/project
npm ERR! command failed
npm ERR! command sh -c jest
npm ERR! A complete log of this run can be found in:
npm ERR! /home/circleci/.npm/_logs/2021-02-05T20_29_26_896Z-debug.log
Exited with code exit status 1
CircleCI received exit code 1
Link to my current source code
I am not sure what to try next. I have tried moving the npm test block right after docker-compose up -d but that had no effect.
It turns out that Docker Compose is not required for what I'm trying to do. Instead, you can list multiple Docker images for the job in CircleCI: the secondary containers share the primary container's network, so the tests can reach Redis and memcached on localhost (with setup_remote_docker, the Compose containers run on a separate remote Docker host, which is why the connections were refused).
Here's my updated CircleCI yaml file, where my tests run successfully (connecting to Redis and memcached works just like with Docker Compose on my local PC):
version: 2
jobs:
  build:
    docker:
      - image: circleci/node
      - image: redis
      - image: memcached
    steps:
      - checkout
      # - setup_remote_docker
      # - run:
      #     name: Install Docker Compose
      #     command: |
      #       curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
      #       chmod +x ~/docker-compose
      #       sudo mv ~/docker-compose /usr/local/bin/docker-compose
      # - run:
      #     name: Start Container
      #     command: |
      #       docker-compose up -d
      #       docker-compose ps
      - restore_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm ci
      - save_cache:
          key: npm-cache-v1-{{ checksum "package-lock.json" }}
          paths:
            - /home/circleci/.npm
      - run:
          name: Ensure Test Parity
          command: |
            chmod +x ./validateTestCases.sh
            ./validateTestCases.sh
      - run:
          name: Run Tests
          command: npm test

Build FAILED but job status is SUCCESS in Gitlab

My Dockerfile:
FROM mm_php:7.1
ADD ./docker/test/source/entrypoint.sh /work/entrypoint.sh
ADD ./docker/wait-for-it.sh /work/wait-for-it.sh
RUN chmod 755 /work/entrypoint.sh \
    && chmod 755 /work/wait-for-it.sh
ENTRYPOINT ["/work/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash -e
/work/wait-for-it.sh db:5432 -- echo "PostgreSQL started"
./vendor/bin/parallel-phpunit --pu-cmd="./vendor/bin/phpunit -c phpunit-docker.xml" tests
docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
      args:
        ssh_prv_key: ${ssh_prv_key}
        application_env: ${application_env}
      dockerfile: docker/test/source/Dockerfile
    links:
      - db
  db:
    build:
      context: .
      dockerfile: docker/test/postgres/Dockerfile
    environment:
      PGDATA: /tmp
.gitlab-ci.yml:
image: docker:latest
services:
  - name: docker:dind
    command: ["--insecure-registry=my.domain:5000 --registry-mirror=http://my.domain"]
before_script:
  - apk add --no-cache py-pip
  - pip install docker-compose
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Everything works, except that when the tests fail, the job status in GitLab is SUCCESS instead of FAILED.
How can I get a FAILED status when the tests fail?
UPD
If I run docker-compose up locally, it exits with code 0 even though the tests fail:
$ export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build test
Building db
Step 1/2 : FROM mm_postgres:9.6
...
test_1 | FAILURES!
test_1 | Tests: 1, Assertions: 1, Failures: 1.
test_1 | Success: 2 Fail: 2 Error: 0 Skip: 2 Incomplete: 0
mmadmin_test_1 exited with code 1
$ echo $?
0
It looks to me like the test failure is reported inside the container without propagating to the exit code of the docker-compose call itself. Have you tried capturing docker-compose's return value when the tests fail locally?
In order to get docker-compose to return the exit code from a specific service, try this:
docker-compose up --exit-code-from=service
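Applied to the .gitlab-ci.yml above, the test job's script line would become something like this (a sketch; --exit-code-from implies --abort-on-container-exit, so the job ends as soon as the test service exits and inherits its status):
test:
  stage: test
  script:
    - export ssh_prv_key="$(cat ~/.ssh/id_rsa)" && export application_env="testing-docker" && docker-compose up --build --exit-code-from test test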
When GitLab CI runs a job, the job only fails if the executed command exits with a non-zero status.
In your case, you are running docker-compose, and it exits with zero once the containers finish, which is correct from its point of view, so phpunit's failure never reaches GitLab.
I think it is better to split your build into separate stages and not use docker-compose in this case:
gitlab.yml:
stages:
  - build
  - test
build:
  image: docker:latest
  stage: build
  script:
    - docker build -t ${NAME_OF_IMAGE} .
    - docker push ${NAME_OF_IMAGE}
test:
  image: ${NAME_OF_IMAGE}
  stage: test
  script:
    - ./execute_your.sh
