CircleCI: only 1 subschema matches out of 2 - Electron

I'm trying to set up automated testing on CircleCI for my Electron apps.
I followed the instructions from here: https://circleci.com/blog/electron-testing/
My repo: https://github.com/dhanyn10/electron-example/tree/spectron
My project folder looks like this:
electron-example
|──/.circleci
| |──config.yml
|──/bootbox
|──/project1
|──/project2
Because my project contains many applications, I need to specify which application folder I want to test. Here's my CircleCI config:
version: 2.1
jobs:
  build:
    working_directory: ~/electron-example/bootbox
    docker:
      - image: circleci/node:11-browsers
    steps:
      - checkout:
          path: ~/electron-example
      - run:
          name: Update NPM
          command: "sudo npm install -g npm"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: Run tests
          command: npm run test
package.json
...
"devDependencies": {
  "electron": "^11.4.3",
  "electron-builder": "^22.10.4",
  "mocha": "^8.3.2",
  "spectron": "^13.0.0"
},
...
It returns the error below:
#!/bin/sh -eo pipefail
# ERROR IN CONFIG FILE:
# [#/jobs/build] only 1 subschema matches out of 2
# 1. [#/jobs/build/steps/0] 0 subschemas matched instead of one
# | 1. [#/jobs/build/steps/0] extraneous key [path] is not permitted
# | | Permitted keys:
# | | - persist_to_workspace
# | | - save_cache
# | | - run
# | | - checkout
# | | - attach_workspace
# | | - store_test_results
# | | - restore_cache
# | | - store_artifacts
# | | - add_ssh_keys
# | | - deploy
# | | - setup_remote_docker
# | | Passed keys:
# | | []
# | 2. [#/jobs/build/steps/0] Input not a valid enum value
# | | Steps without arguments can be called as strings
# | | enum:
# | | - checkout
# | | - setup_remote_docker
# | | - add_ssh_keys
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code exit status 1
CircleCI received exit code 1
How do I solve this error?

See:
https://circleci.com/docs/2.0/configuration-reference/#checkout
A bit late here, but checkout defaults to the working_directory:
- checkout:
    path: ~/electron-example
Should be:
- checkout
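For reference, a minimal sketch of the two accepted forms (per the checkout reference linked above, the map form with a path argument is also valid; the "Passed keys: []" in the error suggests path had drifted out from under checkout:, which would be an indentation problem rather than an unsupported key):

steps:
  # string form: clones into working_directory
  - checkout
  # map form: path must be nested beneath checkout
  - checkout:
      path: ~/electron-example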
Also I got here because of the following, trying to add the browser-tools:
workflows:
  build-and-test:
    jobs:
      - build
      - pylint
      - tslint
      - karma
      - jest
      - unit-tests
      - functional-tests:
          requires:
            - build
      - deploy:
          requires:
            - build
orbs:
  browser-tools-blah
Should have been:
orbs:
  browser-tools-blah
workflows:
  build-and-test:
    jobs:
      - build
      - pylint
      - tslint
      - karma
      - jest
      - unit-tests
      - functional-tests:
          requires:
            - build
      - deploy:
          requires:
            - build

Related

Unable to connect to external network service from docker running inside github actions workflow

I am building a Docker Compose based microservice and am trying to execute integration tests from within GitHub Actions. Tests all pass locally and the app is currently deployed and working. However, when I run the app in GHA, it appears that the chat service is unable to reach the Twitch IRC servers. The strangest part is that the chat service is capable of logging in but apparently not of sending messages. I can only assume I am missing some critical network configuration in my compose file.
I am running the following compose file in GHA, any ideas folks?
# docker-compose.gha.yml
# Copyright (C) 2023
# Squidpie
version: "3.5"
services:
  redis-test:
    build:
      context: ${STRAUSS_ROOT_DIR}
      dockerfile: ${STRAUSS_ROOT_DIR}/tests/redis/Dockerfile
    networks:
      - strauss
    depends_on:
      - redis
  chat-test:
    build:
      context: ${STRAUSS_ROOT_DIR}
      dockerfile: ${STRAUSS_ROOT_DIR}/tests/chat/Dockerfile
    networks:
      - strauss
    depends_on:
      - chat
  redis:
    image: redis:7.0.6-alpine
    container_name: redis
    networks:
      - strauss
  chat:
    image: strauss/chat:${STRAUSS_CHAT_PKG_VERSION}
    container_name: chat
    build:
      context: ${STRAUSS_ROOT_DIR}
      dockerfile: ${STRAUSS_ROOT_DIR}/services/chat/Dockerfile
      args:
        version: "prod"
        STRAUSS_BUILD_DIR: ${STRAUSS_BUILD_DIR}
    environment:
      TWITCH_USER: ${TWITCH_USER}
      TWITCH_TOKEN: ${TWITCH_TOKEN}
    networks:
      - strauss
    volumes:
      - ./strauss.yml:/strauss/strauss.yml
    depends_on:
      - redis
networks:
  strauss:
    name: strauss
    driver: bridge
With the following GitHub Actions workflow:
# strauss-validate.yml
# Copyright (C) 2023
# Squidpie
name: strauss-validate
run-name: ${{ github.actor }} is validating strauss
on:
  push:
    branches: [dev, stage/*]
  pull_request:
    types: [assigned, opened, synchronize, reopened]
    branches: [dev]
  workflow_dispatch: {}
jobs:
  validate:
    name: Validate strauss
    runs-on: ubuntu-latest
    env:
      STRAUSS_ROOT_DIR: ${{ github.workspace }}
      STRAUSS_BUILD_DIR: target/debug
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup GitVersion
        uses: gittools/actions/gitversion/setup@v0.9.15
        with:
          versionSpec: '5.x'
      - name: Resolve Version
        id: gitversion
        uses: gittools/actions/gitversion/execute@v0.9.15
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.2'
          bundler-cache: true
      - name: Setup Strauss Env
        run: |
          ./scripts/gen-gha-env.sh
          cat .env > .env.runtime
          echo ${{ secrets.STRAUSS_SECRETS }} >> .env.runtime
          printf '%b\n' "services:\n chat:\n channel: <test channel>\n" > strauss.yml
      - uses: actions/cache@v3
        id: cache-rust
        with:
          path: |
            ~/.cargo/bin/
            ~/.cargo/registry/index/
            ~/.cargo/registry/cache/
            ~/.cargo/git/db/
            target/
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
      - name: Build & Unit Test Services
        run: docker compose --env-file .env -f strauss-build.debug.yml run build
      - name: Run Integration Tests
        run: |
          docker compose --env-file .env.runtime \
            -f docker-compose.gha.yml \
            up -d
      - name: Get Test Status
        run: |
          sleep 45
          docker logs `docker container ls -a | grep chat-test | awk '{print $1}'` | grep "PASSED" && \
          docker logs `docker container ls -a | grep redis-test | awk '{print $1}'` | grep "PASSED"
      - name: Inspect Dockers
        if: failure()
        run: |
          docker inspect `docker container ls -a | grep "strauss/chat" | awk '{print $1}'`
          docker inspect `docker container ls -a | grep "redis:7" | awk '{print $1}'`
          docker inspect `docker container ls -a | grep "chat-test" | awk '{print $1}'`
          docker inspect `docker container ls -a | grep "redis-test" | awk '{print $1}'`
      - name: Collect Docker Logs
        if: failure()
        uses: jwalton/gh-docker-logs@v2.2.1

CircleCI workflow build error: matrix and name parameters not working

Does anyone know why this script isn't working?
version: 2.1
orbs:
  android: circleci/android@1.0.3
  gcp-cli: circleci/gcp-cli@2.2.0
jobs:
  build:
    working_directory: ~/code
    docker:
      - image: cimg/android:2022.04
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD
        environment:
          JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      - run:
          name: Chmod permissions
          command: sudo chmod +x ./gradlew
      - run:
          name: Download Dependencies
          command: ./gradlew androidDependencies
      - run:
          name: Run Tests
          command: ./gradlew lint test
      - store_artifacts:
          path: app/build/reports
          destination: reports
      - store_test_results:
          path: app/build/test-results
  nightly-android-test:
    parameters:
      system-image:
        type: string
        default: system-images;android-30;google_apis;x86
    executor:
      name: android/android-machine
      resource-class: xlarge
    steps:
      - checkout
      - android/start-emulator-and-run-tests:
          test-command: ./gradlew connectedDebugAndroidTest
          system-image: << parameters.system-image >>
      - run:
          name: Save test results
          command: |
            mkdir -p ~/test-results/junit/
            find . -type f -regex ".*/build/outputs/androidTest-results/.*xml" -exec cp {} ~/test-results/junit/ \;
          when: always
      - store_test_results:
          path: ~/test-results
      - store_artifacts:
          path: ~/test-results/junit
workflows:
  unit-test-workflow:
    jobs:
      - build
  nightly-test-workflow:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - nightly-android-test:
          matrix:
            alias: nightly
            parameters:
              system-image:
                - system-images;android-30;google_apis;x86
                - system-images;android-29;google_apis;x86
                - system-images;android-28;google_apis;x86
                - system-images;android-27;google_apis;x86
          name: nightly-android-test-<<matrix.system-image>>
I keep getting the following build error:
Config does not conform to schema: {:workflows {:nightly-test-workflow {:jobs [{:nightly-android-test {:matrix disallowed-key, :name disallowed-key}}]}}}
The second workflow seems to fail due to the matrix and name parameters, but I can't see anything wrong in the script that would make them fail. I've tried looking at a YAML parser and couldn't see any null values, and I tried the CircleCI discussion forum without a lot of luck.
I don't think that's the correct syntax. See the CircleCI documentation:
https://circleci.com/docs/2.0/configuration-reference/#matrix-requires-version-21
https://circleci.com/docs/2.0/using-matrix-jobs/
According to the above references, I believe it should be:
- nightly-android-test:
    matrix:
      alias: nightly
      parameters:
        system-image: ["system-images;android-30;google_apis;x86", "system-images;android-29;google_apis;x86", "system-images;android-28;google_apis;x86", "system-images;android-27;google_apis;x86"]
    name: nightly-android-test-<<matrix.system-image>>
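For what it's worth, the flow-style list above is equivalent to a YAML block sequence; what the schema cares about is that matrix and name sit nested under the job invocation rather than as siblings of it. The same invocation in block style, as a sketch:

- nightly-android-test:
    matrix:
      alias: nightly
      parameters:
        system-image:
          - system-images;android-30;google_apis;x86
          - system-images;android-29;google_apis;x86
          - system-images;android-28;google_apis;x86
          - system-images;android-27;google_apis;x86
    name: nightly-android-test-<<matrix.system-image>>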

CircleCI: can we use multiple workflows for multiple types?

I'm new to CircleCI. I want to install my infrastructure via Terraform, and after that I also want to trigger my build, deploy, and push commands for the AWS side. But the workflow does not allow me to use plan_approve_apply and build-and-deploy together in one workflow. I also tried to create multiple workflows (like the example below), one for each, but that didn't work either. How can I call both in a single CircleCI config file?
My Circleci config yml file:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  init-plan:
    working_directory: /tmp/project
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - checkout
      - run:
          name: terraform init & plan
          command: |
            terraform init
            terraform plan
      - persist_to_workspace:
          root: .
          paths:
            - .
  apply:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: terraform
          command: |
            terraform apply
      - persist_to_workspace:
          root: .
          paths:
            - .
  destroy:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: destroy
          command: |
            terraform destroy
      - persist_to_workspace:
          root: .
          paths:
            - .
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
workflows: # didn't work
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"
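A YAML mapping cannot contain the same key twice, so the second top-level workflows: block above is dropped or rejected by the parser. A sketch of the likely fix: keep a single workflows: key and declare both workflows under it, since one config file may contain any number of workflows:

workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"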

Gitlab integration of RabbitMQ as a service

I'm trying to have a GitLab setup where I integrate different services, because I have a Node.js app and I would like to do integration testing with services like RabbitMQ, Cassandra, etc.
Question + Description of the problem + Possible Solution
Does someone know how to write the GitLab configuration file (.gitlab-ci.yml) to integrate RabbitMQ as a service, where I define a configuration file to create specific virtualhosts, exchanges, queues and users?
In a section of my .gitlab-ci.yml I defined a variable which should point to the rabbitmq.conf file as specified in the official documentation (https://www.rabbitmq.com/configure.html#config-location), but this does not work.
...
services:
  # - cassandra:3.11
  - rabbitmq:management
variables:
  RABBITMQ_CONF_FILE: rabbitmq.conf
...
The file I need to point to in my GitLab configuration: rabbitmq.conf
In this file I want to specify a file rabbitmq-definition.json containing my specific virtualhosts, exchanges, queues and users for RabbitMQ.
[
  {rabbit, [
    {loopback_users, []},
    {vm_memory_high_watermark, 0.7},
    {vm_memory_high_watermark_paging_ratio, 0.8},
    {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
    {heartbeat, 10}
  ]},
  {rabbitmq_management, [
    {load_definitions, "./rabbitmq-definition.json"}
  ]}
].
File containing my RabbitMQ configuration: rabbitmq-definition.json
{
  "rabbit_version": "3.8.9",
  "rabbitmq_version": "3.8.9",
  "product_name": "RabbitMQ",
  "product_version": "3.8.9",
  "users": [
    {
      "name": "guest",
      "password_hash": "9OhzGMQqiSCStw2uosywVW2mm95V/I6zLoeOIuVZZm8yFqAV",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    },
    {
      "name": "test",
      "password_hash": "4LWHqT8/KZN8EHa1utXAknONOCjRTZKNoUGdcP3PfG0ljM7L",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "management"
    }
  ],
  "vhosts": [
    {
      "name": "my_virtualhost"
    },
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "guest",
      "vhost": "my_virtualhost",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "test",
      "vhost": "my_virtualhost",
      "configure": "^(my).*",
      "write": "^(my).*",
      "read": "^(my).*"
    }
  ],
  "topic_permissions": [],
  "parameters": [],
  "policies": [],
  "queues": [
    {
      "name": "my_queue",
      "vhost": "my_virtualhost",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "my_exchange",
      "vhost": "my_virtualhost",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    },
    {
      "name": "my_exchange",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "my_exchange",
      "vhost": "my_virtualhost",
      "destination": "my_queue",
      "destination_type": "queue",
      "routing_key": "test.test.*.1",
      "arguments": {}
    }
  ]
}
Existing Setup
Existing file .gitlab-ci.yml:
#image: node:latest
image: node:12
cache:
  paths:
    - node_modules/
stages:
  - install
  - test
  - build
  - deploy
  - security
  - leanix
variables:
  NODE_ENV: "CI"
  ENO_ENV: "CI"
  LOG_FOLDER: "."
  LOG_FILE: "queries.log"
.caching:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull
before_script:
  - npm ci --cache .npm --prefer-offline --no-audit
#install_dependencies:
#  stage: install
#  script:
#    - npm install --no-audit
#  only:
#    changes:
#      - package-lock.json
# test:quality:
#   stage: test
#   allow_failure: true
#   script:
#     - npx eslint --format table .
# test:unit:
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml
# test_node14:unit:
#   image: node:14
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_CONF_FILE: rabbitmq.conf
    # RABBITMQ_DEFAULT_USER: guest
    # RABBITMQ_DEFAULT_PASS: guest
    # RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    # AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
dependency_scan:
  stage: security
  allow_failure: false
  script:
    - npm audit --audit-level=moderate
include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
secret_detection:
  stage: security
  before_script: []
secret_detection_default_branch:
  stage: security
  before_script: []
nodejs-scan-sast:
  stage: security
  before_script: []
eslint-sast:
  stage: security
  before_script: []
leanix_sync:
  stage: leanix
  variables:
    ENV: "development"
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        ENV: "development"
    - if: '$CI_COMMIT_BRANCH == "test"'
      variables:
        ENV: "uat"
    - if: '$CI_COMMIT_BRANCH == "master"'
      variables:
        ENV: "production"
  before_script:
    - apt update && apt -y install jq
  script:
    - VERSION=$(cat package.json | jq -r .version)
    - npm run dependencies_check
    - echo "Update LeanIx Factsheet "
    ...
  allow_failure: true
This is my .env_CI file:
CASSANDRA_CONTACTPOINTS = localhost
CASSANDRA_KEYSPACE = pfm
CASSANDRA_USER = "cassandra"
CASSANDRA_PASS = "cassandra"
RABBITMQ_HOSTS=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_VHOST=my_virtualhost
RABBITMQ_USER=guest
RABBITMQ_PASS=guest
RABBITMQ_PROTOCOL=amqp
PORT = 8091
Logs of a run after a commit on the node-api project:
Running with gitlab-runner 13.12.0 (7a6612da)
on Enocloud-Gitlab-Runner PstDVLop
Preparing the "docker" executor
00:37
Using Docker executor with image node:12 ...
Starting service rabbitmq:management ...
Pulling docker image rabbitmq:management ...
Using docker image sha256:737d67e8db8412d535086a8e0b56e6cf2a6097906e2933803c5447c7ff12f265 for rabbitmq:management with digest rabbitmq@sha256:b29faeb169f3488b3ccfee7ac889c1c804c7102be83cb439e24bddabe5e6bdfb ...
Waiting for services to be up and running...
*** WARNING: Service runner-pstdvlop-project-372-concurrent-0-b78aed36fb13c180-rabbitmq-0 probably didn't start properly.
Health check error:
Service container logs:
2021-08-05T15:39:02.476374200Z 2021-08-05 15:39:02.456089+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:02.476612801Z 2021-08-05 15:39:02.475702+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
...
2021-08-05T15:39:03.024092380Z 2021-08-05 15:39:03.023476+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-08-05T15:39:03.024287781Z 2021-08-05 15:39:03.023757+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-08-05T15:39:03.045901591Z 2021-08-05 15:39:03.045602+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-08-05T15:39:03.391624143Z 2021-08-05 15:39:03.391057+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-08-05T15:39:03.391785874Z 2021-08-05 15:39:03.391207+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/quorum/rabbit@635519274c80
2021-08-05T15:39:03.510825736Z 2021-08-05 15:39:03.510441+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-08-05T15:39:03.536493082Z 2021-08-05 15:39:03.536098+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2021-08-05T15:39:03.547541524Z 2021-08-05 15:39:03.546999+00:00 [info] <0.222.0> ra: starting system coordination
2021-08-05T15:39:03.547876996Z 2021-08-05 15:39:03.547058+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/coordination/rabbit@635519274c80
2021-08-05T15:39:03.551508520Z 2021-08-05 15:39:03.551130+00:00 [info] <0.272.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2021-08-05T15:39:03.552002433Z 2021-08-05 15:39:03.551447+00:00 [noti] <0.277.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2021-08-05T15:39:03.557022096Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0>
2021-08-05T15:39:03.557045886Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Starting RabbitMQ 3.9.1 on Erlang 24.0.5 [jit]
2021-08-05T15:39:03.557050686Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.557069166Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558119613Z
2021-08-05T15:39:03.558134063Z ## ## RabbitMQ 3.9.1
2021-08-05T15:39:03.558139043Z ## ##
2021-08-05T15:39:03.558142303Z ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.558145473Z ###### ##
2021-08-05T15:39:03.558201373Z ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558206473Z
2021-08-05T15:39:03.558210714Z Erlang: 24.0.5 [jit]
2021-08-05T15:39:03.558215324Z TLS Library: OpenSSL - OpenSSL 1.1.1k 25 Mar 2021
2021-08-05T15:39:03.558219824Z
2021-08-05T15:39:03.558223984Z Doc guides: https://rabbitmq.com/documentation.html
2021-08-05T15:39:03.558227934Z Support: https://rabbitmq.com/contact.html
2021-08-05T15:39:03.558232464Z Tutorials: https://rabbitmq.com/getstarted.html
2021-08-05T15:39:03.558236944Z Monitoring: https://rabbitmq.com/monitoring.html
2021-08-05T15:39:03.558241154Z
2021-08-05T15:39:03.558244394Z Logs: /var/log/rabbitmq/rabbit@635519274c80_upgrade.log
2021-08-05T15:39:03.558247324Z <stdout>
2021-08-05T15:39:03.558250464Z
2021-08-05T15:39:03.558253304Z Config file(s): /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.558256274Z
2021-08-05T15:39:03.558984369Z Starting broker...2021-08-05 15:39:03.558703+00:00 [info] <0.222.0>
2021-08-05T15:39:03.558996969Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> node : rabbit@635519274c80
2021-08-05T15:39:03.559000489Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> home dir : /var/lib/rabbitmq
2021-08-05T15:39:03.559003679Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> config file(s) : /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.559006959Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> cookie hash : 1iZSjTlqOt/PC9WvpuHVSg==
2021-08-05T15:39:03.559010669Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> log(s) : /var/log/rabbitmq/rabbit@635519274c80_upgrade.log
2021-08-05T15:39:03.559014249Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> : <stdout>
2021-08-05T15:39:03.559017899Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> database dir : /var/lib/rabbitmq/mnesia/rabbit@635519274c80
2021-08-05T15:39:03.893651319Z 2021-08-05 15:39:03.892900+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:09.081076751Z 2021-08-05 15:39:09.080611+00:00 [info] <0.659.0> * rabbitmq_management_agent
----
Pulling docker image node:12 ...
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256:... ...
Preparing environment 00:01
Running on runner-pstdvlop-project-372-concurrent-0 via gitlab-runner01...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/node-api/.git/
Checking out 4ce1ae1a as PM-1814...
Removing .npm/
Removing node_modules/
Skipping Git submodules setup
Restoring cache 00:03
Checking cache for default...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
WARNING: node_modules/.bin/depcheck: chmod node_modules/.bin/depcheck: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:20
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256: ...
$ npm ci --cache .npm --prefer-offline --no-audit
npm WARN prepare removing existing node_modules/ before installation
> node-cron@2.0.3 postinstall /builds/node-api/node_modules/node-cron
> opencollective-postinstall
> core-js@2.6.12 postinstall /builds/node-api/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
added 642 packages in 10.824s
$ npm run test_integration
> pfm-liveprice-api@0.1.3 test_integration /builds/node-api
> npx nyc mocha test/integration --exit --timeout 10000 --reporter mocha-junit-reporter
RABBITMQ_PROTOCOL : amqp RABBITMQ_USER : guest RABBITMQ_PASS : guest
config.js parseInt(RABBITMQ_PORT) : NaN
simple message
[x] Sent 'Hello World!'
this queue [object Object] exists
----------------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------------|---------|----------|---------|---------|-------------------
All files | 5.49 | 13.71 | 4.11 | 5.33 |
pfm-liveprice-api | 21.3 | 33.8 | 21.43 | 21 |
app.js | 0 | 0 | 0 | 0 | 1-146
config.js | 76.67 | 55.81 | 100 | 77.78 | 19-20,48,55,67-69
pfm-liveprice-api/routes | 0 | 0 | 0 | 0 |
index.js | 0 | 100 | 0 | 0 | 1-19
info.js | 0 | 100 | 0 | 0 | 1-15
liveprice.js | 0 | 0 | 0 | 0 | 1-162
status.js | 0 | 100 | 0 | 0 | 1-14
pfm-liveprice-api/services | 0 | 0 | 0 | 0 |
rabbitmq.js | 0 | 0 | 0 | 0 | 1-110
pfm-liveprice-api/utils | 0 | 0 | 0 | 0 |
buildBinding.js | 0 | 0 | 0 | 0 | 1-35
buildProducts.js | 0 | 0 | 0 | 0 | 1-70
store.js | 0 | 0 | 0 | 0 | 1-291
----------------------------|---------|----------|---------|---------|-------------------
=============================== Coverage summary ===============================
Statements : 5.49% ( 23/419 )
Branches : 13.71% ( 24/175 )
Functions : 4.11% ( 3/73 )
Lines : 5.33% ( 21/394 )
================================================================================
Saving cache for successful job
00:05
Creating cache default...
node_modules/: found 13259 matching files and directories
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: test-results.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Tried and does not work
Using variables to define the RabbitMQ defaults is deprecated and a config file is required.
If I try to use the following vars in my .gitlab-ci.yml:
...
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
    RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
...
I get the following output:
...
Starting service rabbitmq:latest ...
Pulling docker image rabbitmq:latest ...
Using docker image sha256:1c609d1740383796a30facdb06e52905e969f599927c1a537c10e4bcc6990193 for rabbitmq:latest with digest rabbitmq@sha256:d5056e576d8767c0faffcb17b5604a4351edacb8f83045e084882cabd384d216 ...
Waiting for services to be up and running...
*** WARNING: Service runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 probably didn't start properly.
Health check error:
start service container: Error response from daemon: Cannot link to a non running container: /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 AS /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0-wait-for-service/service (docker.go:1156:0s)
Service container logs:
2021-08-05T13:14:33.024761664Z error: RABBITMQ_DEFAULT_PASS is set but deprecated
2021-08-05T13:14:33.024797191Z error: RABBITMQ_DEFAULT_USER is set but deprecated
2021-08-05T13:14:33.024802924Z error: deprecated environment variables detected
2021-08-05T13:14:33.024806771Z
2021-08-05T13:14:33.024810742Z Please use a configuration file instead; visit https://www.rabbitmq.com/configure.html to learn more
2021-08-05T13:14:33.024844321Z
...
because the official Docker documentation (https://hub.docker.com/_/rabbitmq) states that:
WARNING: As of RabbitMQ 3.9, all of the docker-specific variables listed below are deprecated and no longer used. Please use a configuration file instead; visit rabbitmq.com/configure to learn more about the configuration file. For a starting point, the 3.8 images will print out the config file it generated from supplied environment variables.
# Unavailable in 3.9 and up
RABBITMQ_DEFAULT_PASS
RABBITMQ_DEFAULT_PASS_FILE
RABBITMQ_DEFAULT_USER
RABBITMQ_DEFAULT_USER_FILE
RABBITMQ_DEFAULT_VHOST
RABBITMQ_ERLANG_COOKIE
...
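For reference, the replacement the 3.9 image is asking for is a config file rather than environment variables. A minimal sysctl-style rabbitmq.conf sketch of the same settings (default_user, default_pass, default_vhost and load_definitions are documented rabbitmq.conf keys; actually getting the file into the GitLab service container, e.g. through the runner's volumes configuration, is a separate problem and an assumption here):

# rabbitmq.conf, sysctl format -- a sketch, not a verified setup
default_user = guest
default_pass = guest
default_vhost = my_virtualhost
# load_definitions is supported in core RabbitMQ since 3.8.2
load_definitions = /etc/rabbitmq/rabbitmq-definition.json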

CircleCI Hold step

I'm trying to add a hold job into a workflow in CircleCI's config.yml file but I cannot make it work and I'm pretty sure it's a really simple error on my part (I just can't see it!).
When validating it locally with the CircleCI CLI, running the following command:
circleci config validate
I get the following error:
Error: Job 'hold' requires 'build-and-test-service', which is the name of 0 other jobs in workflow 'build-deploy'
This is the config.yml (note it's for a Serverless Framework application - not that that should make any difference)
version: 2.1
jobs:
  build-and-test-service:
    docker:
      - image: timbru31/java-node
    parameters:
      service_path:
        type: string
    steps:
      - checkout
      - serverless/setup:
          app-name: serverless-framework-orb
          org-name: circleci
      - restore_cache:
          keys:
            - dependencies-cache-{{ checksum "v2/shared/package-lock.json" }}-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
            - dependencies-cache
      - run:
          name: Install dependencies
          command: |
            npm install
            cd v2/shared
            npm install
            cd ../../<< parameters.service_path >>
            npm install
      - run:
          name: Test service
          command: |
            cd << parameters.service_path >>
            npm run test:ci
      - store_artifacts:
          path: << parameters.service_path >>/test-results/jest
          prefix: tests
      - store_artifacts:
          path: << parameters.service_path >>/coverage
          prefix: coverage
      - store_test_results:
          path: << parameters.service_path >>/test-results
  deploy:
    docker:
      - image: circleci/node:lts
    parameters:
      service_path:
        type: string
      stage_name:
        type: string
      region:
        type: string
    steps:
      - run:
          name: Deploy application
          command: |
            cd << parameters.service_path >>
            serverless deploy --verbose --stage << parameters.stage_name >> --region << parameters.region >>
      - save_cache:
          paths:
            - node_modules
            - << parameters.service_path >>/node_modules
          key: dependencies-cache-{{ checksum "package-lock.json" }}-{{ checksum "<< parameters.service_path >>/package-lock.json" }}
orbs:
  serverless: circleci/serverless-framework@1.0.1
workflows:
  version: 2
  build-deploy:
    jobs:
      # non-master branches deploys to stage named by the branch
      - build-and-test-service:
          name: Build and test campaign
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          name: hold
          type: approval
          requires:
            - build-and-test-service
      - deploy:
          service_path: v2/campaign
          stage_name: dev
          region: eu-west-2
          requires:
            - hold
It's obvious the error relates to the hold step (near the bottom of the config) not being able to find build-and-test-service just above it, but build-and-test-service does exist, so I am stumped at this point.
For anyone reading, I figured out why it wasn't working.
Essentially I was using the incorrect property reference under the requires key:
workflows:
  version: 2
  build-deploy:
    jobs:
      # non-master branches deploys to stage named by the branch
      - build-and-test-service:
          name: Build and test campaign
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          name: hold
          type: approval
          requires:
            - build-and-test-service
The correct property reference in this case should have been the name of the previous step, i.e. Build and test campaign, so I just changed that name to build-and-test-service.
I found the CircleCI docs were not very clear on this, but perhaps that's because their examples around manual approvals show the requires property pointing at the root key of the job, such as build-and-test-service.
I suppose I should have been more vigilant in my error reading too, it did mention name there as well.
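For anyone hitting the same thing: once a workflow job is given a custom name, any requires entry must reference that name rather than the job's key. A sketch of the alternative fix, keeping the custom name:

workflows:
  version: 2
  build-deploy:
    jobs:
      - build-and-test-service:
          name: Build and test campaign
          service_path: v2/campaign
          filters:
            branches:
              only: develop
      - hold:
          type: approval
          requires:
            - Build and test campaign # the custom name, not the job key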
