GitLab integration of RabbitMQ as a service (Docker)

I'm trying to set up GitLab CI to integrate different services, because I have a Node.js app and I would like to run integration tests against services like RabbitMQ, Cassandra, etc.
Question + Description of the problem + Possible Solution
Does someone know how to write the GitLab configuration file (.gitlab-ci.yml) to integrate RabbitMQ as a service, where I provide a configuration file that creates specific virtual hosts, exchanges, queues and users?
In a section of my .gitlab-ci.yml I defined a variable that should point to the RabbitMQ config file, as specified in the official documentation (https://www.rabbitmq.com/configure.html#config-location), but this does not work.
...
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_CONF_FILE: rabbitmq.conf
...
The file I need to point to in my GitLab configuration: rabbitmq.conf
In this file I want to reference a file rabbitmq-definition.json containing my specific virtual hosts, exchanges, queues and users for RabbitMQ.
[
  {rabbit, [
    {loopback_users, []},
    {vm_memory_high_watermark, 0.7},
    {vm_memory_high_watermark_paging_ratio, 0.8},
    {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
    {heartbeat, 10}
  ]},
  {rabbitmq_management, [
    {load_definitions, "./rabbitmq-definition.json"}
  ]}
].
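(Side note: the file above uses the classic Erlang-term syntax, which RabbitMQ normally expects under the name rabbitmq.config / advanced.config. As a rough, untested sketch based on the configuration docs, the same settings in the newer sysctl-style rabbitmq.conf format would look roughly like this; the definitions path is an assumption and the per-category log levels are omitted:)
# sketch of an equivalent new-style rabbitmq.conf (untested)
loopback_users = none
vm_memory_high_watermark.relative = 0.7
vm_memory_high_watermark_paging_ratio = 0.8
heartbeat = 10
# since RabbitMQ 3.8.2 the broker itself can import definitions at boot
load_definitions = /etc/rabbitmq/rabbitmq-definition.json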
The file containing my RabbitMQ definitions: rabbitmq-definition.json
{
  "rabbit_version": "3.8.9",
  "rabbitmq_version": "3.8.9",
  "product_name": "RabbitMQ",
  "product_version": "3.8.9",
  "users": [
    {
      "name": "guest",
      "password_hash": "9OhzGMQqiSCStw2uosywVW2mm95V/I6zLoeOIuVZZm8yFqAV",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    },
    {
      "name": "test",
      "password_hash": "4LWHqT8/KZN8EHa1utXAknONOCjRTZKNoUGdcP3PfG0ljM7L",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "management"
    }
  ],
  "vhosts": [
    {
      "name": "my_virtualhost"
    },
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "guest",
      "vhost": "my_virtualhost",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "test",
      "vhost": "my_virtualhost",
      "configure": "^(my).*",
      "write": "^(my).*",
      "read": "^(my).*"
    }
  ],
  "topic_permissions": [],
  "parameters": [],
  "policies": [],
  "queues": [
    {
      "name": "my_queue",
      "vhost": "my_virtualhost",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "my_exchange",
      "vhost": "my_virtualhost",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    },
    {
      "name": "my_exchange",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "my_exchange",
      "vhost": "my_virtualhost",
      "destination": "my_queue",
      "destination_type": "queue",
      "routing_key": "test.test.*.1",
      "arguments": {}
    }
  ]
}
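(Side note: a definitions file like this can also be imported into an already-running broker via the CLI, which at least validates the JSON; the command exists since RabbitMQ 3.8:)
rabbitmqctl import_definitions ./rabbitmq-definition.json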
Existing Setup
Existing file .gitlab-ci.yml:
#image: node:latest
image: node:12
cache:
  paths:
    - node_modules/
stages:
  - install
  - test
  - build
  - deploy
  - security
  - leanix
variables:
  NODE_ENV: "CI"
  ENO_ENV: "CI"
  LOG_FOLDER: "."
  LOG_FILE: "queries.log"
.caching:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull
before_script:
  - npm ci --cache .npm --prefer-offline --no-audit
#install_dependencies:
#  stage: install
#  script:
#    - npm install --no-audit
#  only:
#    changes:
#      - package-lock.json
# test:quality:
#   stage: test
#   allow_failure: true
#   script:
#     - npx eslint --format table .
# test:unit:
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml
# test_node14:unit:
#   image: node:14
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_CONF_FILE: rabbitmq.conf
    # RABBITMQ_DEFAULT_USER: guest
    # RABBITMQ_DEFAULT_PASS: guest
    # RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    # AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
dependency_scan:
  stage: security
  allow_failure: false
  script:
    - npm audit --audit-level=moderate
include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
secret_detection:
  stage: security
  before_script: []
secret_detection_default_branch:
  stage: security
  before_script: []
nodejs-scan-sast:
  stage: security
  before_script: []
eslint-sast:
  stage: security
  before_script: []
leanix_sync:
  stage: leanix
  variables:
    ENV: "development"
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        ENV: "development"
    - if: '$CI_COMMIT_BRANCH == "test"'
      variables:
        ENV: "uat"
    - if: '$CI_COMMIT_BRANCH == "master"'
      variables:
        ENV: "production"
  before_script:
    - apt update && apt -y install jq
  script:
    - VERSION=$(cat package.json | jq -r .version)
    - npm run dependencies_check
    - echo "Update LeanIx Factsheet "
    ...
  allow_failure: true
This is my .env_CI file:
CASSANDRA_CONTACTPOINTS = localhost
CASSANDRA_KEYSPACE = pfm
CASSANDRA_USER = "cassandra"
CASSANDRA_PASS = "cassandra"
RABBITMQ_HOSTS=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_VHOST=my_virtualhost
RABBITMQ_USER=guest
RABBITMQ_PASS=guest
RABBITMQ_PROTOCOL=amqp
PORT = 8091
Logs of a run after a commit on the node-api project:
Running with gitlab-runner 13.12.0 (7a6612da)
on Enocloud-Gitlab-Runner PstDVLop
Preparing the "docker" executor
00:37
Using Docker executor with image node:12 ...
Starting service rabbitmq:management ...
Pulling docker image rabbitmq:management ...
Using docker image sha256:737d67e8db8412d535086a8e0b56e6cf2a6097906e2933803c5447c7ff12f265 for rabbitmq:management with digest rabbitmq@sha256:b29faeb169f3488b3ccfee7ac889c1c804c7102be83cb439e24bddabe5e6bdfb ...
Waiting for services to be up and running...
*** WARNING: Service runner-pstdvlop-project-372-concurrent-0-b78aed36fb13c180-rabbitmq-0 probably didn't start properly.
Health check error:
Service container logs:
2021-08-05T15:39:02.476374200Z 2021-08-05 15:39:02.456089+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:02.476612801Z 2021-08-05 15:39:02.475702+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
...
2021-08-05T15:39:03.024092380Z 2021-08-05 15:39:03.023476+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-08-05T15:39:03.024287781Z 2021-08-05 15:39:03.023757+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-08-05T15:39:03.045901591Z 2021-08-05 15:39:03.045602+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-08-05T15:39:03.391624143Z 2021-08-05 15:39:03.391057+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-08-05T15:39:03.391785874Z 2021-08-05 15:39:03.391207+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/quorum/rabbit@635519274c80
2021-08-05T15:39:03.510825736Z 2021-08-05 15:39:03.510441+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-08-05T15:39:03.536493082Z 2021-08-05 15:39:03.536098+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2021-08-05T15:39:03.547541524Z 2021-08-05 15:39:03.546999+00:00 [info] <0.222.0> ra: starting system coordination
2021-08-05T15:39:03.547876996Z 2021-08-05 15:39:03.547058+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/coordination/rabbit@635519274c80
2021-08-05T15:39:03.551508520Z 2021-08-05 15:39:03.551130+00:00 [info] <0.272.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2021-08-05T15:39:03.552002433Z 2021-08-05 15:39:03.551447+00:00 [noti] <0.277.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2021-08-05T15:39:03.557022096Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0>
2021-08-05T15:39:03.557045886Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Starting RabbitMQ 3.9.1 on Erlang 24.0.5 [jit]
2021-08-05T15:39:03.557050686Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.557069166Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558119613Z
2021-08-05T15:39:03.558134063Z ## ## RabbitMQ 3.9.1
2021-08-05T15:39:03.558139043Z ## ##
2021-08-05T15:39:03.558142303Z ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.558145473Z ###### ##
2021-08-05T15:39:03.558201373Z ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558206473Z
2021-08-05T15:39:03.558210714Z Erlang: 24.0.5 [jit]
2021-08-05T15:39:03.558215324Z TLS Library: OpenSSL - OpenSSL 1.1.1k 25 Mar 2021
2021-08-05T15:39:03.558219824Z
2021-08-05T15:39:03.558223984Z Doc guides: https://rabbitmq.com/documentation.html
2021-08-05T15:39:03.558227934Z Support: https://rabbitmq.com/contact.html
2021-08-05T15:39:03.558232464Z Tutorials: https://rabbitmq.com/getstarted.html
2021-08-05T15:39:03.558236944Z Monitoring: https://rabbitmq.com/monitoring.html
2021-08-05T15:39:03.558241154Z
2021-08-05T15:39:03.558244394Z Logs: /var/log/rabbitmq/rabbit@635519274c80_upgrade.log
2021-08-05T15:39:03.558247324Z <stdout>
2021-08-05T15:39:03.558250464Z
2021-08-05T15:39:03.558253304Z Config file(s): /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.558256274Z
2021-08-05T15:39:03.558984369Z Starting broker...2021-08-05 15:39:03.558703+00:00 [info] <0.222.0>
2021-08-05T15:39:03.558996969Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> node : rabbit@635519274c80
2021-08-05T15:39:03.559000489Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> home dir : /var/lib/rabbitmq
2021-08-05T15:39:03.559003679Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> config file(s) : /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.559006959Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> cookie hash : 1iZSjTlqOt/PC9WvpuHVSg==
2021-08-05T15:39:03.559010669Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> log(s) : /var/log/rabbitmq/rabbit@635519274c80_upgrade.log
2021-08-05T15:39:03.559014249Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> : <stdout>
2021-08-05T15:39:03.559017899Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> database dir : /var/lib/rabbitmq/mnesia/rabbit@635519274c80
2021-08-05T15:39:03.893651319Z 2021-08-05 15:39:03.892900+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:09.081076751Z 2021-08-05 15:39:09.080611+00:00 [info] <0.659.0> * rabbitmq_management_agent
----
Pulling docker image node:12 ...
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256:... ...
Preparing environment 00:01
Running on runner-pstdvlop-project-372-concurrent-0 via gitlab-runner01...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/node-api/.git/
Checking out 4ce1ae1a as PM-1814...
Removing .npm/
Removing node_modules/
Skipping Git submodules setup
Restoring cache 00:03
Checking cache for default...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
WARNING: node_modules/.bin/depcheck: chmod node_modules/.bin/depcheck: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:20
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256: ...
$ npm ci --cache .npm --prefer-offline --no-audit
npm WARN prepare removing existing node_modules/ before installation
> node-cron@2.0.3 postinstall /builds/node-api/node_modules/node-cron
> opencollective-postinstall
> core-js@2.6.12 postinstall /builds/node-api/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
added 642 packages in 10.824s
$ npm run test_integration
> pfm-liveprice-api@0.1.3 test_integration /builds/node-api
> npx nyc mocha test/integration --exit --timeout 10000 --reporter mocha-junit-reporter
RABBITMQ_PROTOCOL : amqp RABBITMQ_USER : guest RABBITMQ_PASS : guest
config.js parseInt(RABBITMQ_PORT) : NaN
simple message
[x] Sent 'Hello World!'
this queue [object Object] exists
----------------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------------|---------|----------|---------|---------|-------------------
All files | 5.49 | 13.71 | 4.11 | 5.33 |
pfm-liveprice-api | 21.3 | 33.8 | 21.43 | 21 |
app.js | 0 | 0 | 0 | 0 | 1-146
config.js | 76.67 | 55.81 | 100 | 77.78 | 19-20,48,55,67-69
pfm-liveprice-api/routes | 0 | 0 | 0 | 0 |
index.js | 0 | 100 | 0 | 0 | 1-19
info.js | 0 | 100 | 0 | 0 | 1-15
liveprice.js | 0 | 0 | 0 | 0 | 1-162
status.js | 0 | 100 | 0 | 0 | 1-14
pfm-liveprice-api/services | 0 | 0 | 0 | 0 |
rabbitmq.js | 0 | 0 | 0 | 0 | 1-110
pfm-liveprice-api/utils | 0 | 0 | 0 | 0 |
buildBinding.js | 0 | 0 | 0 | 0 | 1-35
buildProducts.js | 0 | 0 | 0 | 0 | 1-70
store.js | 0 | 0 | 0 | 0 | 1-291
----------------------------|---------|----------|---------|---------|-------------------
=============================== Coverage summary ===============================
Statements : 5.49% ( 23/419 )
Branches : 13.71% ( 24/175 )
Functions : 4.11% ( 3/73 )
Lines : 5.33% ( 21/394 )
================================================================================
Saving cache for successful job
00:05
Creating cache default...
node_modules/: found 13259 matching files and directories
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: test-results.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Tried and does not work
Using variables to define the RabbitMQ defaults is deprecated; a config file is required.
If I try to use the following vars in my .gitlab-ci.yml:
...
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
    RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
...
I get the following output:
...
Starting service rabbitmq:latest ...
Pulling docker image rabbitmq:latest ...
Using docker image sha256:1c609d1740383796a30facdb06e52905e969f599927c1a537c10e4bcc6990193 for rabbitmq:latest with digest rabbitmq@sha256:d5056e576d8767c0faffcb17b5604a4351edacb8f83045e084882cabd384d216 ...
Waiting for services to be up and running...
*** WARNING: Service runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 probably didn't start properly.
Health check error:
start service container: Error response from daemon: Cannot link to a non running container: /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 AS /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0-wait-for-service/service (docker.go:1156:0s)
Service container logs:
2021-08-05T13:14:33.024761664Z error: RABBITMQ_DEFAULT_PASS is set but deprecated
2021-08-05T13:14:33.024797191Z error: RABBITMQ_DEFAULT_USER is set but deprecated
2021-08-05T13:14:33.024802924Z error: deprecated environment variables detected
2021-08-05T13:14:33.024806771Z
2021-08-05T13:14:33.024810742Z Please use a configuration file instead; visit https://www.rabbitmq.com/configure.html to learn more
2021-08-05T13:14:33.024844321Z
...
This is because the official Docker documentation (https://hub.docker.com/_/rabbitmq) states:
WARNING: As of RabbitMQ 3.9, all of the docker-specific variables listed below are deprecated and no longer used. Please use a configuration file instead; visit rabbitmq.com/configure to learn more about the configuration file. For a starting point, the 3.8 images will print out the config file it generated from supplied environment variables.
# Unavailable in 3.9 and up
RABBITMQ_DEFAULT_PASS
RABBITMQ_DEFAULT_PASS_FILE
RABBITMQ_DEFAULT_USER
RABBITMQ_DEFAULT_USER_FILE
RABBITMQ_DEFAULT_VHOST
RABBITMQ_ERLANG_COOKIE
...
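One workaround I am considering (a sketch only, not yet verified in this pipeline) is to bake the configuration into a small custom RabbitMQ image kept in the repository and use that as the service image; the registry path below is a placeholder, and the load_definitions path inside rabbitmq.conf would have to point at the copied location:
# hypothetical Dockerfile for a CI-only RabbitMQ service image
FROM rabbitmq:3.9-management
COPY rabbitmq.conf /etc/rabbitmq/rabbitmq.conf
COPY rabbitmq-definition.json /etc/rabbitmq/rabbitmq-definition.json
The job would then reference that image instead of rabbitmq:management:
  services:
    - name: registry.example.com/my-group/rabbitmq-ci:latest  # placeholder image path
      alias: rabbitmq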

Related

node-red localhost not connecting

I am running Node-RED with Docker Compose; from the .gitlab-ci.yml file I am calling the docker/compose image, my pipeline is working, and I can see this:
node-red | 11 Nov 11:28:51 - [info]
node-red |
node-red | Welcome to Node-RED
node-red | ===================
node-red |
node-red | 11 Nov 11:28:51 - [info] Node-RED version: v3.0.2
node-red | 11 Nov 11:28:51 - [info] Node.js version: v16.16.0
node-red | 11 Nov 11:28:51 - [info] Linux 5.15.49-linuxkit x64 LE
node-red | 11 Nov 11:28:52 - [info] Loading palette nodes
node-red | 11 Nov 11:28:53 - [info] Settings file : /data/settings.js
node-red | 11 Nov 11:28:53 - [info] Context store : 'default' [module=memory]
node-red | 11 Nov 11:28:53 - [info] User directory : /data
node-red | 11 Nov 11:28:53 - [warn] Projects disabled : editorTheme.projects.enabled=false
node-red | 11 Nov 11:28:53 - [info] Flows file : /data/flows.json
node-red | 11 Nov 11:28:53 - [warn]
node-red |
node-red | ---------------------------------------------------------------------
node-red | Your flow credentials file is encrypted using a system-generated key.
node-red |
node-red | If the system-generated key is lost for any reason, your credentials
node-red | file will not be recoverable, you will have to delete it and re-enter
node-red | your credentials.
node-red |
node-red | You should set your own key using the 'credentialSecret' option in
node-red | your settings file. Node-RED will then re-encrypt your credentials
node-red | file using your chosen key the next time you deploy a change.
node-red | ---------------------------------------------------------------------
node-red |
node-red | 11 Nov 11:28:53 - [info] Server now running at http://127.0.0.1:1880/
node-red | 11 Nov 11:28:53 - [warn] Encrypted credentials not found
node-red | 11 Nov 11:28:53 - [info] Starting flows
node-red | 11 Nov 11:28:53 - [info] Started flows
but when I try to open localhost to access Node-RED or the dashboard, I get the error "Failed to open page".
This is my docker-compose.yml
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
user: '1000'
container_name: node-red
environment:
- TZ=Europe/Amsterdam
ports:
- "1880:1880"
This is my .gitlab-ci.yml
yateen-docker:
  stage: build
  image:
    name: docker/compose
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
  script:
    - docker-compose up
  only:
    - main
Any help would be appreciated!
I created the Node-RED container via docker-compose rather than just running a docker run command. My node-red image is running, but I can't access the server page.

Dockerfile Docker-Compose VueJS App using HAProxy won't run

I'm building my VueJS app, which uses a trusted third-party API, and I'm in the middle of writing the Dockerfile and docker-compose.yml, using HAProxy to allow all methods access to the API. But after running docker-compose up --build, my first service theApp stops immediately and keeps stopping even after a restart. Here are my files:
Dockerfile
FROM node:18.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "serve"]
docker-compose.yml
version: "3.7"
services:
theApp:
container_name: theApp
build:
context: .
dockerfile: Dockerfile
volumes:
- ./src:/app/src
ports:
- "9990:9990"
haproxy:
image: haproxy:2.3
expose:
- "7000"
- "8080"
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
restart: "always"
depends_on:
- theApp
haproxy.cfg
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout tunnel 1h # timeout to use with WebSocket and CONNECT

# enable resolving through docker dns and avoid crashing if service is down while proxy is starting
resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend stats
    bind *:7000
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 10s
    stats auth admin:admin

frontend project_frontend
    bind *:8080
    acl is_options method OPTIONS
    use_backend cors_backend if is_options
    default_backend project_backend

backend project_backend
    # START CORS
    http-response add-header Access-Control-Allow-Origin "*"
    http-response add-header Access-Control-Allow-Headers "*"
    http-response add-header Access-Control-Max-Age 3600
    http-response add-header Access-Control-Allow-Methods "GET, DELETE, OPTIONS, POST, PUT, PATCH"
    # END CORS
    server pbe1 theApp:8080 check inter 5s

backend cors_backend
    http-after-response set-header Access-Control-Allow-Origin "*"
    http-after-response set-header Access-Control-Allow-Headers "*"
    http-after-response set-header Access-Control-Max-Age "31536000"
    http-request return status 200
Here's the error from the command:
[NOTICE] 150/164342 (1) : New worker #1 (8) forked
haproxy_1 | [WARNING] 150/164342 (8) : Server project_backend/pbe1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1 | [NOTICE] 150/164342 (8) : haproxy version is 2.3.20-2c8082e
haproxy_1 | [NOTICE] 150/164342 (8) : path to executable is /usr/local/sbin/haproxy
haproxy_1 | [ALERT] 150/164342 (8) : backend 'project_backend' has no server available!
trisaic |
trisaic | > trisaic@0.1.0 serve
trisaic | > vue-cli-service serve
trisaic |
trisaic | INFO Starting development server...
trisaic | ERROR Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | at checkResourceSource (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:167:11)
trisaic | at Function.normalizeRule (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:198:4)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:110:20
trisaic | at Array.map (<anonymous>)
trisaic | at Function.normalizeRules (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:109:17)
trisaic | at new RuleSet (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:104:24)
trisaic | at new NormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/NormalModuleFactory.js:115:18)
trisaic | at Compiler.createNormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:636:31)
trisaic | at Compiler.newCompilationParams (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:653:30)
trisaic | at Compiler.compile (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:661:23)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:77:18
trisaic | at AsyncSeriesHook.eval [as callAsync] (eval at create (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:24:1)
trisaic | at AsyncSeriesHook.lazyCompileHook (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/Hook.js:154:20)
trisaic | at Watching._go (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:41:32)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:33:9
trisaic | at Compiler.readRecords (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:529:11)
trisaic exited with code 1
I already tried and googled but got stuck, am I missing something here?

Docker-Compose with Commandbox cannot change web root

I'm using docker-compose to launch a CommandBox Lucee container and a MySQL container.
I'd like to change the web root of the Lucee server to keep all my non-public files hidden (server.json etc., the cfmigrations resources folder).
I've followed the docs and updated my server.json
https://commandbox.ortusbooks.com/embedded-server/server.json/packaging-your-server
{
  "web": {
    "webroot": "./public"
  }
}
If I launch the server from Windows (box start from the app folder), the server loads my index.cfm from ./public at http://localhost, perfect.
But using this .yaml file, the web root doesn't change to ./public and the contents of my /app folder are shown, with the "public" folder visible in the directory listing.
services:
  db:
    image: mysql:8.0.26
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_DATABASE: cf
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASSWORD: $MYSQL_PASSWORD
      MYSQL_SOURCE: $MYSQL_SOURCE
      MYSQL_SOURCE_USER: $MYSQL_SOURCE_USER
      MYSQL_SOURCE_PASSWORD: $MYSQL_SOURCE_PASSWORD
    volumes:
      - ./mysql:/var/lib/mysql
      - ./assets/initdb:/docker-entrypoint-initdb.d
      - ./assets/sql:/assets/sql
  web:
    depends_on:
      - db
    # Post 3.1.0 fails to boot if APP_DIR is set to non /app
    # image: ortussolutions/commandbox:lucee5-3.1.0
    image: ortussolutions/commandbox:lucee5
    # build: .
    ports:
      - "80:80"
      - "443:443"
    environment:
      - PORT=80
      - SSL_PORT=443
      - BOX_SERVER_WEB_SSL_ENABLE=true
      - BOX_SERVER_WEB_DIRECTORYBROWSING=$CF_DIRECTORY_BROWSING
      - BOX_INSTALL=true
      - BOX_SERVER_WEB_BLOCKCFADMIN=$CF_BLOCK_ADMIN
      - BOX_SERVER_CFCONFIGFILE=/app/.cfconfig.json
      # - APP_DIR=/app/public
      # - BOX_SERVER_WEB_WEBROOT=/app/public
      - cfconfig_robustExceptionEnabled=$CF_ROBOUST_EXCEPTION_ENABLED
      - cfconfig_adminPassword=$CF_ADMIN_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_HOST=$MYSQL_HOST
      - MYSQL_PORT=$MYSQL_PORT
    volumes:
      - ./app:/app
      - ./assets/mysql-connector-java-8.0.26.jar:/usr/local/lib/CommandBox/lib/mysql-connector-java-8.0.26.jar
(Screenshots of the directory listing and my project structure omitted.)
It seems like the server.json file is being ignored, or at least the web.webroot property, but I've tried both of these settings and neither solves the problem:
- APP_DIR=/app/public
- BOX_SERVER_WEB_WEBROOT=/app/public
The commandbox docs suggest changing APP_DIR to fix the web root, "APP_DIR - Application directory (web root)."
https://hub.docker.com/r/ortussolutions/commandbox/
But if I do that, I get an error about the startup script being in the wrong place, which to me looks like it should be fixed:
https://github.com/Ortus-Solutions/docker-commandbox/issues/55
The BOX_SERVER_WEB_WEBROOT variable is ignored in the same way server.json is (or at least that property). I've tried setting the following env vars as well (both upper and lower case) and it makes no difference, but bear in mind that server.json does change the web root for me when I start the server locally:
BOX_SERVER_WEB_WEBROOT=./public
BOX_SERVER_WEB_WEBROOT=/app/public
BOX_SERVER_WEB_WEBROOT=public
The output as the web container starts up:
Set verboseErrors = true
INFO: CF Engine defined as lucee@5.3.8+189
INFO: Convention .cfconfig.json found at /app/.cfconfig.json
INFO: Server Home Directory set to: /usr/local/lib/serverHome
√ | Installing ALL dependencies
| √ | Installing package [forgebox:commandbox-cfconfig@1.6.3]
| √ | Installing package [forgebox:commandbox-migrations@3.2.3]
| | √ | Installing package [forgebox:cfmigrations@^2.0.0]
| | | √ | Installing package [forgebox:qb@^8.0.0]
| | | | √ | Installing package [forgebox:cbpaginator@^2.4.0]
+ [[ -n '' ]]
+ [[ -n '' ]]
INFO: Generating server startup script
√ | Starting Server
|------------------------------
| start server in - /app/
| server name - app
| server config file - /app//server.json
| WAR/zip archive already installed.
| Found CFConfig JSON in ".cfconfig.json" file in web root by convention
| .
| Importing luceeserver config from [/app/.cfconfig.json]
| Config transferred!
| Setting OS environment variable [cfconfig_adminPassword] into luceeser
| ver
| [adminPassword] set.
| Setting OS environment variable [cfconfig_robustExceptionEnabled] into
| luceeserver
| [robustExceptionEnabled] set.
| Start script for shell [bash] generated at: /app/server-start.sh
| Server start command:
| /opt/java/openjdk/bin/java
| -jar /usr/local/lib/CommandBox/lib/runwar-4.5.1.jar
| --background=false
| --host 0.0.0.0
| --stop-port 42777
| --processname app [lucee 5.3.8+189]
| --log-dir /usr/local/lib/serverHome//logs
| --server-name app
| --tray-enable false
| --dock-enable true
| --directoryindex true
| --timeout 240
| --proxy-peeraddress true
| --cookie-secure false
| --cookie-httponly false
| --pid-file /usr/local/lib/serverHome//.pid.txt
| --gzip-enable true
| --cfengine-name lucee
| -war /app/
| --web-xml-path /usr/local/lib/serverHome/WEB-INF/web.xml
| --http-enable true
| --ssl-enable true
| --ajp-enable false
| --http2-enable true
| --open-browser false
| --open-url https://0.0.0.0:443
| --port 80
| --ssl-port 443
| --urlrewrite-enable false
| --predicate-file /usr/local/lib/serverHome//.predicateFile.txt
| Dry run specified, exiting without starting server.
|------------------------------
| √ | Setting Server Profile to [production]
| |-----------------------------------------------------
| | Profile set from secure by default
| | Block CF Admin disabled
| | Block Sensitive Paths enabled
| | Block Flash Remoting enabled
| | Directory Browsing enabled
| |-----------------------------------------------------
INFO: Starting server using generated script: /usr/local/bin/startup.sh
[INFO ] runwar.server: Starting RunWAR 4.5.1
[INFO ] runwar.server: HTTP2 Enabled:true
[INFO ] runwar.server: Enabling SSL protocol on port 443
[INFO ] runwar.server: HTTP ajpEnable:false
[INFO ] runwar.server: HTTP warFile exists:true
[INFO ] runwar.server: HTTP warFile isDirectory:true
[INFO ] runwar.server: HTTP background:false
[INFO ] runwar.server: Adding additional lib dir of: /usr/local/lib/serverHome/WEB-INF/lib
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Starting - port:80 stop-port:42777 warpath:file:/app/
[INFO ] runwar.server: context: / - version: 4.5.1
[INFO ] runwar.server: web-dirs: ["\/app"]
[INFO ] runwar.server: Log Directory: /usr/local/lib/serverHome/logs
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: XNIO-Option CONNECTION_LOW_WATER:1000000
[INFO ] runwar.server: XNIO-Option CORK:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_MAX_THREADS:30
[INFO ] runwar.server: XNIO-Option WORKER_IO_THREADS:8
[INFO ] runwar.server: XNIO-Option TCP_NODELAY:true
[INFO ] runwar.server: XNIO-Option WORKER_TASK_CORE_THREADS:30
[INFO ] runwar.server: XNIO-Option CONNECTION_HIGH_WATER:1000000
[INFO ] runwar.config: Parsing '/usr/local/lib/serverHome/WEB-INF/web.xml'
[INFO ] runwar.server: Extensions allowed by the default servlet for static files: 3gp,3gpp,7z,ai,aif,aiff,asf,asx,atom,au,avi,bin,bmp,btm,cco,crt,css,csv,deb,der,dmg,doc,docx,eot,eps,flv,font,gif,hqx,htc,htm,html,ico,img,ini,iso,jad,jng,jnlp,jpeg,jpg,js,json,kar,kml,kmz,m3u8,m4a,m4v,map,mid,midi,mml,mng,mov,mp3,mp4,mpeg,mpeg4,mpg,msi,msm,msp,ogg,otf,pdb,pdf,pem,pl,pm,png,ppt,pptx,prc,ps,psd,ra,rar,rpm,rss,rtf,run,sea,shtml,sit,svg,svgz,swf,tar,tcl,tif,tiff,tk,ts,ttf,txt,wav,wbmp,webm,webp,wmf,wml,wmlc,wmv,woff,woff2,xhtml,xls,xlsx,xml,xpi,xspf,zip,aifc,aac,apk,bak,bk,bz2,cdr,cmx,dat,dtd,eml,fla,gz,gzip,ipa,ia,indd,hey,lz,maf,markdown,md,mkv,mp1,mp2,mpe,odt,ott,odg,odf,ots,pps,pot,pmd,pub,raw,sdd,tsv,xcf,yml,yaml
[INFO ] runwar.server: welcome pages in deployment manager: [index.cfm, index.lucee, index.html, index.htm]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender (file:/usr/local/lib/serverHome/WEB-INF/lib/lucee.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.felix.framework.ext.ClassPathExtenderFactory$DefaultClassLoaderExtender
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[INFO ] runwar.server: Direct Buffers: true
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: *** starting 'stop' listener thread - Host: 0.0.0.0 - Socket: 42777
[INFO ] runwar.server: ******************************************************************************
[INFO ] runwar.server: Server is up - http-port:80 https-port:443 stop-port:42777 PID:286 version 4.5.1
This is all fairly new to me so I might have done something completely wrong, I'm wondering if it's a problem with the folder nesting, although I've tried rearranging it and can't come up with a working solution.
You're using a pre-warmed image
image: ortussolutions/commandbox:lucee5
That means the server has already been started and "locked in" to all its settings, including the web root. Use the vanilla CommandBox image that has never had a server started; that way, when you warm up the image, you'll be starting it with your settings the first time.
To set a custom web root, you'll want to add this to your Dockerfile:
ENV APP_DIR=/app/public
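A minimal sketch of that Dockerfile, assuming the plain (never-warmed) CommandBox base image:
# start from the vanilla CommandBox image so the first warm-up picks up the custom web root
FROM ortussolutions/commandbox
ENV APP_DIR=/app/public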

circleci only 1 subschema matches out of 2

I'm trying to set up automated CircleCI testing for my Electron apps.
I followed the instructions from here: https://circleci.com/blog/electron-testing/
My repo: https://github.com/dhanyn10/electron-example/tree/spectron
My project folder looks like this:
electron-example
|──/.circleci
| |──config.yml
|──/bootbox
|──/project1
|──/project2
Because my project contains many applications, I need to specify which application folder I will test. Here's my CircleCI config:
version: 2.1
jobs:
  build:
    working_directory: ~/electron-example/bootbox
    docker:
      - image: circleci/node:11-browsers
    steps:
      - checkout:
          path: ~/electron-example
      - run:
          name: Update NPM
          command: "sudo npm install -g npm"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install Dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: Run tests
          command: npm run test
package.json
...
"devDependencies": {
"electron": "^11.4.3",
"electron-builder": "^22.10.4",
"mocha": "^8.3.2",
"spectron": "^13.0.0"
},
...
It returns the error below:
#!/bin/sh -eo pipefail
# ERROR IN CONFIG FILE:
# [#/jobs/build] only 1 subschema matches out of 2
# 1. [#/jobs/build/steps/0] 0 subschemas matched instead of one
# | 1. [#/jobs/build/steps/0] extraneous key [path] is not permitted
# | | Permitted keys:
# | | - persist_to_workspace
# | | - save_cache
# | | - run
# | | - checkout
# | | - attach_workspace
# | | - store_test_results
# | | - restore_cache
# | | - store_artifacts
# | | - add_ssh_keys
# | | - deploy
# | | - setup_remote_docker
# | | Passed keys:
# | | []
# | 2. [#/jobs/build/steps/0] Input not a valid enum value
# | | Steps without arguments can be called as strings
# | | enum:
# | | - checkout
# | | - setup_remote_docker
# | | - add_ssh_keys
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
Exited with code exit status 1
CircleCI received exit code 1
How to solve this error?
See:
https://circleci.com/docs/2.0/configuration-reference/#checkout
A bit late here, but checkout defaults to the working_directory:
- checkout:
    path: ~/electron-example
Should be:
- checkout
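In other words, the start of the steps list from the config above would become (a sketch of the corrected job):
steps:
  - checkout
  - run:
      name: Update NPM
      command: "sudo npm install -g npm"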
Also I got here because of the following, trying to add the browser-tools:
workflows:
  build-and-test:
    jobs:
      - build
      - pylint
      - tslint
      - karma
      - jest
      - unit-tests
      - functional-tests:
          requires:
            - build
      - deploy:
          requires:
            - build
orbs:
  browser-tools-blah
Should have been:
orbs:
  browser-tools-blah
workflows:
  build-and-test:
    jobs:
      - build
      - pylint
      - tslint
      - karma
      - jest
      - unit-tests
      - functional-tests:
          requires:
            - build
      - deploy:
          requires:
            - build

Error while using docker-compose and nodemon

I am using docker-compose and nodemon for my dev.
my fs looks like this:
├── code
│   ├── images
│   │   ├── api
│   │   ├── db
│   └── topology
│   └── docker-compose.yml
Normally when I run docker-compose up --build, files are copied from my local computer to the containers. Since I am in dev mode,
I don't want to run docker-compose up --build every time; that is why I am using a volume to share a directory between my local computer and the container.
I did some research and this is what I came up with:
API, Dockerfile:
FROM node:8
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN npm install nodemon -g --save
RUN npm install
CMD [ "nodemon", "app.js" ]
DB, Dockerfile:
FROM mongo:3.2-jessie
docker-compose.yml
version: '2'
services:
  api:
    build: ../images/api
    volumes:
      - .:/usr/src/app
    ports:
      - "7000:7000"
    links: ["db"]
  db:
    build: ../images/db
    ports:
      - "27017:27017"
The problem is that, when I run docker-compose up --build, I get this error:
---> 327987c38250
Removing intermediate container f7b46029825f
Step 7/7 : CMD nodemon app.js
---> Running in d8430d03bcd2
---> ee5de77d78eb
Removing intermediate container d8430d03bcd2
Successfully built ee5de77d78eb
Recreating topology_db_1
Recreating topology_api_1
Attaching to topology_db_1, topology_api_1
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=5b93871d0f4f
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] db version v3.2.21
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] git version: 1ab1010737145ba3761318508ff65ba74dfe8155
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1t 3 May 2016
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] modules: none
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] build environment:
db_1 | 2018-09-22T10:08:41.679+0000 I CONTROL [initandlisten] distmod: debian81
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] distarch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] target_arch: x86_64
db_1 | 2018-09-22T10:08:41.680+0000 I CONTROL [initandlisten] options: {}
db_1 | 2018-09-22T10:08:41.686+0000 I - [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2018-09-22T10:08:41.687+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),verbose=(recovery_progress),
db_1 | 2018-09-22T10:08:41.905+0000 I STORAGE [initandlisten] WiredTiger [1537610921:904991][1:0x7fdf57debcc0], txn-recover: Main recovery loop: starting at 89/4096
db_1 | 2018-09-22T10:08:41.952+0000 I STORAGE [initandlisten] WiredTiger [1537610921:952261][1:0x7fdf57debcc0], txn-recover: Recovering log 89 through 90
db_1 | 2018-09-22T10:08:41.957+0000 I STORAGE [initandlisten] WiredTiger [1537610921:957000][1:0x7fdf57debcc0], txn-recover: Recovering log 90 through 90
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
db_1 | 2018-09-22T10:08:42.148+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2018-09-22T10:08:42.148+0000 I NETWORK [initandlisten] waiting for connections on port 27017
api_1 | Usage: nodemon [nodemon options] [script.js] [args]
api_1 |
api_1 | See "nodemon --help" for more.
api_1 |
topology_api_1 exited with code 0
If I comment out:
volumes:
  - .:/usr/src/app
it builds and runs correctly.
Can someone help me find what is wrong with my approach?
Thanks
Your docker-compose.yml and the api Dockerfile are in different directories. The "volumes" instruction in the compose file overwrites what the "COPY" instruction put into the image; they have different contexts, so the bind mount points at the compose file's directory rather than at the api sources.
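Given the tree above, a sketch of a volume mount that points at the api sources instead of the compose file's own directory would be (path assumed from the layout shown):
  api:
    build: ../images/api
    volumes:
      - ../images/api:/usr/src/app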

Resources