Docker-Compose container communication for microservices via Caddy v2

I have the following architecture:
- Docker with Linux containers on Windows
- a Node.js microservice called register that listens inside the Docker network on port 3100
- a Caddy container (image from https://hub.docker.com/r/abiosoft/caddy/) for internal routing
- a Mongo container
docker-compose.yaml
version: '3'
services:
  mongodb:
    build: ./data
    container_name: "mongodb"
    hostname: mongodb
    ports:
      - "27019:27017"
    logging:
      driver: none
  router:
    build:
      context: ./
      dockerfile: ./router/Dockerfile
    volumes:
      - ./router/etc/:/etc/
      - ./router/.config/:/.config/
      - ./router/home:/home/caddy/
    ports:
      - "3000:8080"
    cap_add:
      - CAP_NET_BIND_SERVICE
  register:
    build:
      context: ./
      dockerfile: ./development.docker
      args:
        SERVICE_NAME: register
    container_name: "register"
    environment:
      FLASK_ENV: development
CaddyFile.json
"admin": {
"listen": "0.0.0.0:2019"
},
"apps": {
"http": {
"servers": {
"srv0": {
"listen": ["0.0.0.0:8080"],
"routes": [{
"handle": [{
"handler": "reverse_proxy",
"transport": {
"protocol": "http"
},
"upstreams": [
{
"dial":"register:3100"
}
]
}],
"match": [{
"path": ["/register", "/register/*"]
}],
"terminal": true
}, {
"handle": [{
"handler": "subroute",
"routes": [{
"handle": [{
"handler": "file_server",
"hide": ["/etc/caddy/Caddyfile"],
"root": "/home/caddy/web/" # index.html is shown when accessing localhost:3000
}]
}]
}],
"match": [{
"path": ["/"]
}],
"terminal": true
}]
}
}
}
}
}
Expected Behaviour
A GET request to localhost:3000 shows index.html --> works
A GET request to localhost:3000/register should return a JSON object
Actual Behaviour
I get the following error:
router_1 | 2020/01/30 12:24:20.688 ERROR http.log.error dial tcp: lookup register on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address {"request": {"method": "GET", "uri": "/register/", "proto": "HTTP/1.1", "remote_addr":
"172.18.0.1:43612", "host": "localhost:3000", "headers": {"Accept-Encoding": ["gzip, deflate, br"], "Connection": ["keep-alive"], "Content-Type": ["application/json"], "User-Agent": ["PostmanRuntime/7.22.0"], "Accept": ["*/*"], "Cache-Control": ["no-cache"], "Postman-Token": ["a7558f06-c099-4bbe-bee9-97c50a8910b8"]}}, "status": 502, "err_id": "5nm6udifm", "err_trace": "reverseproxy.(*Handler).ServeHTTP (reverseproxy.go:362)"}
I restarted docker-compose multiple times and tried changing the DNS settings; strangely, in rare cases it worked, but only once.
As far as I know, all the containers are able to ping each other in the network, so there must be some kind of connection between them. As mentioned before, I am trying to run this network with Linux containers on a Windows machine. I also tried it on multiple Linux systems and everything worked just fine.
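For reference, a couple of hedged checks to confirm whether Docker's embedded DNS is reachable from the router container (assuming the compose service names above and the usual busybox tools inside the abiosoft/caddy image):

docker-compose exec router cat /etc/resolv.conf                      # should point at 127.0.0.11, Docker's embedded DNS
docker-compose exec router ping -c 1 register                        # should resolve to the register container's IP
docker-compose exec router wget -qO- http://register:3100/register   # bypasses Caddy; route path taken from the Nest logs below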
I am not sure whether it is a DNS problem or something else.
Does anyone have any idea?
Thank you in advance.
UPDATE
Here are the logs:
mongodb | WARNING: no logs are available with the 'none' log driver
router_1 | 2020/01/30 13:42:26.969 INFO using provided configuration {"config_file": "/etc/caddy/caddyfile.json", "config_adapter": "json5"}
router_1 | 2020/01/30 13:42:26.982 INFO admin admin endpoint started {"address": "0.0.0.0:2019", "enforce_origin": false, "origins": ["0.0.0.0:2019"]}
router_1 | 2020/01/30 13:42:26.984 INFO tls cleaned up storage units
router_1 | 2020/01/30 13:42:26 [INFO][cache:0xc000324dc0] Started certificate maintenance routine
router_1 | 2020/01/30 13:42:27.099 INFO autosaved config {"file": "/.config/caddy/autosave.json"}
router_1 | 2020/01/30 13:42:27.099 INFO serving initial configuration
register |
register | > register@0.0.1 start:linux /app/services/register
register | > nodemon --watch src --ext ts --exec 'nest build && node ./dist/services/'$npm_package_name'/src/main'
register |
register | [nodemon] 2.0.2
register | [nodemon] to restart at any time, enter `rs`
register | [nodemon] watching dir(s): src/*/
register | [nodemon] watching extensions: ts
register | [nodemon] starting `nest build && node ./dist/services/register/src/main`
register | (node:30) [DEP0091] DeprecationWarning: crypto.DEFAULT_ENCODING is deprecated.
register | (node:30) [DEP0010] DeprecationWarning: crypto.createCredentials is deprecated. Use tls.createSecureContext instead.
register | (node:30) [DEP0011] DeprecationWarning: crypto.Credentials is deprecated. Use tls.SecureContext instead.
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestFactory] Starting Nest application...
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseModule dependencies initialized +50ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] ConfigHostModule dependencies initialized +2ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] ConfigModule dependencies initialized +1ms
register | (node:29) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the
new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
register | (node:29) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseCoreModule dependencies initialized +48ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] MongooseModule dependencies initialized +3ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [InstanceLoader] AppModule dependencies initialized +2ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestMicroservice] Nest microservice successfully started +12ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [RoutesResolver] AppController {/register}: +33ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [RouterExplorer] Mapped {/create, POST} route +16ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [NestApplication] Nest application successfully started +16ms
register | [Nest] 29 - 01/30/2020, 1:42:45 PM [Main] REGISTRATION LISTENING 2
router_1 | 2020/01/30 13:43:08.235 ERROR http.log.error dial tcp: lookup register on [::1]:53: dial udp [::1]:53: connect: cannot assign requested address {"request": {"method": "POST", "uri": "/register/create", "proto": "HTTP/1.1", "remote_addr": "172.20.0.1:53036", "host": "localhost:3000", "headers": {"Postman-Token": ["c3b7a4b1-d424-4fe0-8809-5d9fc38dd9dc"], "Accept-Encoding": ["gzip, deflate"], "Connection": ["keep-alive"], "Content-Type": ["application/json"], "Cache-Control": ["no-cache"], "User-Agent": ["PostmanRuntime/7.6.0"], "Accept": ["/"], "Content-Length": ["429"]}}, "status": 502, "err_id": "qx319czc0", "err_trace": "reverseproxy.(*Handler).ServeHTTP (reverseproxy.go:362)"}
Unfortunately using depends_on in docker-compose.yaml does not work.
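One thing worth ruling out, given the error above: the lookup for register is sent to [::1]:53 instead of Docker's embedded DNS (127.0.0.11), and the compose file bind-mounts ./router/etc/ over the container's entire /etc/, which also hides the /etc/resolv.conf that Docker writes at container start. A hedged sketch of a narrower mount (the caddy subdirectory name is an assumption; adjust it to wherever the JSON config actually lives):

router:
  volumes:
    - ./router/etc/caddy/:/etc/caddy/   # mount only the Caddy config, keep Docker's /etc/resolv.conf
    - ./router/.config/:/.config/
    - ./router/home:/home/caddy/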

Related

Why is hot reloading not working in my NestJS/Docker-Compose multistage project?

Hot reloading is not working. The API is not being updated after changes in the code are saved. Here is the code:
https://codesandbox.io/s/practical-snowflake-c4j6fh
When building (docker-compose up -V --build) I get the following messages in the terminal:
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # Redis version=7.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
2022-12-16 09:29:53 redis | 1:C 16 Dec 2022 12:29:53.411 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 * monotonic clock: POSIX clock_gettime
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 * Running mode=standalone, port=6379.
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 # Server initialized
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.411 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2022-12-16 09:29:53 redis | 1:M 16 Dec 2022 12:29:53.412 * Ready to accept connections
2022-12-16 09:29:53 postgres | The files belonging to this database system will be owned by user "postgres".
2022-12-16 09:29:53 postgres | This user must also own the server process.
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | The database cluster will be initialized with locale "en_US.utf8".
2022-12-16 09:29:53 postgres | The default database encoding has accordingly been set to "UTF8".
2022-12-16 09:29:53 postgres | The default text search configuration will be set to "english".
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | Data page checksums are disabled.
2022-12-16 09:29:53 postgres |
2022-12-16 09:29:53 postgres | fixing permissions on existing directory /var/lib/postgresql/data ... ok
2022-12-16 09:29:53 postgres | creating subdirectories ... ok
2022-12-16 09:29:53 postgres | selecting dynamic shared memory implementation ... posix
2022-12-16 09:29:53 postgres | selecting default max_connections ... 100
2022-12-16 09:29:53 postgres | selecting default shared_buffers ... 128MB
2022-12-16 09:29:53 postgres | selecting default time zone ... Etc/UTC
2022-12-16 09:29:53 postgres | creating configuration files ... ok
2022-12-16 09:29:53 postgres | running bootstrap script ... ok
2022-12-16 09:29:54 postgres | performing post-bootstrap initialization ... ok
2022-12-16 09:29:54 postgres | initdb: warning: enabling "trust" authentication for local connections
2022-12-16 09:29:54 postgres | You can change this by editing pg_hba.conf or using the option -A, or
2022-12-16 09:29:54 postgres | --auth-local and --auth-host, the next time you run initdb.
2022-12-16 09:29:54 postgres | syncing data to disk ... ok
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | Success. You can now start the database server using:
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | pg_ctl -D /var/lib/postgresql/data -l logfile start
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | waiting for server to start....2022-12-16 12:29:54.305 UTC [48] LOG: starting PostgreSQL 12.13 (Debian 12.13-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.311 UTC [48] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-12-16 09:29:54 store-backend-api-1 |
2022-12-16 09:29:54 store-backend-api-1 | > store-backend@0.0.1 start:dev
2022-12-16 09:29:54 store-backend-api-1 | > nest start --watch
2022-12-16 09:29:54 store-backend-api-1 |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.338 UTC [49] LOG: database system was shut down at 2022-12-16 12:29:54 UTC
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.345 UTC [48] LOG: database system is ready to accept connections
2022-12-16 09:29:54 postgres | done
2022-12-16 09:29:54 postgres | server started
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.434 UTC [48] LOG: received fast shutdown request
2022-12-16 09:29:54 postgres | waiting for server to shut down....2022-12-16 12:29:54.444 UTC [48] LOG: aborting any active transactions
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.445 UTC [48] LOG: background worker "logical replication launcher" (PID 55) exited with exit code 1
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.445 UTC [50] LOG: shutting down
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.482 UTC [48] LOG: database system is shut down
2022-12-16 09:29:54 postgres | done
2022-12-16 09:29:54 postgres | server stopped
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | PostgreSQL init process complete; ready for start up.
2022-12-16 09:29:54 postgres |
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: starting PostgreSQL 12.13 (Debian 12.13-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.552 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.563 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.592 UTC [67] LOG: database system was shut down at 2022-12-16 12:29:54 UTC
2022-12-16 09:29:54 postgres | 2022-12-16 12:29:54.600 UTC [1] LOG: database system is ready to accept connections
And then the previous messages disappear and the following ones are shown:
[12:29:55 PM] Starting compilation in watch mode...
2022-12-16 09:29:55 store-backend-api-1 |
2022-12-16 09:29:58 store-backend-api-1 | [12:29:58 PM] Found 0 errors. Watching for file changes.
2022-12-16 09:29:58 store-backend-api-1 |
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [NestFactory] Starting Nest application...
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +65ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] AppModule dependencies initialized +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmCoreModule dependencies initialized +49ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] TypeOrmModule dependencies initialized +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [InstanceLoader] UserModule dependencies initialized +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RoutesResolver] AppController {/api}: +7ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api, GET} route +4ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/test, GET} route +0ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RoutesResolver] UserController {/api/users}: +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/users, POST} route +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms
2022-12-16 09:29:59 store-backend-api-1 | [Nest] 29 - 12/16/2022, 12:29:59 PM LOG [NestApplication] Nest application successfully started +3ms
Add this to tsconfig.json.
"watchOptions": {
// Use native file system events for files and directories
"watchFile": "priorityPollingInterval",
"watchDirectory": "dynamicprioritypolling",
// Poll files for updates more frequently
// when they're updated a lot.
"fallbackPolling": "dynamicPriority",
// Don't coalesce watch notification
"synchronousWatchDirectory": true,
// Finally, two additional settings for reducing the amount of possible
// files to track work from these directories
"excludeDirectories": ["**/node_modules", "dist"]
}
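Besides the watch options, hot reload inside Docker only works if the files the watcher looks at are the bind-mounted host sources. A minimal compose sketch, assuming the WORKDIR /usr/src/app used in the answer below and a service named api (both names are assumptions; adjust to the project):

services:
  api:
    build: .
    volumes:
      - ./src:/usr/src/app/src      # bind-mount the sources so nest start --watch sees host edits
      - /usr/src/app/node_modules   # keep the image's node_modules instead of the host's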
Try this. I downloaded your code and tested it. You have a permission problem. Remove the containers and delete the Docker volumes, then run docker compose with these changes in the Dockerfile:
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:18-alpine As development
USER root
# Create app directory
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./
# Install app dependencies using the `npm ci` command instead of `npm install`
RUN npm ci
# Bundle app source
COPY . .
RUN npm run build
# Use the node user from the image (instead of the root user)
USER node
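For completeness, "remove the containers and delete the Docker volumes" from the suggestion above corresponds roughly to the following commands (flags as used in the question; -V renews anonymous volumes on the next up):

docker-compose down -v        # stop and remove the containers plus the project's volumes
docker-compose up -V --build  # rebuild the image and start with fresh anonymous volumes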
OK! This is very strange!
I apparently solved this by deleting all the files inside my project and recreating them.
Some files I copied from the tutorial repository below; other files I recreated manually. But in all of them I put back the old content, without changes.
Maybe there was some file permission issue that didn't allow hot reloading. I am not sure! But hot reload is working, for now.
https://www.tomray.dev/nestjs-docker-compose-postgres

Dockerfile Docker-Compose VueJS App using HAProxy won't run

I'm building my Vue.js app, which uses a trusted third-party API, and I'm in the middle of writing the Dockerfile and docker-compose.yml, using HAProxy to allow every HTTP method access to the API. But after running docker-compose up --build, the theApp container stops immediately, and keeps stopping even after a restart. Here are my files:
Dockerfile
FROM node:18.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "serve"]
docker-compose.yml
version: "3.7"
services:
theApp:
container_name: theApp
build:
context: .
dockerfile: Dockerfile
volumes:
- ./src:/app/src
ports:
- "9990:9990"
haproxy:
image: haproxy:2.3
expose:
- "7000"
- "8080"
ports:
- "8080:8080"
volumes:
- ./haproxy:/usr/local/etc/haproxy
restart: "always"
depends_on:
- theApp
haproxy.cfg
defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout tunnel 1h # timeout to use with WebSocket and CONNECT

# enable resolving through docker dns and avoid crashing if service is down while proxy is starting
resolvers docker_resolver
    nameserver dns 127.0.0.11:53

frontend stats
    bind *:7000
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 10s
    stats auth admin:admin

frontend project_frontend
    bind *:8080
    acl is_options method OPTIONS
    use_backend cors_backend if is_options
    default_backend project_backend

backend project_backend
    # START CORS
    http-response add-header Access-Control-Allow-Origin "*"
    http-response add-header Access-Control-Allow-Headers "*"
    http-response add-header Access-Control-Max-Age 3600
    http-response add-header Access-Control-Allow-Methods "GET, DELETE, OPTIONS, POST, PUT, PATCH"
    # END CORS
    server pbe1 theApp:8080 check inter 5s

backend cors_backend
    http-after-response set-header Access-Control-Allow-Origin "*"
    http-after-response set-header Access-Control-Allow-Headers "*"
    http-after-response set-header Access-Control-Max-Age "31536000"
    http-request return status 200
Here's the error output from the command:
[NOTICE] 150/164342 (1) : New worker #1 (8) forked
haproxy_1 | [WARNING] 150/164342 (8) : Server project_backend/pbe1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1 | [NOTICE] 150/164342 (8) : haproxy version is 2.3.20-2c8082e
haproxy_1 | [NOTICE] 150/164342 (8) : path to executable is /usr/local/sbin/haproxy
haproxy_1 | [ALERT] 150/164342 (8) : backend 'project_backend' has no server available!
trisaic |
trisaic | > trisaic@0.1.0 serve
trisaic | > vue-cli-service serve
trisaic |
trisaic | INFO Starting development server...
trisaic | ERROR Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | Error: Rule can only have one resource source (provided resource and test + include + exclude) in {
trisaic | "type": "javascript/auto",
trisaic | "include": [
trisaic | {}
trisaic | ],
trisaic | "use": []
trisaic | }
trisaic | at checkResourceSource (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:167:11)
trisaic | at Function.normalizeRule (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:198:4)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:110:20
trisaic | at Array.map (<anonymous>)
trisaic | at Function.normalizeRules (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:109:17)
trisaic | at new RuleSet (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/RuleSet.js:104:24)
trisaic | at new NormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/NormalModuleFactory.js:115:18)
trisaic | at Compiler.createNormalModuleFactory (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:636:31)
trisaic | at Compiler.newCompilationParams (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:653:30)
trisaic | at Compiler.compile (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:661:23)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:77:18
trisaic | at AsyncSeriesHook.eval [as callAsync] (eval at create (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:24:1)
trisaic | at AsyncSeriesHook.lazyCompileHook (/app/node_modules/@vue/cli-service/node_modules/tapable/lib/Hook.js:154:20)
trisaic | at Watching._go (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:41:32)
trisaic | at /app/node_modules/@vue/cli-service/node_modules/webpack/lib/Watching.js:33:9
trisaic | at Compiler.readRecords (/app/node_modules/@vue/cli-service/node_modules/webpack/lib/Compiler.js:529:11)
trisaic exited with code 1
I already tried things and googled around but got stuck. Am I missing something here?
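The HAProxy messages are a consequence of the Vue container exiting: once the webpack RuleSet error kills vue-cli-service serve, nothing is listening on theApp:8080 for the pbe1 server check, so the backend reports no server available. A couple of hedged checks (assuming curl is present in the node:18.2-based image):

docker-compose ps                                            # is theApp still running, or exited?
docker-compose logs theApp                                   # the RuleSet error above means the dev server never started
docker-compose exec theApp curl -s http://localhost:8080/    # once it starts, HAProxy should see it too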

Getting "State of the connection with the Jaeger Collector backend.." (jaeger/TRANSIENT_FAILURE) while running OpenTelemetry Collector

I am trying to build a simple application that sends traces to OpenTelemetry Collector, which exports the traces to Jaeger Backend.
But when I spin up the collector and the Jaeger backend, I get the following message:
info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "TRANSIENT_FAILURE"}
When I run the Go application, I see no traces in the Jaeger UI. Also, there are no logs from the collector in the shell.
main.go
package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func initialize() {
	traceExp, err := otlptracehttp.New(
		context.TODO(),
		otlptracehttp.WithEndpoint("0.0.0.0:55680"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		fmt.Println(err)
	}
	bsp := sdktrace.NewBatchSpanProcessor(traceExp)
	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSpanProcessor(bsp),
	)
	otel.SetTracerProvider(tracerProvider)
}

func main() {
	initialize()
	tracer := otel.Tracer("demo-client-tracer")
	ctx, span := tracer.Start(context.TODO(), "span-name")
	defer span.End()
	time.Sleep(time.Second)
	fmt.Println(ctx)
}
Following are the collector config and docker-compose file.
otel-collector-config
receivers:
  otlp:
    protocols:
      http:
processors:
  batch:
exporters:
  jaeger:
    endpoint: "http://jaeger-all-in-one:14250"
    insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
docker-compose.yaml
version: "2"
services:
# Jaeger
jaeger-all-in-one:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268"
- "14250:14250"
# Collector
otel-collector:
image: otel/opentelemetry-collector:latest
command: ["--config=/etc/otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
ports:
- "4317"
- "55680:55680"
depends_on:
- jaeger-all-in-one
Additional logs while running docker-compose up:
Starting open-telemetry-collector-2_jaeger-all-in-one_1 ... done
Starting open-telemetry-collector-2_otel-collector_1 ... done
Attaching to open-telemetry-collector-2_jaeger-all-in-one_1, open-telemetry-collector-2_otel-collector_1
jaeger-all-in-one_1 | 2021/09/02 09:26:58 maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0155272,"caller":"flags/service.go:117","msg":"Mounting metrics handler on admin server","route":"/metrics"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.015579,"caller":"flags/service.go:123","msg":"Mounting expvar handler on admin server","route":"/debug/vars"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.016236,"caller":"flags/admin.go:106","msg":"Mounting health check on admin server","route":"/"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0163133,"caller":"flags/admin.go:117","msg":"Starting admin HTTP server","http-addr":":14269"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0163486,"caller":"flags/admin.go:98","msg":"Admin server started","http.host-port":"[::]:14269","health-status":"unavailable"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.017912,"caller":"memory/factory.go:61","msg":"Memory storage initialized","configuration":{"MaxTraces":0}}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.018202,"caller":"static/strategy_store.go:138","msg":"Loading sampling strategies","filename":"/etc/jaeger/sampling_strategies.json"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0273001,"caller":"server/grpc.go:82","msg":"Starting jaeger-collector gRPC server","grpc.host-port":":14250"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0273921,"caller":"server/http.go:48","msg":"Starting jaeger-collector HTTP server","http host-port":":14268"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276191,"caller":"server/zipkin.go:49","msg":"Not listening for Zipkin HTTP traffic, port not configured"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276558,"caller":"grpc/builder.go:70","msg":"Agent requested insecure grpc connection to collector(s)"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0276873,"caller":"channelz/logging.go:50","msg":"[core]parsed scheme: \"\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277174,"caller":"channelz/logging.go:50","msg":"[core]scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277457,"caller":"channelz/logging.go:50","msg":"[core]ccResolverWrapper: sending update to cc: {[{:14250 <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277772,"caller":"channelz/logging.go:50","msg":"[core]ClientConn switching balancer to \"round_robin\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0277963,"caller":"channelz/logging.go:50","msg":"[core]Channel switches to new LB policy \"round_robin\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0278597,"caller":"grpclog/component.go:55","msg":"[balancer]base.baseBalancer: got new ClientConn state: {{[{:14250 <nil> 0 <nil>}] <nil> <nil>} <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0279217,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028044,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":14250\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0284538,"caller":"grpclog/component.go:71","msg":"[balancer]base.baseBalancer: handle SubConn state change: 0xc000688840, CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028513,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0280442,"caller":"grpc/builder.go:109","msg":"Checking connection to collector"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.028587,"caller":"grpc/builder.go:120","msg":"Agent collector connection state change","dialTarget":":14250","status":"CONNECTING"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0294988,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.029561,"caller":"grpclog/component.go:71","msg":"[balancer]base.baseBalancer: handle SubConn state change: 0xc000688840, READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0296533,"caller":"grpclog/component.go:71","msg":"[roundrobin]roundrobinPicker: newPicker called with info: {map[0xc000688840:{{:14250 <nil> 0 <nil>}}]}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0297205,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0297425,"caller":"grpc/builder.go:120","msg":"Agent collector connection state change","dialTarget":":14250","status":"READY"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0298278,"caller":"./main.go:233","msg":"Starting agent"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0298927,"caller":"querysvc/query_service.go:137","msg":"Archive storage not created","reason":"archive storage not supported"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0299237,"caller":"app/flags.go:124","msg":"Archive storage not initialized"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0300004,"caller":"app/agent.go:69","msg":"Starting jaeger-agent HTTP server","http-port":5778}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0303733,"caller":"channelz/logging.go:50","msg":"[core]parsed scheme: \"\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304158,"caller":"channelz/logging.go:50","msg":"[core]scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304341,"caller":"channelz/logging.go:50","msg":"[core]ccResolverWrapper: sending update to cc: {[{:16685 <nil> 0 <nil>}] <nil> <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304427,"caller":"channelz/logging.go:50","msg":"[core]ClientConn switching balancer to \"pick_first\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0304537,"caller":"channelz/logging.go:50","msg":"[core]Channel switches to new LB policy \"pick_first\"","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0305033,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0305545,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":16685\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"warn","ts":1630574818.0307937,"caller":"channelz/logging.go:75","msg":"[core]grpc: addrConn.createTransport failed to connect to {:16685 localhost:16685 <nil> 0 <nil>}. Err: connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\". Reconnecting...","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.030827,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0308597,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {CONNECTING <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0308924,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0309658,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {TRANSIENT_FAILURE connection error: desc = \"transport: Error while dialing dial tcp :16685: connect: connection refused\"}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0309868,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to TRANSIENT_FAILURE","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0314078,"caller":"app/static_handler.go:181","msg":"UI config path not provided, config file will not be watched"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315406,"caller":"app/server.go:197","msg":"Query server started"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315752,"caller":"healthcheck/handler.go:129","msg":"Health Check state change","status":"ready"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0315914,"caller":"app/server.go:276","msg":"Starting GRPC server","port":16685,"addr":":16685"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574818.0316222,"caller":"app/server.go:257","msg":"Starting HTTP server","port":16686,"addr":":16686"}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.031331,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0314019,"caller":"channelz/logging.go:50","msg":"[core]Subchannel picks a new address \":16685\" to connect","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0315094,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {CONNECTING <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0315537,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0323153,"caller":"channelz/logging.go:50","msg":"[core]Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0325227,"caller":"grpclog/component.go:71","msg":"[core]pickfirstBalancer: UpdateSubConnState: 0xc00061fd40, {READY <nil>}","system":"grpc","grpc_log":true}
jaeger-all-in-one_1 | {"level":"info","ts":1630574819.0325499,"caller":"channelz/logging.go:50","msg":"[core]Channel Connectivity change to READY","system":"grpc","grpc_log":true}
otel-collector_1 | 2021-09-02T09:26:59.628Z info service/collector.go:303 Starting otelcol... {"Version": "v0.33.0", "NumCPU": 8}
otel-collector_1 | 2021-09-02T09:26:59.628Z info service/collector.go:242 Loading configuration...
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/collector.go:258 Applying configuration...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:264 Exporter was built. {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:214 Pipeline was built. {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/receivers_builder.go:227 Receiver was built. {"kind": "receiver", "name": "otlp", "datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:143 Starting extensions...
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:188 Starting exporters...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:93 Exporter is starting... {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "CONNECTING"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/exporters_builder.go:98 Exporter started. {"kind": "exporter", "name": "jaeger"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:193 Starting processors...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:52 Pipeline is starting... {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/pipelines_builder.go:63 Pipeline is started. {"pipeline_name": "traces", "pipeline_datatype": "traces"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info service/service.go:198 Starting receivers...
otel-collector_1 | 2021-09-02T09:26:59.630Z info builder/receivers_builder.go:71 Receiver is starting... {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info otlpreceiver/otlp.go:93 Starting HTTP server on endpoint 0.0.0.0:4318 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.630Z info otlpreceiver/otlp.go:159 Setting up a second HTTP listener on legacy endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info otlpreceiver/otlp.go:93 Starting HTTP server on endpoint 0.0.0.0:55681 {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info builder/receivers_builder.go:76 Receiver started. {"kind": "receiver", "name": "otlp"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/collector.go:206 Setting up own telemetry...
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/telemetry.go:99 Serving Prometheus metrics {"address": ":8888", "level": 0, "service.instance.id": "0fe56a33-e40e-4251-9a82-100fa600c4a0"}
otel-collector_1 | 2021-09-02T09:26:59.631Z info service/collector.go:218 Everything is ready. Begin running and processing data.
otel-collector_1 | 2021-09-02T09:27:00.631Z info jaegerexporter/exporter.go:186 State of the connection with the Jaeger Collector backend {"kind": "exporter", "name": "jaeger", "state": "TRANSIENT_FAILURE"}
Thanks!
Updating otel-collector-config.yaml to the following endpoint should work:
receivers:
  otlp:
    protocols:
      http:
processors:
  batch:
exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250
    insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
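The client side may need the same treatment: the Go snippet in the question dials 0.0.0.0:55680, while the collector logs above show OTLP/HTTP listening on 4318 (and the legacy 55681), and docker-compose only publishes 55680. A hedged sketch of the exporter setup, assuming a "4318:4318" mapping is added under the otel-collector ports:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
)

func main() {
	// Dial the port the collector actually listens on for OTLP over HTTP.
	exp, err := otlptracehttp.New(
		context.Background(),
		otlptracehttp.WithEndpoint("localhost:4318"), // host:port only, no scheme
		otlptracehttp.WithInsecure(),                 // the collector is plain HTTP here
	)
	if err != nil {
		log.Fatal(err)
	}
	defer exp.Shutdown(context.Background())
}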

Gitlab integration of RabbitMQ as a service

I'm trying to set up GitLab CI to integrate different services, because I have a Node.js app and I would like to run integration tests against services like RabbitMQ, Cassandra, etc.
Question + Description of the problem + Possible Solution
Does someone know how to write the GitLab configuration file (.gitlab-ci.yml) to integrate RabbitMQ as a service, where I define a configuration file that creates specific virtual hosts, exchanges, queues and users?
So in a section of my .gitlab-ci.yml I defined a variable which should point to the RabbitMQ config file, as specified in the official documentation (https://www.rabbitmq.com/configure.html#config-location), but this does not work.
...
services:
  # - cassandra:3.11
  - rabbitmq:management
variables:
  RABBITMQ_CONF_FILE: rabbitmq.conf
...
The file I need to point to in my GitLab configuration: rabbitmq.conf
In this file I want to specify a file rabbitmq-definition.json containing my specific virtualhosts, exchanges, queues and users for RabbitMQ.
[
  {rabbit, [
    {loopback_users, []},
    {vm_memory_high_watermark, 0.7},
    {vm_memory_high_watermark_paging_ratio, 0.8},
    {log_levels, [{channel, warning}, {connection, warning}, {federation, warning}, {mirroring, info}]},
    {heartbeat, 10}
  ]},
  {rabbitmq_management, [
    {load_definitions, "./rabbitmq-definition.json"}
  ]}
].
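Note that the snippet above is in the classic Erlang-term format, which RabbitMQ looks for under the name rabbitmq.config (or advanced.config), not the new-style ini-like rabbitmq.conf. If the image ends up reading a new-style file instead, a rough equivalent of the definitions import would be a single line like the following (the path inside the container is an assumption):

management.load_definitions = /etc/rabbitmq/rabbitmq-definition.json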
The file containing my RabbitMQ definitions: rabbitmq-definition.json
{
  "rabbit_version": "3.8.9",
  "rabbitmq_version": "3.8.9",
  "product_name": "RabbitMQ",
  "product_version": "3.8.9",
  "users": [
    {
      "name": "guest",
      "password_hash": "9OhzGMQqiSCStw2uosywVW2mm95V/I6zLoeOIuVZZm8yFqAV",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    },
    {
      "name": "test",
      "password_hash": "4LWHqT8/KZN8EHa1utXAknONOCjRTZKNoUGdcP3PfG0ljM7L",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "management"
    }
  ],
  "vhosts": [
    {
      "name": "my_virtualhost"
    },
    {
      "name": "/"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "guest",
      "vhost": "my_virtualhost",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    },
    {
      "user": "test",
      "vhost": "my_virtualhost",
      "configure": "^(my).*",
      "write": "^(my).*",
      "read": "^(my).*"
    }
  ],
  "topic_permissions": [],
  "parameters": [],
  "policies": [],
  "queues": [
    {
      "name": "my_queue",
      "vhost": "my_virtualhost",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "my_exchange",
      "vhost": "my_virtualhost",
      "type": "topic",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    },
    {
      "name": "my_exchange",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "my_exchange",
      "vhost": "my_virtualhost",
      "destination": "my_queue",
      "destination_type": "queue",
      "routing_key": "test.test.*.1",
      "arguments": {}
    }
  ]
}
Existing Setup
Existing file .gitlab-ci.yml:
#image: node:latest
image: node:12

cache:
  paths:
    - node_modules/

stages:
  - install
  - test
  - build
  - deploy
  - security
  - leanix

variables:
  NODE_ENV: "CI"
  ENO_ENV: "CI"
  LOG_FOLDER: "."
  LOG_FILE: "queries.log"

.caching:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull

before_script:
  - npm ci --cache .npm --prefer-offline --no-audit

#install_dependencies:
#  stage: install
#  script:
#    - npm install --no-audit
#  only:
#    changes:
#      - package-lock.json

# test:quality:
#   stage: test
#   allow_failure: true
#   script:
#     - npx eslint --format table .

# test:unit:
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml

# test_node14:unit:
#   image: node:14
#   stage: test
#   script:
#     - npm run test
#   coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
#   artifacts:
#     reports:
#       junit: test-results.xml

test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_CONF_FILE: rabbitmq.conf
    # RABBITMQ_DEFAULT_USER: guest
    # RABBITMQ_DEFAULT_PASS: guest
    # RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    # AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml

dependency_scan:
  stage: security
  allow_failure: false
  script:
    - npm audit --audit-level=moderate

include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml

secret_detection:
  stage: security
  before_script: []

secret_detection_default_branch:
  stage: security
  before_script: []

nodejs-scan-sast:
  stage: security
  before_script: []

eslint-sast:
  stage: security
  before_script: []

leanix_sync:
  stage: leanix
  variables:
    ENV: "development"
  rules:
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        ENV: "development"
    - if: '$CI_COMMIT_BRANCH == "test"'
      variables:
        ENV: "uat"
    - if: '$CI_COMMIT_BRANCH == "master"'
      variables:
        ENV: "production"
  before_script:
    - apt update && apt -y install jq
  script:
    - VERSION=$(cat package.json | jq -r .version)
    - npm run dependencies_check
    - echo "Update LeanIx Factsheet "
    ...
  allow_failure: true
This is my .env_CI file :
CASSANDRA_CONTACTPOINTS = localhost
CASSANDRA_KEYSPACE = pfm
CASSANDRA_USER = "cassandra"
CASSANDRA_PASS = "cassandra"
RABBITMQ_HOSTS=rabbitmq
RABBITMQ_PORT=5672
RABBITMQ_VHOST=my_virtualhost
RABBITMQ_USER=guest
RABBITMQ_PASS=guest
RABBITMQ_PROTOCOL=amqp
PORT = 8091
Logs of a run after a commit on the node-api project :
Running with gitlab-runner 13.12.0 (7a6612da)
on Enocloud-Gitlab-Runner PstDVLop
Preparing the "docker" executor
00:37
Using Docker executor with image node:12 ...
Starting service rabbitmq:management ...
Pulling docker image rabbitmq:management ...
Using docker image sha256:737d67e8db8412d535086a8e0b56e6cf2a6097906e2933803c5447c7ff12f265 for rabbitmq:management with digest rabbitmq@sha256:b29faeb169f3488b3ccfee7ac889c1c804c7102be83cb439e24bddabe5e6bdfb ...
Waiting for services to be up and running...
*** WARNING: Service runner-pstdvlop-project-372-concurrent-0-b78aed36fb13c180-rabbitmq-0 probably didn't start properly.
Health check error:
Service container logs:
2021-08-05T15:39:02.476374200Z 2021-08-05 15:39:02.456089+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:02.476612801Z 2021-08-05 15:39:02.475702+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
...
2021-08-05T15:39:03.024092380Z 2021-08-05 15:39:03.023476+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-08-05T15:39:03.024287781Z 2021-08-05 15:39:03.023757+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-08-05T15:39:03.045901591Z 2021-08-05 15:39:03.045602+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-08-05T15:39:03.391624143Z 2021-08-05 15:39:03.391057+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-08-05T15:39:03.391785874Z 2021-08-05 15:39:03.391207+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/quorum/rabbit@635519274c80
2021-08-05T15:39:03.510825736Z 2021-08-05 15:39:03.510441+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-08-05T15:39:03.536493082Z 2021-08-05 15:39:03.536098+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
2021-08-05T15:39:03.547541524Z 2021-08-05 15:39:03.546999+00:00 [info] <0.222.0> ra: starting system coordination
2021-08-05T15:39:03.547876996Z 2021-08-05 15:39:03.547058+00:00 [info] <0.222.0> starting Ra system: coordination in directory: /var/lib/rabbitmq/mnesia/rabbit@635519274c80/coordination/rabbit@635519274c80
2021-08-05T15:39:03.551508520Z 2021-08-05 15:39:03.551130+00:00 [info] <0.272.0> ra: meta data store initialised for system coordination. 0 record(s) recovered
2021-08-05T15:39:03.552002433Z 2021-08-05 15:39:03.551447+00:00 [noti] <0.277.0> WAL: ra_coordination_log_wal init, open tbls: ra_coordination_log_open_mem_tables, closed tbls: ra_coordination_log_closed_mem_tables
2021-08-05T15:39:03.557022096Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0>
2021-08-05T15:39:03.557045886Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Starting RabbitMQ 3.9.1 on Erlang 24.0.5 [jit]
2021-08-05T15:39:03.557050686Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.557069166Z 2021-08-05 15:39:03.556629+00:00 [info] <0.222.0> Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558119613Z
2021-08-05T15:39:03.558134063Z ## ## RabbitMQ 3.9.1
2021-08-05T15:39:03.558139043Z ## ##
2021-08-05T15:39:03.558142303Z ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
2021-08-05T15:39:03.558145473Z ###### ##
2021-08-05T15:39:03.558201373Z ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
2021-08-05T15:39:03.558206473Z
2021-08-05T15:39:03.558210714Z Erlang: 24.0.5 [jit]
2021-08-05T15:39:03.558215324Z TLS Library: OpenSSL - OpenSSL 1.1.1k 25 Mar 2021
2021-08-05T15:39:03.558219824Z
2021-08-05T15:39:03.558223984Z Doc guides: https://rabbitmq.com/documentation.html
2021-08-05T15:39:03.558227934Z Support: https://rabbitmq.com/contact.html
2021-08-05T15:39:03.558232464Z Tutorials: https://rabbitmq.com/getstarted.html
2021-08-05T15:39:03.558236944Z Monitoring: https://rabbitmq.com/monitoring.html
2021-08-05T15:39:03.558241154Z
2021-08-05T15:39:03.558244394Z Logs: /var/log/rabbitmq/rabbit#635519274c80_upgrade.log
2021-08-05T15:39:03.558247324Z <stdout>
2021-08-05T15:39:03.558250464Z
2021-08-05T15:39:03.558253304Z Config file(s): /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.558256274Z
2021-08-05T15:39:03.558984369Z Starting broker...2021-08-05 15:39:03.558703+00:00 [info] <0.222.0>
2021-08-05T15:39:03.558996969Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> node : rabbit@635519274c80
2021-08-05T15:39:03.559000489Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> home dir : /var/lib/rabbitmq
2021-08-05T15:39:03.559003679Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> config file(s) : /etc/rabbitmq/conf.d/10-default-guest-user.conf
2021-08-05T15:39:03.559006959Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> cookie hash : 1iZSjTlqOt/PC9WvpuHVSg==
2021-08-05T15:39:03.559010669Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> log(s) : /var/log/rabbitmq/rabbit@635519274c80_upgrade.log
2021-08-05T15:39:03.559014249Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> : <stdout>
2021-08-05T15:39:03.559017899Z 2021-08-05 15:39:03.558703+00:00 [info] <0.222.0> database dir : /var/lib/rabbitmq/mnesia/rabbit@635519274c80
2021-08-05T15:39:03.893651319Z 2021-08-05 15:39:03.892900+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-08-05T15:39:09.081076751Z 2021-08-05 15:39:09.080611+00:00 [info] <0.659.0> * rabbitmq_management_agent
----
Pulling docker image node:12 ...
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256:... ...
Preparing environment 00:01
Running on runner-pstdvlop-project-372-concurrent-0 via gitlab-runner01...
Getting source from Git repository 00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/node-api/.git/
Checking out 4ce1ae1a as PM-1814...
Removing .npm/
Removing node_modules/
Skipping Git submodules setup
Restoring cache 00:03
Checking cache for default...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
WARNING: node_modules/.bin/depcheck: chmod node_modules/.bin/depcheck: no such file or directory (suppressing repeats)
Successfully extracted cache
Executing "step_script" stage of the job script 00:20
Using docker image sha256:7e90b11a78a2c66f8824cb7a125dc0e9340d6e17d66bd8f6ba9dd2717af56f6b for node:12 with digest node@sha256: ...
$ npm ci --cache .npm --prefer-offline --no-audit
npm WARN prepare removing existing node_modules/ before installation
> node-cron@2.0.3 postinstall /builds/node-api/node_modules/node-cron
> opencollective-postinstall
> core-js@2.6.12 postinstall /builds/node-api/node_modules/core-js
> node -e "try{require('./postinstall')}catch(e){}"
added 642 packages in 10.824s
$ npm run test_integration
> pfm-liveprice-api@0.1.3 test_integration /builds/node-api
> npx nyc mocha test/integration --exit --timeout 10000 --reporter mocha-junit-reporter
RABBITMQ_PROTOCOL : amqp RABBITMQ_USER : guest RABBITMQ_PASS : guest
config.js parseInt(RABBITMQ_PORT) : NaN
simple message
[x] Sent 'Hello World!'
this queue [object Object] exists
----------------------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------------|---------|----------|---------|---------|-------------------
All files | 5.49 | 13.71 | 4.11 | 5.33 |
pfm-liveprice-api | 21.3 | 33.8 | 21.43 | 21 |
app.js | 0 | 0 | 0 | 0 | 1-146
config.js | 76.67 | 55.81 | 100 | 77.78 | 19-20,48,55,67-69
pfm-liveprice-api/routes | 0 | 0 | 0 | 0 |
index.js | 0 | 100 | 0 | 0 | 1-19
info.js | 0 | 100 | 0 | 0 | 1-15
liveprice.js | 0 | 0 | 0 | 0 | 1-162
status.js | 0 | 100 | 0 | 0 | 1-14
pfm-liveprice-api/services | 0 | 0 | 0 | 0 |
rabbitmq.js | 0 | 0 | 0 | 0 | 1-110
pfm-liveprice-api/utils | 0 | 0 | 0 | 0 |
buildBinding.js | 0 | 0 | 0 | 0 | 1-35
buildProducts.js | 0 | 0 | 0 | 0 | 1-70
store.js | 0 | 0 | 0 | 0 | 1-291
----------------------------|---------|----------|---------|---------|-------------------
=============================== Coverage summary ===============================
Statements : 5.49% ( 23/419 )
Branches : 13.71% ( 24/175 )
Functions : 4.11% ( 3/73 )
Lines : 5.33% ( 21/394 )
================================================================================
Saving cache for successful job
00:05
Creating cache default...
node_modules/: found 13259 matching files and directories
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Uploading artifacts for successful job
00:01
Uploading artifacts...
WARNING: test-results.xml: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
Job succeeded
Tried and does not work
Using environment variables to define the RabbitMQ defaults is deprecated, and a config file is required instead.
If I try to use the following variables in my .gitlab-ci.yml:
...
test:integration:
  stage: test
  script:
    - npm run test_integration
  services:
    # - cassandra:3.11
    - rabbitmq:management
  variables:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
    RABBITMQ_DEFAULT_VHOST: 'my_virtualhost'
    AMQP_URL: 'amqp://guest:guest@rabbitmq:5672'
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: test-results.xml
...
I get the following output:
...
Starting service rabbitmq:latest ...
Pulling docker image rabbitmq:latest ...
Using docker image sha256:1c609d1740383796a30facdb06e52905e969f599927c1a537c10e4bcc6990193 for rabbitmq:latest with digest rabbitmq@sha256:d5056e576d8767c0faffcb17b5604a4351edacb8f83045e084882cabd384d216 ...
Waiting for services to be up and running...
*** WARNING: Service runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 probably didn't start properly.
Health check error:
start service container: Error response from daemon: Cannot link to a non running container: /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0 AS /runner-tpg-ivpc-project-372-concurrent-0-e6aa2c66d0096694-rabbitmq-0-wait-for-service/service (docker.go:1156:0s)
Service container logs:
2021-08-05T13:14:33.024761664Z error: RABBITMQ_DEFAULT_PASS is set but deprecated
2021-08-05T13:14:33.024797191Z error: RABBITMQ_DEFAULT_USER is set but deprecated
2021-08-05T13:14:33.024802924Z error: deprecated environment variables detected
2021-08-05T13:14:33.024806771Z
2021-08-05T13:14:33.024810742Z Please use a configuration file instead; visit https://www.rabbitmq.com/configure.html to learn more
2021-08-05T13:14:33.024844321Z
...
because the official Docker documentation (https://hub.docker.com/_/rabbitmq) states:
WARNING: As of RabbitMQ 3.9, all of the docker-specific variables listed below are deprecated and no longer used. Please use a configuration file instead; visit rabbitmq.com/configure to learn more about the configuration file. For a starting point, the 3.8 images will print out the config file it generated from supplied environment variables.
# Unavailable in 3.9 and up
RABBITMQ_DEFAULT_PASS
RABBITMQ_DEFAULT_PASS_FILE
RABBITMQ_DEFAULT_USER
RABBITMQ_DEFAULT_USER_FILE
RABBITMQ_DEFAULT_VHOST
RABBITMQ_ERLANG_COOKIE
...
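One workaround that avoids configuring the service image at all: since rabbitmq:management exposes the HTTP API on port 15672 under the service alias rabbitmq (and the image's 10-default-guest-user.conf, visible in the logs above, allows the guest user to connect remotely), the vhost and definitions can be created from the job itself once the service is up. A hedged sketch, assuming the definitions file sits in the repository root and default guest credentials:

test:integration:
  before_script:
    - for i in $(seq 1 30); do curl -fsu guest:guest http://rabbitmq:15672/api/overview && break; sleep 2; done   # wait for the management API
    - curl -u guest:guest -X PUT http://rabbitmq:15672/api/vhosts/my_virtualhost
    - curl -u guest:guest -H "content-type:application/json" -X POST --data-binary @rabbitmq-definition.json http://rabbitmq:15672/api/definitions
    - npm ci --cache .npm --prefer-offline --no-audit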

unable to connect bee-queue to docker container

For some reason I seem to be having difficulty pointing bee-queue's Arena at my other Docker container for Redis. Just wondering if anyone has any experience with this.
This is the pattern of the config I'm using:
{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "redis": "redis://redis:3011"
    },
    ...
I have also tried
{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "redis": {
        "url": "redis://redis:3011"
      }
    },

{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "url": "redis://redis:3011"
    },
Here is my docker-compose file for Redis and Arena:
version: "2.1"
networks:
ledger:
driver: bridge
services:
redis:
container_name: ledger-redis
image: redis:latest
ports:
- "3011:6379"
networks:
- ledger
arena:
container_name: worker-arena
image: mixmaxhq/arena:latest
networks:
- ledger
ports:
- "4567:4567"
depends_on:
- redis
- eyeshade-worker
volumes:
- ./queue/index.json:/opt/arena/src/server/config/index.json
But I continuously get this error saying that it is trying to connect to the default 127.0.0.1:6379:
worker-arena | (node:42) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | (node:42) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
worker-arena | (node:42) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | events.js:182
worker-arena | throw er; // Unhandled 'error' event
worker-arena | ^
worker-arena |
worker-arena | Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
worker-arena | at Object.exports._errnoException (util.js:1016:11)
worker-arena | at exports._exceptionWithHostPort (util.js:1039:20)
worker-arena | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1138:14)
worker-arena | [nodemon] app crashed - waiting for file changes before starting...
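Inside the ledger network the redis service listens on its container port 6379; the 3011 in "3011:6379" is only the host-side mapping, so redis://redis:3011 points at a port nothing listens on from Arena's point of view. A hedged version of the queue entry (the exact shape of the redis key depends on the Arena version in use):

{
  "queues": [
    {
      "hostId": "eyeshade-workers",
      "type": "bee",
      "name": "settlement-report",
      "redis": "redis://redis:6379"
    }
  ]
}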
