I am trying to run the Activiti Cloud example in Docker.
I am using this official tutorial:
https://activiti.gitbook.io/activiti-7-developers-guide/getting-started/getting-started-activiti-cloud/getting-started-docker-compose
All the prior steps succeeded until I reached:
"To start work, execute getKeycloakToken hruser in Postman Keycloak collection. Then run startProcess in rb-my-app Postman collection."
I was able to execute getKeycloakToken hruser in the Postman Keycloak collection.
Running startProcess in the rb-my-app Postman collection fails.
I get a 404 when I execute getModels in the Postman modeling collection.
I get a 500 when I execute startProcess in the Postman rb collection.
The main error message in the logs:
example-runtime-bundle | 2022-03-08 19:08:30.970 WARN [rb,,] 7 --- [nio-8080-exec-1] o.keycloak.adapters.KeycloakDeployment : Failed to load URLs from http://127.0.0.1.nip.io/auth/realms/activiti/.well-known/openid-configuration
example-runtime-bundle |
example-runtime-bundle | java.net.ConnectException: Connection refused (Connection refused)
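A quick way to check whether the runtime bundle container can actually reach that URL from the inside (a sketch; whether wget is available inside the image is an assumption):

# from the host: does nginx answer on the well-known endpoint?
curl -v http://127.0.0.1.nip.io/auth/realms/activiti/.well-known/openid-configuration

# from inside the runtime bundle container: the same request
docker exec example-runtime-bundle \
  wget -qO- http://127.0.0.1.nip.io/auth/realms/activiti/.well-known/openid-configuration

Note that 127.0.0.1.nip.io resolves to 127.0.0.1, so inside the container this request goes to the container's own loopback interface rather than to nginx on the host, which would explain the Connection refused.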
The related configurations:
in docker-compose.yml:
keycloak:
  container_name: keycloak
  image: activiti/activiti-keycloak
  volumes:
    - ./activiti-realm.json:/opt/jboss/keycloak/activiti-realm.json
  restart: unless-stopped
  depends_on:
    - nginx

example-runtime-bundle:
  container_name: example-runtime-bundle
  image: activiti/example-runtime-bundle:${VERSION}
  environment:
    # JAVA_OPTS: "-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=8000,suspend=n -noverify"
    SPRING_JMX_ENABLED: "false"
    ACT_KEYCLOAK_URL: "http://${DOCKER_IP}/auth"
    SPRING_RABBITMQ_HOST: "rabbitmq"
    SERVER_SERVLET_CONTEXT_PATH: /rb
    SPRING_DATASOURCE_URL: jdbc:postgresql://activiti-postgres:5432/activitidb
    SPRING_DATASOURCE_USERNAME: activiti
    SPRING_DATASOURCE_PASSWORD: mypassword
    SPRING_JPA_DATABASE_PLATFORM: org.hibernate.dialect.PostgreSQLDialect
    SPRING_JPA_GENERATE_DDL: "true"
    SPRING_JPA_HIBERNATE_DDL_AUTO: update
    # ACTIVITI_SECURITY_POLICIES_0_NAME: "HR Group restricted to SimpleProcess and ConnectorProcess"
    # ACTIVITI_SECURITY_POLICIES_0_GROUPS: "hr"
    # ACTIVITI_SECURITY_POLICIES_0_ACCESS: "WRITE"
    # ACTIVITI_SECURITY_POLICIES_0_SERVICENAME: "rb-my-app"
    # ACTIVITI_SECURITY_POLICIES_0_KEYS: "SimpleProcess,ConnectorProcess,fixSystemFailure,twoTaskProcess"
    # ACTIVITI_SECURITY_POLICIES_1_NAME: "testgroup not restricted at all"
    # ACTIVITI_SECURITY_POLICIES_1_GROUPS: "testgroup"
    # ACTIVITI_SECURITY_POLICIES_1_ACCESS: "WRITE"
    # ACTIVITI_SECURITY_POLICIES_1_SERVICENAME: "rb-my-app"
    # ACTIVITI_SECURITY_POLICIES_1_KEYS: "*"
  restart: unless-stopped
  depends_on:
    - nginx
    - keycloak
    - rabbitmq
    - activiti-postgres
in .env:
DOCKER_IP=127.0.0.1.nip.io
VERSION=7.1.0-M13
KEYCLOAK_REALM=activiti
KEYCLOAK_RESOURCE=activiti
The full log output is here:
https://gist.github.com/chang4tech/affd504809249733ee1f553da1d03763
What am I supposed to do to debug/detect the problem and eliminate the errors?
Thanks.
Related
I am working on a web scraper using the Serverless Framework that I want users to be able to run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, and then I run serverless offline in another terminal, which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works with no issues: after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now trying to also run Serverless in its own Docker container. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order, to ensure that my queue service is up before this runs. This has led me to the following docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script that the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is wrong: it should connect to sqs:9324 or to the IP address of that container. 0.0.0.0 means 'any IP address' and is normally used to bind a port, not to connect to one. Check your serverless configuration.
Also, you can easily check whether you are hitting a race condition or not. To do that, simply start your services one by one in separate terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If everything works when the services are started one by one, then it is likely a race condition. In that case you can add the restart: on-failure property to a service, so that if a container exits with a code other than 0, Docker restarts it.
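A minimal sketch of what that could look like, assuming the serverless service is the one that needs to be retried (the rest of the compose file stays unchanged):

  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    restart: on-failure   # Docker restarts the container if it exits with a non-zero code
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]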
It turns out my issue was actually in my serverless.yml configuration, which had the following custom section:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
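For completeness, a sketch of the corrected custom section (only the endpoint changes; everything else stays as above):

custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324   # the compose service name, resolvable on the compose network
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false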
I created a Ghost instance on my VPS with the official docker-compose file of Ghost CMS,
and I modified it to use a Mailgun SMTP account as follows:
version: '3.1'
services:
  mariadb:
    image: 'docker.io/bitnami/mariadb:10.3-debian-10'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_USER=bn_ghost
      - MARIADB_DATABASE=bitnami_ghost
    volumes:
      - 'mariadb_data:/bitnami'
  ghost:
    image: 'ghost:3-alpine'
    environment:
      MARIADB_HOST: mariadb
      MARIADB_PORT_NUMBER: 3306
      GHOST_DATABASE_USER: bn_ghost
      GHOST_DATABASE_NAME: bitnami_ghost
      GHOST_HOST: localhost
      mail__transport: SMTP
      mail__options__service: Mailgun
      mail__auth__user: ${MY_MAIL_USER}
      mail__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}
    ports:
      - '80:2368'
    volumes:
      - 'ghost_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  ghost_data:
    driver: local
But when I try to invite authors to the site,
it gives me the following error:
Failed to send 1 invitation: dulara@thinksmart.lk. Please check your email configuration, see https://ghost.org/docs/concepts/config/#mail for instructions
I am certain that my SMTP credentials are correct.
I logged in to the Ghost container's bash shell and checked its config files there.
Its mail section is empty.
I still can't find my mistake. I am not sure about the variable names, but I took them from the official documentation.
My example:
url=https://www.exemple.com/
# admin__url=XXX // Remove it (for me, the redirect failed when this was set)
database__client=mysql
database__connection__host=...
database__connection__port=3306
database__connection__database=ghost
database__connection__user=ghost
database__connection__password=XXX
privacy__useRpcPing=false
mail__transport=SMTP
mail__options__host=smtp.exemple.com
mail__options__port=587
# mail__options__service=Exemple // Remove it
mail__options__auth__user=sys@exemple.com
mail__options__auth__pass=XXX
# mail__options__secureConnection=true // Remove it
mail__from=Exemple Corp. <sys@exemple.com>
In your case, change:
mail__auth__user => mail__options__auth__user
mail__auth__pass => mail__options__auth__pass
And delete: mail__options__service
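Applied to the docker-compose.yml from the question, the ghost service's mail settings would then look roughly like this (a sketch; smtp.example.com and port 587 are placeholders following the pattern above, not values from the question):

  ghost:
    image: 'ghost:3-alpine'
    environment:
      # ... database and GHOST_* settings unchanged ...
      mail__transport: SMTP
      mail__options__host: smtp.example.com   # placeholder: your provider's SMTP host
      mail__options__port: 587
      mail__options__auth__user: ${MY_MAIL_USER}
      mail__options__auth__pass: ${MY_MAIL_PASS}
      mail__from: ${MY_FROM_ADDRESS}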
The ShinyProxy page is displayed and, after authentication, I can see the nav bar with the 2 links to the 2 applications. Then, when I click on one of them, I get an error 500 / "Failed to start container".
In the stack trace, I can see:
Caused by: java.io.IOException: Permission denied
Here is my configuration
application.yml:
proxy:
  title: Open Analytics Shiny Proxy
  # landing-page: /
  port: 8080
  authentication: simple
  admin-groups: scientists
  # Example: 'simple' authentication configuration
  users:
    - name: jack
      password: password
      groups: scientists
    - name: jeff
      password: password
      groups: mathematicians
  # Example: 'ldap' authentication configuration
  # Docker configuration
  #docker:
  #  cert-path: /home/none
  #  url: http://localhost:2375
  #  port-range-start: 20000
  specs:
    - id: 01_hello
      display-name: Hello Application
      description: Application which demonstrates the basics of a Shiny app
      container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: [scientists, mathematicians]
    - id: 06_tabsets
      container-cmd: ["R", "-e", "shinyproxy::run_06_tabsets()"]
      container-image: openanalytics/shinyproxy-demo
      access-groups: scientists
logging:
  file:
    shinyproxy.log
shinyproxy-docker-compose.yml:
version: '2.4'
services:
  shinyproxy:
    container_name: shinyproxy
    image: openanalytics/shinyproxy:2.3.1
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./application.yml:/opt/shinyproxy/application.yml
    privileged: true
    ports:
      - 35624:8080
I have the same problem; the workaround is:
sudo chown $USER:docker /run/docker.sock
However, I do not understand why this is needed, because /run/docker.sock was already root:docker.
This is under WSL2.
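A quick way to see whether the socket permissions are the problem (a sketch; the group name and membership will differ per host):

# Who owns the socket and which group can write to it?
ls -l /var/run/docker.sock

# Is the current user a member of that group?
id

# Can the shinyproxy container itself see the socket it was given?
docker exec shinyproxy ls -l /var/run/docker.sock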
I'm using a K3s cluster in a docker(-compose) container in my CI/CD pipeline to test my application code. However, I have a problem with the cluster's certificate. I need to communicate with the cluster using its external address. My docker-compose file looks as follows:
version: '3'
services:
  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent
    environment:
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
      - K3S_KUBECONFIG_MODE=666
    volumes:
      - k3s-server:/var/lib/rancher/k3s
      # get the kubeconfig file
      - .:/output
    ports:
      # - 6443:6443
      - 6080:6080
      - 192.168.2.110:6443:6443
  node:
    image: rancher/k3s:v0.8.1
    tmpfs:
      - /run
      - /var/run
    privileged: true
    environment:
      - K3S_URL=https://server:6443
      - K3S_CLUSTER_SECRET=somethingtotallyrandom
    ports:
      - 31000-32000:31000-32000
volumes:
  k3s-server: {}
Accessing the cluster from Python gives me:
MaxRetryError: HTTPSConnectionPool(host='192.168.2.110', port=6443): Max retries exceeded with url: /apis/batch/v1/namespaces/mlflow/jobs?pretty=True (Caused by SSLError(SSLCertVerificationError("hostname '192.168.2.110' doesn't match either of 'localhost', '172.19.0.2', '10.43.0.1', '172.23.0.2', '172.18.0.2', '172.23.0.3', '127.0.0.1', '0.0.0.0', '172.18.0.3', '172.20.0.2'")))
Here are my two (three) questions:
How can I add additional IP addresses to the certificate generation? I was hoping the --bind-address option in the server command would trigger that.
How can I fall back to http? Providing --http-listen-port didn't achieve the expected result.
Any other suggestions for how I can enable communication with the cluster?
Changing the Python code is not really an option, as I would like to keep the code unaltered for testing. (Falling back to http works via the kubeconfig.)
The solution is to use the tls-san parameter:
server --disable-agent --tls-san 192.168.2.110
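In the docker-compose.yml from the question, that corresponds to changing only the command line of the server service (a sketch; everything else stays as is):

  server:
    image: rancher/k3s:v0.8.1
    command: server --disable-agent --tls-san 192.168.2.110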
I am trying to capture syslog messages sent over the network using rsyslog, and then have rsyslog capture, transform and send these messages to elasticsearch.
I found a nice article on the configuration on https://www.reddit.com/r/devops/comments/9g1nts/rsyslog_elasticsearch_logging/
The problem is that rsyslog keeps reporting an error at startup saying that it cannot connect to Elasticsearch on the same machine on port 9200. The error I get is:
Failed to connect to localhost port 9200: Connection refused
2020-03-20T12:57:51.610444+00:00 53fd9e2560d9 rsyslogd: [origin software="rsyslogd" swVersion="8.36.0" x-pid="1" x-info="http://www.rsyslog.com"] start
rsyslogd: omelasticsearch: we are suspending ourselfs due to server failure 7: Failed to connect to localhost port 9200: Connection refused [v8.36.0 try http://www.rsyslog.com/e/2007 ]
Can anyone help with this?
Everything is running in docker on a single machine. I use below docker compose file to start the stack.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
    networks:
      - logging-network
  kibana:
    image: docker.elastic.co/kibana/kibana:7.6.1
    depends_on:
      - logstash
    ports:
      - 5601:5601
    networks:
      - logging-network
  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    environment:
      - TZ=UTC
      - xpack.security.enabled=false
    ports:
      - 514:514/tcp
      - 514:514/udp
    volumes:
      - ./rsyslog.conf:/etc/rsyslog.conf:ro
      - rsyslog-work:/work
      - rsyslog-logs:/logs
volumes:
  rsyslog-work:
  rsyslog-logs:
networks:
  logging-network:
    driver: bridge
rsyslog.conf file below:
global(processInternalMessages="on")
#module(load="imtcp" StreamDriver.AuthMode="anon" StreamDriver.Mode="1")
module(load="impstats") # config.enabled=`echo $ENABLE_STATISTICS`)
module(load="imrelp")
module(load="imptcp")
module(load="imudp" TimeRequery="500")
module(load="omstdout")
module(load="omelasticsearch")
module(load="mmjsonparse")
module(load="mmutf8fix")
input(type="imptcp" port="514")
input(type="imudp" port="514")
input(type="imrelp" port="1601")
# includes done explicitely
include(file="/etc/rsyslog.conf.d/log_to_logsene.conf" config.enabled=`echo $ENABLE_LOGSENE`)
include(file="/etc/rsyslog.conf.d/log_to_files.conf" config.enabled=`echo $ENABLE_LOGFILES`)
#try to parse a structured log
action(type="mmjsonparse")
# this is for index names to be like: rsyslog-YYYY.MM.DD
template(name="rsyslog-index" type="string" string="rsyslog-%$YEAR%.%$MONTH%.%$DAY%")
# this is for formatting our syslog in JSON with #timestamp
template(name="json-syslog" type="list") {
  constant(value="{")
  constant(value="\"#timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"") property(name="hostname")
  constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
  constant(value="\",\"program\":\"") property(name="programname")
  constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
  constant(value="\",") property(name="$!all-json" position.from="2")
  # closing brace is in all-json
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch" template="json-syslog" searchIndex="rsyslog-index" dynSearchIndex="on")
#################### default ruleset begins ####################
# we emit our own messages to docker console:
syslog.* :omstdout:
include(file="/config/droprules.conf" mode="optional") # this permits the user to easily drop unwanted messages
action(name="main_utf8fix" type="mmutf8fix" replacementChar="?")
include(text=`echo $CNF_CALL_LOG_TO_LOGFILES`)
include(text=`echo $CNF_CALL_LOG_TO_LOGSENE`)
First of all, you need to run all the containers on the same Docker network, which in this case they are not. Second, after putting them on the same network, log in to the rsyslog container and check whether port 9200 on the Elasticsearch container is reachable.
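A sketch of both steps, assuming the service names from the compose file above: attach the rsyslog service to the existing logging-network, then check from inside the container that Elasticsearch answers on port 9200 (the availability of wget in the Alpine-based image is an assumption).

  rsyslog:
    image: rsyslog/syslog_appliance_alpine:8.36.0-3.7
    networks:
      - logging-network   # same network as elasticsearch
    # ... rest of the service definition unchanged ...

docker exec -it <rsyslog container> wget -qO- http://elasticsearch:9200

Once the containers share a network, the omelasticsearch action would also need to point at that host name (for example server="elasticsearch") instead of its default localhost:9200.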