error restarting JBoss/Keycloak docker container - docker

I am setting up a Keycloak server using the JBoss Docker image (v4.0.0.Final) and I am encountering this error upon reboot:
.....
keycloak_1 | at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:370)
keycloak_1 | at java.lang.Thread.run(Thread.java:748)
keycloak_1 |
keycloak_1 | 18:51:15,509 ERROR [org.jboss.as.controller.client] (Controller Boot Thread) WFLYCTL0190: Step handler org.jboss.as.server.DeployerChainAddHandler$FinalRuntimeStepHandler#52734583 for operation add-deployer-chains at address [] failed handling operation rollback -- java.util.concurrent.RejectedExecutionException: Task org.jboss.as.controller.notification.NotificationSupports$NonBlockingNotificationSupport$1#2211e00e rejected from java.util.concurrent.ThreadPoolExecutor#41133ff1[Shutting down, pool size = 50, active threads = 43, queued tasks = 0, completed tasks = 93]
keycloak_1 | docker_keycloak_1 exited with code 1
When I run the Docker container for the first time, everything goes as planned. I can log in, create roles, etc. I launch the container using JHipster's Keycloak script:
version: '2'
services:
  keycloak:
    image: jboss/keycloak:4.0.0.Final
    command: ["-b", "0.0.0.0", "-Dkeycloak.migration.action=import", "-Dkeycloak.migration.provider=dir", "-Dkeycloak.migration.dir=/opt/jboss/keycloak/realm-config", "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING", "-Djboss.socket.binding.port-offset=1000"]
    volumes:
      - ./realm-config:/opt/jboss/keycloak/realm-config
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_DATABASE=postgres
    ports:
      - 9080:9080
      - 9443:9443
      - 10990:10990
Am I shutting it down correctly?
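To make "shutting it down" concrete: the cycle is just the standard Compose commands (a sketch of the assumed workflow, nothing custom):
docker-compose up      # first start: import succeeds, login and role creation work
docker-compose stop    # sends SIGTERM to the container, SIGKILL after the timeout
docker-compose up      # second start: fails with the error above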

Related

LocalStack stuck in "Waiting for all LocalStack services to be ready" messages

I have tried to run LocalStack as described on its GitHub page. I used the command 'pip install localstack' as well as 'docker-compose up' with the docker-compose file from the documentation:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
network_mode: bridge
ports:
- "127.0.0.1:53:53"
- "127.0.0.1:53:53/udp"
- "127.0.0.1:443:443"
- "127.0.0.1:4566:4566"
- "127.0.0.1:4571:4571"
environment:
- SERVICES=${SERVICES- }
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER="${TMPDIR:-/tmp}/localstack"
volumes:
- "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
But either way I get the same output:
localstack_main | 2021-09-21 15:32:26,633 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
localstack_main | 2021-09-21 15:32:26,645 INFO supervisord started with pid 14
localstack_main | 2021-09-21 15:32:27,650 INFO spawned: 'dashboard' with pid 20
localstack_main | 2021-09-21 15:32:27,653 INFO spawned: 'infra' with pid 21
localstack_main | 2021-09-21 15:32:27,659 INFO success: dashboard entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
localstack_main | 2021-09-21 15:32:27,660 INFO exited: dashboard (exit status 0; expected)
localstack_main | (. .venv/bin/activate; exec bin/localstack start --host)
localstack_main | 2021-09-21 15:32:28,663 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
localstack_main | LocalStack version: 0.12.1
localstack_main | Starting local dev environment. CTRL-C to quit.
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
And then nothing appears except these recurring messages.
Does anybody know how to fix this problem?
This might not be the solution for everyone, but it is worth trying to update your Docker version.
I had the same issue for a few days, and then I updated my Docker version.
I use Docker version 20.10.11 on Apple silicon and can confirm it works fine. So far, after this update, I have not encountered any new issues with LocalStack.
Also, this GitHub issue suggests deleting LocalStack's volume before each run (sketched below). It works, though it obviously can't be a long-term solution; it is a reasonable mitigation when you need one.
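A minimal sketch of that mitigation, assuming the host temp folder mapping from the compose file above:
docker-compose down
# wipe LocalStack's mounted temp/data folder before the next run
rm -rf "${TMPDIR:-/tmp}/localstack"
docker-compose up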
Update your docker-compose.yml as below and then run docker-compose up. It should work as expected.
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack}"
image: localstack/localstack
hostname: localstack
networks:
- test-net
ports:
- "4566:4566"
environment:
- SERVICES=s3,sqs,cloudformation,iam,cloudwatch
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOTE_DOCKER=false
- LAMBDA_REMOVE_CONTAINERS=true
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
test-net:
external: false
driver: bridge
name: test-net

Keycloak and Docker OutOfMemoryError - process/resource limits reached

I have rented a virtual Ubuntu server. Various applications run on it, in Docker containers and natively:
Plesk
Wordpress
Flarum
MySQL
Wiki.js (in Docker container)
Keycloak (in Docker container)
MariaDB (in Docker container)
I use Keycloak as SSO for WordPress, Wiki.js and Flarum. Now I have the problem that Keycloak simply crashes after a while and I can't restart it in Docker. I get the following error message:
keycloak_1 | 17:22:06,447 DEBUG [org.jboss.as.config] (MSC service thread 1-3) VM Arguments: -D[Standalone] -Xms512m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+UseAdaptiveSizePolicy -XX:MaxMetaspaceSize=1024m -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true-Djava.net.preferIPv4Stack=true --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=jdk.unsupported/sun.misc=ALL-UNNAMED --add-exports=jdk.unsupported/sun.reflect=ALL-UNNAMED -Dorg.jboss.boot.log.file=/opt/jboss/keycloak/standalone/log/server.log -Dlogging.configuration=file:/opt/jboss/keycloak/standalone/configuration/logging.properties
keycloak_1 | 17:22:19,493 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
keycloak_1 | ("subsystem" => "infinispan"),
keycloak_1 | ("cache-container" => "keycloak"),
keycloak_1 | ("thread-pool" => "transport")
keycloak_1 | ]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.clustering.infinispan.cache-container.keycloak" => "org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
keycloak_1 | Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
keycloak_1 | Caused by: org.infinispan.commons.CacheException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
keycloak_1 | Caused by: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached"}}
keycloak_1 | 17:22:19,505 INFO [org.jboss.as.server] (ServerService Thread Pool -- 46) WFLYSRV0010: Deployed "keycloak-server.war" (runtime-name : "keycloak-server.war")
keycloak_1 | 17:22:19,507 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
keycloak_1 | WFLYCTL0186: Services which failed to start: service org.wildfly.clustering.infinispan.cache.ejb.http-remoting-connector: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.expiration.impl.InternalExpirationManager
keycloak_1 | service org.wildfly.clustering.infinispan.cache-container.keycloak: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
keycloak_1 | WFLYCTL0448: 32 additional services are down due to their dependencies being missing or failed
keycloak_1 | 17:22:19,599 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
keycloak_1 | 17:22:19,606 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started (with errors) in 15455ms - Started 558 of 926 services (44 services failed or missing dependencies, 684 services are lazy, passive or on-demand)
keycloak_1 | 17:22:19,614 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:9990/management
keycloak_1 | 17:22:19,614 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
The critical error seems to be the following:
keycloak_1 | 17:48:15,196 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 60) MSC000001: Failed to start service org.wildfly.clustering.infinispan.cache-container.keycloak: org.jboss.msc.service.StartException in service org.wildfly.clustering.infinispan.cache-container.keycloak: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheException: java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
At first I thought that Keycloak with Docker needed more memory. Unfortunately, that change didn't bring the desired success. After some research, I read that there are sometimes problems with threads on virtual servers. Unfortunately, I don't know much about this topic. I hope someone can help me. :)
Am I right that it can be due to the thread limit of the virtual server?
Attached is my docker-compose file:
version: '3'
services:
  mariadb:
    image: mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ******
      MYSQL_DATABASE: app_keycloak
      MYSQL_USER: ******
      MYSQL_PASSWORD: ******
    ports:
      - 3308:3306
    # Copy-pasted from https://github.com/docker-library/mariadb/issues/94
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "--silent"]
  keycloak:
    image: jboss/keycloak:latest
    restart: always
    environment:
      DB_VENDOR: mariadb
      DB_ADDR: mariadb
      DB_DATABASE: ******
      DB_USER: ******
      DB_PASSWORD: ******
      KEYCLOAK_USER: ******
      KEYCLOAK_PASSWORD: ******
      JGROUPS_DISCOVERY_PROTOCOL: JDBC_PING
      JAVA_OPTS: "-server -Xms512m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -XX:+UseAdaptiveSizePolicy -XX:MaxMetaspaceSize=1024m -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true"
    ports:
      - 8080:8080
    depends_on:
      - mariadb
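One knob I have not tried, but which relates directly to the "unable to create native thread" message, would be raising the container's own process/thread limits in Compose via ulimits (sketch only, the numbers are placeholders):
keycloak:
  image: jboss/keycloak:latest
  ulimits:
    nproc: 65535
    nofile:
      soft: 20000
      hard: 40000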
Update 1:
It does not seem to be due to the thread limit.
systemctl show --property=DefaultTasksMax
I looked to see if there was a limit. I read that Ubuntu sets DefaultTasksMax to 15%.
cat /proc/user_beancounters
Overall, my provider gives me a limit of 700 threads.
Additionally, I looked at how many threads the current services were using, Docker in particular.
systemctl status *.service | grep -e Tasks
systemctl status docker.service | grep -e Tasks --> 75
Based on these findings, I set DefaultTasksMax to 200:
nano /etc/systemd/system.conf
systemctl daemon-reload
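Concretely, the edit is a single line in /etc/systemd/system.conf, using the value mentioned above:
DefaultTasksMax=200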
In the end, I restarted Docker Compose:
docker-compose down
docker-compose up
Unfortunately, I still get the same error. :(
Update 2:
An update to version 13 of Keycloak has apparently fixed the problem. I will continue to monitor the behavior.

Serverless Offline - ECONNREFUSED Elasticmq with Docker-Compose

I am working on a web scraper using the Serverless Framework that I want users to be able to run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, then run serverless offline in another terminal which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues: after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now also trying to run Serverless in its own Docker container. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order (found here) to ensure that my queue service is up before running this. This has led me to the following docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script that the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is wrong: it should connect to sqs:9324 or the IP address of that container. 0.0.0.0 means 'any IP address' and is normally used to bind a port, not to connect to one. Check your serverless configuration.
Also, you can easily check whether you have a race condition. Simply start your services one by one in several terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If starting the services one by one works, then a race condition is likely. In that case you can add the restart: on-failure property to a service (see the sketch below); this way, if a container exits with a code other than 0, Docker restarts it.
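A minimal sketch of that property, applied to the serverless service from the compose file above (only the restart line is new):
serverless:
  build:
    context: .
    dockerfile: Dockerfile
  restart: on-failure
  depends_on:
    - sqs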
It turns out my issue was actually in my serverless.yml configuration. There, I had a custom configuration as follows:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
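For completeness, a sketch of the working custom block, keeping the same values and changing only the endpoint:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324 # the compose service name, resolvable on the compose network
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false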

docker-compose UnknownHostException : but docker run works

I have a docker image (lfs-service:latest) that I'm trying to run as part of a suite of micro services.
RHELS 7.5
Docker version: 1.13.1
docker-compose version 1.23.2
Postgres 11 (installed on RedHat host machine)
The following command works exactly as I would like:
docker run -d \
-p 9000:9000 \
-v "$PWD/lfs-uploads:/lfs-uploads" \
-e "SPRING_PROFILES_ACTIVE=dev" \
-e dbhost=$HOSTNAME \
--name lfs-service \
[corp registry]/lfs-service:latest
This successfully:
creates/starts a container with my Spring Boot Docker image on port 9000
writes the uploads to disk into the lfs-uploads directory
connects to a local Postgres DB that's running on the host machine (not in a Docker container)
My service works as expected. Great!
Now, my problem:
I'm trying to run/manage my services using Docker Compose with the following content (I have removed all other services and my API gateway from docker-compose.yaml to simplify the scenario):
version: '3'
services:
  lfs-service:
    image: [corp registry]/lfs-service:latest
    container_name: lfs-service
    stop_signal: SIGINT
    ports:
      - 9000:9000
    expose:
      - 9000
    volumes:
      - "./lfs-uploads:/lfs-uploads"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=$HOSTNAME
Relevant entries in application.yaml:
spring:
  profiles: dev
  datasource:
    url: jdbc:postgresql://${dbhost}:5432/lfsdb
    username: [dbusername]
    password: [dbpassword]
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
Execution:
docker-compose up
...
The following profiles are active: dev
...
Tomcat initialized with port(s): 9000 (http)
...
lfs-service | Caused by: java.net.UnknownHostException: [host machine hostname]
lfs-service | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_181]
lfs-service | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_181]
lfs-service | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_181]
lfs-service | at org.postgresql.core.PGStream.<init>(PGStream.java:70) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.5.jar!/:42.2.5]
...
lfs-service | 2019-01-11 18:46:54.495 WARN [lfs-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
lfs-service |
lfs-service | org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: The connection attempt failed.
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:328) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
...
Both methods of starting should be equivalent, but obviously there's a functional difference... Any ideas on how to resolve this issue / write a comparable docker-compose file that is functionally identical to the "docker run" command at the top?
NOTE: I've also tried the following values for dbhost: localhost, 127.0.0.1 - this won't work as it attempts to find the DB in the container, and not on the host machine.
CORRECTION:
Unfortunately, while this solution works in the simplest use case, it will stop Eureka and the API gateway from functioning, as the container will be running on a separate network. I'm still looking for a working solution.
To anyone looking for a solution to this question, this worked for me:
docker-compose.yaml:
lfs-service:
  image: [corp repo]/lfs-service:latest
  container_name: lfs-service
  stop_signal: SIGINT
  ports:
    - 9000:9000
  expose:
    - 9000
  volumes:
    - "./lfs-uploads:/lfs-uploads"
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - dbhost=localhost
  network_mode: host
Summary of changes made to docker-compose.yaml:
Change $HOSTNAME to "localhost"
Add "network_mode: host"
I have no idea if this is the "correct" way to resolve this, but since it's only for our remote development server, the solution works for me. I'm open to suggestions if you have a better one.
Working solution
The simple solution is to just provide the host machine IP address (vs hostname).
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=172.18.0.1
Setting this via an environment variable would probably be more portable:
export DB_HOST_IP=172.18.0.1
docker-compose.yaml
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=${DB_HOST_IP}
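If you don't know the gateway address up front, one way to look it up (a generic Docker command, not something from the original answer) is to inspect the network the container is attached to:
docker network inspect bridge | grep Gateway
# for a compose project, the network is usually named <project>_default instead of bridge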

Docker Compose - Logstash - exited with code 0 after start

I'm trying to use Logstash with Docker Compose; the .yml file looks like this:
user-service:
  image: images/user-service
  ports:
    - "2222:2222"
  links:
    - logstash
logstash:
  image: images/logstash
  command: logstash -e 'input{} output{}'
  ports:
    - "5045:5045"
And Logstash starts and then exits, as the console shows:
logstash_1 | Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
logstash_1 | 01:51:30.164 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
logstash_1 | 01:51:30.246 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
logstash_1 | 01:51:30.860 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | 01:51:33.318 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
docker_logstash_1 exited with code 0
What could be the problem?
I resolved it by changing the input configuration:
input {
  tcp {
    port => 9600
    host => localhost
  }
}
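Wired back into the compose file, the command then looks something like this (a sketch, keeping the empty output from the original; the tcp input keeps the pipeline running instead of exiting immediately):
logstash:
  image: images/logstash
  command: logstash -e 'input { tcp { port => 9600 host => "localhost" } } output { }'
  ports:
    - "5045:5045"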
