Concourse Worker fails to create containers - docker

I am trying to run the Concourse worker using a Docker image on a Gentoo host. When running the Docker image of the worker in privileged mode I get:
iptables: create-instance-chains: iptables: No chain/target/match by that name.
My docker-compose file is
version: '3'
services:
  worker:
    image: private-concourse-worker-with-keys
    command: worker
    ports:
      - "7777:7777"
      - "7788:7788"
      - "7799:7799"
    #restart: on-failure
    privileged: true
    environment:
      - CONCOURSE_TSA_HOST=concourse-web-1.dev
      - CONCOURSE_GARDEN_NETWORK
My Dockerfile
FROM concourse/concourse
COPY keys/tsa_host_key.pub /concourse-keys/tsa_host_key.pub
COPY keys/worker_key /concourse-keys/worker_key
Some more errors
worker_1 | {"timestamp":"1526507528.298546791","source":"guardian","message":"guardian.create.containerizer-create.finished","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2"}}
worker_1 | {"timestamp":"1526507528.298666477","source":"guardian","message":"guardian.create.containerizer-create.watch.watching","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.2.4"}}
worker_1 | {"timestamp":"1526507528.303164721","source":"guardian","message":"guardian.create.network.started","log_level":1,"data":{"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.303202152","source":"guardian","message":"guardian.create.network.config-create","log_level":1,"data":{"config":{"ContainerHandle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","HostIntf":"wbpuf2nmpege-0","ContainerIntf":"wbpuf2nmpege-1","IPTablePrefix":"w--","IPTableInstance":"bpuf2nmpege","BridgeName":"wbrdg-0afe0000","BridgeIP":"x.x.0.1","ContainerIP":"x.x.0.2","ExternalIP":"x.x.0.2","Subnet":{"IP":"x.x.0.0","Mask":"/////A=="},"Mtu":1500,"PluginNameservers":null,"OperatorNameservers":[],"AdditionalNameservers":["x.x.0.2"]},"handle":"426762cc-b9a8-47b0-711a-8f5ce18ff46c","session":"23.5","spec":""}}
worker_1 | {"timestamp":"1526507528.324085236","source":"guardian","message":"guardian.iptables-runner.command.failed","log_level":2,"data":{"argv":["/worker-state/3.6.0/assets/iptables/sbin/iptables","--wait","-A","w--instance-bpuf2nmpege-log","-m","conntrack","--ctstate","NEW,UNTRACKED,INVALID","--protocol","all","--jump","LOG","--log-prefix","426762cc-b9a8-47b0-711a-8f5c ","-m","comment","--comment","426762cc-b9a8-47b0-711a-8f5ce18ff46c"],"error":"exit status 1","exit-status":1,"session":"1.26","stderr":"iptables: No chain/target/match by that name.\n","stdout":"","took":"1.281243ms"}}

It turned out this was because the iptables LOG target kernel module was not compiled into our distro's kernel.
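For anyone hitting the same thing: the failing rule uses the iptables LOG target, so a quick way to confirm this on the host is to check for the xt_LOG module and the corresponding kernel option. A rough sketch, assuming your kernel exposes /proc/config.gz via CONFIG_IKCONFIG_PROC:

# Check whether the LOG target module is loaded / loadable on the host
lsmod | grep -i xt_LOG
modprobe xt_LOG

# On Gentoo, confirm the kernel was built with the LOG target
zgrep NETFILTER_XT_TARGET_LOG /proc/config.gz
# You want CONFIG_NETFILTER_XT_TARGET_LOG=y (or =m with the module loaded)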

Related

Topic authorization failed error while KSQL tries connecting to cluster

I am trying to bring my ksqldb-server Docker instance up and connect it to a remote Kafka cluster, but I am getting an error. Here are the details.
docker-compose.yml
---
version: '2'
services:
  ksqldb-server:
    build:
      context: .
      dockerfile: ./Dockerfile
    hostname: ksqldb-server
    container_name: ksqldb-server-remote
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker1:port1,broker2:port2,broker3:port,broker4:port
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 2
      KSQL__confluent-ksql-apptest__command_topic_REPLICATION_FACTOR: 2
      KSQL_TOPIC_AUTHORIZATION_CHECKS: "false"
  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.24.0
    container_name: ksqldb-cli-remote
    depends_on:
      # - brokers
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
Dockerfile
FROM confluentinc/ksqldb-server:0.24.0
ADD ksql-server.properties /etc/meta/ksql-server.properties
ADD certificates /etc/meta/certificates
RUN rm -fr /etc/ksqldb/ksql-server.properties
RUN cp -a /etc/meta/ksql-server.properties /etc/ksqldb/ksql-server.properties
ENV KSQL_BOOTSTRAP_SERVERS="broker1:port,broker2:port,broker3:port,broker4:port"
ENV KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE="true"
ENV KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE="true"
ENV KSQL_LISTENERS="http://0.0.0.0:8088"
ENTRYPOINT ["/bin/sh","-c","java -cp /usr/share/java/ksqldb-rest-app/*: -Xmx3g -server -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+ExplicitGCInvokesConcurrent -XX:NewRatio=1 -Djava.awt.headless=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dksql.log.dir=/usr/logs -Dlog4j.configuration=file:/etc/ksqldb/log4j.properties -Dksql.server.install.dir=/usr -Xlog:gc*:file=/usr/logs/ksql-server-gc.log:time,tags:filecount=10,filesize=102400 io.confluent.ksql.rest.server.KsqlServerMain /etc/meta/ksql-server.properties"]
ksql-server.properties
#------ Endpoint config -------
### HTTP ###
listeners=http://0.0.0.0:8088
### HTTPS ###
security.protocol=SSL
ssl.truststore.location=/etc/meta/certificates/truststore.jks
ssl.truststore.password=prmcert
ssl.keystore.location=/etc/meta/certificates/keystore.jks
ssl.keystore.password=prmcert
# ssl.key.password=?
#------ Logging config -------
# Automatically create the processing log topic if it does not already exist:
ksql.logging.processing.topic.auto.create=true
# Automatically create a stream within KSQL for the processing log:
ksql.logging.processing.stream.auto.create=true
#------ Kafka -------
# The set of Kafka brokers to bootstrap Kafka cluster information from:
bootstrap.servers=elr6hz1-06-s13.uhc.com:16016,elr6hz1-06-s16.uhc.com:16016,elr6hz1-06-s17.uhc.com:16016,elr6hz1-06-s19.uhc.com:16016,elr6hz1-06-s12.uhc.com:16016
# Enable snappy compression for the Kafka producers
compression.type=snappy
#------ Schema Registry -------
ksql.service.id=apptest_
This is the error that I see
ksqldb-server-remote | [2022-04-21 04:26:48,821] ERROR Unhandled exception in server startup (io.confluent.ksql.rest.server.KsqlServerMain:97)
ksqldb-server-remote | io.confluent.ksql.exception.KafkaResponseGetFailedException: Failed to set config for Kafka Topic _confluent-ksql-apptest__command_topic
ksqldb-server-remote |   at io.confluent.ksql.services.KafkaTopicClientImpl.addTopicConfig(KafkaTopicClientImpl.java:258)
ksqldb-server-remote |   at io.confluent.ksql.rest.util.KsqlInternalTopicUtils.validateTopicConfig(KsqlInternalTopicUtils.java:124)
ksqldb-server-remote |   at io.confluent.ksql.rest.util.KsqlInternalTopicUtils.ensureTopic(KsqlInternalTopicUtils.java:66)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.registerCommandTopic(KsqlRestApplication.java:1069)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.initialize(KsqlRestApplication.java:455)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.startKsql(KsqlRestApplication.java:390)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlRestApplication.startAsync(KsqlRestApplication.java:372)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlServerMain.tryStartApp(KsqlServerMain.java:93)
ksqldb-server-remote |   at io.confluent.ksql.rest.server.KsqlServerMain.main(KsqlServerMain.java:68)
ksqldb-server-remote | Caused by: org.apache.kafka.common.errors.TopicAuthorizationException: Topic authorization failed.
Can someone please suggest why I am seeing this error and how to resolve it?
The exception shows that ksqlDB cannot configure the command topic because it lacks permissions. The exception does not say which permission is missing, though. Look at the kafka-authorizer.log on the Kafka broker to see which ACL was denied, then add the required ACL so ksqlDB can start.
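As a rough, hypothetical example (the principal name, bootstrap server and admin client config are placeholders for your environment), once the authorizer log tells you what was denied, an ACL can be granted with the standard kafka-acls tool:

kafka-acls --bootstrap-server broker1:port1 \
  --command-config admin-client.properties \
  --add \
  --allow-principal User:ksql-service-user \
  --operation All \
  --topic _confluent-ksql-apptest__command_topic
# Instead of All you can grant only the operations the authorizer log shows as
# denied (e.g. Describe, DescribeConfigs, AlterConfigs on the command topic).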

LocalStack stuck in "Waiting for all LocalStack services to be ready" messages

I have tried to run LocalStack as described on its GitHub page, both with 'pip install localstack' and with 'docker-compose up' using the docker-compose file from the documentation:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack
network_mode: bridge
ports:
- "127.0.0.1:53:53"
- "127.0.0.1:53:53/udp"
- "127.0.0.1:443:443"
- "127.0.0.1:4566:4566"
- "127.0.0.1:4571:4571"
environment:
- SERVICES=${SERVICES- }
- DEBUG=${DEBUG- }
- DATA_DIR=${DATA_DIR- }
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY- }
- KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER="${TMPDIR:-/tmp}/localstack"
volumes:
- "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
But either way I get the same output:
localstack_main | 2021-09-21 15:32:26,633 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
localstack_main | 2021-09-21 15:32:26,645 INFO supervisord started with pid 14
localstack_main | 2021-09-21 15:32:27,650 INFO spawned: 'dashboard' with pid 20
localstack_main | 2021-09-21 15:32:27,653 INFO spawned: 'infra' with pid 21
localstack_main | 2021-09-21 15:32:27,659 INFO success: dashboard entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
localstack_main | 2021-09-21 15:32:27,660 INFO exited: dashboard (exit status 0; expected)
localstack_main | (. .venv/bin/activate; exec bin/localstack start --host)
localstack_main | 2021-09-21 15:32:28,663 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
localstack_main | LocalStack version: 0.12.1
localstack_main | Starting local dev environment. CTRL-C to quit.
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
localstack_main | Waiting for all LocalStack services to be ready
And then nothing appears except these recurring messages.
Does anybody know how to fix this problem?
This might not be the solution for everyone, but it is worth trying to update your Docker version.
I had the same issue for a few days, and then I updated my Docker version.
I use Docker version 20.10.11 on Apple silicon and can confirm it works fine. So far, after this update, I have not encountered any new issues with LocalStack.
Also, this GitHub issue suggests deleting your LocalStack volume before each run (sketched below). It works; however, this obviously can't be a long-term solution, but it might be a good mitigation when you need one.
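A minimal sketch of that mitigation, assuming the bind-mounted data directory from the compose file above:

docker-compose down                      # stop and remove the LocalStack container
rm -rf "${TMPDIR:-/tmp}/localstack"      # wipe LocalStack's persisted state
docker-compose up                        # start again with a clean data directory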
Update your docker-compose.yml as below and then run docker-compose up. It should work as expected.
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack}"
image: localstack/localstack
hostname: localstack
networks:
- test-net
ports:
- "4566:4566"
environment:
- SERVICES=s3,sqs,cloudformation,iam,cloudwatch
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- LAMBDA_EXECUTOR=docker-reuse
- LAMBDA_REMOTE_DOCKER=false
- LAMBDA_REMOVE_CONTAINERS=true
- DOCKER_HOST=unix:///var/run/docker.sock
- HOST_TMP_FOLDER=${TMPDIR}
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
test-net:
external: false
driver: bridge
name: test-net

Symfony 5: Why am I getting this error? SQLSTATE[HY000] [2002] Connection refused

I'm getting this error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I have tried changing the IP address in my .env to localhost but I then got a not found error.
I also tried changing my .env db host to match my docker compose file:
DB_HOST=mysql
docker-compose file:
version: "3.7"
services:
app:
image: kooldev/php:7.4-nginx
ports:
- ${KOOL_APP_PORT:-80}:80
environment:
ASUSER: ${KOOL_ASUSER:-0}
UID: ${UID:-0}
volumes:
- .:/app:delegated
networks:
- kool_local
- kool_global
database:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
ports:
- ${KOOL_DATABASE_PORT:-3306}:3306
I used kool.dev to do the Symfony install; that looks OK and the DB seems to be working as expected:
user@DESKTOP-QSCSABV:/mnt/c/dev/symfony-project$ kool status
+----------+---------+------------------------------------------------------+-------------------------+
| SERVICE  | RUNNING | PORTS                                                | STATE                   |
+----------+---------+------------------------------------------------------+-------------------------+
| app      | Running | 0.0.0.0:80->80/tcp, :::80->80/tcp, 9000/tcp          | Up 15 minutes           |
| database | Running | 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp | Up 15 minutes (healthy) |
+----------+---------+------------------------------------------------------+-------------------------+
[done] Fetching services status
in my .env file:
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}#${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
Any suggestions on how to resolve this?
DB_HOST=127.0.0.1
in your environment file should be
DB_HOST=database
127.0.0.1 is the address of the container itself, so in your case, the app container tries to make a connection to itself. Docker compose creates a virtual network where each container can be addressed by its service name. So in your case, you want to connect to the database service.
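Applied to the .env shown in the question, the only required change is the host value (a sketch; the user, password and database names stay whatever you configured):

DB_HOST=database
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"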

Serverless Offline - ECONNREFUSED Elasticmq with Docker-Compose

I am working on a web scraper using the Serverless Framework that I want users to be able to run locally without having to install any dependencies on their machine. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, then I run serverless offline in another terminal, which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues, and after ensuring that all of my containers are up, I can run serverless offline and my app works. I am now also trying to run Serverless in its own Docker container. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation for controlling start-up order, found here, to ensure that my queue service is up before running this. This has led me to this docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script that the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number it is an attempt to reach the sqs service, but the IP address is wrong. It should connect to sqs:9324 or to the IP address of that container. 0.0.0.0 means 'any IP address' and it is usually used to bind a port. Check your serverless configuration.
Also, you can easily check whether you are hitting a race condition or not. To do that, simply start your services one by one using several terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If starting the services one by one works, then it is likely a race condition. In this case you can add the restart: on-failure property to a service; that way, if a container exits with a code other than 0, Docker restarts it.
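A minimal sketch of that on the serverless service from the compose file above (everything else left exactly as in the question):

services:
  serverless:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - sqs
    restart: on-failure   # re-run the container if it exits with a non-zero code
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]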
It turns out my issue was actually in my serverless.yml configuration. I had my serverless.yml with a custom configuration as follows:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
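For completeness, the corrected block looks like this (only the endpoint differs from the version above):

custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324   # the compose service name, not 0.0.0.0
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false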

docker-compose UnknownHostException : but docker run works

I have a docker image (lfs-service:latest) that I'm trying to run as part of a suite of micro services.
RHELS 7.5
Docker version: 1.13.1
docker-compose version 1.23.2
Postgres 11 (installed on RedHat host machine)
The following command works exactly as I would like:
docker run -d \
-p 9000:9000 \
-v "$PWD/lfs-uploads:/lfs-uploads" \
-e "SPRING_PROFILES_ACTIVE=dev" \
-e dbhost=$HOSTNAME \
--name lfs-service \
[corp registry]/lfs-service:latest
This successfully:
creates/starts a container with my Spring Boot Docker image on port 9000
writes the uploads to disk into the lfs-uploads directory
and connects to a local Postgres DB that's running on the host machine (not in a Docker container).
My service works as expected. Great!
Now, my problem:
I'm trying to run/manage my services using Docker Compose with the following content (I have removed all other services and my API gateway from docker-compose.yaml to simplify the scenario):
version: '3'
services:
  lfs-service:
    image: [corp registry]/lfs-service:latest
    container_name: lfs-service
    stop_signal: SIGINT
    ports:
      - 9000:9000
    expose:
      - 9000
    volumes:
      - "./lfs-uploads:/lfs-uploads"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=$HOSTNAME
Relevant entries in application.yaml:
spring:
  profiles: dev
  datasource:
    url: jdbc:postgresql://${dbhost}:5432/lfsdb
    username: [dbusername]
    password: [dbpassword]
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
Execution:
docker-compose up
...
The following profiles are active: dev
...
Tomcat initialized with port(s): 9000 (http)
...
lfs-service | Caused by: java.net.UnknownHostException: [host machine hostname]
lfs-service | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_181]
lfs-service | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_181]
lfs-service | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_181]
lfs-service | at org.postgresql.core.PGStream.<init>(PGStream.java:70) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.5.jar!/:42.2.5]
...
lfs-service | 2019-01-11 18:46:54.495 WARN [lfs-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
lfs-service |
lfs-service | org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: The connection attempt failed.
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:328) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
...
Both methods of starting should be equivalent, but obviously there's a functional difference... Any ideas on how to resolve this issue / write a comparable docker-compose file which is functionally identical to the "docker run" command at the top?
NOTE: I've also tried the following values for dbhost: localhost, 127.0.0.1 - this won't work as it attempts to find the DB in the container, and not on the host machine.
CORRECTION:
Unfortunately, while this solution works in the simplest use case, it breaks Eureka and the API gateway, as the container runs on a separate network. I'm still looking for a working solution.
To anyone looking for a solution to this question, this worked for me:
docker-compose.yaml:
lfs-service:
  image: [corp repo]/lfs-service:latest
  container_name: lfs-service
  stop_signal: SIGINT
  ports:
    - 9000:9000
  expose:
    - 9000
  volumes:
    - "./lfs-uploads:/lfs-uploads"
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - dbhost=localhost
  network_mode: host
Summary of changes made to docker-compose.yaml:
change $HOSTNAME to "localhost"
Add "network_mode: host"
I have no idea if this is the "correct" way to resolve this, but since it's only for our remote development server the solution is working for me. I'm open to suggestions if you have a better solution.
Working solution
The simple solution is to just provide the host machine IP address (vs hostname).
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=172.18.0.1
Setting this via an environment variable would probably be more portable:
export DB_HOST_IP=172.18.0.1
docker-compose.yaml
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=${DB_HOST_IP}
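If you prefer not to hard-code the address, one way to look it up is to read the gateway of the compose network. A sketch; the network name <project>_default is a placeholder that depends on your project directory:

# List the networks Compose created, then read the gateway IP of the right one
docker network ls
export DB_HOST_IP=$(docker network inspect <project>_default \
  --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}')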
