I have defined 3 services in my docker-compose.yaml file. Two of them (my-app_my-app_1 and my-app_mongodb_1) start automatically when I run docker-compose -f docker-compose.yaml up, but the third service (my-app_mongo-express_1) fails to start. Note that I can then start the failed container successfully by running docker start my-app_mongo-express_1 separately.
Contents of file - docker-compose.yaml:
→ cat docker-compose.yaml
version: '3'
services:
  my-app:
    image: maryo/my-app:1.2
    ports:
      - 3000:3000
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-data:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
volumes:
  mongo-data:
    driver: local
Output of docker-compose ps
→ docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
my-app_mongo-express_1 "tini -- /docker-ent…" mongo-express exited (0)
my-app_mongodb_1 "docker-entrypoint.s…" mongodb running 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp
my-app_my-app_1 "docker-entrypoint.s…" my-app running 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
Docker logs for mongo-express container:
→ docker logs my-app_mongo-express_1
Welcome to mongo-express
------------------------
(node:8) [MONGODB DRIVER] Warning: Current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
Could not connect to database using connectionString: mongodb://admin:password@mongodb:27017/"
(node:8) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [Error: connect ECONNREFUSED 172.25.0.4:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
name: 'MongoNetworkError'
}]
at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:441:11)
at Pool.emit (events.js:314:20)
at /node_modules/mongodb/lib/core/connection/pool.js:564:14
at /node_modules/mongodb/lib/core/connection/pool.js:1000:11
at /node_modules/mongodb/lib/core/connection/connect.js:32:7
at callback (/node_modules/mongodb/lib/core/connection/connect.js:289:5)
at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:319:7)
at Object.onceWrapper (events.js:421:26)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
(node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:8) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Works fine if I start that container separately:
→ docker start my-app_mongo-express_1
my-app_mongo-express_1
→ docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
my-app_mongo-express_1 "tini -- /docker-ent…" mongo-express running 0.0.0.0:8080->8081/tcp, :::8080->8081/tcp
my-app_mongodb_1 "docker-entrypoint.s…" mongodb running 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp
my-app_my-app_1 "docker-entrypoint.s…" my-app running 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
What am I missing? Why am I not able to start all the containers together using docker-compose up?
You can use the depends_on option to control the order in which your defined services start up.
In this specific case, the mongo-express service has a dependency on the mongodb service, and so if the mongo-express service is started before mongodb, it will fail to connect:
Could not connect to database using connectionString
This is why starting the mongo-express service manually succeeds (because mongodb is already running). However, note the following caveat from the documentation which you may still need to address:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running . . . To handle this, design your application to attempt to re-establish a connection to the database after a failure. If the application retries the connection, it can eventually connect to the database.
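Something like this should work (a sketch of the mongo-express service from the file above; depends_on is the fix, and the restart policy is an extra suggestion on my part to cover the readiness caveat quoted above):

mongo-express:
  image: mongo-express
  restart: on-failure # suggested addition: retry while mongodb is running but not yet accepting connections
  depends_on:
    - mongodb
  ports:
    - 8080:8081
  environment:
    - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
    - ME_CONFIG_MONGODB_ADMINPASSWORD=password
    - ME_CONFIG_MONGODB_SERVER=mongodb

With depends_on, Compose starts mongodb before mongo-express; the restart policy then papers over the window where mongod is running but not yet listening on 27017.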
Related
I am trying to run a local mosquitto broker, publisher, and subscriber setup via docker and docker-compose, but the publisher cannot connect to the broker. However, connecting to the local broker via the CLI works fine.
I get the following error when running the setup below.
{ Error: connect ECONNREFUSED 127.0.0.1:1883
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1088:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 1883 }
Local dockerized setup:
docker-compose.yml:
version: "3.5"
services:
publisher:
hostname: publisher
container_name: publisher
build:
context: ./
dockerfile: dev.Dockerfile
command: npm start
networks:
- default
depends_on:
- broker
broker:
image: eclipse-mosquitto
hostname: mosquitto-broker
container_name: mosquitto-broker
networks:
- default
ports:
- "1883:1883"
networks:
default:
dev.Dockerfile:
FROM node:11-alpine
RUN mkdir app
WORKDIR app
COPY package*.json ./
RUN npm ci
COPY ./src ./src
CMD npm start
src/index.js:
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://localhost:1883");
client.on("connect", () => {
console.log("Start publishing...");
client.publish("testTopic", "test");
});
client.on("error", (error) => {
console.error(error);
});
However, if I connect to the mosquitto broker via the mqtt-js CLI, it works as expected. E.g.
mqtt sub -t 'testTopic' -h 'localhost' and mqtt pub -t 'testTopic' -h 'localhost' -m 'from MQTT.js'.
What am I missing?
Your publisher and broker are running in two different containers, which means they are effectively two different machines, each with its own IP.
You can't reach the broker service from your publisher container by using localhost:1883, and vice versa from the broker to the publisher container.
To reach the broker container, you have to use its container IP, container name, or service name.
In your case, change mqtt.connect("mqtt://localhost:1883"); to mqtt.connect("mqtt://broker:1883"); and give it a try.
The publisher and broker run in different containers, meaning they have different IPs.
When the publisher tries to reach the broker at localhost:1883, an ECONNREFUSED is expected, since the broker is not in the same container.
You should replace 127.0.0.1 or localhost with the service name of the broker (broker in this case). The service name will be resolved to the correct IP of the broker container.
In your index.js you should change "localhost" to "broker". Inside a container, "localhost" resolves to that specific container, so you should always use the service name instead; Docker will take care of the routing to that specific service. Also, by default all services in the same compose file are added to the same network, so there is no need to specify it.
So basically change this: const client = mqtt.connect("mqtt://localhost:1883");
To this: const client = mqtt.connect("mqtt://broker:1883");
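For reference, here is the full corrected src/index.js (identical to the file in the question except for the connection URL):

const mqtt = require("mqtt");

// "broker" is the compose service name; Docker's embedded DNS resolves it
// to the broker container's IP on the shared network.
const client = mqtt.connect("mqtt://broker:1883");

client.on("connect", () => {
  console.log("Start publishing...");
  client.publish("testTopic", "test");
});

client.on("error", (error) => {
  console.error(error);
});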
I am working on a webscraper using the Serverless Framework that I want users to be able to run locally without having to install any necessary dependencies on their machines. I am using serverless-offline-sqs with a local ElasticMQ server hosted in a Docker container.
Currently, I have a docker-compose file that I run, then run serverless offline in another terminal, which works well. That docker-compose.yml file looks like this:
# docker-compose.yml
version: '3'
services:
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
This works well with no issues, and after ensuring that all of my containers are up, I can run serverless offline and my app works. Now I am trying to run Serverless itself in its own Docker container as well. I have created the following Dockerfile:
# Dockerfile
FROM node:12
RUN npm --loglevel=error install -g serverless && npm --loglevel=error install -g serverless-offline
WORKDIR /usr/src/app
COPY package*.json ./
COPY ./scripts/wait-for-it.sh ./
RUN ["chmod", "+x", "/usr/src/app/wait-for-it.sh"]
RUN npm install
COPY . .
EXPOSE 3000
I am trying to follow the Docker documentation on controlling start-up order, to ensure that my queue service is up before running this. This has led me to this docker-compose.yml:
version: '3'
services:
  serverless:
    container_name: 'serverless'
    build:
      context: .
      dockerfile: Dockerfile
    env_file:
      - .env.development
    ports:
      - '3000:3000'
    depends_on:
      - sqs
    command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]
  database:
    image: 'mongo'
    container_name: 'database'
    environment:
      - MONGO_INITDB_DATABASE=scraper_database
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=admin
    volumes:
      - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
      - ./mongo-volume:/data/db
    ports:
      - '27017-27019:27017-27019'
    command: mongod --quiet --logpath /dev/null
  sqs:
    image: softwaremill/elasticmq:latest
    container_name: 'sqs'
    ports:
      - '9324:9324'
  sqs-create:
    image: infrastructureascode/aws-cli:latest
    container_name: 'sqs-create'
    links:
      - sqs
    entrypoint: sh
    command: ./create-queues.sh
    volumes:
      - ./scripts/create-queues.sh:/project/create-queues.sh:ro
    environment:
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
      - AWS_DEFAULT_REGION=eu-east-1
      - AWS_ENDPOINT_URL=http://sqs:9324
I am using the wait-for-it.sh script, which the Docker documentation suggests, but I am getting the following error:
Successfully built 38df0769a202
Successfully tagged assessorscraper_serverless:latest
Starting sqs ... done
Starting database ... done
Recreating serverless ... done
Starting sqs-create ... done
Attaching to sqs, database, sqs-create, serverless
serverless | wait-for-it.sh: waiting 15 seconds for sqs:9324
sqs | 07:54:45.046 [main] INFO org.elasticmq.server.Main$ - Starting ElasticMQ server (1.0.0) ...
sqs | 07:54:48.133 [elasticmq-akka.actor.default-dispatcher-6] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
sqs | 07:54:51.385 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.rest.sqs.TheSQSRestServerBuilder - Started SQS rest server, bind address 0.0.0.0:9324, visible server address http://localhost:9324
sqs | 07:54:51.643 [elasticmq-akka.actor.default-dispatcher-7] INFO o.e.r.s.TheStatisticsRestServerBuilder - Started statistics rest server, bind address 0.0.0.0:9325
sqs | 07:54:51.649 [main] INFO org.elasticmq.server.Main$ - === ElasticMQ server (1.0.0) started in 8819 ms ===
serverless | wait-for-it.sh: sqs:9324 is available after 9 seconds
sqs-create | Creating queue TownQueue
sqs | 07:54:53.808 [elasticmq-akka.actor.default-dispatcher-6] INFO o.elasticmq.actor.QueueManagerActor - Creating queue QueueData(TownQueue,MillisVisibilityTimeout(30000),PT0S,PT0S,2021-01-07T07:54:53.494Z,2021-01-07T07:54:53.494Z,None,false,false,None,None,Map())
sqs-create exited with code 0
serverless | Serverless: Running "serverless" installed locally (in service node_modules)
serverless | Serverless: DOTENV: Loading environment variables from .env.development:
serverless | Serverless: - DATABASE_URL
serverless | Serverless: - ACCOUNT_ID
serverless | Serverless: - QUEUE_URL
serverless | Serverless: Deprecation warning: Starting with next major version, default value of provider.lambdaHashingVersion will be equal to "20201221"
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#LAMBDA_HASHING_VERSION_V2
serverless | Serverless: Deprecation warning: Starting with next major version, API Gateway naming will be changed from "{stage}-{service}" to "{service}-{stage}".
serverless | Set "provider.apiGateway.shouldStartNameWithService" to "true" to adapt to the new behavior now.
serverless | More Info: https://www.serverless.com/framework/docs/deprecations/#AWS_API_GATEWAY_NAME_STARTING_WITH_SERVICE
serverless | offline: Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | Networking Error ---------------------------------------
serverless |
serverless | Error: connect ECONNREFUSED 0.0.0.0:9324
serverless | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16)
serverless |
serverless | For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
serverless |
serverless | Get Support --------------------------------------------
serverless | Docs: docs.serverless.com
serverless | Bugs: github.com/serverless/serverless/issues
serverless | Issues: forum.serverless.com
serverless |
serverless | Your Environment Information ---------------------------
serverless | Operating System: linux
serverless | Node Version: 12.20.1
serverless | Framework Version: 2.17.0 (local)
serverless | Plugin Version: 4.4.1
serverless | SDK Version: 2.3.2
serverless | Components Version: 3.4.4
serverless |
Am I still getting some race condition? Any suggestions here would be much appreciated!
The problem is likely in ECONNREFUSED 0.0.0.0:9324. Judging by the port number, it is an attempt to reach the sqs service, but the IP address is bad. It should connect to sqs:9324 or to the IP address of that container. 0.0.0.0 means 'any IP address' and it is usually used to bind a port, not to connect to one. Check your serverless configuration.
Also, you can easily check whether you are in a 'race condition' or not. For that, simply start your services one by one using several terminals:
docker-compose up database
docker-compose up sqs
docker-compose up sqs-create
docker-compose up serverless
If the services work when started one by one, then a race condition is likely. In that case you can add the restart: on-failure property to a service. This way, if a container exits with a code other than 0, Docker restarts it.
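For example, a sketch of the serverless service from your file with only the restart policy added:

serverless:
  build:
    context: .
    dockerfile: Dockerfile
  restart: on-failure # restart if wait-for-it times out or serverless exits non-zero
  depends_on:
    - sqs
  command: ["./wait-for-it.sh", "sqs:9324", "--", "serverless", "offline"]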
It turns out my issue was actually in my serverless.yml configuration. Here, I had my serverless.yml with a custom configuration as follows:
custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://0.0.0.0:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
The correct endpoint was actually http://sqs:9324. Everything else was correct!
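For reference, the corrected block (the endpoint is the only change; the compose service name sqs resolves to the container inside the network):

custom:
  serverless-offline-sqs:
    autoCreate: true # create queue if not exists
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: us-east-1
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false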
I currently have an app with 3 different databases (it's for a test). I have the following Docker image:
Dockerfile
FROM golang:1.15
WORKDIR /myapp
# Download the wait-for-it tool.
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /wait-for-it
RUN chmod +x /wait-for-it
and the following docker-compose.yml:
version: '3.7'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
    command: sh -c "/wait-for-it postgres:10001 -- /wait-for-it oracle:10000 -- /wait-for-it mongodb:10002"
    depends_on:
      - oracle
      - mongodb
      - postgres
    ports:
      - "8080:8080"
  oracle:
    image: chameleon82/oracle-xe-10g:latest
    ports:
      - "10000:8080"
    expose:
      - 10000
  postgres:
    image: postgres:9.6-alpine
    ports:
      - "10001:5432"
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_DB=testdb
    expose:
      - 10001
  mongodb:
    image: mongo:latest
    ports:
      - "10002:27017"
    expose:
      - 10002
The thing is, as you saw, my app listens on 8080, but so does Oracle. I know I can change my app's port, but still, I would like to move Oracle to another port. I am trying to achieve this with port mapping, but I suspect that it only applies to the host machine, not to connections from within the Compose network. Am I wrong?
Think of the services within a Docker Compose network (e.g. app, oracle) as distinct hosts. Each is addressable by its service name, i.e. app should refer to the oracle service by this name (oracle).
The port mapping allows you to expose (and map) service ports within a Docker Compose network to (possibly different) host ports. This is because the host has a single port space (0...65535), whereas each service within the Docker Compose network has its own. Two services (e.g. http1 and http2) may each use e.g. port 8080, but there's only one 8080 on the host, and so, to expose both of these services on your host, one would have to yield; one could be on the host's 8080, but the other would need to be elsewhere, perhaps 8081.
In your case, oracle runs on 8080 within the Docker Compose network and is exposed on the host's port 10000. As far as the app service is concerned, this service is available as oracle:8080 (8080, not 10000) within the Docker Compose network.
The expose syntax is purely documentary and has no functional effect.
Responding to comments
If I run your Compose script as-is, it does not work. This is expected, because e.g. postgres is available on 5432 within the Compose network, not on 10001:
docker-compose logs app
Attaching to 63690852_app_1
app_1 | wait-for-it: waiting 15 seconds for postgres:10001
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for postgres:10001
app_1 | wait-for-it: waiting 15 seconds for oracle:10000
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for oracle:10000
app_1 | wait-for-it: waiting 15 seconds for mongodb:10002
app_1 | wait-for-it: timeout occurred after waiting 15 seconds for mongodb:10002
If I correct the ports:
command: sh -c "/wait-for-it postgres:5432 -- /wait-for-it oracle:8080 -- /wait-for-it mongodb:27017"
It works as expected:
docker-compose logs app
Attaching to 63690852_app_1
app_1 | wait-for-it: waiting 15 seconds for postgres:5432
app_1 | wait-for-it: postgres:5432 is available after 0 seconds
app_1 | wait-for-it: waiting 15 seconds for oracle:8080
app_1 | wait-for-it: oracle:8080 is available after 8 seconds
app_1 | wait-for-it: waiting 15 seconds for mongodb:27017
app_1 | wait-for-it: mongodb:27017 is available after 0 seconds
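To make the host/container distinction concrete, here are the relevant mappings from the question, annotated (the comments are the only addition):

oracle:
  ports:
    - "10000:8080" # host 10000 -> container 8080; reachable as oracle:8080 inside the network
postgres:
  ports:
    - "10001:5432" # host 10001 -> container 5432; reachable as postgres:5432 inside the network
mongodb:
  ports:
    - "10002:27017" # host 10002 -> container 27017; reachable as mongodb:27017 inside the network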
I am having trouble with microservice health checks in my Consul Docker setup, which I believe is a symptom of a failure in service discovery, as I only have one server in my registry.
Below is the Consul list of members from inside the Docker container.
/ # consul members
Node Address Status Type Build Protocol DC Segment
7b1edb14a647 172.19.0.6:8301 alive server 1.7.4 2 dc1 <all>
/ #
The Consul container logs repeat the same error below for all the microservices:
consul | 2020-06-16T12:19:11.087Z [WARN] agent: Check socket connection failed: check=service:ffa44b66c4869601c04abdbea6dc5be5 error="dial tcp 172.19.0.6:50044: connect: connection refused"
I am using a docker-compose file (version 3.2) to create a network for the containers.
This is the consul service definition:
consul:
  container_name: consul
  ports:
    - '8400:8400'
    - '8500:8500'
    - '8600:53/udp'
  image: consul
  command: ['agent', '-server', '-bootstrap', '-ui', '-client', '0.0.0.0']
Microservice definition
service-notification:
  build:
    context: .
    dockerfile: apps/service-notification/Dockerfile
    args:
      NODE_ENV: development
  depends_on:
    - consul
  image: 'service-notification:latest'
  restart: always
  environment:
    - CONSUL_HOST=consul
  ports:
    - '50044:50044'
I am using the CONSUL_HOST env variable to pass in the correct host URL.
Consul config for the microservice
consul:
  host: ${{CONSUL_HOST}}
  port: 8500
service:
  discoveryHost: ${{CONSUL_HOST}}
  healthCheck:
    timeout: 1s
    interval: 10s
    tcp: ${{ service.discoveryHost }}:${{ service.port }}
  maxRetry: 5
  retryInterval: 5000
  tags: ["v1.0.0", "microservice"]
  name: io.ultimatebackend.srv.notification
  port: 50044
My conclusion so far is that the Consul server container somehow fails to reach the agents. But I don't know why, and I feel like I am missing some obvious piece of the Consul structure. Please advise.
I was incorrectly configuring my service. The discoveryHost should be the IP and port of the microservice inside the Docker network.
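A sketch of the corrected fragment, assuming the compose service name (service-notification, from the question) is used as the address; Docker's DNS resolves it to the container's IP on the Compose network:

service:
  discoveryHost: service-notification # the compose service name, resolvable inside the network
  port: 50044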
I have a docker image (lfs-service:latest) that I'm trying to run as part of a suite of microservices.
RHELS 7.5
Docker version: 1.13.1
docker-compose version 1.23.2
Postgres 11 (installed on RedHat host machine)
The following command works exactly as I would like:
docker run -d \
-p 9000:9000 \
-v "$PWD/lfs-uploads:/lfs-uploads" \
-e "SPRING_PROFILES_ACTIVE=dev" \
-e dbhost=$HOSTNAME \
--name lfs-service \
[corp registry]/lfs-service:latest
This successfully:
creates/starts a container with my Spring Boot Docker image on port 9000
writes the uploads to disk into the lfs-uploads directory
and connects to a local Postgres DB that's running on the host machine (not in a Docker container)
My service works as expected. Great!
Now, my problem:
I'm trying to run/manage my services using Docker Compose with the following content (I have removed all other services and my API gateway from docker-compose.yaml to simplify the scenario):
version: '3'
services:
  lfs-service:
    image: [corp registry]/lfs-service:latest
    container_name: lfs-service
    stop_signal: SIGINT
    ports:
      - 9000:9000
    expose:
      - 9000
    volumes:
      - "./lfs-uploads:/lfs-uploads"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=$HOSTNAME
Relevant entries in application.yaml:
spring:
  profiles: dev
  datasource:
    url: jdbc:postgresql://${dbhost}:5432/lfsdb
    username: [dbusername]
    password: [dbpassword]
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
Execution:
docker-compose up
...
The following profiles are active: dev
...
Tomcat initialized with port(s): 9000 (http)
...
lfs-service | Caused by: java.net.UnknownHostException: [host machine hostname]
lfs-service | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_181]
lfs-service | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_181]
lfs-service | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_181]
lfs-service | at org.postgresql.core.PGStream.<init>(PGStream.java:70) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.5.jar!/:42.2.5]
...
lfs-service | 2019-01-11 18:46:54.495 WARN [lfs-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
lfs-service |
lfs-service | org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: The connection attempt failed.
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:328) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
...
Both methods of starting should be equivalent, but obviously there's a functional difference... Any ideas on how to resolve this issue / write a comparable docker-compose file which is functionally identical to the docker run command at the top?
NOTE: I've also tried the following values for dbhost: localhost and 127.0.0.1. These won't work, as the app then attempts to find the DB in the container, not on the host machine.
CORRECTION:
Unfortunately, while this solution works in the simplest use case, it will stop Eureka and API gateways from functioning, as the container will be running on a separate network. I'm still looking for a working solution.
To anyone looking for a solution to this question, this worked for me:
docker-compose.yaml:
lfs-service:
  image: [corp repo]/lfs-service:latest
  container_name: lfs-service
  stop_signal: SIGINT
  ports:
    - 9000:9000
  expose:
    - 9000
  volumes:
    - "./lfs-uploads:/lfs-uploads"
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - dbhost=localhost
  network_mode: host
Summary of changes made to docker-compose.yaml:
changed dbhost from $HOSTNAME to localhost
added network_mode: host
I have no idea if this is the "correct" way to resolve this, but since it's only for our remote development server the solution is working for me. I'm open to suggestions if you have a better solution.
Working solution
The simple solution is to just provide the host machine's IP address (instead of its hostname).
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=172.18.0.1
Setting this via an environment variable would probably be more portable:
export DB_HOST_IP=172.18.0.1
docker-compose.yaml
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=${DB_HOST_IP}
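If you prefer not to hard-code the address, the gateway IP of a Docker network (the host, as seen from containers) can usually be read from Docker itself (a sketch; bridge is the default network, whereas a Compose project typically creates its own network named <project>_default, so adjust the network name accordingly):

# Read the gateway IP of the given Docker network
export DB_HOST_IP=$(docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}')
echo $DB_HOST_IP # e.g. 172.17.0.1 for the default bridge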