I am running Docker containers successfully on Ubuntu machines, but I'm having trouble running the same containers on Mac machines.
I've tried on two macs, and the error messages are the same.
> spark-worker_1 | java.net.UnknownHostException: docker-desktop: docker-desktop: Name does not resolve
> spark-worker_1 |     at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
> spark-worker_1 |     at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:946)
> spark-worker_1 |     at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:939)
> spark-worker_1 |     at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:939)
> spark-worker_1 |     at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
> spark-worker_1 |     at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:1003)
> spark-worker_1 |     at scala.Option.getOrElse(Option.scala:121)
> spark-worker_1 |     at org.apache.spark.util.Utils$.localHostName(Utils.scala:1003)
> spark-worker_1 |     at org.apache.spark.deploy.worker.WorkerArguments.<init>(WorkerArguments.scala:31)
> spark-worker_1 |     at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:778)
> spark-worker_1 |     at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
> spark-worker_1 | Caused by: java.net.UnknownHostException: docker-desktop: Name does not resolve
> spark-worker_1 |     at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
> spark-worker_1 |     at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
> spark-worker_1 |     at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
> spark-worker_1 |     at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
> spark-worker_1 |     ... 10 more
> docker_spark-worker_1 exited with code 51
Here is my docker-compose.yml file:
services:
  spark-master:
    build:
      context: ../../
      dockerfile: ./danalysis/docker/spark/Dockerfile
    image: spark:latest
    container_name: spark-master
    hostname: node-master
    ports:
      - "7077:7077"
    network_mode: host
    environment:
      - "SPARK_LOCAL_IP=node-master"
      - "SPARK_MASTER_PORT=7077"
      - "SPARK_MASTER_WEBUI_PORT=10080"
    command: "/start-master.sh"
    dns:
      - 192.168.1.1 # DNS server needed to reach a database that is external to the host the container runs on
  spark-worker:
    image: spark:latest
    environment:
      - "SPARK_MASTER=spark://node-master:7077"
      - "SPARK_WORKER_WEBUI_PORT=8080"
    command: "/start-worker.sh"
    ports:
      - 8080
    network_mode: host
    depends_on:
      - spark-master
    dns:
      - 192.168.1.1 # DNS server needed to reach a database that is external to the host the container runs on
** edit **
So I found a way to make it work by commenting a few lines out. So why are those two settings a problem?
And even though the container runs fine and connects to the spark-master, it is using some internal IP. As you can see, 172.18.0.2 is not what we normally see on our network; I think the IP comes from the Docker container, not the host.
    # network_mode: host
    depends_on:
      - spark-master
    # dns:
    #   - 192.168.1.1 # DNS server needed to reach a database that is external to the host the container runs on
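For reference, a minimal sketch of an alternative that keeps the default bridge network but pins the name the worker advertises. SPARK_LOCAL_HOSTNAME is a standard Spark environment variable; the hostname value used here is an assumption, not taken from the original files:

  spark-worker:
    image: spark:latest
    hostname: spark-worker                      # give the container a name it can resolve itself
    environment:
      - "SPARK_MASTER=spark://node-master:7077"
      - "SPARK_LOCAL_HOSTNAME=spark-worker"     # what the worker advertises to the master
      - "SPARK_WORKER_WEBUI_PORT=8080"
    command: "/start-worker.sh"
    depends_on:
      - spark-master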
Try changing the Docker network type to macvlan in the docker-compose file. This should attach the container directly to your network (making it appear as another physical machine) with an IP on the same subnet as the host. You can also try adding the entry to your /etc/hosts.
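A minimal sketch of what that could look like in docker-compose; the parent interface eth0, the subnet, and the address are assumptions, so adjust them to your LAN:

networks:
  spark-net:
    driver: macvlan
    driver_opts:
      parent: eth0                  # host NIC attached to your LAN (assumption)
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  spark-worker:
    image: spark:latest
    networks:
      spark-net:
        ipv4_address: 192.168.1.50  # example address on your LAN

One known macvlan limitation worth noting: the host itself cannot talk to its own macvlan containers unless you add a macvlan sub-interface on the host.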
The proper way to run containers on different machines would be to use an overlay network to connect the Docker daemons on those machines.
Or create a Docker Swarm cluster using the laptops.
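A rough sketch of the swarm/overlay route; the machine addresses, the join token, and the network name are placeholders:

# on the machine chosen as manager
docker swarm init --advertise-addr <manager-ip>
# on each other machine, join with the token printed by the command above
docker swarm join --token <token> <manager-ip>:2377
# create an attachable overlay network that services and containers can share
docker network create --driver overlay --attachable spark-net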
https://docs.docker.com/network/
Goal:
I'd like to be able to ping and access the docker clients from my host network. And, if possible, I'd like to have as much as possible configured in my docker-compose.yml.
Remark:
ICMP (ping) is just used for simplicity. In reality, I'd like to access SSH on port 22 and some other ports. Mapping ports is my current solution, but since I have many docker client containers it becomes messy.
 ___________          ___________          ___________
| host      |        | docker    |        | docker    |
| client    |        | host      |        | client    |
| ..16.50   | <----> | ..16.10   |        |           |
|           |        | ..20.1    | <----> | ..20.5    |
|           |        |           |        |           |
|           | <-------- not working ------> |         |
Problem:
I am able to ping my docker host from docker clients and host clients, but not the docker clients from host clients.
This is my configuration on Ubuntu 22.04:
docker host: 192.168.16.10/24
client host network: 192.168.16.50/24
default gw host network: 192.168.16.1/24
docker client (container): 192.168.20.5/24
docker-compose.yml
version: '3'
networks:
  ipvlan20:
    name: ipvlan20
    driver: ipvlan
    driver_opts:
      parent: enp3s0.20
      com.docker.network.bridge.name: br-ipvlan20
      ipvlan-mode: l3
    ipam:
      config:
        - subnet: "192.168.20.0/24"
          gateway: "192.168.20.1"
services:
  portainer:
    image: alpine
    hostname: ipvlan20
    container_name: ipvlan20
    restart: always
    command: ["sleep","infinity"]
    dns: 192.168.16.1
    networks:
      ipvlan20:
        ipv4_address: 192.168.20.5
On my docker host, I added the following ipvlan link with the VLAN gateway IP:
ip link add myipvlan20 link enp3s0.20 type ipvlan mode l3
ip addr add 192.168.20.1/24 dev myipvlan20
ip link set myipvlan20 up
And on my host client, I added a route to the docker host for the docker client network:
ip route add 192.168.20.0/24 via 192.168.16.10
I also tried:
Do I have to use macvlan? I tried that, but it was unsuccessful as well.
Do I have to use L3 mode? I also tried L2, but that was unsuccessful too.
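One thing worth checking (an assumption on my part, since it is not shown in the question) is whether the docker host actually forwards the packets arriving from the LAN and whether Docker's firewall rules drop them:

# on the docker host: confirm IP forwarding is on (Docker normally enables it)
sysctl net.ipv4.ip_forward
# watch whether pings from the host client ever reach the ipvlan parent interface
tcpdump -ni enp3s0.20 icmp
# Docker sets the FORWARD policy to DROP; allowing the container subnet
# through the DOCKER-USER chain may be needed (assumption)
iptables -I DOCKER-USER -d 192.168.20.0/24 -j ACCEPT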
Consider a simple Docker Compose file.
version: "3.0"
networks:
basic:
services:
srv:
image: alpine
networks:
basic:
aliases:
- server.nowhere.fake
domainname: server.nowhere.fake
entrypoint: tail -f
cli:
image: alpine
networks:
basic:
aliases:
- client.nowhere.fake
domainname: client.nowhere.fake
entrypoint: nslookup server.nowhere.fake
Successful DNS resolution is easily shown.
$ docker-compose up
Creating network "tmp_basic" with the default driver
Creating tmp_srv_1 ... done
Creating tmp_cli_1 ... done
Attaching to tmp_srv_1, tmp_cli_1
cli_1 | Server: 127.0.0.11
cli_1 | Address: 127.0.0.11:53
cli_1 |
cli_1 | Non-authoritative answer:
cli_1 |
cli_1 | Non-authoritative answer:
cli_1 | Name: server.nowhere.fake
cli_1 | Address: 192.168.192.2
cli_1 |
tmp_cli_1 exited with code 0
However, a more manual approach yields less productive results.
$ docker-compose run -d srv
Creating network "tmp_basic" with the default driver
tmp_srv_run_8ff7ac6b8cc8
$
$ docker-compose run cli
Server: 127.0.0.11
Address: 127.0.0.11:53
** server can't find server.nowhere.fake: NXDOMAIN
** server can't find server.nowhere.fake: NXDOMAIN
In fact, it seems irrelevant whether the server is running, as its address is not resolved.
For some scenarios, finer control is required, as with using run for single services instead of up for all, such as in cases of terminal interaction.
In my case, I am seeking to test terminal I/O using a tool that simulates a human, by providing prescribed responses to various prompts.
Why does the lookup fail when the container is started in a separate operation? What solution is available?
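One possible explanation to test (an assumption, not something confirmed above): docker-compose run creates a one-off container and, by default, does not attach the service's network aliases to it, so the srv container started with run -d never registers server.nowhere.fake. Recent Compose versions expose a flag for this:

# start the server as a one-off container but keep its configured aliases
docker-compose run -d --use-aliases srv
# the client should then resolve the alias the same way it does under `up`
docker-compose run --use-aliases cli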
I'm getting this error:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection refused
I have tried changing the IP address in my .env to localhost but I then got a not found error.
I also tried changing my .env db host to match my docker compose file:
DB_HOST=mysql
docker compose file:
version: "3.7"
services:
app:
image: kooldev/php:7.4-nginx
ports:
- ${KOOL_APP_PORT:-80}:80
environment:
ASUSER: ${KOOL_ASUSER:-0}
UID: ${UID:-0}
volumes:
- .:/app:delegated
networks:
- kool_local
- kool_global
database:
image: mysql:8.0
command: --default-authentication-plugin=mysql_native_password
ports:
- ${KOOL_DATABASE_PORT:-3306}:3306
I used kool.dev to do the Symfony install; that looks OK, and the DB seems to be working as expected:
user@DESKTOP-QSCSABV:/mnt/c/dev/symfony-project$ kool status
+----------+---------+------------------------------------------------------+-------------------------+
| SERVICE  | RUNNING | PORTS                                                | STATE                   |
+----------+---------+------------------------------------------------------+-------------------------+
| app      | Running | 0.0.0.0:80->80/tcp, :::80->80/tcp, 9000/tcp          | Up 15 minutes           |
| database | Running | 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp | Up 15 minutes (healthy) |
+----------+---------+------------------------------------------------------+-------------------------+
[done] Fetching services status
in my .env file:
DB_USERNAME=myusername
DB_PASSWORD=mypassword
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_VERSION=8.0
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"
Any suggestions on how to resolve this?
DB_HOST=127.0.0.1
in your environment file should be
DB_HOST=database
127.0.0.1 is the address of the container itself, so in your case, the app container tries to make a connection to itself. Docker compose creates a virtual network where each container can be addressed by its service name. So in your case, you want to connect to the database service.
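For clarity, the relevant part of the .env would then look something like this (only DB_HOST changes; everything else stays as in the question):

DB_HOST=database
DB_PORT=3306
DATABASE_URL="mysql://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}?serverVersion=${DB_VERSION}"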
I'm trying to connect two containers with Docker Compose. I can do this successfully when working on my own machine, but not when I try to run the same docker-compose.yml file on a kitty environment.
Any ideas what the problem could be, since it works on my own machine?
The error that I get looks like this:
Successfully built 8970545ddd5e
Starting postgres_db
Starting jasper.mobylife.com
Attaching to jasper.mobylife.com, postgres_db
jasper.mobylife.com | psql: could not connect to server: Connection refused
jasper.mobylife.com | Is the server running on host "postgres_db" (172.19.0.3) and accepting
jasper.mobylife.com | TCP/IP connections on port 5432?
jasper.mobylife.com | Waiting for PostgreSQL...
postgres_db | 2019-05-13 13:22:16.087 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_db | 2019-05-13 13:22:16.087 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_db | 2019-05-13 13:22:16.088 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_db | 2019-05-13 13:22:16.097 UTC [20] LOG: database system was shut down at 2019-05-13 13:22:02 UTC
postgres_db | 2019-05-13 13:22:16.100 UTC [1] LOG: database system is ready to accept connections
jasper.mobylife.com | psql: could not connect to server: Connection refused
jasper.mobylife.com | Is the server running on host "postgres_db" (172.19.0.3) and accepting
jasper.mobylife.com | TCP/IP connections on port 5432?
jasper.mobylife.com | Waiting for PostgreSQL...
jasper.mobylife.com | psql: could not connect to server: Connection refused
jasper.mobylife.com | Is the server running on host "postgres_db" (172.19.0.3) and accepting
jasper.mobylife.com | TCP/IP connections on port 5432?
jasper.mobylife.com | Waiting for PostgreSQL...
my docker-compose.yml:
# Copyright (c) 2016. TIBCO Software Inc.
# This file is subject to the license terms contained
# in the license file that is distributed with this file.
# version: 6.3.0-v1.0.4
version: '2'
# network used by both JasperReports Server and PostgreSQL containers
networks:
  default:
    ipam:
      config:
        - subnet: "192.168.5.1/24"
  jasper:
    external:
      name: jasper
services:
  jasperserver:
    build: .
    # expose port 8082 and bind it to 8080 on host
    ports:
      - "8082:8080"
      - "8443:8443"
    # set depends on js_database service
    # point to env file with key=value entries
    container_name: jasper.mobylife.com
    env_file: .env
    # setting following values here will override settings from env_file
    environment:
      - DB_HOST=postgres_db
      - DB_PASSWORD=12345678
    volumes:
      - jrs_webapp:/usr/local/tomcat/webapps/jasperserver-pro
      - jrs_license:/usr/local/share/jasperreports-pro/license
      - jrs_customization:/usr/local/share/jasperreports-pro/customization
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    # for Mac OS you may want to define local path for volume mounts.
    # Note that defining path for a named volume is not supported
    # by Compose. For example:
    # - /some-local-path:/usr/local/tomcat/webapps/jasperserver-pro
    # - ~/jasperreports-pro/license:/usr/local/share/jasperreports-pro/license
    # - /tmp/customization:/usr/local/share/jasperreports-pro/customization
    networks:
      - default
      - jasper
  etel-postgres:
    image: postgres:10.3
    container_name: postgres_db
    hostname: postgres_db
    environment:
      - POSTGRES_PASSWORD=12345678
    ports:
      - "5432:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - db.volume:/var/lib/postgresql
    networks:
      - default
      - jasper
volumes:
  jrs_webapp:
    driver: local
  jrs_license:
  jrs_customization:
  init.sql:
  db.volume:
  db.volume.data:
Dockerfile:
# Copyright (c) 2016. TIBCO Software Inc.
# This file is subject to the license terms contained
# in the license file that is distributed with this file.
# version: 6.3.0-v1.0.4
FROM tomcat:8.0-jre8
# Copy jasperreports-server-<ver> zip file from resources dir.
# Build will fail if file not present.
COPY resources/jasperreports-server*zip /tmp/jasperserver.zip
##edited out actual proxy
ENV HTTP_PROXY=http://*.**.***.**:*****\
http_proxy=http://*.**.***.**:*****\
HTTPS_PROXY=https://*.**.***.**:*****\
https_proxy=https://*.**.***.**:*****
RUN apt-get update && apt-get install -y postgresql-client unzip xmlstarlet && \
rm -rf /var/lib/apt/lists/* && \
unzip /tmp/jasperserver.zip -d /usr/src/ && \
mv /usr/src/jasperreports-server-* /usr/src/jasperreports-server && \
mkdir -p /usr/local/share/jasperreports-pro/license
# Set default environment options.
ENV CATALINA_OPTS="${JAVA_OPTIONS:--Xmx2g -XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC} \
-Djs.license.directory=${JRS_LICENSE:-/usr/local/share/jasperreports-pro/license}"
# Expose ports. Note that you must do one of the following:
# map them to local ports at container runtime via "-p 8080:8080 -p 8443:8443"
# or use dynamic ports.
EXPOSE ${HTTP_PORT:-8081} ${HTTPS_PORT:-8443}
COPY scripts/entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# Default action executed by entrypoint script.
CMD ["run"]
My solution is heavily based on this project: https://github.com/TIBCOSoftware/js-docker
Turns out the default network was already running on the environment but, for some reason, didn't work properly.
Restarting the default network then fixed the problem.
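For anyone hitting the same thing, "restarting the default network" can be done roughly like this; the network name follows Compose's <project>_default convention and is an assumption here:

docker-compose down                   # stops the containers and removes the project's default network
docker network ls                     # check whether the old network is still listed
docker network rm <project>_default   # remove it manually if it is still present
docker-compose up -d                  # recreates the default network with the configured subnet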
I have a docker image (lfs-service:latest) that I'm trying to run as part of a suite of micro services.
RHELS 7.5
Docker version: 1.13.1
docker-compose version 1.23.2
Postgres 11 (installed on RedHat host machine)
The following command works exactly as I would like:
docker run -d \
-p 9000:9000 \
-v "$PWD/lfs-uploads:/lfs-uploads" \
-e "SPRING_PROFILES_ACTIVE=dev" \
-e dbhost=$HOSTNAME \
--name lfs-service \
[corp registry]/lfs-service:latest
This successfully:
- creates/starts a container with my Spring Boot Docker image on port 9000
- writes the uploads to disk into the lfs-uploads directory
- connects to a local Postgres DB that's running on the host machine (not in a Docker container)
My service works as expected. Great!
Now, my problem:
I'm trying to run/manage my services using Docker Compose with the following content (I have removed all other services and my API gateway from docker-compose.yaml to simplify the scenario):
version: '3'
services:
  lfs-service:
    image: [corp registry]/lfs-service:latest
    container_name: lfs-service
    stop_signal: SIGINT
    ports:
      - 9000:9000
    expose:
      - 9000
    volumes:
      - "./lfs-uploads:/lfs-uploads"
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=$HOSTNAME
Relevant entries in application.yaml:
spring:
  profiles: dev
  datasource:
    url: jdbc:postgresql://${dbhost}:5432/lfsdb
    username: [dbusername]
    password: [dbpassword]
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: update
Execution:
docker-compose up
...
The following profiles are active: dev
...
Tomcat initialized with port(s): 9000 (http)
...
lfs-service | Caused by: java.net.UnknownHostException: [host machine hostname]
lfs-service | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) ~[na:1.8.0_181]
lfs-service | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_181]
lfs-service | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_181]
lfs-service | at org.postgresql.core.PGStream.<init>(PGStream.java:70) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91) ~[postgresql-42.2.5.jar!/:42.2.5]
lfs-service | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.5.jar!/:42.2.5]
...
lfs-service | 2019-01-11 18:46:54.495 WARN [lfs-service,,,] 1 --- [ main] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource
lfs-service |
lfs-service | org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is org.postgresql.util.PSQLException: The connection attempt failed.
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:328) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
lfs-service | at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) ~[spring-jdbc-5.1.2.RELEASE.jar!/:5.1.2.RELEASE]
...
Both methods of starting should be equivalent, but obviously there's a functional difference... Any ideas on how to resolve this issue / write a comparable docker-compose file which is functionally identical to the "docker run" command at the top?
NOTE: I've also tried the following values for dbhost: localhost, 127.0.0.1 - this won't work as it attempts to find the DB in the container, and not on the host machine.
CORRECTION:
Unfortunately, while this solution works in the simplest use case, it will prevent Eureka and API gateways from functioning, as the container will be running on a separate network. I'm still looking for a working solution.
To anyone looking for a solution to this question, this worked for me:
docker-compose.yaml:
lfs-service:
  image: [corp repo]/lfs-service:latest
  container_name: lfs-service
  stop_signal: SIGINT
  ports:
    - 9000:9000
  expose:
    - 9000
  volumes:
    - "./lfs-uploads:/lfs-uploads"
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - dbhost=localhost
  network_mode: host
Summary of changes made to docker-compose.yaml:
- change $HOSTNAME to "localhost"
- add "network_mode: host"
I have no idea if this is the "correct" way to resolve this, but since it's only for our remote development server the solution is working for me. I'm open to suggestions if you have a better solution.
Working solution
The simple solution is to just provide the host machine IP address (vs hostname).
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=172.18.0.1
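The 172.18.0.1 gateway address can differ per host and per network, so it is worth looking it up rather than hard-coding it. A couple of ways to do that (the network name here is an assumption):

# the default bridge gateway on the host
ip -4 addr show docker0
# or read the gateway of the network the compose project created
docker network inspect <project>_default --format '{{(index .IPAM.Config 0).Gateway}}'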
Setting this via an environment variable would probably be more portable:
export DB_HOST_IP=172.18.0.1
docker-compose.yaml
environment:
  - SPRING_PROFILES_ACTIVE=dev
  - dbhost=${DB_HOST_IP}
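On newer Docker Engine versions (20.10+, so newer than the 1.13.1 shown above), another option worth trying is the special host-gateway mapping, which avoids hard-coding the bridge IP altogether. A sketch, not verified against this setup:

    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the name to the host's gateway IP
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - dbhost=host.docker.internal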