Basically I'm trying to set up an environment with Elasticsearch and Kibana with Docker on an M1 Mac. I've set the env variable DOCKER_DEFAULT_PLATFORM to linux/amd64. Everything seems fine when running the containers, but when I try to connect Kibana to Elasticsearch they just can't see each other. This is my current docker-compose file:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.3.3-amd64
  environment:
    - discovery.type=single-node
    - node.name=elasticsearch1
    - cluster.name=docker-cluster
    - cluster.initial_master_nodes=elasticsearch1
    - bootstrap.memory_lock=true
    - "ES_JAVA_OPTS=-Xms128M -Xmx128M"
  ports:
    - 9200:9200
  networks:
    - my-network
kibana:
  image: docker.elastic.co/kibana/kibana:8.3.3-amd64
  environment:
    SERVER_NAME: localhost
    ELASTICSEARCH_URL: http://localhost:9200/
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  networks:
    - my-network
Before that I was using links instead of networks; no luck with that either. From my terminal or browser I can see both Elasticsearch and Kibana running on their respective ports. I'm out of ideas here, so I'd appreciate any help!
EDIT
docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
265023669cfd docker.elastic.co/kibana/kibana:8.3.3-amd64 "/bin/tini -- /usr/l…" 14 minutes ago Up 14 minutes 0.0.0.0:5601->5601/tcp folha3_kibana_1
48ee37663dda docker.elastic.co/elasticsearch/elasticsearch:8.3.3-amd64 "/bin/tini -- /usr/l…" 14 minutes ago Up 14 minutes 0.0.0.0:9200->9200/tcp, 9300/tcp folha3_elasticsearch_1
6b1f6dd9473f redis "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 0.0.0.0:6379->6379/tcp folha3_redis_1
2a3ade65634a mysql:5.7 "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 0.0.0.0:3306->3306/tcp, 33060/tcp folha3_mysql_1
ELASTICSEARCH_URL: http://localhost:9200/ should be http://elasticsearch:9200/. Inside the Kibana container, localhost refers to the Kibana container itself, so it can't reach the Elasticsearch container. Use the Compose service name instead, which Docker's embedded DNS resolves to the right container on the shared network.
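For instance, the Kibana service could point at the Compose service name instead. A minimal sketch; note that recent Kibana images (7.x/8.x) read ELASTICSEARCH_HOSTS rather than the older ELASTICSEARCH_URL:

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:8.3.3-amd64
  environment:
    SERVER_NAME: localhost
    # the service name "elasticsearch" is resolved by Docker's embedded DNS
    ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  networks:
    - my-network
```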
I am trying to run a kafka-spark streaming application on Docker. Below is my project structure.
Dockerfile contents:
FROM gcr.io/datamechanics/spark:platform-3.1-dm14
ENV PYSPARK_MAJOR_PYTHON_VERSION=3
WORKDIR /opt/application/
RUN wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
RUN mv postgresql-42.2.5.jar /opt/spark/jars
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY main.py .
Docker-compose.yml contents:
version: "2"
services:
  spark:
    image: docker.io/bitnami/spark:3
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    ports:
      - '8080:8080'
  spark-worker:
    image: docker.io/bitnami/spark:3
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://spark:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    ports:
      - "22181:22181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
  kafka-server1:
    image: confluentinc/cp-kafka:latest
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
I was able to pull the images and create containers successfully.
(venv) (base) johnny@Johnnys-MBP ~/PyCharmProjects/dockerpractice/Johnny> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d47d4b091b2c confluentinc/cp-zookeeper:latest "/etc/confluent/dock…" 16 minutes ago Up 12 minutes 2181/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:22181->22181/tcp zookeeper
59c06d3cf754 confluentinc/cp-kafka:latest "/etc/confluent/dock…" 48 minutes ago Up 48 minutes 9092/tcp, 0.0.0.0:29092->29092/tcp kafka
26794da88c7d bitnami/spark:3 "/opt/bitnami/script…" About an hour ago Up About an hour dockerpractice-spark-worker-1
5a308035bd18 bitnami/spark:3 "/opt/bitnami/script…" About an hour ago Up About an hour 0.0.0.0:8080->8080/tcp dockerpractice-spark-1
I ran the Zookeeper image interactively and started Zookeeper like below:
docker run -i -t confluentinc/cp-zookeeper:latest /bin/bash
zookeeper-server-start /etc/kafka/zookeeper.properties
The above command starts zookeeper with no exceptions and I tried to start my kafka-server in the same way:
docker run -i -t confluentinc/cp-kafka:latest /bin/bash
kafka-server-start /etc/kafka/server.properties
But when I submit the command for kafka-server, I see the below exception:
[2022-04-04 18:13:11,276] WARN Session 0x0 for sever localhost/127.0.0.1:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:344)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290)
I tried various combinations, like changing the IP address and ports of the Kafka server and creating multiple Kafka servers, but all of them result in the same exception.
Could anyone let me know what mistake I am making here and how I can correct it?
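Two things worth checking in the compose file above: the Kafka service references a zookeeper-server host, but the Zookeeper service is actually named zookeeper, and the KAFKA_CFG_*/ALLOW_PLAINTEXT_LISTENER variable names are Bitnami conventions, while the confluentinc/cp-kafka image expects plain KAFKA_* names. A hedged sketch of a matching service definition (the KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR line is an addition that single-broker setups typically need):

```yaml
kafka-server1:
  image: confluentinc/cp-kafka:latest
  ports:
    - '9092:9092'
  environment:
    # cp-kafka reads plain KAFKA_* variables; "zookeeper" must match
    # the service name declared above
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
    # typically required when running a single broker
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  depends_on:
    - zookeeper
```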
I am using docker-compose to start up some containers which form a cluster of Solr and Zookeeper nodes.
This is my compose file (taken from here):
version: '3.8'
services:
  solr1:
    image: solr:8.7.0
    container_name: solr1
    ports:
      - "8981:8983"
    environment:
      - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
    networks:
      - solr
    depends_on:
      - zoo1
      - zoo2
      - zoo3
  solr2:
    image: solr:8.7.0
    container_name: solr2
    ports:
      - "8982:8983"
    environment:
      - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
    networks:
      - solr
    depends_on:
      - zoo1
      - zoo2
      - zoo3
  solr3:
    image: solr:8.7.0
    container_name: solr3
    ports:
      - "8983:8983"
    environment:
      - ZK_HOST=zoo1:2181,zoo2:2181,zoo3:2181
    networks:
      - solr
    depends_on:
      - zoo1
      - zoo2
      - zoo3
  zoo1:
    image: zookeeper:3.6.2
    container_name: zoo1
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
      - 7001:7000
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: mntr, conf, ruok
      ZOO_CFG_EXTRA: "metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7000 metricsProvider.exportJvmInfo=true"
    networks:
      - solr
  zoo2:
    image: zookeeper:3.6.2
    container_name: zoo2
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
      - 7002:7000
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: mntr, conf, ruok
      ZOO_CFG_EXTRA: "metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7000 metricsProvider.exportJvmInfo=true"
    networks:
      - solr
  zoo3:
    image: zookeeper:3.6.2
    container_name: zoo3
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
      - 7003:7000
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: mntr, conf, ruok
      ZOO_CFG_EXTRA: "metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7000 metricsProvider.exportJvmInfo=true"
    networks:
      - solr
networks:
  solr:
If I run docker compose ls, I see that the project is correctly running:
NAME STATUS
solr-cluster running(6)
and this is the output of docker container ls:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5333378e78c 25b74ae1e488 "docker-entrypoint.s…" 23 hours ago Up 52 minutes 0.0.0.0:8983->8983/tcp, :::8983->8983/tcp solr3
5fac27cd0443 25b74ae1e488 "docker-entrypoint.s…" 23 hours ago Up 52 minutes 0.0.0.0:8982->8983/tcp, :::8982->8983/tcp solr2
db37ef90ca98 25b74ae1e488 "docker-entrypoint.s…" 23 hours ago Up 52 minutes 0.0.0.0:8981->8983/tcp, :::8981->8983/tcp solr1
b8b55694e281 a72350516291 "/docker-entrypoint.…" 23 hours ago Up 52 minutes 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp, :::2183->2181/tcp, 0.0.0.0:7003->7000/tcp, :::7003->7000/tcp zoo3
10885eafe4e8 a72350516291 "/docker-entrypoint.…" 23 hours ago Up 52 minutes 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp, :::2182->2181/tcp, 0.0.0.0:7002->7000/tcp, :::7002->7000/tcp zoo2
558f8574c036 a72350516291 "/docker-entrypoint.…" 23 hours ago Up 52 minutes 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp, 0.0.0.0:7001->7000/tcp, :::7001->7000/tcp zoo1
If I browse to a Solr node using host localhost (e.g. http://localhost:8983/), the container responds correctly with the Solr Admin interface.
However, if I try to access the same container from my browser using its IP (e.g. http://172.23.0.7:8983/, where 172.23.0.7 is the IP of the solr3 container), I get a connection timeout. I also get a connection timeout if I try to ping that IP.
Why does this happen? Shouldn't both localhost and 172.23.0.7 work?
I need to access my containers by their IP because the Solr nodes register themselves with Zookeeper using their IPs. So, since I can't reach the containers at their IPs, I get a ConnectionTimeout exception when I try to connect to them programmatically through the SolrJ API:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class Main
{
    public static void main(String[] args)
    {
        List<String> zkNodes = new ArrayList<>(3);
        zkNodes.add("localhost:2181");
        zkNodes.add("localhost:2182");
        zkNodes.add("localhost:2183");
        SolrClient solrClient = new CloudSolrClient.Builder(zkNodes, Optional.empty())
                .withConnectionTimeout(10000)
                .withSocketTimeout(10000)
                .build();
        try
        {
            CollectionAdminRequest.listCollections(solrClient);
        }
        catch (IOException | SolrServerException e)
        {
            e.printStackTrace();
        }
    }
}
This is the exception I get:
org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: http://172.23.0.7:8983/solr
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:695)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:266)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:370)
at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:298)
at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1157)
at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:918)
at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:850)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:214)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:231)
at org.apache.solr.client.solrj.request.CollectionAdminRequest.listCollections(CollectionAdminRequest.java:2690)
at Main.main(Main.java:28)
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 172.23.0.7:8983 [/172.23.0.7] failed: connect timed out
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 172.23.0.7:8983 [/172.23.0.7] failed: connect timed out
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:374)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:571)
... 11 more
Caused by: java.net.SocketTimeoutException: connect timed out
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
... 21 more
I don't know if it may help, but I am on a Windows host, so I cannot use the --network host flag for Docker.
The 172.x.x.x IPs are only reachable within the Docker network, that is, between the Solr and Zookeeper nodes.
When you connect to Zookeeper from outside, Zookeeper doesn't know there is an "outside": it only knows the IPs of the Solr servers in its network and sends them back to you. But those IPs are not reachable from outside.
One workaround is to connect to the Solr servers directly instead of going through Zookeeper (with URLs like localhost:8981 and an HttpSolrClient).
Alternatively, you need to tell the Solr servers to "present" themselves to Zookeeper with an address that is reachable from outside.
I think the advertised addresses for the Solr instances are the IPs of the Docker containers, and the ports are the default ones instead of the remapped ones. You should register the published Docker ports instead of the internal Solr ports in the containers.
Maybe the solrcloud section of solr.xml can help, but I'm not sure:
https://solr.apache.org/guide/8_8/format-of-solr-xml.html
Even though Gaël J's answer solves the problem in the general case, I'll post an answer to explain what needs to be done for Solr.
First, you need to tell Solr what its hostname is, using the SOLR_HOST environment variable. This is the hostname Solr will use when it registers itself with Zookeeper, and it's also the hostname Zookeeper will send to its clients when they ask for a Solr URL.
In my case, such clients were:
a SolrJ application which runs on the Docker host machine (but outside of Docker);
the other Solr nodes, which may ask Zookeeper for the URLs of the other nodes.
To make the SOLR_HOST value resolvable by both of these clients, I used the value of services.solrX.container_name as the SOLR_HOST, and I also added that value to the hosts file on the Docker host machine.
So, this is the docker-compose.yml file:
version: '3.7'
services:
  solr1:
    image: solr:8.7.0
    container_name: solr1
    ports:
      - "8981:8981"
    environment:
      - ZK_HOST=zoo1:2181
      - SOLR_HOST=solr1
      - SOLR_PORT=8981
    networks:
      - solr
    depends_on:
      - zoo1
  solr2:
    image: solr:8.7.0
    container_name: solr2
    ports:
      - "8982:8982"
    environment:
      - ZK_HOST=zoo1:2181
      - SOLR_HOST=solr2
      - SOLR_PORT=8982
    networks:
      - solr
    depends_on:
      - zoo1
  zoo1:
    image: zookeeper:3.6.2
    container_name: zoo1
    restart: unless-stopped
    hostname: zoo1
    ports:
      - 2181:2181
      - 7001:7000
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: mntr, conf, ruok
      ZOO_CFG_EXTRA: "metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7000 metricsProvider.exportJvmInfo=true"
    networks:
      - solr
networks:
  solr:
and this is the hosts file:
# local solr cloud cluster
127.0.0.1 solr1
127.0.0.1 solr2
Note also that both solr1 and solr2 point to localhost, so they should have different ports (this is why the SOLR_PORT environment variable is needed; otherwise both nodes would use the default 8983 port).
I have an application service and a MySQL service, but I am not able to connect the two containers and it keeps returning this error:
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
I have included links in my application service but nothing is working out.
My MySQL container is up and running fine, and I can even log into it.
Here is the snapshot of the services:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cc26d09a81d1 gmasmatrix_worker:latest "/entrypoint.sh /sta…" 17 seconds ago Exited (1) 11 seconds ago gmasmatrix_celeryworker_1
749f23c37b16 gmasmatrix_application:latest "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 9 seconds ago gmasmatrix_application_1
666029ad063a gmasmatrix_flower "/entrypoint.sh /sta…" 18 seconds ago Exited (1) 10 seconds ago gmasmatrix_flower_1
50ac0497e66b mysql:5.7.10 "/entrypoint.sh mysq…" 21 seconds ago Up 17 seconds 0.0.0.0:3306->3306/tcp gmasmatrix_db_1
669fbbe0a81d mailhog/mailhog:v1.0.0 "MailHog" 21 seconds ago Up 18 seconds 1025/tcp, 0.0.0.0:8025->8025/tcp gmasmatrix_mailhog_1
235a46c8d453 redis:5.0 "docker-entrypoint.s…" 21 seconds ago Up 17 seconds 6379/tcp gmasmatrix_redis_1
Docker-compose file
version: '2'
services:
  application: &application
    image: gmasmatrix_application:latest
    command: /start.sh
    volumes:
      - .:/app
    # env_file:
    #   - .env
    ports:
      - 8000:8000
    # cpu_shares: 874
    # mem_limit: 1610612736
    # mem_reservation: 1610612736
    build:
      context: ./
      dockerfile: ./compose/local/application/Dockerfile
      args:
        - GMAS_ENV_TYPE=local
    links:
      - "db"
  celeryworker:
    <<: *application
    image: gmasmatrix_worker:latest
    depends_on:
      - redis
      - mailhog
    ports: []
    command: /start-celeryworker
    links:
      - "db"
  flower:
    <<: *application
    image: gmasmatrix_flower
    ports:
      - "5555:5555"
    command: /start-flower
    links:
      - "db"
  mailhog:
    image: mailhog/mailhog:v1.0.0
    ports:
      - "8025:8025"
  redis:
    image: redis:5.0
  db:
    image: mysql:5.7.10
    environment:
      MYSQL_DATABASE: gmas_mkt
      MYSQL_ROOT_PASSWORD: pulkit1607
    ports:
      - "3306:3306"
Your application is trying to connect to 127.0.0.1, which inside Docker points to the app container itself.
Instead you should connect to the db container. You can use Docker's built-in DNS service for this: in your application configuration, use db (the service name of the MySQL container) as the host to connect to instead of localhost or 127.0.0.1.
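As a sketch, the DATABASES entry in the Django settings would then look something like this. The database name and password are taken from the compose file above; the root user is an assumption, so substitute your actual MySQL user:

```python
# settings.py (sketch): point Django at the "db" Compose service,
# not at 127.0.0.1.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "gmas_mkt",
        "USER": "root",           # assumed; use your actual MySQL user
        "PASSWORD": "pulkit1607",
        "HOST": "db",             # Compose service name, resolved by Docker DNS
        "PORT": "3306",
    }
}
```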
I have the YAML file below:
seleniumhub:
  image: selenium/hub
  ports:
    - 4444:4444
firefoxnode:
  image: selenium/node-firefox-debug
  ports:
    - 4577
  links:
    - seleniumhub:hub
  expose:
    - "5900"
chromenode:
  image: selenium/node-chrome-debug
  ports:
    - 4578
  links:
    - seleniumhub:hub
  expose:
    - "5900"
docker ps:
time="2017-04-01T17:57:44+03:00" level=info msg="Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9d2ccb193b54 selenium/node-firefox-debug "/opt/bin/entry_po..." 6 seconds ago Up 5 seconds 5900/tcp, 0.0.0.0:32785->4577/tcp dockercompose_firefoxnode_1
4be6223fe043 selenium/node-chrome-debug "/opt/bin/entry_po..." 6 seconds ago Up 5 seconds 5900/tcp, 0.0.0.0:32784->4578/tcp dockercompose_chromenode_1
7d95d3e73016 selenium/hub "/opt/bin/entry_po..." 7 seconds ago Up 6 seconds 0.0.0.0:4444->4444/tcp dockercompose_seleniumhub_1
But whenever I run the below command in the Docker Quickstart terminal:
docker port 9d2ccb193b54 5900
I get the following:
Error: No public port '5900/tcp' published for 9d2ccb193b54
and I'm not able to connect to the node machines through VNC.
For the firefoxnode, try this configuration:
image: selenium/node-firefox-debug
ports:
  - 4577
  - 5900
links:
  - seleniumhub:hub
expose:
  - "5900"
expose does not publish the port to the host machine; it only makes the port accessible to linked services, so it works for inter-container communication. ports will publish the port to the host machine.
For publishing ports the syntax is:
ports:
  - "host:container"
In your case:
image: selenium/node-firefox-debug
ports:
  - 4577
  - "5900:5900"
links:
  - seleniumhub:hub