According to the docs, SSL configuration of wurstmeister/kafka-docker is to be done in the server.properties file, as follows:
listeners=PLAINTEXT://host.name:port,SSL://host.name:port
# The following is only needed if the value is different from ``listeners``, but it should contain
# the same security protocols as ``listeners``
advertised.listeners=PLAINTEXT://host.name:port,SSL://host.name:port
and
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
Source: https://docs.confluent.io/3.0.0/kafka/ssl.html#configuring-kafka-brokers
I also followed the rest of the docs, so I also have SSL configured on port 9093:
listeners=PLAINTEXT://:9092,SSL://:9093
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
After doing that, I tried to stop and start the server again:
docker stop wurstmeister_kafka_1
docker start wurstmeister_kafka_1
and also
docker restart wurstmeister_kafka_1
But when I inspect with docker ps, I do not see port 9093 being bound:
λ docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED      STATUS         PORTS                                                NAMES
b6c5685414ec        wurstmeister/kafka:latest       "start-kafka.sh"         3 days ago   Up 6 minutes   0.0.0.0:9092->9092/tcp                               wurstmeister_kafka_1
ded10e44873a        wurstmeister/zookeeper:latest   "/bin/sh -c '/usr/sb…"   3 days ago   Up 3 days      22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp   wurstmeister_zookeeper_1
and the command openssl s_client -debug -connect localhost:9093 -tls1 fails with errors:
λ openssl s_client -debug -connect localhost:9093 -tls1
20024:error:0200274D:system library:connect:reason(1869):../openssl-1.1.1a/crypto/bio/b_sock2.c:110:
20024:error:2008A067:BIO routines:BIO_connect:connect error:../openssl-1.1.1a/crypto/bio/b_sock2.c:111:
20024:error:0200274D:system library:connect:reason(1869):../openssl-1.1.1a/crypto/bio/b_sock2.c:110:
20024:error:2008A067:BIO routines:BIO_connect:connect error:../openssl-1.1.1a/crypto/bio/b_sock2.c:111:
connect:errno=0
How can I restart the container so that the changes in server.properties take effect? If that's not the right approach, then what is?
Docker doesn't preserve file changes within the image.
You either have to volume-mount your own server.properties over the one in the container, or see whether environment variables let you update the configuration during startup of the image (similar to the confluentinc/kafka image), as sketched below.
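With the wurstmeister image, the environment-variable route is the usual one: KAFKA_-prefixed variables are turned into server.properties entries when the container starts, so the SSL settings can be passed at docker run (or docker-compose) time rather than edited inside the container. A minimal sketch, assuming your image version supports these variables and that the keystore files live in /path/to/ssl on the host (a placeholder); note that 9093 also has to be published, which is why docker ps currently shows only 9092:

# Sketch: recreate the broker with SSL configured via environment variables.
# The KAFKA_* names map to the corresponding server.properties keys; the
# "zookeeper" host and /path/to/ssl are placeholders for your actual setup.
docker run -d --name kafka \
  -p 9092:9092 -p 9093:9093 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://:9092,SSL://:9093 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092,SSL://localhost:9093 \
  -e KAFKA_SSL_KEYSTORE_LOCATION=/var/private/ssl/kafka.server.keystore.jks \
  -e KAFKA_SSL_KEYSTORE_PASSWORD=test1234 \
  -e KAFKA_SSL_KEY_PASSWORD=test1234 \
  -e KAFKA_SSL_TRUSTSTORE_LOCATION=/var/private/ssl/kafka.server.truststore.jks \
  -e KAFKA_SSL_TRUSTSTORE_PASSWORD=test1234 \
  -v /path/to/ssl:/var/private/ssl \
  wurstmeister/kafka:latest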
Related
I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
On one of the master nodes, es11, I get the error below; the same curl command works fine on the other two nodes, es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below error in logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 and es13?
Below is elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
  - node.name=es11
  - transport.port=9316
ports:
  - 9216:9200
  - 9316:9316
1. Initialize a docker swarm. On es11 run docker swarm init. Follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use this busybox to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh), then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(as per your https://discuss.elastic.co thread)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP-accessible.
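In other words, point the check at the published HTTP port rather than the transport mapping. A minimal sketch, assuming security is enabled with the built-in elastic user and a self-signed certificate (hence -k):

# REST/HTTP API: port 9200 in the container, published as 9216 on the host
curl -k -u elastic https://localhost:9216
# transport.port 9316 only speaks the internal node-to-node binary protocol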
Before upgrading my system, I was able to successfully connect to mongo running in a docker container using published ports. After upgrading, as shown in Case #1, connecting via published ports no longer works for me.
Case #1
~ docker run --rm -d -p 27017:27017 mongo:3.6
2594b7e5cbf481526589d221361c853338ff55ecb32d9e02eae17383960e971a
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2594b7e5cbf4 mongo:3.6 "docker-entrypoint.s…" 4 seconds ago Up 3 seconds 0.0.0.0:27017->27017/tcp dazzling_fermat
Robo3T Logs
Cannot connect to the MongoDB at localhost:27017.
Error:
Network is unreachable. Reason: network error while attempting to run command 'isMaster' on host 'localhost:27017'
~ sudo lsof -i -P -n | grep LISTEN
...
docker-pr 263637 root 4u IPv4 3723123 0t0 TCP *:27017 (LISTEN)
✘ ~ sudo ufw status
Status: inactive
Now I can only connect using the host networking stack.
Case #2
~ docker run --rm -d --network=host mongo:3.6
39929a8d50cc8554d256f7516d039621cd22ed8be86680ac0e1400809464b619
~ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39929a8d50cc mongo:3.6 "docker-entrypoint.s…" 5 seconds ago Up 4 seconds admiring_grothendieck
Robo3T Logs
4:13:20 PM Info: Connecting to localhost:27017...
4:13:20 PM Info: Establish connection successful. Connection: localhost
Pre-upgrade:
Linux Mint 19 - Tricia,
Docker version was 19.xx something I believe.
Post Upgrade:
~ lsb_release -a
No LSB modules are available.
Distributor ID: Linuxmint
Description: Linux Mint 20
Release: 20
Codename: ulyana
~ docker --version
Docker version 20.10.7, build 20.10.7-0ubuntu1~20.04.1
I verified there are no running firewalls (UFW, etc.), and I can connect from container to container when specifying a private docker network for both the server and client. What am I missing? How can I connect using published ports again? Thanks in advance.
Docker on Linux generally uses the host's DNS and modifies your iptables to provide the connectivity between the host and container. If there's a problem with connectivity, in your case the most likely culprits are (in order of likelihood):
DNS entry missing for localhost or wrong IP version target. Try using 127.0.0.1 or ::1 as the hostname instead.
iptables rules are missing. Check the earlier link in my response for remediations and flags that can affect this.
The container might actually have issues starting up. Check the output of docker logs <container_id> for errors after you start it. I would say this option is unlikely, as things work under host networking, but don't discount this possibility too quickly.
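A quick way to walk through those checks from the host, using the container from Case #1 (this assumes the mongo shell is installed on the host; the container id is the one from your docker ps output):

# 1. Rule out a localhost/IPv6 mismatch by targeting the IPv4 loopback explicitly
mongo --host 127.0.0.1 --port 27017 --eval 'db.runCommand({ isMaster: 1 })'
# 2. Confirm Docker's NAT rule for the published port exists
sudo iptables -t nat -L DOCKER -n | grep 27017
# 3. Look for startup errors in the container itself
docker logs 2594b7e5cbf4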
I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, it runs just fine. But when it runs on hosted VSTS, i.e. on a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
    ./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
    docker stop test-apache
    docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. And if the hard-coded port doesn't work because the port is unavailable, the error should look different; in that case, docker run should fail because the port can't be allocated.
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent runs in a Docker container. When the Docker container for Apache is started, it runs on the same level as the VSTS build agent Docker container, not nested inside the VSTS build agent Docker container.
There are two possible solutions:
Replacing localhost with the ip address of the docker host, keeping the port number 8083
Replacing localhost with the ip address of the docker container, changing the host port number 8083 to the container port number 80.
Access via the Docker Host
In this case, the solution is to replace localhost with the ip address of the docker host. The following shell snippet can do that:
host=localhost
if grep '^1:name=systemd:/docker/' /proc/1/cgroup
then
apt-get update
apt-get install net-tools
host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The if grep '^1:name=systemd:/docker/' /proc/1/cgroup inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway out of the route output to get the IP address of the host. Note that this only works if the container's default gateway actually is the host.
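If the agent container already ships the iproute2 tools (an assumption worth checking, but it avoids the apt-get step), the same host address can be read like this:

# Default gateway of the container's network, i.e. the Docker host on the default bridge
host=$(ip route show default | awk '{print $3}')
curl -D - http://$host:8083/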
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first ip address is okay):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
I ran the docker image and it is showing the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cbcc0a6d5c1e programming_applicationserver "bin/wait-for-it.sh …" About an hour ago Up About an hour 0.0.0.0:80->80/tcp, 5436/tcp programming_applicationserver_run_3
4cb5bdbb6c1d programming_onlineaccountverifier "bin/wait-for-it.sh …" 5 days ago Up About an hour 127.0.0.1:5435->5435/tcp programming_onlineaccountverifier_1
bf39ba383cec programming_onlineballotregulator "bin/docker_entrypoi…" 5 days ago Up About an hour 8545/tcp, 127.0.0.1:5434->5434/tcp, 30303/tcp programming_onlineballotregulator_1
but when I go to localhost:80, nothing is shown.
What should I do now?
This is likely due to an error in the application itself and not Docker.
To verify that, you can go into the container and make sure the application's port is reachable:
docker exec -it programming_applicationserver_run_3 bash
Once you are inside the container, try accessing the port using one of the following commands:
curl localhost:80
wget localhost:80
If none of these is successful, that would imply that the problem is related to the application and not to Docker itself.
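If curl or wget does work inside the container but localhost:80 on the host still shows nothing, another common culprit is the application binding only to 127.0.0.1 inside the container instead of 0.0.0.0. A quick check, assuming ss or netstat is available in the image:

# Look at the local address column: 127.0.0.1:80 is only reachable inside the
# container, while 0.0.0.0:80 (or *:80) is reachable through the published port.
docker exec programming_applicationserver_run_3 ss -ltn
# or, if ss is not installed in the image:
docker exec programming_applicationserver_run_3 netstat -ltn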
I have a service running in a docker container (local machine). I can see the service URL in the Ambari service config.
Now I want to connect to that service using my local development environment.
I found I can connect to it from within the container, but when I use that URL outside, on my local machine, I get connection refused.
Cause: org.apache.http.conn.HttpHostConnectException: Connect to
xx.xx.xx.com:12008 [xx.xx.xx.com/195.169.98.101] failed: Connection refused
How do I connect to a service running inside a container from the outside?
In my case the code executes on my local machine.
If your container has mapped its port to the VM's port 12008, you need to make sure you have port-forwarded 12008 in your VirtualBox connection settings, as I mention in "How to connect mysql workbench to running mysql inside docker?"
VBoxManage controlvm "boot2docker-vm" --natpf1 "tcp-port12008,tcp,,12008,,12008"
VBoxManage controlvm "boot2docker-vm" --natpf1 "udp-port12008,udp,,12008,,12008"
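Once the forwarding rule is in place, the port should answer on the host; since the stack trace above comes from an HTTP client, a plain curl is enough to verify (assuming the service actually speaks HTTP on 12008):

# should now reach the containerised service through the VirtualBox NAT rule
curl -v http://localhost:12008/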
The question needs more clarification, but I will answer with some assumptions.
I used an Ambari docker image (chose this randomly based on popularity).
Then I started a 3-node cluster as mentioned, and my amb-settings and docker ps looked like this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ amb-settings
NODE_PREFIX=amb
CLUSTER_SIZE=3
AMBARI_SERVER_NAME=amb-server
AMBARI_SERVER_IMAGE=hortonworks/ambari-server:latest
AMBARI_AGENT_IMAGE=hortonworks/ambari-agent:latest
DOCKER_OPTS=
AMBARI_SERVER_IP=172.17.0.6
CONSUL=amb-consul
CONSUL_IMAGE=sequenceiq/consul:v0.5.0-v6
EXPOSE_DNS=false
DRY_RUN=false
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2483a74d919 hortonworks/ambari-agent:latest "/usr/sbin/init syste" 20 minutes ago Up 20 minutes amb2
4acaec766eaa hortonworks/ambari-agent:latest "/usr/sbin/init syste" 21 minutes ago Up 20 minutes amb1
47e9419de59f hortonworks/ambari-server:latest "/usr/sbin/init syste" 21 minutes ago Up 21 minutes 8080/tcp amb-server
548730bb1824 sequenceiq/consul:v0.5.0-v6 "/bin/start -server -" 22 minutes ago Up 22 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8301-8302/udp, 8500/tcp amb-consul
27c725af6531 sequenceiq/ambari "/usr/sbin/init" 23 minutes ago Up 23 minutes 8080/tcp awesome_tesla
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$
As of now, I can visit the Ambari server through: http://172.17.0.6:8080/
This also works from my host computer. However, if you want this to be reachable from another computer on the same network, then one option is to have an haproxy that redirects:
localhost:8080 -> 172.17.0.6:8080
So, I created a small haproxy.cfg and Dockerfile to achieve this:
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat Dockerfile
FROM haproxy:1.6
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ cat haproxy.cfg
frontend localnodes
    bind *:8080
    mode http
    default_backend ambari

backend ambari
    mode http
    server ambari-server 172.17.0.6:8080 check
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker build --rm -t ambariproxy .
Sending build context to Docker daemon 9.635 MB
Step 1 : FROM haproxy:1.6
---> af749d0291b2
Step 2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
---> Using cache
---> 60cdd2c7bb05
Successfully built 60cdd2c7bb05
anovil@anovil-Latitude-E6440:~/tmp/docker-ambari$ docker run -d -p 8080:8080 ambariproxy
63dd026349bbb6752dbd898e1ae70e48a8785e792b35040e0d0473acb00c2834
Now if I go to localhost:8080 or MY_HOST_IP:8080, I can see the ambari-server, and this should also work from computers on the same network.
Hope I managed to answer your question :)
Thanks,