SSL(curl) connection error in ElasticSearch setup - docker

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps below.
One of the master nodes, es11, gives the error below; however, the same curl command works fine on the other 2 nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
Below is the error in the logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
Opening https://localhost:9316 in a browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads please? Or, if I repeat step 4, do I need to copy the certs again to es12 and es13?
Below is the elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
- node.name=es11
- transport.port=9316
ports:
- 9216:9200
- 9316:9316
1. Initialize a docker swarm. On es11 run docker swarm init. Follow the instructions to join es12 and es13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and es13 to the respective servers
6. Use busybox to create the overlay network on es12 and es13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on es12 and es13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh) then running bin/elasticsearch-setup-passwords interactive --url localhost:9316

(as per your https://discuss.elastic.co thread)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol and is not HTTP accessible.
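For example, a quick check against the HTTP port from the host could look like the sketch below. This is an illustration only, assuming the 9216:9200 mapping above, the elastic superuser password you set with elasticsearch-setup-passwords, and the self-signed certificate from step 4:
# pip install requests
# Talk to the HTTP API (9200 in the container, 9216 on the host), not the transport port 9316.
import requests

resp = requests.get(
    "https://localhost:9216",
    auth=("elastic", "<password-from-setup-passwords>"),  # assumption: replace with your own password
    verify=False,  # or verify="/path/to/ca.crt" for the CA created in step 4
)
print(resp.status_code, resp.json())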

Related

Error accessing Scylladb cluster outside docker container

I'm running Scylladb locally in a docker container and I want to access the cluster outside the docker container. That's when I'm getting the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to Test Cluster at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my fastapi app that is outside the docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to the Scylla running in a container, from outside the container, for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the scylla container with a couple of new parameters [src]:
- --listen-address 0.0.0.0: for simplicity, since we are spawning Scylla inside the container, this allows connections to the container from any network
- --broadcast-rpc-address 127.0.0.1: required if --listen-address is set to 0.0.0.0. We are going to port-forward 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 is to make port 9042 accessible on host (local) machine.
Install the driver with pip3 install scylla-driver, as it has support for the darwin/arm64 architecture.
Write a simple python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.
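With the container published on 127.0.0.1:9042 as above, the FastAPI-side snippet from the question only needs its contact point changed; a minimal sketch (note that connect() takes a keyspace name, not the cluster name, and can be left empty):
from cassandra.cluster import Cluster

# Connect through the forwarded port on the host instead of the container's bridge IP
cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()  # pass a keyspace name here once one exists
print(session.execute('SELECT cluster_name FROM system.local').one())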

port 80 refused - digital ocean droplet web console w/ caprover instance

I have a CapRover instance on a DigitalOcean droplet that I created. I want to use the CapRover instance to run the CapRover sample apps.
I opened the DigitalOcean droplet web console in order to run a CapRover instance.
I ran the following lines of code:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
and got this:
Skipping adding existing rule
Skipping adding existing rule (v6)
Skipping adding existing rule
Skipping adding existing rule (v6)
I then ran this:
docker run -p 80:80 -p 443:443 -p 3000:3000 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
I got this:
Unable to find image 'caprover/caprover:latest' locally
latest: Pulling from caprover/caprover
Digest: sha256:39c3f188a8f425775cfbcdc4125706cdf614cd38415244ccf967cd1a4e692b4f
Status: Downloaded newer image for caprover/caprover:latest
docker: Error response from daemon: driver failed programming external connectivity on endpoint priceless_sammet (9da9028cfc4873818f113458237ebd00f9c64fa648b853730a60b10bea39c720): Bind for 0.0.0.0:3000 failed: port is already allocated.
I tried changing the ports to:
docker run -p 81:81 -p 444:444 -p 3321:3321 -v /var/run/docker.sock:/var/run/docker.sock -v /captain:/captain caprover/caprover
and got this:
Captain Starting ...
Installing Captain Service ...
Installation of CapRover is starting...
For troubleshooting, please see: https://caprover.com/docs/troubleshooting.html
>>> Checking System Compatibility <<<
Docker Version passed.
Ubuntu detected.
X86 CPU detected.
Total RAM 1033 MB
Are your trying to run CapRover on a local machine or a machine without public IP?
In that case, you need to add this to your installation command:
-e MAIN_NODE_IP_ADDRESS='127.0.0.1'
Otherwise, if you are running CapRover on a VPS with public IP:
Your firewall may have been blocking an in-use port: 80
A simple solution on Ubuntu systems is to run "ufw disable" (security risk)
Or [recommended] just allowing necessary ports:
ufw allow 80,443,3000,996,7946,4789,2377/tcp; ufw allow 7946,4789,2377/udp;
See docs for more details on how to fix firewall issues
Finally, if you are an advanced user, and you want to bypass this check (NOT RECOMMENDED),
you can append the docker command with an addition flag: -e BY_PASS_PROXY_CHECK='TRUE'
Installation failed.
Error: Port seems to be closed: 80
at Request._callback (/usr/src/app/built/utils/CaptainInstaller.js:149:24)
at Request.self.callback (/usr/src/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:400:28)
at Request.<anonymous> (/usr/src/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:400:28)
at IncomingMessage.<anonymous> (/usr/src/app/node_modules/request/request.js:1076:12)
at Object.onceWrapper (events.js:519:28)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
How can I open ports 80, 443, and 3000 so that I can run the CapRover instance?
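For reference, the "Bind for 0.0.0.0:3000 failed: port is already allocated" error above usually means another container is already publishing host port 3000. A quick, hedged way to see which one, sketched with the docker Python SDK (an illustration, not part of the original setup):
# pip install docker
import docker

client = docker.from_env()
for container in client.containers.list():
    # container.ports maps container ports to their host bindings,
    # e.g. {'3000/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '3000'}]}
    for port, bindings in (container.ports or {}).items():
        for binding in bindings or []:
            if binding.get("HostPort") == "3000":
                print(f"{container.name} is already publishing host port 3000 via {port}")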

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers and am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation here, running Kubernetes in a (sibling) Docker container favours kind instead.
Given that the (sibling) container does not know enough about its own setup, the networking is a bit off: kind (and minikube) select a loopback IP upon cluster creation, even though the cluster container actually sits on a different IP in the host docker network.
To correct the networking, the (sibling) container needs to be connected to the network that actually hosts the Kubernetes container. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
βœ“ Preparing nodes πŸ“¦
βœ“ Writing configuration πŸ“œ
βœ“ Starting control-plane πŸ•ΉοΈ
βœ“ Installing CNI πŸ”Œ
βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify if the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" addition to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^ server:.*/ server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not reachable from the (sibling) container; to connect the container to the correct network, retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container (whose ID should be stored in the $HOSTNAME environment variable) to the cluster's docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the last step above, the configuration has succeeded.
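The same rewiring of steps 3, 5 and 6 can also be scripted; a rough sketch with the docker Python SDK, assuming the cluster is named acluster as above and the script runs inside the sibling container (the kubeconfig edit from step 4 still has to be applied separately):
# pip install docker
import socket
import docker

client = docker.from_env()
control_plane = client.containers.get("acluster-control-plane")

# The control plane's IP and network as seen by the host docker daemon
net_name, net_info = next(iter(control_plane.attrs["NetworkSettings"]["Networks"].items()))
print(f"control plane at {net_info['IPAddress']} on network {net_name}")

# Attach this (sibling) container to that network, mirroring `docker network connect`
client.networks.get(net_info["NetworkID"]).connect(socket.gethostname())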
You can run minikube in a docker-in-docker container. It will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is for running minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.
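If you want to drive the same docker-in-docker setup from a script instead of an interactive shell, a rough sketch with the docker Python SDK, reusing the image tag and commands from the shell session above (a non-authoritative illustration only):
# pip install docker
import time
import docker

client = docker.from_env()
# Start the docker-in-docker container (privileged, as in the shell example above)
dind = client.containers.run("docker:20.10.17-dind", detach=True, privileged=True, name="dind")
time.sleep(15)  # crude wait for the inner dockerd to come up

# Install and start minikube inside the dind container
for cmd in [
    "wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64",
    "mv minikube-linux-amd64 minikube",
    "chmod +x minikube",
    "./minikube start --force",
]:
    exit_code, output = dind.exec_run(cmd)
    print(cmd, "->", exit_code)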

Clustering vernemq docker containers running on different machines

I was hoping this would be an easy one, just by using the snippet below in the second instance's docker-compose.yml file:
- DOCKER_VERNEMQ_DISCOVERY_NODE=<ip address of the first instance>
but that doesn't seem to work.
Log of the second instance confirms it's attempting to cluster:
13:56:09.795 [info] Sent join request to: 'VerneMQ@<ip address of the first instance>'
13:56:16.800 [info] Unable to connect to 'VerneMQ@<ip address of the first instance>'
While the log of the first instance does not show anything at all.
From within the second instance I can confirm that the endpoint is accessible:
$ docker exec -it vernemq /bin/sh
$ curl <ip address of the first instance>:44053
curl: (56) Recv failure: Connection reset by peer
Then, in the log of the first instance, I see an error which is totally expected and confirms I've reached the first instance:
13:58:33.572 [error] CRASH REPORT Process <0.3050.0> with 0 neighbours crashed with reason: bad argument in vmq_cluster_com:process_bytes/3 line 142
13:58:33.572 [error] Ranch listener {{172,19,0,2},44053} terminated with reason: bad argument in vmq_cluster_com:process_bytes/3 line 142
It might have to do with the fact that the IP address as seen from within the docker container is 172.19.0.2 while the external one is 10. ....
Also tried adding hostname of the first instance to known_hosts to no avail.
Please advise.
I'm using erlio/docker-vernemq:1.10.0
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ docker-compose --version
docker-compose version 1.27.2, build 18f557f9
I managed to get this sorted by creating a docker overlay network
on machine1: docker swarm init
on machine2: docker swarm join --token ...
on machine1: docker network create --driver=overlay --attachable vernemq-overlay-net
The relevant bits of my docker-compose.yml are:
version: '3.6'
services:
  vernemq:
    container_name: ${NODE_NAME:?Node name not specified}
    image: vernemq/vernemq:1.10.4.1
    environment:
      - DOCKER_VERNEMQ_NODENAME=${NODE_NAME:?Node name not specified}
      - DOCKER_VERNEMQ_DISCOVERY_NODE=${DISCOVERY_NODE:-}
networks:
  default:
    external:
      name: vernemq-overlay-net
with the following env vars:
machine1:
NODE_NAME=vernemq1.example.com
DISCOVERY_NODE=
machine2:
NODE_NAME=vernemq2.example.com
DISCOVERY_NODE=vernemq1.example.com
Note:
Chances are machine2 won't find vernemq-overlay-net due to a bug in docker-compose as far as I remember.
In that case, start a container with docker directly: docker run -dit --name alpine --net=vernemq-overlay-net alpine, which will make the network available to docker-compose.
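The same workaround can also be applied with the docker Python SDK instead of the throw-away alpine command; a sketch, assuming the overlay network name from the compose file above and that machine2 has already joined the swarm:
# pip install docker
import docker

client = docker.from_env()
# Attach any small container to the overlay network so the local daemon learns about it,
# which lets docker-compose resolve vernemq-overlay-net afterwards
client.containers.run(
    "alpine",
    "sleep 3600",
    name="alpine",
    network="vernemq-overlay-net",
    detach=True,
)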

Unable to connect to ElasticSearch docker container from the transport client in another docker container

I started elasticsearch docker container using:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Then I tried to link another docker container, which uses the transport client (same version, 6.2.4), to access the elasticsearch container using:
docker run -p 5000:5000 --link 57b2dc1f05b3:elasticsearch <container_name>
I got the following error message:
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
[error] application -
! #7832clj69 - Internal server error, for (GET) [/create-question-index] ->
play.api.UnexpectedException: Unexpected exception[NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300}]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:261) ~[com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:191) ~[com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:179) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:212) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:94) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [com.typesafe.play.play-netty-server_2.11-2.4.8.jar:2.4.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:159) [com.typesafe.play.play-netty-server_2.11-2.4.8.jar:2.4.8]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.util.Try$.apply(Try.scala:192) [org.scala-lang.scala-library-2.11.7.jar:na]
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:371) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1247) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:53) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at data.ElasticConnector.createIndex(ElasticConnector.scala:28) ~[candidate-retrieval.candidate-retrieval-1.0-sans-externalized.jar:na]
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
Also, when I tried connecting from the command line of the client docker container:
docker run -it --link 57b2dc1f05b3:elasticsearch <container_name> bash
there was no problem connecting to the elasticsearch docker container on "elasticsearch:9200".
Code snippets for support:
val host = "elasticsearch"
val port = 9300
val clustername = "elasticsearch"
val settings = Settings.builder().put("cluster.name", clustername).build()
val client = new PreBuiltTransportClient(settings).addTransportAddress(new TransportAddress(InetAddress.getByName(host), port))
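The "not part of the cluster Cluster [...]" warning indicates a cluster-name mismatch between what the transport client expects and what the node at elasticsearch:9300 actually reports, and the transport client ignores nodes whose cluster name doesn't match. Since HTTP on elasticsearch:9200 already works from the linked container, one quick way to read the node's real cluster_name (which the cluster.name in the client settings has to match) could be, for example:
# pip install requests
import requests

# The root endpoint of the node includes its cluster_name; run this from the linked client container
info = requests.get("http://elasticsearch:9200").json()
print(info["cluster_name"])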
