JaegerTracing : Jaeger Ingester unable to read from Kafka Queue and store into ElasticSearch - docker

I am new to Jaeger and Kafka, and I am trying to use Kafka as an intermediate buffer.
I am using OpenTelemetry to send data to the Jaeger-Collector directly, using -Dotel.exporter.jaeger.endpoint.
The Jaeger-Collector is deployed on Kubernetes, and Kafka is on another network but is accessible. I can confirm that the traces are sent to the Jaeger-Collector.
Hitting the collector's /metrics endpoint shows that spans were written successfully to Kafka:
jaeger_kafka_spans_written_total{status="success"} 21
The collector logs indicate which topic I am writing to:
{"Brokers":["myKafkaBroker......"}},"topic":"tp6"}
I want to get this span data from the Kafka queue into ElasticSearch. To do this I am starting the Jaeger Ingester as follows:
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' --es.tls.skip-host-verify
But the container stops with a fatal error:
{"level":"fatal","ts":1615546463.7784193,"caller":"command-line-arguments/main.go:64","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: Head \"http://127.0.0.1:9200\": dial tcp 127.0.0.1:9200: connect: connection refused: no Elasticsearch node available","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:64\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v0.0.7/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:883\nmain.main\n\tcommand-line-arguments/main.go:113\nruntime.main\n\truntime/proc.go:204"}
Elasticsearch and the ingester are run on the same machine using Docker. Elasticsearch is started with:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2
I have disabled TLS so that shouldn't be a problem. I am unable to get this to work.
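For reference, the fatal error above shows the ingester defaulting to http://127.0.0.1:9200, and inside the ingester container 127.0.0.1 is the container itself, not the machine where Elasticsearch's port 9200 is published. A minimal sketch of the same command pointed at the host's Elasticsearch instead (assuming Docker Desktop, where the special name host.docker.internal resolves to the host; on plain Linux the host's bridge IP or --network host would be needed instead):
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" \
  jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 \
  --kafka.consumer.brokers='myKafkaBroker' \
  --es.tls.skip-host-verify \
  --es.server-urls=http://host.docker.internal:9200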

Related

SSL(curl) connection error in ElasticSearch setup

I have set up a 3-node Elasticsearch cluster using docker-compose, following the steps listed below.
One of the master nodes, es11, gets the error below; the same curl command works fine on the other 2 nodes, i.e. es12 and es13:
Error:
curl -X GET 'https://localhost:9316'
curl: (35) Encountered end of file
The following error appears in the logs:
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [es13][SOMEIP:9316][internal:cluster/coordination/join]",
"Caused by: org.elasticsearch.transport.ConnectTransportException: [es11][SOMEIP:9316] handshake failed. unexpected remote node {es13}{SOMEVALUE}{SOMEVALUE
"at org.elasticsearch.transport.TransportService.lambda$connectionValidator$6(TransportService.java:468) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.action.ActionListener$MappedActionListener.onResponse(ActionListener.java:95) ~[elasticsearch-7.17.6.jar:7.17.6]",
"at org.elasticsearch.transport.TransportService.lambda$handshake$9(TransportService.java:577) ~[elasticsearch-7.17.6.jar:7.17.6]",
https://localhost:9316 in the browser gives a "site can't be reached" error as well. It seems the SSL certificate created in step 4 below has some issue on es11.
Any leads, please? Or, if I repeat step 4, do I need to copy the certs again to es12 & es13?
Below is the elasticsearch.yml:
cluster.name: "docker-cluster"
network.host: 0.0.0.0
Ports as defined in all 3 nodes' docker-compose.yml:
environment:
- node.name=es11
- transport.port=9316
ports:
- 9216:9200
- 9316:9316
1. Initialize a docker swarm. On ES11 run docker swarm init. Follow the instructions to join 12 and 13 to the swarm.
2. Create an overlay network: docker network create -d overlay --attachable elastic
3. If necessary, bring down the current cluster and remove all the associated volumes by running docker-compose down -v
4. Create SSL certificates for ES with docker-compose -f create-certs.yml run --rm create_certs
5. Copy the certs for es12 and 13 to the respective servers
6. Use this busybox to create the overlay network on 12 and 13: sudo docker run -itd --name containerX --net [network name] busybox
7. Configure certs on 12 and 13 with docker-compose -f config-certs.yml run --rm config_certs
8. Start the cluster with docker-compose up -d on each server
9. Set the passwords for the built-in ES accounts by logging into the cluster (docker exec -it es11 sh), then running bin/elasticsearch-setup-passwords interactive --url localhost:9316
(As per your https://discuss.elastic.co thread:)
You cannot talk HTTP to the transport protocol port, which you have defined in transport.port. You need to talk to port 9200 in the container, which you have mapped to 9216 outside the container.
The transport port runs a binary protocol that is not HTTP accessible.
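As a sketch, the curl from the question would then be aimed at the mapped HTTP port instead (whether it is http or https, and whether -k and -u elastic are needed, depends on the xpack security settings, which are not shown here):
curl -k -u elastic -X GET 'https://localhost:9216'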

Error accessing Scylladb cluster outside docker container

I'm running ScyllaDB locally in a Docker container and I want to access the cluster from outside the container. That's when I get the following error: cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers')
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.17.0.2 776 KB 256 ? ad698c75-a465-4deb-a92c-0b667e82a84f rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
Cluster Information:
Name: Test Cluster
Snitch: org.apache.cassandra.locator.SimpleSnitch
DynamicEndPointSnitch: disabled
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
443048b2-c1fe-395e-accd-5ae9b6828464: [172.17.0.2]
I have no problem accessing the cluster using cqlsh on port 9042:
Connected to Test Cluster at 172.17.0.2:9042.
[cqlsh 5.0.1 | Cassandra 3.0.8 | CQL spec 3.3.1 | Native protocol v4]
Now I'm trying to access the cluster from my fastapi app that is outside the docker container.
from cassandra.cluster import Cluster
cluster = Cluster(['172.17.0.2'])
session = cluster.connect('Test Cluster')
And here's the Error that I'm getting:
raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'172.17.0.2:9042': OSError(51, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Network is unreachable")})
With a little bit of tinkering, it's possible to connect to the Scylla instance running in a container from outside the container for local development.
I've tried this on an M1 Mac with Docker Desktop:
Run the Scylla container with a couple of new parameters [src]:
--listen-address 0.0.0.0 for simplification, since we are spawning Scylla inside the container, to allow connections to the container from any network
--broadcast-rpc-address 127.0.0.1, required if --listen-address is set to 0.0.0.0. We are going to port-forward 9042 from the container to the host (local) machine, so this is the IP where it will be accessible.
The final command to spawn the container is:
$ docker run --rm -ti \
-p 127.0.0.1:9042:9042 \
scylladb/scylla \
--smp 1 \
--listen-address 0.0.0.0 \
--broadcast-rpc-address 127.0.0.1
The -p 127.0.0.1:9042:9042 makes port 9042 accessible on the host (local) machine.
Install the driver with pip3 install scylla-driver, as it supports the darwin/arm64 architecture.
Write a simple Python script:
# so74265199.py
from cassandra.cluster import Cluster
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
# Select from a table that is available without keyspace
res = session.execute('SELECT * FROM system.versions')
print(res.one())
Run your script
$ python3 so74265199.py
Row(key='local', build_id='71178cf6db7021896cd8251751b78b3d9e3afa8d', build_mode='release', version='5.0.5-0.20221009.5a97a1060')
Disclaimer: I'm not an expert in Scylla's configuration, so feel free to point out a better approach.

schema registry docker from confluent

I want to use the Schema Registry Docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any ideas on how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is unclear when you're actually using multiple machines (one physical and at least one virtual)
You need to use host.docker.internal:9092
https://docs.docker.com/docker-for-windows/networking/ (removed because host is not windows)
On a Linux host, you need to use host networking mode
https://docs.docker.com/network/host/
Although, realistically, running Kafka in a container would be simpler for connecting the two
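A minimal sketch of the first option (the environment variables are the ones from the question; the --add-host mapping is only needed on a Linux host with Docker 20.10+, since Docker Desktop resolves host.docker.internal on its own, and the broker must advertise a listener reachable at that name):
docker run -p 8081:8081 \
  --add-host=host.docker.internal:host-gateway \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest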

Unable to connect to ElasticSearch docker container from the transport client in another docker container

I started the Elasticsearch Docker container using:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4
Then I tried to link another Docker container, which uses the transport client (same version, 6.2.4), to access the Elasticsearch container using:
docker run -p 5000:5000 --link 57b2dc1f05b3:elasticsearch <container_name>
I got the following error message:
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
[error] application -
! #7832clj69 - Internal server error, for (GET) [/create-question-index] ->
play.api.UnexpectedException: Unexpected exception[NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300}]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:261) ~[com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:191) ~[com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:179) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:212) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:94) [com.typesafe.play.play_2.11-2.4.8.jar:2.4.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:162) [com.typesafe.play.play-netty-server_2.11-2.4.8.jar:2.4.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$9$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:159) [com.typesafe.play.play-netty-server_2.11-2.4.8.jar:2.4.8]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216) [org.scala-lang.scala-library-2.11.7.jar:na]
at scala.util.Try$.apply(Try.scala:192) [org.scala-lang.scala-library-2.11.7.jar:na]
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:60) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:371) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:394) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1247) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:46) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:53) ~[org.elasticsearch.elasticsearch-6.2.4.jar:6.2.4]
at data.ElasticConnector.createIndex(ElasticConnector.scala:28) ~[candidate-retrieval.candidate-retrieval-1.0-sans-externalized.jar:na]
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
node {#transport#-1}{zPkzTwpJRNmzGiEM8MOQQA}{elasticsearch}{172.17.0.2:9300} not part of the cluster Cluster [elasticsearch-cluster], ignoring...
Also, when I tried connecting from the command-line of the client docker container:
docker run -it --link 57b2dc1f05b3:elasticsearch <container_name> bash
on "elasticsearch:9200" there was no problem connecting with the elasticsearch docker.
Code snippets for support:
val host = "elasticsearch"
val port = 9300
val clustername = "elasticsearch"
val settings = Settings.builder().put("cluster.name", clustername).build()
val client = new PreBuiltTransportClient(settings).addTransportAddress(new TransportAddress(InetAddress.getByName(host), port))
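One detail worth noting: the error complains about cluster [elasticsearch-cluster] while the snippet sets cluster.name to "elasticsearch", and the official image's default cluster name is docker-cluster, so a cluster-name mismatch is a plausible cause. A sketch of settings that would sidestep that check (client.transport.ignore_cluster_name is a standard transport-client setting; whether this is the actual fix here is an assumption):
val settings = Settings.builder()
  .put("cluster.name", clustername)                    // must match the name the ES node reports
  .put("client.transport.ignore_cluster_name", true)   // or skip the cluster-name check entirely
  .build()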

Docker: cant send data from logstash container to Kafka container

I have two Docker containers, one running Logstash and the other running Zookeeper and Kafka. I am trying to send data from Logstash to Kafka but can't seem to get data across to my topic in Kafka.
I can log into the Docker Kafka container and produce a message to my topic from the terminal and then also consume it.
I am using the output kafka plugin:
output {
  kafka {
    topic_id => "MyTopicName"
    broker_list => "kafkaIPAddress:9092"
  }
}
I got the IP address from running docker inspect kafka2.
When I run ./bin/logstash agent --config /etc/logstash/conf.d/01-input.conf I get this error.
Settings: Default pipeline workers: 4
Unknown setting 'broker_list' for kafka {:level=>:error}
Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
stopping pipeline {:id=>"main"}
I have checked the configuration file by running the following command, which returned OK.
./bin/logstash agent --configtest --config /etc/logstash/conf.d/01-input.conf
Configuration OK
Has anyone ever come across this? Is it that I haven't opened the ports on the Kafka container, and if so, how can I do that while keeping Kafka running?
The error is here: broker_list => "kafkaIPAddress:9092"
Try bootstrap_servers => "KafkaIPAddress:9092" instead.
If you have the containers on separate machines, map Kafka to the host's port 9092 and use the host address:port; if on the same host, use the internal Docker IP:port.
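With that change applied, the output block from the question becomes (the broker address is still the placeholder from the question; use whichever address is reachable per the note above):
output {
  kafka {
    topic_id => "MyTopicName"
    bootstrap_servers => "kafkaIPAddress:9092"
  }
}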

Resources