I have two Docker containers, one running Logstash and the other running Zookeeper and Kafka. I am trying to send data from Logstash to Kafka, but nothing seems to reach my topic in Kafka.
I can log into the Docker Kafka container and produce a message to my topic from the terminal and then also consume it.
I am using the output kafka plugin:
output {
  kafka {
    topic_id => "MyTopicName"
    broker_list => "kafkaIPAddress:9092"
  }
}
I got the IP address from running docker inspect kafka2.
When I run ./bin/logstash agent --config /etc/logstash/conf.d/01-input.conf I get this error.
Settings: Default pipeline workers: 4
Unknown setting 'broker_list' for kafka {:level=>:error}
Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
stopping pipeline {:id=>"main"}
I have checked the configuration file by running the following command, which returned OK.
./bin/logstash agent --configtest --config /etc/logstash/conf.d/01-input.conf
Configuration OK
Has anyone come across this? Is it that I haven't opened the ports on the Kafka container, and if so, how can I do that while keeping Kafka running?
The error is here: broker_list => "kafkaIPAddress:9092"
Try bootstrap_servers => "kafkaIPAddress:9092" instead.
If the containers are on separate machines, map Kafka's port 9092 to the host and use the host's address:port; if they are on the same host, use the internal Docker IP:port.
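For example, the corrected output block would look like this (a sketch reusing the topic and broker address from the question):
output {
  kafka {
    topic_id => "MyTopicName"
    bootstrap_servers => "kafkaIPAddress:9092"
  }
}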
Related
I am new to Jaeger and Kafka, and I am trying to use Kafka as an intermediate buffer.
I am using OpenTelemetry to send data to Jaeger-Collector directly using -Dotel.exporter.jaeger.endpoint.
Jaeger-Collector is deployed on Kubernetes, and Kafka is on another network but is accessible. I can confirm that the traces are sent to the Jaeger-Collector.
Hitting the collector's /metrics endpoint tells me that spans were written successfully to Kafka:
jaeger_kafka_spans_written_total{status="success"} 21
The Collector logs indicate which topic I am writing to:
{"Brokers":["myKafkaBroker......"}},"topic":"tp6"}
I want to get this span data from the Kafka queue into Elasticsearch. To do this I am starting the Jaeger Ingester as follows:
docker run -e "SPAN_STORAGE_TYPE=elasticsearch" jaegertracing/jaeger-ingester:1.22 --kafka.consumer.topic=tp6 --kafka.consumer.brokers='myKafkaBroker' --es.tls.skip-host-verify
But the container stops with a fatal error:
{"level":"fatal","ts":1615546463.7784193,"caller":"command-line-arguments/main.go:64","msg":"Failed to init storage factory","error":"failed to create primary Elasticsearch client: health check timeout: Head \"http://127.0.0.1:9200\": dial tcp 127.0.0.1:9200: connect: connection refused: no Elasticsearch node available","stacktrace":"main.main.func1\n\tcommand-line-arguments/main.go:64\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:838\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra#v0.0.7/command.go:943\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra#v0.0.7/command.go:883\nmain.main\n\tcommand-line-arguments/main.go:113\nruntime.main\n\truntime/proc.go:204"}
Elasticsearch and the ingester are running on the same machine using Docker. Elasticsearch is started with:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.11.2
I have disabled TLS so that shouldn't be a problem. I am unable to get this to work.
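I am wondering whether the problem is that 127.0.0.1 inside the ingester container refers to the container itself rather than the host where port 9200 is published. If so, would pointing the ingester at a host-reachable address work? Something like this (the --es.server-urls flag and the host.docker.internal mapping are my guesses, not something I have verified):
docker run --add-host=host.docker.internal:host-gateway \
  -e "SPAN_STORAGE_TYPE=elasticsearch" \
  jaegertracing/jaeger-ingester:1.22 \
  --kafka.consumer.topic=tp6 \
  --kafka.consumer.brokers='myKafkaBroker' \
  --es.server-urls=http://host.docker.internal:9200 \
  --es.tls.skip-host-verify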
I have an HBase + HDFS setup, in which the HBase master, regionservers, HDFS namenode, and datanodes are each containerized.
When running all of these containers on a single host VM, things work fine as I can use the docker container names directly, and set configuration variables as:
CORE_CONF_fs_defaultFS: hdfs://namenode:9000
for both the regionserver and datanode. The system works as expected in this configuration.
When attempting to distribute these across multiple host VMs, however, I run into issues.
I updated the config variables above to look like:
CORE_CONF_fs_defaultFS: hdfs://hostname:9000
and made sure the namenode container exposes port 9000 and maps it to the host machine's port 9000.
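For reference, that mapping amounts to publishing the port when starting the namenode container, roughly like this (the image name is a placeholder, since the actual run command isn't shown above):
docker run -p 9000:9000 --name namenode <namenode-image>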
It looks like the names are not resolving correctly when I use the hostname, and the error I see in the datanode logs looks like:
2019-08-24 05:46:08,630 INFO impl.FsDatasetAsyncDiskService: Deleted BP-1682518946-<ip1>-1566622307307 blk_1073743161_2337 URI file:/hadoop/dfs/data/current/BP-1682518946-<ip1>-1566622307307/current/rbw/blk_1073743161
2019-08-24 05:47:36,895 INFO datanode.DataNode: Receiving BP-1682518946-<ip1>-1566622307307:blk_1073743166_2342 src: /<ip3>:48396 dest: /<ip2>:9866
2019-08-24 05:47:36,897 ERROR datanode.DataNode: <hostname>-datanode:9866:DataXceiver error processing WRITE_BLOCK operation src: /<ip3>:48396 dst: /<ip2>:9866
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:786)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
at java.lang.Thread.run(Thread.java:748)
Where <hostname>-datanode is the name of the datanode container, and the IPs are various container IPs.
I'm wondering whether I'm missing some configuration variable that would let containers on other VMs connect to the namenode, or some other change that would allow this system to be distributed correctly. For example, does the system expect the containers to be named a certain way?
I have three physical nodes with Docker installed on each of them. I have configured Marathon, Flink, Mesos, Zookeeper, and Hadoop in Docker on each node, and they work really well. I have to distribute data to the Flink cluster, so I need Kafka.
Zookeeper is already running, so Kafka starts without error. The problem is that when I want to create a Kafka topic, I see this error, which I think is because I am not running the Zookeeper that ships in the Kafka folder:
Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply$mcV$sp(ZooKeeperClient.scala:230)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
at kafka.zookeeper.ZooKeeperClient$$anonfun$kafka$zookeeper$ZooKeeperClient$$waitUntilConnected$1.apply(ZooKeeperClient.scala:226)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.zookeeper.ZooKeeperClient.kafka$zookeeper$ZooKeeperClient$$waitUntilConnected(ZooKeeperClient.scala:226)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:95)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1580)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
So I changed my plan and decided to use the Zookeeper in the Kafka folder. To do that, I configured that Zookeeper with new ports: 2186, 2889, and 3889. But when I run Zookeeper with this command:
/home/kafka_2.11-2.0.0/bin/zookeeper-server-start.sh /home/kafka_2.11-2.0.0/config/zookeeper.properties
I receive this error:
WARN Cannot open channel to 2 at election address /10.32.0.3:3889 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:558)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:534)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:454)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:435)
at java.lang.Thread.run(Thread.java:748)
The Zookeeper configuration in "/home/zookeeper-3.4.14/conf/zoo.cfg" on the first node:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper/data
clientPort=2181
server.1=0.0.0.0:2888:3888
server.2=10.32.0.3:2888:3888
server.3=10.32.0.4:2888:3888
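For completeness, each node's dataDir is also expected to hold a myid file matching its server.N entry (a sketch for the first node; whether this is already in place isn't shown above):
echo "1" > /var/lib/zookeeper/data/myid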
The Zookeeper configuration in the Kafka folder looks like this on the first node:
dataDir=/tmp/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2186
server.1=0.0.0.0:2889:3889
server.2=10.32.0.3:2889:3889
server.3=10.32.0.4:2889:3889
Would you please guide me on how to run two Zookeepers in one Docker container? By the way, I cannot use another container for the Kafka cluster, since I need two containers with one common IP address.
Any help would be really appreciated.
Problem solved. I used the configuration above, but with dataDir=/var/lib/zookeeper/data for both Zookeepers. Also, I first ran Hadoop and then ran Kafka with Zookeeper.
There are a lot of posts around this subject, but none of them seems to help.
I have an application running on a WildFly server inside a Docker container, and for some reason I cannot connect my remote debugger to it.
It is a WildFly 11 server that has been started with this command:
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
And in my standalone.xml I have this:
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
The console output seems promising:
Listening for transport dt_socket at address: 9999
I can even access the admin console with the credentials admin:admin on localhost:9990/console
However, IntelliJ refuses to connect... I've created a remote JBoss Server configuration that, in the Server tab, points to localhost with management port 9990.
In the Startup/Connection tab I've entered 9999 as the remote socket port.
The Docker image exposes ports 9999 and 9990, and the docker-compose file binds those ports as is.
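In other words, the compose file maps them straight through, roughly like this (the rest of the service definition omitted):
ports:
  - "9990:9990"
  - "9999:9999"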
Even with all of this IntelliJ throws this message when trying to connect:
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurally closed"
followed by
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
I'm completely lost as to what the issue might be...
An interesting addition: after IntelliJ fails, if I invalidate caches and restart, WildFly reprints the message saying that it is listening on port 9999.
In case someone else comes to this thread in the future with the same issue, I found this solution here:
https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272
Basically, apart from the --debug parameter, you also need to pass *:8787.
Dockerfile:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
docker-compose:
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
I have not tested the docker-compose solution, as my fix was in the Dockerfile.
Not sure if this can be seen as an answer, since it goes around the problem.
But the way I solved this was by adding a "pure" Remote debug configuration in IntelliJ instead of the JBoss remote one. This means it won't automagically deploy, but I'm fine with that.
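For context, --debug *:8787 in standalone.sh boils down to the standard JDWP agent option, which is exactly what a plain Remote configuration in IntelliJ attaches to (equivalent JVM option, sketched here for reference rather than copied from the thread):
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8787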
I have a set of microservices running as Docker containers. One microservice, say A, wants to connect to Cassandra running locally on my laptop. In order to do so I have the configuration below.
Snippet from the YAML file of service A:
cassandra:
  hosts: [127.0.0.1]
  keyspace: "My keyspace"
  protocol_version: 3
  ports: 9042
On the other side, I ran Cassandra by calling ./bin/cassandra and then connected with cqlsh locally, whose output is below:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.6 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
Now when my container comes up and tries to connect to this running Cassandra hosted on my machine, it says connection refused. Please see the trace below:
File "cassandra/cluster.py", line 2076, in cassandra.cluster.ControlConnection._reconnect_internal (cassandra/cluster.c:36914)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'127.0.0.1': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
[start] application exit with code 1, killing container
More info: I am using apache-cassandra-3.0.6. Please advise. Thanks.
As Shibashis has mentioned, you probably cannot reach the host from a Docker container via 127.0.0.1.
Find the IP address of the Docker host (referred to as HOST below); see:
How to get the IP address of the docker host from inside a docker container
Start the Cassandra instance after changing listen_address and rpc_address in conf/cassandra.yaml to the recognized HOST IP.
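A minimal sketch of the relevant cassandra.yaml lines, assuming 192.168.1.10 stands in for the actual HOST IP:
listen_address: 192.168.1.10
rpc_address: 192.168.1.10
The service's YAML would then list that same address under hosts instead of 127.0.0.1.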
Hope it helps!