Fluentd: How to solve a fluentd forward plugin error?

I'm trying to transfer logs from one server (A) to another server (B) using Fluentd.
I use the forward output plugin on A and the forward input plugin on B, but I get the following errors.
2017-10-26 16:59:27 +0900 [warn]: #0 [forward_output] detached forwarding server 'boxil-log' host="xxx.xxx.xxx.xxx" port=24224 hard_timeout=true
2017-10-26 17:00:43 +0900 [warn]: #0 [forward_output] failed to flush the buffer. retry_time=0 next_retry_seconds=2017-10-26 17:00:44 +0900 chunk="55c6e86321afbab5bed145d53e679865" error_class=Errno::ETIMEDOUT error="Connection timed out - connect(2) for \"xxx.xxx.xxx.xxx\" port 24224"
2017-10-26 17:01:42 +0900 [warn]: #0 [forward_output] failed to flush the buffer. retry_time=6 next_retry_seconds=2017-10-26 17:01:42 +0900 chunk="55c6e86321afbab5bed145d53e679865" error_class=Fluent::Plugin::ForwardOutput::NoNodesAvailable error="no nodes are available"
This is the output configuration on A.
<match system.*.*>
  @type forward
  @id forward_output
  <server>
    name boxil-log
    host xxx.xxx.xxx.xxx
    port 24224
  </server>
</match>
This is the input configuration on B.
<source>
  @type forward
  @id forward_input
</source>
I want to know what this error means, why I am getting it, and how to solve it.
Thank you.
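The first warning says the forward output marked the server 'boxil-log' as detached, and the ETIMEDOUT / "no nodes are available" errors mean A could not open a TCP connection to B on port 24224, so the only configured node was taken out of rotation. For reference, here is a minimal sketch of what the receiver on B typically looks like with the port and bind spelled out (24224 and 0.0.0.0 are the defaults; treat these values as assumptions to adapt):
<source>
  @type forward
  @id forward_input
  port 24224    # must match the port in A's <server> block
  bind 0.0.0.0  # listen on all interfaces so A can reach B
</source>
If the error persists with this in place, the timeout usually points to a firewall or security-group rule between A and B blocking TCP 24224, which is worth verifying from A with a tool such as telnet or nc.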

Related

FluentD Output in Plain Text (non-json) format

I'm new to FluentD and I'm trying to determine if we can replace our current syslog application with FluentD. The issue I'm trying to solve is compatibility between FluentD and a legacy application that works with rsyslog but cannot handle JSON.
Can FluentD output data in the format it receives it in, i.e. plain text (non-JSON) that is RFC5424 compliant?
From my research on the topic, the output is always JSON. I've explored using the single_value option, but that only extracts the message component, which is incomplete without the host.
Any inputs or suggestions are welcome.
Here is the Fluentd config
##########
# INPUTS #
##########
# udp syslog
<source>
  @type syslog
  <transport udp>
  </transport>
  bind 0.0.0.0
  port 514
  tag syslog
  <parse>
    message_format auto
    with_priority true
  </parse>
</source>
###########
# OUTPUTS #
###########
<match syslog**>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/syslog
    compress gzip
  </store>
  <store>
    @type file
    path /var/log/td-agent/rfc_syslog
    compress gzip
    <format>
      @type single_value
      message_key message
    </format>
  </store>
</match>
Based on the configuration above, I receive the following outputs.
File output from the syslog location, which is all JSON:
2022-10-21T09:34:53-05:00 syslog.user.info {"host":"icw-pc01.lab","ident":"MSWinEventLog\t2\tSystem\t136\tFri","message":"34:52 2022\t7036\tService Control Manager\tN/A\tN/A\tInformation\ticw-pc01.lab\tNone\t\tThe AppX Deployment Service (AppXSVC) service entered the running state.\t6 "}
File output from the rfc_syslog location, which contains only the single_value of message_key message:
34:52 2022 7036 Service Control Manager N/A N/A Information icw-pc01.lab None The AppX Deployment Service (AppXSVC) service entered the running state. 6
Desired output that we'd like (to support our legacy apps and legacy integrations):
Oct 21 09:34:53 icw-pc01.lab MSWinEventLog 2 System 136 Fri Oct 21 09:34:52 2022 7036 Service Control Manager N/A N/A Information icw-pc01.lab None The AppX Deployment Service (AppXSVC) service entered the running state. 6
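For the file outputs, here is a sketch of the parse-side change that the update below refers to (my reading of the update, not a confirmed fix): overriding the syslog input's parser with @type none keeps the whole received line in the message field, so the single_value formatter writes it back out unchanged.
<source>
  @type syslog
  bind 0.0.0.0
  port 514
  tag syslog
  <transport udp>
  </transport>
  <parse>
    @type none  # keep the raw syslog line; syslog-parser options such as message_format are then ignored
  </parse>
</source>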
Update:
The suggestion below solved the parsing as desired. However, when I try to forward the data to a remote syslog server, it still goes out as JSON. Below is the revised Fluentd config.
##########
# INPUTS #
##########
# udp syslog
<source>
  @type syslog
  <transport udp>
  </transport>
  bind 0.0.0.0
  port 514
  tag syslog
  <parse>
    @type none
    message_format auto
    with_priority true
  </parse>
</source>
###########
# OUTPUTS #
###########
<match syslog**>
  @type copy
  <store>
    @type file
    path /var/log/td-agent/syslog
    compress gzip
  </store>
  <store>
    @type file
    path /var/log/td-agent/rfc_syslog
    compress gzip
    <format>
      @type single_value
      message_key message
    </format>
    tag rfc_syslog
  </store>
  <store>
    @type forward
    <server>
      host 192.168.0.2
      port 514
    </server>
  </store>
</match>
<match rfc_syslog**>
  @type forward
  <server>
    host 192.168.0.3
    port 514
  </server>
</match>
When configured as above, there is no forwarding happening to 192.168.0.3 (my guess is that the tag is not getting applied).
As far as forwarding to 192.168.0.2 goes, I see the messages in the Kiwi Syslog Server, but they are in JSON (which is what I was trying to avoid for my legacy app).
Here is the output on the Kiwi Syslog App: kiwi-syslog-output
Update 2 [11/11/2022]: After applying the suggested config:
2022-11-11 09:36:59 -0600 [info]: Received graceful stop
2022-11-11 09:36:59 -0600 [info]: Received graceful stop
2022-11-11 09:36:59 -0600 [info]: #0 fluentd worker is now stopping worker=0
2022-11-11 09:36:59 -0600 [info]: #0 shutting down fluentd worker worker=0
2022-11-11 09:36:59 -0600 [info]: #0 shutting down input plugin type=:syslog plugin_id="object:7e4"
2022-11-11 09:36:59 -0600 [info]: #0 shutting down output plugin type=:copy plugin_id="object:780"
2022-11-11 09:36:59 -0600 [info]: #0 shutting down output plugin type=:stdout plugin_id="object:7bc"
2022-11-11 09:37:15 -0600 [info]: #0 shutting down output plugin type=:forward plugin_id="object:794"
2022-11-11 09:37:16 -0600 [info]: Worker 0 finished with status 0
2022-11-11 09:49:03 -0600 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-elasticsearch' version '5.1.4'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-kafka' version '0.17.3'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-multi-format-parser' version '1.0.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-remote_syslog' version '1.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-s3' version '1.6.1'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-sd-dns' version '0.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-splunk-hec' version '1.2.10'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-syslog_rfc5424' version '0.8.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-systemd' version '1.0.5'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-td' version '1.1.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-utmpx' version '0.5.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluent-plugin-webhdfs' version '1.5.0'
2022-11-11 09:49:03 -0600 [info]: gem 'fluentd' version '1.14.4'
2022-11-11 09:49:03 -0600 [info]: gem 'fluentd' version '1.14.3'
2022-11-11 09:49:03 -0600 [info]: adding forwarding server '192.168.0.2:514' host="192.168.0.2" port=514 weight=60 plugin_id="object:794"
2022-11-11 09:49:03 -0600 [info]: using configuration file: <ROOT>
<system>
  process_name "aggregator1"
</system>
<source>
  @type syslog
  bind "0.0.0.0"
  port 514
  tag "syslog"
  <transport udp>
  </transport>
  <parse>
    @type "none"
    message_format auto
    with_priority true
  </parse>
</source>
<match syslog**>
  @type copy
  <store>
    @type "forward"
    <server>
      host "192.168.0.2"
      port 514
    </server>
  </store>
  <store>
    @type "stdout"
  </store>
</match>
</ROOT>
2022-11-11 09:49:03 -0600 [info]: starting fluentd-1.14.4 pid=25424 ruby="2.7.5"
2022-11-11 09:49:03 -0600 [info]: spawn command to main: cmdline=["/opt/td-agent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/td-agent/bin/fluentd", "--log", "/var/log/td-agent/td-agent.log", "--daemon", "/var/run/td-agent/td-agent.pid", "--under-supervisor"]
2022-11-11 09:49:04 -0600 [info]: adding match pattern="syslog**" type="copy"
2022-11-11 09:49:04 -0600 [info]: #0 adding forwarding server '192.168.0.2:514' host="192.168.0.2" port=514 weight=60 plugin_id="object:794"
2022-11-11 09:49:04 -0600 [info]: adding source type="syslog"
2022-11-11 09:49:04 -0600 [warn]: parameter 'message_format' in <parse>
#type "none"
message_format auto
with_priority true
</parse> is not used.
2022-11-11 09:49:04 -0600 [info]: #0 starting fluentd worker pid=25440 ppid=25437 worker=0
2022-11-11 09:49:04 -0600 [info]: #0 listening syslog socket on 0.0.0.0:514 with udp
2022-11-11 09:49:04 -0600 [info]: #0 fluentd worker is now running worker=0
2022-11-11 09:49:04.682972925 -0600 syslog.auth.notice: {"message":"date=2022-11-11 time=15:49:04 devname=\"fg101.lab.local\" devid=\"FG101\" logid=\"0000000013\" type=\"traffic\" subtype=\"forward\" level=\"notice\" vd=\"vdom1\" eventtime=1668181744 srcip=10.1.100.155 srcport=40772 srcintf=\"port12\" srcintfrole=\"undefined\" dstip=35.197.51.42 dstport=443 dstintf=\"port11\" dstintfrole=\"undefined\" poluuid=\"707a0d88-c972-51e7-bbc7-4d421660557b\" sessionid=8058 proto=6 action=\"close\" policyid=1 policytype=\"policy\" policymode=\"learn\" service=\"HTTPS\" dstcountry=\"United States\" srccountry=\"Reserved\" trandisp=\"snat\" transip=172.16.200.2 transport=40772 duration=180 sentbyte=82 rcvdbyte=151 sentpkt=1 rcvdpkt=1 appcat=\"unscanned\""}
2022-11-11 09:49:04.683460611 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.407Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1051289] [Originator#6876 sub=Proxy Req 87086] Resolved endpoint : [N7Vmacore4Http16LocalServiceSpecE:0x000000fa0ed298d0] _serverNamespace = /sdk action = Allow _port = 8307"}
2022-11-11 09:49:04.683737270 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.408Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1051277] [Originator#6876 sub=Proxy Req 87086] Connected to localhost:8307 (/sdk) over <io_obj p:0x000000f9cc153648, h:18, <TCP '127.0.0.1 : 59272'>, <TCP '127.0.0.1 : 8307'>>"}
2022-11-11 09:49:04.683950628 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.410Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1082351] [Originator#6876 sub=Proxy Req 87086] The client closed the stream, not unexpectedly."}
2022-11-11 09:49:04.684235085 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.422Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1051291] [Originator#6876 sub=Proxy Req 87087] New proxy client <SSL(<io_obj p:0x000000fa0ea0bff8, h:17, <TCP '10.1.233.128 : 443'>, <TCP '10.0.0.250 : 46140'>>)>"}
2022-11-11 09:49:04.684453505 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.423Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1287838] [Originator#6876 sub=Proxy Req 87087] Resolved endpoint : [N7Vmacore4Http16LocalServiceSpecE:0x000000fa0ed298d0] _serverNamespace = /sdk action = Allow _port = 8307"}
2022-11-11 09:49:04.684749571 -0600 syslog.local4.debug: {"message":"2022-11-11T15:49:04.423Z esx01.lab.local Rhttpproxy: verbose rhttpproxy[1051278] [Originator#6876 sub=Proxy Req 87087] Connected to localhost:8307 (/sdk) over <io_obj p:0x000000f9cc153648, h:18, <TCP '127.0.0.1 : 51121'>, <TCP '127.0.0.1 : 8307'>>"}
2022-11-11 09:49:10.521901882 -0600 syslog.auth.info: {"message":"Nov 11 09:49:10 icw-pc01.lab MSWinEventLog\t2\tSecurity\t744984\tFri Nov 11 09:49:10 2022\t6417\tMicrosoft-Windows-Security-Auditing\tN/A\tN/A\tSuccess Audit\ticw-pc01.lab\tSystem Integrity\t\tThe FIPS mode crypto selftests succeeded. Process ID: 0x17cc Process Name: C:\\Python27\\python.exe\t717211 "}
As stated in my response above (on Nov 29, 2022), I was missing some dependencies for the Remote Syslog plugin.
Once the dependencies were installed, I was able to get the Remote Syslog plugin to work as desired (w/ the extra text as outlined in my comment above).
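The out_forward plugin speaks Fluentd's own forward protocol rather than RFC syslog, which likely explains why Kiwi displayed the records as JSON. Here is a rough sketch of what a fluent-plugin-remote_syslog <store> might look like instead (the host is hypothetical and the parameter names should be checked against the installed plugin version):
<store>
  @type remote_syslog
  host 192.168.0.3        # hypothetical syslog target; replace with your server
  port 514
  protocol udp            # assumed option name; verify against the plugin's README
  <format>
    @type single_value
    message_key message   # send only the raw line, as with the rfc_syslog file output
  </format>
</store>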

fluentd error hostname is tested built-in placeholder(s) but there is no valid placeholder

I am trying to set up an EFK stack, and our environment is as below:
(a) Elasticsearch and Kibana run on a Windows machine
(b) FluentD runs on CentOS
I am able to set up EFK, send logs to Elasticsearch, and view the data in Kibana successfully with the default fluent.conf.
However, I would like to create indexes using the format ${record['kubernetes']['pod_name']}, and I created a ConfigMap as follows:
#include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf"
#include "#{ENV['FLUENTD_PROMETHEUS_CONF'] || 'prometheus'}.conf"
##include kubernetes.conf
##include kubernetes/*.conf
<match fluent.**>
# this tells fluentd to not output its log on stdout
#type null
</match>
# here we read the logs from Docker's containers and parse them
<source>
#type tail
path /var/log/containers/*.log
pos_file /var/log/containers.log.pos
tag kubernetes.*
read_from_head true
<parse>
#type json
time_format %Y-%m-%dT%H:%M:%S.%NZ
</parse>
</source>
# we use kubernetes metadata plugin to add metadatas to the log
<filter kubernetes.**>
#type kubernetes_metadata
</filter>
<match kubernetes.var.log.containers.**kube-logging**.log>
#type null
</match>
<match kubernetes.var.log.containers.**kube-system**.log>
#type null
</match>
<match kubernetes.var.log.containers.**monitoring**.log>
#type null
</match>
<match kubernetes.var.log.containers.**infra**.log>
#type null
</match>
# we send the logs to Elasticsearch
<match kubernetes.**>
#type elasticsearch_dynamic
#id out_es
#log_level debug
include_tag_key true
host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
reload_connections true
logstash_format true
logstash_prefix ${record['kubernetes']['pod_name']}
<buffer>
#type file
path /var/log/fluentd-buffers/kubernetes.system.buffer
flush_mode interval
retry_type exponential_backoff
flush_thread_count 2
flush_interval 5s
retry_forever true
retry_max_interval 30
chunk_limit_size 2M
queue_limit_length 32
overflow_action block
</buffer>
</match>
However, with my own fluent.conf it failed with the following error messages.
Error messages:
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'host 192.xxx.xx.xxx' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'host: 192.xxx.xx.xxx' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'index_name fluentd' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'index_name: fluentd' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'template_name ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'template_name: ' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'logstash_prefix index-%Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_prefix: index-%Y.%m.%d' has timestamp placeholders, but chunk key 'time' is not configured
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'logstash_prefix index-%Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_prefix: index-%Y.%m.%d' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' has timestamp placeholders, but chunk key 'time' is not configured
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'logstash_dateformat %Y.%m.%d' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'logstash_dateformat: %Y.%m.%d' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'deflector_alias ' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'deflector_alias: ' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'application_name default' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'application_name: default' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] 'ilm_policy_id logstash-policy' is tested built-in placeholder(s) but there is no valid placeholder(s). error: Parameter 'ilm_policy_id: logstash-policy' doesn't have tag placeholder
2022-03-03 11:23:59 +0000 [debug]: #0 [out_es] Need substitution: false
I have tried suggestions found by googling; however, I am not sure what's missing in the config file. Any help is highly appreciated.
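One way to read these messages (a hedged sketch rather than a verified fix): with fluent-plugin-elasticsearch, placeholders such as a record field or %Y.%m.%d in logstash_prefix only resolve when the corresponding chunk keys (tag, time, or a record accessor like $.kubernetes.pod_name) are declared in the <buffer> section. Something along these lines, using the plain elasticsearch output instead of elasticsearch_dynamic, with the exact placeholder and chunk-key names to be verified against the plugin docs:
<match kubernetes.**>
  @type elasticsearch
  @id out_es
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  logstash_format true
  logstash_prefix ${$.kubernetes.pod_name}
  <buffer tag, time, $.kubernetes.pod_name>  # chunk keys make the placeholders above resolvable
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    timekey 1d            # required when "time" is a chunk key
    flush_interval 5s
  </buffer>
</match>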

How to fix a fluentd (td-agent) buffer problem?

I have a Fluentd, Elasticsearch, and Graylog setup, and I'm getting the error below intermittently in the td-agent log:
[warn]: temporarily failed to flush the buffer. next_retry=2019-01-27
19:00:14 -0500 error_class="ArgumentError" error="Data too big (189382
bytes), would create more than 128 chunks!"
plugin_id="object:3fee25617fbc"
Because of this, buffer memory increases and td-agent fails to send messages to Graylog.
I have tried setting buffer_chunk_limit to 8m and flush_interval to 5s.
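The "would create more than 128 chunks" error means a single emitted payload would have to be split across more than 128 chunks of buffer_chunk_limit size, which typically means the effective chunk limit on the failing output is very small (here under roughly 1.5 KB, since 189382 / 128 ≈ 1480 bytes) or the setting is not being applied to that output. A rough sketch of a v1-style <buffer> section with an explicit chunk size and overall cap (the gelf output name, host, and sizes are illustrative assumptions, not recommendations):
<match graylog.**>
  @type gelf                   # hypothetical Graylog output plugin; use whatever output you actually have
  host graylog.example.com     # hypothetical
  port 12201                   # hypothetical
  <buffer>
    @type file
    path /var/log/td-agent/buffer/graylog
    chunk_limit_size 8m        # v1 name for buffer_chunk_limit
    flush_interval 5s
    total_limit_size 512m      # cap how much the buffer can grow
  </buffer>
</match>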

Log management on Docker

I want to send STDOUT logs from a Docker container to Fluentd.
But when one container outputs both access logs and error logs, the logs are mixed.
Example:
# rails access log
2017-04-07 12:10:01 +0000 6a51e389e724: {"log":"I, [2017-04-07T12:10:01.825923 #7] INFO -- : Started GET \"/users/new\" for 172.21.0.1 at 2017-04-07 12:10:01 +0000","container_id":"6a51e389e724c67be4e714402b69da192db4a304cbfdf638594de6cff9774c23","container_name":"/app","source":"stdout"}
# rails error log
2017-04-07 12:10:01 +0000 6a51e389e724: {"container_id":"6a51e389e724c67be4e714402b69da192db4a304cbfdf638594de6cff9774c23","container_name":"/app","source":"stdout","log":"E, [2017-04-07T12:10:01.830039 #7] ERROR -- : Invoke logger error"}
# rails access log
2017-04-07 12:10:03 +0000 6a51e389e724: {"log":"I, [2017-04-07T12:10:01.825923 #7] INFO -- : Started POST \"/users/create\" for 172.21.0.1 at 2017-04-07 12:10:01 +0000","container_id":"6a51e389e724c67be4e714402b69da192db4a304cbfdf638594de6cff9774c23","container_name":"/app","source":"stdout"}
Can I add a label for each log type?
Please tell me if you have a good solution.
Thank you and best regards.
Since your app's access and error logs are both sent to the container's STDOUT, you have no way to separate them at the logging-driver level. As a solution, you can send the access log to STDOUT and the error log to STDERR of the container, and then differentiate the logs by the "source" field of each JSON message (this can be done if you connect Fluentd to Elasticsearch + Kibana).
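Building on that answer, if access goes to STDOUT and errors to STDERR, here is a sketch of re-tagging by the source field with fluent-plugin-rewrite-tag-filter (the tags docker.**, app.access, and app.error are hypothetical; pick whatever matches your routing):
<match docker.**>
  @type rewrite_tag_filter
  <rule>
    key source            # "stdout" or "stderr", set by the Docker fluentd logging driver
    pattern /^stderr$/
    tag app.error
  </rule>
  <rule>
    key source
    pattern /^stdout$/
    tag app.access
  </rule>
</match>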

Failed to bind to: spark-master, using a remote cluster with two workers

I managed to get everything working with the local master and two remote workers. Now I want to connect to a remote master that has the same remote workers. I have tried different combinations of settings within /etc/hosts and other recommendations on the Internet, but NOTHING worked.
The Main class is:
public static void main(String[] args) {
    ScalaInterface sInterface = new ScalaInterface(CHUNK_SIZE,
            "awsAccessKeyId", "awsSecretAccessKey");
    SparkConf conf = new SparkConf().setAppName("POC_JAVA_AND_SPARK")
            .setMaster("spark://spark-master:7077");
    org.apache.spark.SparkContext sc = new org.apache.spark.SparkContext(conf);
    sInterface.enableS3Connection(sc);
    org.apache.spark.rdd.RDD<Tuple2<Path, Text>> fileAndLine = (RDD<Tuple2<Path, Text>>) sInterface.getMappedRDD(sc, "s3n://somebucket/");
    org.apache.spark.rdd.RDD<String> pInfo = (RDD<String>) sInterface.mapPartitionsWithIndex(fileAndLine);
    JavaRDD<String> pInfoJ = pInfo.toJavaRDD();
    List<String> result = pInfoJ.collect();
    String miscInfo = sInterface.getMiscInfo(sc, pInfo);
    System.out.println(miscInfo);
}
It fails at:
List<String> result = pInfoJ.collect();
The error I am getting is:
1354 [sparkDriver-akka.actor.default-dispatcher-3] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
1354 [main] WARN org.apache.spark.util.Utils - Service 'sparkDriver' could not bind on port 0. Attempting port 1.
1355 [main] DEBUG org.apache.spark.util.AkkaUtils - In createActorSystem, requireCookie is: off
1363 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1364 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
1364 [sparkDriver-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1367 [sparkDriver-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
1370 [sparkDriver-akka.actor.default-dispatcher-6] INFO Remoting - Starting remoting
1380 [sparkDriver-akka.actor.default-dispatcher-4] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
Exception in thread "main" 1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
java.net.BindException: Failed to bind to: spark-master/192.168.0.191:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
1383 [sparkDriver-akka.actor.default-dispatcher-7] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1385 [delete Spark temp dirs] DEBUG org.apache.spark.util.Utils - Shutdown hook called
Thank you kindly for your help!
Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.
The BindException in your logs complains about the IP address 192.168.0.191. I assume that resolves to the hostname of your machine and it's not the actual IP address that your network interface is using. It should work fine once you fix that.
I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.
hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com
After that, Spark no longer worked and showed the error below:
16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.
I solved it by setting SPARK_LOCAL_IP as below:
export SPARK_LOCAL_IP="localhost"
then just launched the Spark shell as below:
$SPARK_HOME/bin/spark-shell
Possibly your master is running on a non-default port. Can you post your submit command?
Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
