I have a Cassandra instance running in a Docker container on an Intel NUC. From my Mac, I can SSH onto this machine and connect with cqlsh fine. However, when I try to connect using DBeaver Enterprise I get:
com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: myhostname/xx.xx.xx.xx:9042
(com.datastax.driver.core.exceptions.TransportException:
[myhostname/xx.xx.xx.xx] Cannot connect))
This occurs with and without SSL enabled. I also have an SSH tunnel successfully configured, but I have tried with it disabled as well.
On my NUC, I appear to have three network interfaces: the usual enp0s25, plus docker0 and br-65e6f96b0e4f. I have tried the IP addresses associated with those two Docker interfaces, which seem to get further (they pause for a few seconds) but ultimately throw the same exception.
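In case it's relevant, this is how one can check what Cassandra is actually bound to on the NUC (the container name and config path are illustrative and may differ in your setup):
# Confirm 9042 is published to the host rather than only to the docker bridge
docker ps --format '{{.Names}}: {{.Ports}}'
# See what address Cassandra listens on inside the container
docker exec cassandra grep -E '^(listen_address|rpc_address|broadcast_rpc_address):' /etc/cassandra/cassandra.yaml
# See which local address actually has a listener on 9042
ss -tlnp | grep 9042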
Versions:
Cassandra - 2.2.8
CQLSH - 5.0.1
DBeaver Enterprise - 3.7.8
Looking in the debug log I see the following messages (SSH tunnel enabled with public key auth):
2016-11-10 15:45:45.616 - Connect with 'cql://xx.xx.xx.xx:9042/mwih_ks' (cql-1582dfdf6b1-3209dg9f13234351a)
2016-11-10 15:45:45.622 - Instantiate SSH tunnel
2016-11-10 15:45:45.623 - Connect to tunnel host
2016-11-10 15:45:45.623 - SSH INFO: Connecting to xx.xx.xx.xx port 22
2016-11-10 15:45:45.632 - SSH INFO: Connection established
...
2016-11-10 15:45:45.759 - SSH INFO: Host 'xx.xx.xx.xx' is known and matches the RSA host key
2016-11-10 15:45:45.760 - SSH INFO: SSH_MSG_NEWKEYS sent
2016-11-10 15:45:45.760 - SSH INFO: SSH_MSG_NEWKEYS received
2016-11-10 15:45:45.761 - SSH INFO: SSH_MSG_SERVICE_REQUEST sent
2016-11-10 15:45:45.761 - SSH INFO: SSH_MSG_SERVICE_ACCEPT received
2016-11-10 15:45:45.763 - SSH INFO: Authentications that can continue: publickey
2016-11-10 15:45:45.763 - SSH INFO: Next authentication method: publickey
2016-11-10 15:45:45.806 - SSH INFO: Authentication succeeded (publickey).
2016-11-10 15:45:45.814 - Connection failed (cql-1582dfdf6b1-3209dg9f13234351a)
2016-11-10 15:53:18.364 - org.jkiss.dbeaver.model.exec.DBCException: All host(s) tried for query failed (tried: /xx.xx.xx.xx:9042 (com.datastax.driver.core.exceptions.TransportException: [/xx.xx.xx.xx] Cannot connect))
org.jkiss.dbeaver.model.exec.DBCException: All host(s) tried for query failed (tried: /192.168.2.2:9042 (com.datastax.driver.core.exceptions.TransportException: [/xx.xx.xx.xx] Cannot connect))
at com.jkiss.dbeaver.ent.cassandra.model.CasExecutionContext.connect(CasExecutionContext.java:56)
at com.jkiss.dbeaver.ent.cassandra.model.CasDataSource.<init>(CasDataSource.java:103)
at com.jkiss.dbeaver.ent.cassandra.CasDataSourceProvider.openDataSource(CasDataSourceProvider.java:65)
at org.jkiss.dbeaver.registry.DataSourceDescriptor.connect(DataSourceDescriptor.java:644)
at org.jkiss.dbeaver.runtime.jobs.ConnectJob.run(ConnectJob.java:74)
at org.jkiss.dbeaver.ui.dialogs.connection.ConnectionWizard$ConnectionTester.run(ConnectionWizard.java:223)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:103)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /xx.xx.xx.xx:9042 (com.datastax.driver.core.exceptions.TransportException: [/xx.xx.xx.xx] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1382)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connect(Cluster.java:283)
at com.jkiss.dbeaver.ent.cassandra.model.CasExecutionContext.reconnect(CasExecutionContext.java:112)
at com.jkiss.dbeaver.ent.cassandra.model.CasExecutionContext.connect(CasExecutionContext.java:52)
... 7 more
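For completeness, raw reachability of the CQL port from the Mac can be tested independently of DBeaver (addresses are placeholders, as above):
# Test the TCP connection to the CQL port directly
nc -vz xx.xx.xx.xx 9042
# Or forward 9042 over SSH by hand and point DBeaver at localhost:9042
ssh -N -L 9042:127.0.0.1:9042 user@xx.xx.xx.xx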
I am adding a new node in Jenkins, launched over SSH.
Configured Node:
Launch method: Launch agents via SSH
Host: host
Credentials: added a new one by specifying a username and private key
Host Key Verification Strategy: Not verifying (The error does not differ when choosing a different value)
When I try to connect, I get an error:
[08/24/22 14:39:14] [SSH] Opening SSH connection to host:22.
[08/24/22 14:39:14] [SSH] WARNING: SSH Host Keys are not being verified. Man-in-the-middle attacks may be possible against this connection.
ERROR: Server rejected the 1 private key(s) for cred_name (credentialId:cred_name/method:publickey)
ERROR: Failed to authenticate as cred_name with credential=cred_name
java.io.IOException: Publickey authentication failed.
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePublicKey(AuthenticationManager.java:349)
at com.trilead.ssh2.Connection.authenticateWithPublicKey(Connection.java:472)
at com.cloudbees.jenkins.plugins.sshcredentials.impl.TrileadSSHPublicKeyAuthenticator.doAuthenticate(TrileadSSHPublicKeyAuthenticator.java:110)
at com.cloudbees.jenkins.plugins.sshcredentials.SSHAuthenticator.authenticate(SSHAuthenticator.java:431)
at com.cloudbees.jenkins.plugins.sshcredentials.SSHAuthenticator.authenticate(SSHAuthenticator.java:468)
at hudson.plugins.sshslaves.SSHLauncher.openConnection(SSHLauncher.java:881)
at hudson.plugins.sshslaves.SSHLauncher.lambda$launch$0(SSHLauncher.java:434)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.io.IOException: Could not generate signature
at com.trilead.ssh2.signature.KeyAlgorithm.generateSignature(KeyAlgorithm.java:43)
at com.trilead.ssh2.auth.AuthenticationManager.authenticatePublicKey(AuthenticationManager.java:316)
... 10 more
Caused by: java.security.SignatureException: Could not sign data
at java.base/sun.security.rsa.RSASignature.engineSign(RSASignature.java:196)
at java.base/java.security.Signature$Delegate.engineSign(Signature.java:1423)
at java.base/java.security.Signature.sign(Signature.java:712)
at com.trilead.ssh2.signature.KeyAlgorithm.generateSignature(KeyAlgorithm.java:41)
... 11 more
Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
at java.base/sun.security.rsa.RSACore.crtCrypt(RSACore.java:209)
at java.base/sun.security.rsa.RSACore.rsa(RSACore.java:130)
at java.base/sun.security.rsa.RSASignature.engineSign(RSASignature.java:193)
... 14 more
[08/24/22 14:39:14] [SSH] Authentication failed.
From the machine on which Jenkins is installed, I can connect to a remote one using:
ssh name@host -p 22
All the solutions I found for this issue involve fixing things from the console as the jenkins user.
But how can I solve it through the Jenkins UI? The connection to a previously created node still works; the problem is only with this new one. Maybe I specified something incorrectly.
The private key was created using:
ssh-keygen -t rsa
I took the private key from id_rsa.
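For what it's worth, the key can also be tested outside the UI by running SSH as the user Jenkins runs under (the paths below are illustrative):
# On the Jenkins controller, try the exact key as the jenkins user
sudo -u jenkins ssh -i /var/lib/jenkins/.ssh/id_rsa -o IdentitiesOnly=yes name@host -p 22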
I had the same issue. I think it's a bug.
I changed the type of ssh-key to ed25519 and it worked.
ssh-keygen -t ed25519
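If you need to stay on RSA, another thing worth trying (an assumption on my part, since this BadPaddingException is also reported for RSA keys stored in the newer OpenSSH container format, which older trilead-ssh2 code mishandles) is rewriting the key in the classic PEM format:
# Back up, then rewrite the private key in PEM format in place
# (-N '' keeps an empty passphrase; drop it to be prompted)
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.bak
ssh-keygen -p -f ~/.ssh/id_rsa -m PEM -N ''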
I am getting the error below while running Packer with the vsphere-iso builder.
Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password keyboard-interactive], no supported methods remain
config.json
"communicator": "ssh",
"ssh_username": "{{user `ssh_username`}}",
"ssh_password": "{{user `ssh_password`}}",
"ssh_timeout": "30m",
The username and password come from Jenkins at run time, and the same values have been updated in autounattend.xml. If I hard-code the credentials in the config.json file, it works fine, so I don't know what the issue is.
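One way to rule out the Jenkins hand-off is to pass the values to Packer explicitly as user variables (the environment variable names are illustrative; the var names match config.json):
# In the Jenkins shell step: feed credentials through as Packer user variables
packer build \
  -var "ssh_username=${SSH_USERNAME}" \
  -var "ssh_password=${SSH_PASSWORD}" \
  config.json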
packer debug log
2022/05/09 10:16:20 packer.exe plugin: [DEBUG] Detected authentication error. Increasing handshake attempts.
2022/05/09 10:16:27 packer.exe plugin: [INFO] Attempting SSH connection to 172.16.112.59:22...
2022/05/09 10:16:27 packer.exe plugin: [DEBUG] reconnecting to TCP connection for SSH
2022/05/09 10:16:27 packer.exe plugin: [DEBUG] handshaking with SSH
2022/05/09 10:16:28 packer.exe plugin: [DEBUG] SSH handshake err: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password keyboard-
I got the solution as below:
In the autounattend.xml file the password was given with PlainText set to true, which needs to be changed to false.
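As far as I understand the unattend format (worth verifying against the Microsoft documentation), with PlainText set to false the Value element must hold the base64 of the UTF-16LE password with the element name appended, e.g. for AdministratorPassword:
# Illustrative: produce the encoded value for <AdministratorPassword>
printf 'MyP@ssw0rdAdministratorPassword' | iconv -f UTF-8 -t UTF-16LE | base64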
I'm running iotedge on kubernetes.
The K8S cluster is a local cluster set up largely using the "Kubernetes the hard way" method, with some modifications.
I did manage to get things working on one installation. However, I'm now getting this on another installation. The initial installation works fine, but after shutting down a machine to simulate a hardware failure, the pod gets recreated and starts to show this error again. This happens EVEN if the node that is shut down is NOT the one iotedged is running on.
Environment
3 Nodes running Ubuntu 20.04 LTS
Two networks on each node: one for the internet, one for an internal network. K8S is set up using the internal static IP addresses
HAProxy/Keepalived for HA without a load balancer, running on a Virtual IP address
Multus CNI for attaching pods to additional networks
CoreDNS
Troubleshooting
Confirmed that CoreDNS seems to be functioning fine and is able to resolve internal and external addresses (a reproducible check is sketched after this list)
Remaining nodes are able to ping pods on other nodes
Deleting the iotedged pod and allowing k8s to recreate it works, but then edgeAgent and edgeHub show errors until I delete/recreate them as well
Re-ran the entire k8s installation. The initial installation works fine, but simulating machine failure continues to be problematic.
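For reference, the DNS check mentioned above can be reproduced with a throwaway pod (pod name and image are illustrative):
# Run a disposable pod and resolve an in-cluster service name
kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default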
Kubernetes Versions:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
iotedged error:
<6>2021-07-09T22:00:02Z [INFO] - Starting Azure IoT Edge Security Daemon - Kubernetes mode
<6>2021-07-09T22:00:02Z [INFO] - Version - 1.1.3
<6>2021-07-09T22:00:02Z [INFO] - Using config file: /etc/iotedged/config.yaml
<6>2021-07-09T22:00:02Z [INFO] - Configuring /var/lib/iotedge as the home directory.
<6>2021-07-09T22:00:02Z [INFO] - Configuring certificates...
<6>2021-07-09T22:00:02Z [INFO] - Transparent gateway certificates not found, operating in quick start mode...
<6>2021-07-09T22:00:02Z [INFO] - Finished configuring provisioning environment variables and certificates.
<6>2021-07-09T22:00:02Z [INFO] - Initializing hsm...
<6>2021-07-09T22:00:02Z [INFO] - Finished initializing hsm.
<6>2021-07-09T22:00:02Z [INFO] - Provisioning edge device...
<6>2021-07-09T22:00:02Z [INFO] - Starting provisioning edge device via manual mode using a device connection string...
<6>2021-07-09T22:00:02Z [INFO] - Manually provisioning device "********" in hub "********.azure-devices.net"
<6>2021-07-09T22:00:02Z [INFO] - Finished provisioning edge device.
<6>2021-07-09T22:00:02Z [INFO] - Initializing the module runtime...
<6>2021-07-09T22:00:02Z [INFO] - Attempting to use config from /home/edgeletuser/.kube/config file.
<6>2021-07-09T22:00:02Z [INFO] - Using in-cluster config
<3>2021-07-09T22:00:34Z [ERR!] - The daemon could not start up successfully: Could not initialize module runtime
<3>2021-07-09T22:00:34Z [ERR!] - caused by: Could not initialize kubernetes module runtime
<3>2021-07-09T22:00:34Z [ERR!] - caused by: HTTP response error: SelfSubjectAccessReviewCreate
<3>2021-07-09T22:00:34Z [ERR!] - caused by: Hyper HTTP error
<3>2021-07-09T22:00:34Z [ERR!] - caused by: error trying to connect: Connection timed out (os error 110)
<6>2021-07-09T22:00:02Z [INFO] (/project/hsm-sys/azure-iot-hsm-c/src/hsm_log.c:log_init:41) Initialized logging
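Since the final cause is a connection timeout from the pod to the API server, one useful check (pod name and image are illustrative) is whether any pod on the affected node can reach the cluster's API service VIP at all:
# From a disposable pod, hit the API server through the in-cluster service
kubectl run apitest --rm -it --image=curlimages/curl --restart=Never --command -- \
  curl -k -m 10 https://kubernetes.default.svc:443/version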
edgeHub Logs after recreating iotedged:
2021-08-18 19:05:40 Starting Edge Hub
2021-08-18 19:05:40.481 +00:00 Edge Hub Main()
<7> 2021-08-18 19:05:40.609 +00:00 [DBG] [Microsoft.Azure.Devices.Edge.Util.Edged.WorkloadClient] - Making a Http call to http://localhost:35001/ to CreateServerCertificateAsync
<7> 2021-08-18 19:05:40.912 +00:00 [DBG] [Microsoft.Azure.Devices.Edge.Util.Edged.WorkloadClient] - Error when getting an Http response from http://localhost:35001/ for CreateServerCertificateAsync
HTTP Response:
{"message":"Module not found"}
Microsoft.Azure.Devices.Edge.Util.Edged.Version_2019_01_30.GeneratedCode.IoTEdgedException`1[Microsoft.Azure.Devices.Edge.Util.Edged.Version_2019_01_30.GeneratedCode.ErrorResponse]: Not Found
at Microsoft.Azure.Devices.Edge.Util.Edged.Version_2019_01_30.GeneratedCode.HttpWorkloadClient.CreateServerCertificateAsync(String api_version, String name, String genid, ServerCertificateRequest request, CancellationToken cancellationToken) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/edged/version_2019_01_30/generatedCode/HttpWorkloadClient.cs:line 624
at Microsoft.Azure.Devices.Edge.Util.TaskEx.TimeoutAfter[T](Task`1 task, TimeSpan timeout) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/TaskEx.cs:line 126
at Microsoft.Azure.Devices.Edge.Util.Edged.WorkloadClientVersioned.Execute[T](Func`1 func, String operation) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/edged/WorkloadClientVersioned.cs:line 59
Unhandled exception. System.AggregateException: One or more errors occurred. (Error calling CreateServerCertificateAsync: Module not found)
---> Microsoft.Azure.Devices.Edge.Util.Edged.WorkloadCommunicationException- Message:Error calling CreateServerCertificateAsync: Module not found, StatusCode:404, at: at Microsoft.Azure.Devices.Edge.Util.Edged.Version_2019_01_30.WorkloadClient.HandleException(Exception ex, String operation) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/edged/version_2019_01_30/WorkloadClient.cs:line 109
at Microsoft.Azure.Devices.Edge.Util.Edged.WorkloadClientVersioned.Execute[T](Func`1 func, String operation) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/edged/WorkloadClientVersioned.cs:line 77
at Microsoft.Azure.Devices.Edge.Util.Edged.Version_2019_01_30.WorkloadClient.CreateServerCertificateAsync(String hostname, DateTime expiration) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/edged/version_2019_01_30/WorkloadClient.cs:line 35
at Microsoft.Azure.Devices.Edge.Util.CertificateHelper.GetServerCertificatesFromEdgelet(Uri workloadUri, String workloadApiVersion, String workloadClientApiVersion, String moduleId, String moduleGenerationId, String edgeHubHostname, DateTime expiration) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/CertificateHelper.cs:line 260
at Microsoft.Azure.Devices.Edge.Hub.Service.EdgeHubCertificates.LoadAsync(IConfigurationRoot configuration, ILogger logger) in /home/vsts/work/1/s/edge-hub/src/Microsoft.Azure.Devices.Edge.Hub.Service/EdgeHubCertificates.cs:line 54
at Microsoft.Azure.Devices.Edge.Hub.Service.Program.MainAsync(IConfigurationRoot configuration) in /home/vsts/work/1/s/edge-hub/src/Microsoft.Azure.Devices.Edge.Hub.Service/Program.cs:line 54
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at System.Threading.Tasks.Task`1.get_Result()
at Microsoft.Azure.Devices.Edge.Hub.Service.Program.Main() in /home/vsts/work/1/s/edge-hub/src/Microsoft.Azure.Devices.Edge.Hub.Service/Program.cs:line 33
Are you still blocked? What troubleshooting steps have you tried so far? Did you check the Common issues and resolutions for Azure IoT Edge? Per the error messages "Transparent gateway certificates not found, operating in quick start mode" and "The daemon could not start up successfully: Could not initialize module runtime", it looks like the setup is not configured properly. Try restarting the server and check the transparent gateway setup. Please refer to the transparent gateway setup documentation and check whether you have missed anything.
I'm running Kubernetes (version 1.5.2) on AWS. I have installed helm using
helm init --node-selectors="nodeType=master"
forcing it to run on the master.
When I try to run helm list I get the following error:
Error: Get https://192.0.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: x509: certificate signed by unknown authority
The logs from the tiller container (the issue seems to be between tiller and the Kubernetes API):
E0219 08:15:12.546100 1 config.go:330] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
E0219 08:15:12.547957 1 config.go:330] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
[main] 2018/02/19 08:15:12 Starting Tiller v2.7.0 (tls=false)
[main] 2018/02/19 08:15:12 GRPC listening on :44134
[main] 2018/02/19 08:15:12 Probes listening on :44135
[main] 2018/02/19 08:15:12 Storage driver is ConfigMap
[main] 2018/02/19 08:15:12 Max history per release is 0
[storage] 2018/02/19 08:20:47 listing all releases with filter
[storage/driver] 2018/02/19 08:20:47 list: failed to list: Get https://192.0.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: x509: certificate signed by unknown authority
Is there a way to configure tiller to ignore the untrusted certificate?
It looks like your Kubernetes cluster isn't properly configured. Usually there is a CA certificate for every pod in /var/run/secrets/kubernetes.io/serviceaccount/ca.crt that allows pods to communicate with the API server.
The first two lines in your log show that no such file could be found:
Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory.
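A quick way to confirm this is to look inside the tiller pod itself (the label selector is illustrative; adjust it to however tiller was deployed):
# Find the tiller pod and check whether the service-account files are mounted
kubectl -n kube-system get pods -l app=helm,name=tiller
kubectl -n kube-system exec <tiller-pod> -- ls /var/run/secrets/kubernetes.io/serviceaccount/
# A healthy cluster shows ca.crt, namespace and token here. If ca.crt is missing,
# check that kube-controller-manager runs with --root-ca-file so the CA is
# included in generated service-account secrets.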
I managed to get everything working with the local master and two remote workers. Now, I want to connect to a remote master that has the same remote workers. I have tried different combinations of settings within /etc/hosts and other recommendations from the Internet, but NOTHING worked.
The Main class is:
public static void main(String[] args) {
    ScalaInterface sInterface = new ScalaInterface(CHUNK_SIZE,
            "awsAccessKeyId",
            "awsSecretAccessKey");

    SparkConf conf = new SparkConf().setAppName("POC_JAVA_AND_SPARK")
            .setMaster("spark://spark-master:7077");

    org.apache.spark.SparkContext sc = new org.apache.spark.SparkContext(conf);

    sInterface.enableS3Connection(sc);

    org.apache.spark.rdd.RDD<Tuple2<Path, Text>> fileAndLine = (RDD<Tuple2<Path, Text>>) sInterface.getMappedRDD(sc, "s3n://somebucket/");
    org.apache.spark.rdd.RDD<String> pInfo = (RDD<String>) sInterface.mapPartitionsWithIndex(fileAndLine);
    JavaRDD<String> pInfoJ = pInfo.toJavaRDD();

    List<String> result = pInfoJ.collect();

    String miscInfo = sInterface.getMiscInfo(sc, pInfo);
    System.out.println(miscInfo);
}
It fails at:
List<String> result = pInfoJ.collect();
The error I am getting is:
1354 [sparkDriver-akka.actor.default-dispatcher-3] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
1354 [main] WARN org.apache.spark.util.Utils - Service 'sparkDriver' could not bind on port 0. Attempting port 1.
1355 [main] DEBUG org.apache.spark.util.AkkaUtils - In createActorSystem, requireCookie is: off
1363 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1364 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
1364 [sparkDriver-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1367 [sparkDriver-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
1370 [sparkDriver-akka.actor.default-dispatcher-6] INFO Remoting - Starting remoting
1380 [sparkDriver-akka.actor.default-dispatcher-4] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
Exception in thread "main" 1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
java.net.BindException: Failed to bind to: spark-master/192.168.0.191:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
1383 [sparkDriver-akka.actor.default-dispatcher-7] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1385 [delete Spark temp dirs] DEBUG org.apache.spark.util.Utils - Shutdown hook called
Thank you kindly for your help!
Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.
The BindException in your logs complains about the IP address 192.168.0.191. I assume that's what your machine's hostname resolves to, and that it's not the actual IP address your network interface is using. It should work fine once you fix that.
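For example, an /etc/hosts along these lines (addresses are illustrative; take the real one from ip addr or hostname -I) avoids the bad binding:
# /etc/hosts: map the hostname to the address the interface actually uses
127.0.0.1      localhost
192.168.0.10   spark-master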
I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.
hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com
After that my spark could not work and showed error as below:
16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.
I solved it by setting SPARK_LOCAL_IP as below:
export SPARK_LOCAL_IP="localhost"
then just launched the Spark shell as below:
$SPARK_HOME/bin/spark-shell
Possibly your master is running on a non-default port. Can you post your submit command?
Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
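For reference, a typical submission against a standalone master looks like this (jar, class and port are illustrative; the port must match what the master's web UI reports):
# Submit with an explicit standalone master URL
spark-submit \
  --master spark://spark-master:7077 \
  --class com.example.Main \
  target/app.jar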