Confused about Nitrogen listen IP address - erlang

I am running Nitrogen 2.0.X on Windows 7 Home Premium (HP Pavilion Entertainment PC laptop).
Nitrogen starts on inets, and I have failed to change or dictate the IP address of the webserver.
Once it starts, the shell output below tells me to point my browser at http://localhost:8000:
erl -make
Starting Nitrogen on Inets (http://localhost:8000)...
Eshell V5.8.4 (abort with ^G)
Hitting that URL in almost all available browsers shows that the page could not be found. When I ask the emulator about the ports, this is its output:
(motv#josh.ekampus.internal)1> inet:i().
Port Module Recv Sent Owner Local Address Foreign Address State
3109 inet6_tcp 0 0 *:8000 *:* ACCEPTING
618 inet_tcp 0 0 *:9543 *:* ACCEPTING
637 inet_tcp 4 19 localhost:9544 localhost:4369 CONNECTED
Port Module Recv Sent Owner Local Address Foreign Address State
ok
(motv#josh.ekampus.internal)2>
I have a strong suspicion that inet6_tcp means it is using IPv6 while inet_tcp means IPv4, though I am not very sure about this. But all in all, I cannot connect to my Nitrogen. Below are the running applications:
(motv#josh.ekampus.internal)2> application:which_applications().
[{quickstart,"Nitrogen Quickstart",[]},
{inets,"INETS CXC 138 49","5.6"},
{nprocreg,"NProcReg - Simple Erlang Process Registry.",
"0.1"},
{stdlib,"ERTS CXC 138 10","1.17.4"},
{kernel,"ERTS CXC 138 10","2.14.4"}]
(motv#josh.ekampus.internal)3>
Can someone explain why I cannot reach my local Nitrogen framework by just hitting http://localhost:8000 in the browser, given the observations above? And how can I connect to it from my browser?

Some guesses:
Did you try http://127.0.0.1:8000?
If that doesn't work, can you start Erlang with forced IPv4 support (I think):
-proto_dist inet_tcp
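If it really is an IPv6-only listener, you can also try forcing IPv4 on the inets side. A minimal sketch, assuming the stock inets httpd options (the server_root/document_root paths are placeholders to adapt to the Nitrogen quickstart layout):
inets:start(),
{ok, _Pid} = inets:start(httpd, [
    {port, 8000},
    {server_name, "nitrogen"},
    {server_root, "."},          %% placeholder path
    {document_root, "."},        %% placeholder path
    {bind_address, {127,0,0,1}}, %% bind the IPv4 loopback explicitly
    {ipfamily, inet}             %% inet = IPv4, inet6 = IPv6
]).
(Nitrogen's quickstart normally configures httpd for you; the point here is just the bind_address and ipfamily options.)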

Related

Q: Apache Geode - Unable to reconnect a node after OS patching "15 seconds have elapsed while waiting for replies"

I have a cluster consisting of 4 nodes in total, 3 servers and 1 management node, working properly.
At the beginning of the month we planned to patch the OS, and we started with the first server node, using this procedure:
Stop service
OS patching
Server restart
Start service
The service of the first patched node, named "serverA", fails to restart with this error:
Log entries at cluster join:
serverA:
| INFO | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: shared=true ordered=false failed to connect to peer 10.237.110.195( Server serverB:9993):1024 because: java.net.ConnectException: Connection timed out (Connection timed out)
| WARN | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: Attempting reconnect to peer 10.237.110.195( Server serverB:9993):1024
ServerMgmt:
| WARN | pool-3-thread-1 | tributed.internal.ReplyProcessor21 | --> 15 seconds have elapsed while waiting for replies: <CreateRegionProcessor$CreateRegionReplyProcessor 44180 waiting for 1 replies from [10.237.110.194( Server serverA:632):1024]> on 10.237.110.225( Management:6033):1024 whose current membership list is: [[10.237.110.196( Server serverC:16805):1024, 10.237.110.225( Management:6033):1024, 10.237.110.195( Server serverB:9993):1024, 10.237.110.194( Server serverA:632):1024]]
The connection between the systems was verified with tcpdump; UDP 1024 is working fine.
We have tried redeploying the service and making numerous attempts, but we always get the same error during startup.
Any suggestions? Thank you.
Marco.
I think that to see this error message, serverA was probably able to send UDP messages to serverB but is failing to create a TCP connection. It's hard to say why, though - a firewall issue, some TCP configuration issue, ...?
Check to see if serverB has anything interesting in its logs. Since you are using tcpdump, you should be watching for that TCP connection to serverB:9993, since it looks like that is what failed.
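For example, a capture filter along these lines (address taken from your log excerpt; narrow it by port once you know which port the direct channel is using) should show whether the TCP handshake to serverB ever completes:
tcpdump -nn -i any 'tcp and host 10.237.110.195'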
There is no firewall between the systems. We've analyzed the network connection again during startup from node A, and we can see that communication can be established between all systems. But what we detected is that on port 2323, which is configured as the locator, the node sends packets to the B and C nodes but only receives packets back from the C node, not from the B node. This is, for us, another sign that the B node has an issue. Is there a way to check our assumption from the B node? (See the sketch after the address list below.)
A node ip .194
B node ip .195
C node ip .196
Management ip .225
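One thing you could do on the B node itself is confirm what is actually listening on, and connected over, the locator and membership ports; a sketch using Linux socket statistics (port numbers taken from this thread):
ss -tunap | grep -E ':2323|:1024'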

OpenVPN 3 client on iOS connects, but fails to send data, "unknown IP version"

I've got a build of the OpenVPN3 client library (https://github.com/OpenVPN/openvpn3) connecting to an OpenVPN 2 server (2.4.4). This is working for my Mac and Windows builds, but failing when the client is iOS.
The iOS client appears to connect, in the sense that I get my custom up script invoked and I can see what I assume are keepalive/heartbeat packets going back and forth between client and server. The client doesn't time out as long as these packets are allowed to continue. However, as soon as the client attempts to access any web page over the tunnel, I get packets dropped on the server side with errors like the following:
Fri Mar 15 20:08:27 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=10 seen
Fri Mar 15 20:08:28 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
Fri Mar 15 20:08:29 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=5 seen
Fri Mar 15 20:08:30 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=9 seen
Fri Mar 15 20:08:31 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=8 seen
Fri Mar 15 20:08:32 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=2 seen
Fri Mar 15 20:08:34 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=13 seen
Fri Mar 15 20:08:38 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
I'm using the same server and client configs for iOS as I was using when the client was Mac and Windows.
Server configs:
port 1194
proto udp
dev tun
ca /opt/certs/ca-cert.pem
cert /opt/certs/server.pem
key /opt/certs/server-key.pem
dh /opt/certs/dh2048.pem
tls-auth /opt/certs/ta.key 0
server 10.8.0.0 255.255.0.0
keepalive 5 15
verb 3
script-security 3
client-connect "/usr/local/bin/sdp-updown"
client-disconnect "/usr/local/bin/sdp-updown"
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
Client configs:
dev tun
proto udp
remote ... server and port omitted
remote-cert-tls server
key-direction 1
server-poll-timeout 5
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
... routes omitted
<ca>
... CA omitted
</ca>
<cert>
... cert omitted
</cert>
<key>
... private key omitted
</key>
<tls-auth>
... OpenVPN static key omitted
</tls-auth>
I've tried a number of different settings for cipher and tls-cipher. When those settings are set to values that are supported on both sides I can get connected, but I get the same "IP packet with unknown IP version" error. Obviously, when cipher or tls-cipher is set to a value not supported by both server and client, we fail to negotiate TLS and don't get connected at all.
I found a number of troubleshooting forum posts regarding this error, and most of them were resolved by setting the compression settings to the same value on both ends. My iOS client build seems to think that it has no ability to perform compression, even though I think I've linked successfully against the LZ4 library. I compiled the LZ4 library for iOS, and passed LZ4=1 when building a dylib for OpenVPN itself. However, when the iOS client connects it reports settings like:
ENV[IV_AUTO_SESS] = 1
ENV[IV_COMP_STUBv2] = 1
ENV[IV_COMP_STUB] = 1
ENV[IV_LZO_STUB] = 1
ENV[IV_PROTO] = 2
ENV[IV_TCPNL] = 1
ENV[IV_NCP] = 2
ENV[IV_PLAT] = ios
ENV[IV_VER] = 3.1.2
I notice that this does not include IV_LZ4, which I take to mean that the client thinks it can't perform compression. That said, even when my configs explicitly disable compression I get the same results. I tried omitting any compression setting at all, as well as comp-lzo no, compress stub, and compress stub-v2. None of these resulted in any different behavior.
My questions are thus:
What could be the cause of my IP packet with unknown IP version errors when actually sending packets over the data channel?
If what I'm seeing is actually a compression setting error, how do I convince OpenVPN to disable compression entirely? Alternatively, what have I done wrong to link LZ4 into my iOS OpenVPN dylib?
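For anyone comparing configs against those forum posts: the usual advice is to make the compression framing identical on both ends. A hedged sketch of the two consistent states (directives only; everything else as in the configs above):
# option 1: identical framing-with-compression-disabled on BOTH ends
comp-lzo no
# option 2: no compression directive at all on either end
# (delete every comp-lzo / compress line from both server and client configs)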

Trying to connect from Jenkins to Cygwin sshd randomly does not work ("could not write ident string to [clientIP]")

I'm trying to connect from Jenkins (a Docker container) to a Windows server (VM) running a Cygwin sshd. The problem I'm facing is that, seemingly at random, I can or cannot connect. This happens both with the 'SSH Plugin' (username/password) and via a shell SSH command (key pair).
From Jenkins the debug information tells me:
debug1: connect to address [serverIP] port 22: Connection refused
When it isn't working the sshd log tells me:
debug1: fd 4 clearing O_NONBLOCK
debug1: Forked child 1128.
debug3: send_rexec_state: entering fd = 7 config len 232
debug3: ssh_msg_send: type 0
debug3: send_rexec_state: done
debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
debug1: inetd sockets after dupping: 3, 3
Connection from [clientIP] port 59440 on 0.0.0.0 port 22
Could not write ident string to [clientIP] port 59440
When it is working I get the following in the sshd log:
debug1: fd 4 clearing O_NONBLOCK
debug1: Forked child 1708.
debug3: send_rexec_state: entering fd = 7 config len 232
debug3: ssh_msg_send: type 0
debug3: send_rexec_state: done
debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
debug1: inetd sockets after dupping: 3, 3
Connection from [clientIP] port 56742 on [serverIP] port 22
debug1: Client protocol version 2.0; client software version OpenSSH_7.4
The difference I'm seeing is the 0.0.0.0 instead of the serverIP, but I cannot find out why this is.
I've tried setting up a job that runs every 5 minutes to see if there was a pattern, but I could find none.
On the server I've made a Wireshark trace; these are the packets I see:
Client to server: [SYN]
Client to server: [TCP Out-Of-Order] (same packet as the previous [SYN])
Server to client: [RST, ACK]
Client to server: [SYN, ACK]
Client to server: [TCP Retransmission] (same packet as the previous [SYN, ACK])
I'm a bit stumped on the "Could not write ident string to [clientIP]" message and I'm having some trouble finding more information about why this is happening.
Any help on troubleshooting this further or information on why this message is displayed is welcome.
"Connection refused" normally means the server isn't accepting connections to the IP address and port that you requested. The service that you're trying to connect to may not be listening for connections, or it may be listening to a different address and/or port.
"Connection refused" can also be caused by a firewall blocking connections. In your case, given that the service is logging an incoming connection but without the client IP address, my guess is that you have some kind of firewall or malware detection software running on the server, and it's interfering with these connection attempts.
You'll need to access this firewall software, figure out why it's blocking these connections, and configure it to stop interfering.
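As a first check on the Windows host, it may be worth confirming which address sshd is actually bound to and whether anything else is attached to port 22; a sketch using the stock Windows tools:
netstat -ano | findstr :22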

Failed to bind to: spark-master, using a remote cluster with two workers

I managed to get everything working with a local master and two remote workers. Now I want to connect to a remote master that has the same remote workers. I have tried different combinations of settings within /etc/hosts and other recommendations on the Internet, but nothing worked.
The Main class is:
public static void main(String[] args) {
    // ScalaInterface and CHUNK_SIZE are defined elsewhere in the project
    ScalaInterface sInterface = new ScalaInterface(CHUNK_SIZE,
            "awsAccessKeyId",
            "awsSecretAccessKey");
    SparkConf conf = new SparkConf().setAppName("POC_JAVA_AND_SPARK")
            .setMaster("spark://spark-master:7077");
    org.apache.spark.SparkContext sc = new org.apache.spark.SparkContext(conf);
    sInterface.enableS3Connection(sc);
    org.apache.spark.rdd.RDD<Tuple2<Path, Text>> fileAndLine = (RDD<Tuple2<Path, Text>>) sInterface.getMappedRDD(sc, "s3n://somebucket/");
    org.apache.spark.rdd.RDD<String> pInfo = (RDD<String>) sInterface.mapPartitionsWithIndex(fileAndLine);
    JavaRDD<String> pInfoJ = pInfo.toJavaRDD();
    // the first action that actually triggers a Spark job
    List<String> result = pInfoJ.collect();
    String miscInfo = sInterface.getMiscInfo(sc, pInfo);
    System.out.println(miscInfo);
}
It fails at:
List<String> result = pInfoJ.collect();
The error I am getting is:
1354 [sparkDriver-akka.actor.default-dispatcher-3] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
1354 [main] WARN org.apache.spark.util.Utils - Service 'sparkDriver' could not bind on port 0. Attempting port 1.
1355 [main] DEBUG org.apache.spark.util.AkkaUtils - In createActorSystem, requireCookie is: off
1363 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1364 [sparkDriver-akka.actor.default-dispatcher-3] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
1364 [sparkDriver-akka.actor.default-dispatcher-5] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1367 [sparkDriver-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
1370 [sparkDriver-akka.actor.default-dispatcher-6] INFO Remoting - Starting remoting
1380 [sparkDriver-akka.actor.default-dispatcher-4] ERROR akka.remote.transport.netty.NettyTransport - failed to bind to spark-master/192.168.0.191:0, shutting down Netty transport
Exception in thread "main" 1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
1382 [sparkDriver-akka.actor.default-dispatcher-6] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
java.net.BindException: Failed to bind to: spark-master/192.168.0.191:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
1383 [sparkDriver-akka.actor.default-dispatcher-7] INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
1385 [delete Spark temp dirs] DEBUG org.apache.spark.util.Utils - Shutdown hook called
Thank you kindly for your help!
Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.
The BindException in your logs complains about the IP address 192.168.0.191. I assume your hostname resolves to that address, and that it is not the actual IP address your network interface is using. It should work fine once you fix that.
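For example, a minimal sketch of the /etc/hosts entries to check (the address is taken from your logs; it must be one actually assigned to a local interface, which you can verify with ifconfig or ip addr):
127.0.0.1      localhost
192.168.0.191  spark-master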
I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.
hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com
After that my Spark could not work, and it showed the error below:
16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.
I solved it by setting SPARK_LOCAL_IP as below:
export SPARK_LOCAL_IP="localhost"
then just launched the Spark shell as below:
$SPARK_HOME/bin/spark-shell
Possibly your master is running on a non-default port. Can you post your submit command?
Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
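For reference, a submit against a standalone master typically looks like this (the class name, jar path, and host:port are placeholders to be matched to your cluster):
$SPARK_HOME/bin/spark-submit \
  --class com.example.Main \
  --master spark://spark-master:7077 \
  path/to/app.jar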

epmd error opening stream socket: Address family not supported by protocol

When trying to start rabbitmq server, I get the following error:
{error_logger,{{2014,9,26},{15,30,21}},"Protocol: ~tp: register/listen error: ~tp~n",["inet_tcp",econnrefused]}
{error_logger,{{2014,9,26},{15,30,21}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.21.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.93>,<0.18.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,27},{reductions,799}],[]]}
{error_logger,{{2014,9,26},{15,30,21}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[[rabbitmqprelaunch791,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
{error_logger,{{2014,9,26},{15,30,21}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
{error_logger,{{2014,9,26},{15,30,21}},crash_report,[[{initial_call,{application_master,init,['Argument__1','Argument__2','Argument__3','Argument__4']}},{pid,<0.9.0>},{registered_name,[]},{error_info,{exit,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}},[{application_master,init,4,[{file,"application_master.erl"},{line,133}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}},{ancestors,[<0.8.0>]},{messages,[{'EXIT',<0.10.0>,normal}]},{links,[<0.8.0>,<0.7.0>]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,376},{stack_size,27},{reductions,117}],[]]}
{error_logger,{{2014,9,26},{15,30,21}},std_info,[{application,kernel},{exited,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}},{type,permanent}]}
{"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{{shutdown,{failed_to_start_child,net_sup,{shutdown,{failed_to_start_child,net_kernel,{'EXIT',nodistribution}}}}},{kernel,start,[normal,[]]}}}"}
I tried the erl -sname foo command and it gave a similar error.
Then (as suggested here: http://permalink.gmane.org/gmane.comp.networking.rabbitmq.general/23204) I tried
epmd -debug and it gave the following output:
epmd: Mon Sep 29 11:56:16 2014: epmd running - daemon = 0
epmd: Mon Sep 29 11:56:16 2014: error opening stream socket: Address family not supported by protocol
I tried to google for the epmd error "Address family not supported by protocol", but couldn't find anything.
It might be that you are using an IPv6 address, which might not be supported by epmd in your Erlang version. This mail might shed some light on the issue (or just force IPv4 if you can).
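If you want to try forcing IPv4, epmd can be told which addresses to listen on before RabbitMQ starts; a sketch, assuming a reasonably recent Erlang/OTP:
# restrict epmd to the IPv4 loopback
export ERL_EPMD_ADDRESS=127.0.0.1
epmd -daemon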
