neo4j-shell Connection refused java.rmi.ConnectException - neo4j

I correctly start the server:

~/Downloads/neo4j-community-3.2.1/bin $ ./neo4j start
Active database: graph.db
Directories in use:
  home:         /home/user/Downloads/neo4j-community-3.2.1
  config:       /home/user/Downloads/neo4j-community-3.2.1/conf
  logs:         /home/user/Downloads/neo4j-community-3.2.1/logs
  plugins:      /home/user/Downloads/neo4j-community-3.2.1/plugins
  import:       /home/user/Downloads/neo4j-community-3.2.1/import
  data:         /home/user/Downloads/neo4j-community-3.2.1/data
  certificates: /home/user/Downloads/neo4j-community-3.2.1/certificates
  run:          /home/user/Downloads/neo4j-community-3.2.1/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 29246). It is available at http://localhost:7474/
There may be a short delay until the server is ready.
See /home/user/Downloads/neo4j-community-3.2.1/logs/neo4j.log for current status.
Then when I try to launch the neo4j-shell:
~/Downloads/neo4j-community-3.2.1/bin $ ./neo4j-shell -v
ERROR (-v for expanded information): Connection refused
java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
    java.net.ConnectException: Connection refused
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
    at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
    at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
    at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:342)
    at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
    at java.rmi.Naming.lookup(Naming.java:101)
    at org.neo4j.shell.impl.RmiLocation.getBoundObject(RmiLocation.java:191)
    at org.neo4j.shell.impl.RemoteClient.findRemoteServer(RemoteClient.java:72)
    at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:65)
    at org.neo4j.shell.impl.RemoteClient.<init>(RemoteClient.java:46)
    at org.neo4j.shell.ShellLobby.newClient(ShellLobby.java:204)
    at org.neo4j.shell.StartClient.startRemote(StartClient.java:358)
    at org.neo4j.shell.StartClient.start(StartClient.java:229)
    at org.neo4j.shell.StartClient.main(StartClient.java:147)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at java.net.Socket.connect(Socket.java:538)
    at java.net.Socket.<init>(Socket.java:434)
    at java.net.Socket.<init>(Socket.java:211)
    at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:40)
    at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:148)
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:613)
    ... 13 more
-host      Domain name or IP of host to connect to (default: localhost)
-port      Port of host to connect to (default: 1337)
-name      RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid       Process ID to connect to
-c         Command line to execute. After executing it the shell exits
-file      File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly  Connect in readonly mode (only for connecting with -path)
-path      Points to a neo4j db path so that a local server can be started there
-config    Points to a config file when starting a local server
Example arguments for remote:
    -port 1337
    -host 192.168.1.234 -port 1337 -name shell
    -host localhost -readonly
...or no arguments for default values
Example arguments for local:
    -path /path/to/db
    -path /path/to/db -config /path/to/neo4j.config
    -path /path/to/db -readonly
The server is in its default initial configuration; the only thing I changed is the graph username and password.

neo4j-shell has been deprecated since version 3.1; you should use cypher-shell instead.
But you can enable it by adding this line to conf/neo4j.conf:
dbms.shell.enabled=true
Cheers.
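A minimal sketch of that fix, assuming the tarball layout shown in the question (the NEO4J_HOME path, the restart subcommand, and the conf/neo4j.conf location are taken from the output above; neo4j-shell's default RMI port is 1337):

```shell
NEO4J_HOME="$HOME/Downloads/neo4j-community-3.2.1"

# Append the setting only if it isn't there yet, then restart the server:
grep -q '^dbms.shell.enabled=true' "$NEO4J_HOME/conf/neo4j.conf" \
  || echo 'dbms.shell.enabled=true' >> "$NEO4J_HOME/conf/neo4j.conf"
"$NEO4J_HOME/bin/neo4j" restart

# The RMI shell server should now be listening on its default port 1337:
"$NEO4J_HOME/bin/neo4j-shell"
```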

Related

schema registry docker from confluent

I want to use the Schema Registry Docker image (owned by Confluent) with the open-source Kafka I installed locally on my PC.
I am using the following command to run the image:
docker run -p 8081:8081 \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
-e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
-e SCHEMA_REGISTRY_DEBUG=true confluentinc/cp-schema-registry:latest
but I am getting the following connection errors:
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[kafka-admin-client-thread | adminclient-1] WARN org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
I have Kafka installed on my localhost.
Any ideas on how to solve this, please?
I have Kafka installed on my localhost
As commented, localhost is ambiguous when you're actually using multiple machines (one physical and at least one virtual).
You need to use host.docker.internal:9092:
https://docs.docker.com/docker-for-windows/networking/ (removed because the host is not Windows)
On a Linux host, you need to use host networking mode:
https://docs.docker.com/network/host/
Although, realistically, running Kafka in a container as well would be simpler for connecting the two.
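A sketch of both variants, reusing the image and listener settings from the question (assumptions: host.docker.internal is only provided by Docker Desktop on Mac/Windows, and --network host only behaves this way on a Linux host):

```shell
# Docker Desktop (Mac/Windows): reach the host's broker via the special DNS name
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://host.docker.internal:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  confluentinc/cp-schema-registry:latest

# Linux host: share the host's network namespace so 127.0.0.1 *is* the host
docker run --network host \
  -e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=PLAINTEXT://127.0.0.1:9092 \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  confluentinc/cp-schema-registry:latest
```

Note that in the host-networking case the -p flag is dropped: the container's port 8081 is already the host's port 8081.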

neo4j-shell can not connect to neo4j Server

I'm using the Docker version of neo4j (v3.1.0) and I'm having difficulties connecting to the neo4j server using neo4j-shell.
After running an instance of the neo4j:3.1.0 Docker image, I run a bash inside the container:
$ docker exec -it neo4j /bin/bash
And from there I try to run the neo4j-shell like this:
/var/lib/neo4j/bin/neo4j-shell
But it errors:
$ /var/lib/neo4j/bin/neo4j-shell
ERROR (-v for expanded information):
Connection refused
-host Domain name or IP of host to connect to (default: localhost)
-port Port of host to connect to (default: 1337)
-name RMI name, i.e. rmi://<host>:<port>/<name> (default: shell)
-pid Process ID to connect to
-c Command line to execute. After executing it the shell exits
-file File containing commands to execute, or '-' to read from stdin. After executing it the shell exits
-readonly Connect in readonly mode (only for connecting with -path)
-path Points to a neo4j db path so that a local server can be started there
-config Points to a config file when starting a local server
Example arguments for remote:
-port 1337
-host 192.168.1.234 -port 1337 -name shell
-host localhost -readonly
...or no arguments for default values
Example arguments for local:
-path /path/to/db
-path /path/to/db -config /path/to/neo4j.config
-path /path/to/db -readonly
I also tried other hosts like localhost, 127.0.0.1 and 172.17.0.6 (the container IP). Since that didn't work, I tried to list the open ports on my container:
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 :::7687 :::* LISTEN
tcp 0 0 :::7473 :::* LISTEN
tcp 0 0 :::7474 :::* LISTEN
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node Path
As you can see, there's no 1337 open! I've looked into the config file, and the line for specifying the port is commented out, which means it should be set to its default value (1337).
Can anyone help me connect to neo4j using neo4j-shell?
BTW, the neo4j server is up and running and I can use its web access through port :7474.
In 3.1, the shell is not enabled by default.
You will need to pass your own configuration file with the shell enabled; uncomment the following:
# Enable a remote shell server which Neo4j Shell clients can log in to.
dbms.shell.enabled=true
(I find the amount of work needed to change one value in Docker quite heavy, but yeah..)
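If mounting a whole config file feels heavy, a sketch of an alternative (assumption: the official neo4j image translates NEO4J_* environment variables into neo4j.conf settings, with dots replaced by underscores; verify this against your image version):

```shell
# Assumption: the official neo4j image maps NEO4J_* env vars to neo4j.conf
# settings (dots in the setting name become underscores), so this enables
# the remote shell without mounting a config file:
docker run --name neo4j-shell-test -d \
  -p 7474:7474 -p 7687:7687 -p 1337:1337 \
  -e NEO4J_dbms_shell_enabled=true \
  neo4j:3.1.0

# Then run the shell inside the container:
docker exec -it neo4j-shell-test /var/lib/neo4j/bin/neo4j-shell
```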
Or use the new cypher-shell:
ikwattro@graphaware-team ~> docker ps -a | grep 'neo4j'
34b3c6718504 neo4j:3.1.0 "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 7473-7474/tcp, 7687/tcp compassionate_easley
2395bd0b1fe9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (143) 3 minutes ago cranky_goldstine
949feacbc0f9 neo4j:3.1.0 "/docker-entrypoint.s" 5 minutes ago Exited (130) 5 minutes ago modest_boyd
c38572b078de neo4j:3.0.6-enterprise "/docker-entrypoint.s" 6 weeks ago Exited (0) 6 weeks ago fastfishpim_neo4j_1
ikwattro@graphaware-team ~> docker exec --interactive --tty compassionate_easley bin/cypher-shell
username: neo4j
password: *****
Connected to Neo4j 3.1.0 at bolt://localhost:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j>
NB: cypher-shell supports :begin and :commit:
neo4j> :begin
neo4j# create (n:Node);
Added 1 nodes, Added 1 labels
neo4j# :commit;
neo4j>
-
neo4j> :begin
neo4j# create (n:Person {name:"John"});
Added 1 nodes, Set 1 properties, Added 1 labels
neo4j# :rollback
neo4j> :commit
There is no open transaction to commit
neo4j>
http://neo4j.com/docs/operations-manual/current/tools/cypher-shell/

can't start elastic search server

I'm trying to start my Elasticsearch server but get the following error:
* Starting Elasticsearch Server
sysctl: setting key "vm.max_map_count": Read-only file system
Any idea how to fix this? I've tried reinstalling the whole thing, but to no avail.
Full actions below:
rs191919:~/workspace/sample_app (elastic-again) $ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.7.0.deb
--2016-03-11 17:21:12-- https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.7.0.deb
Resolving download.elasticsearch.org (download.elasticsearch.org)... 23.21.221.132, 23.21.209.176, 23.23.251.246, ...
Connecting to download.elasticsearch.org (download.elasticsearch.org)|23.21.221.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 27320700 (26M) [application/x-debian-package]
Saving to: ‘elasticsearch-1.7.0.deb.1’
100%[==========================================================================================================>] 27,320,700 22.5MB/s in 1.2s
2016-03-11 17:21:14 (22.5 MB/s) - ‘elasticsearch-1.7.0.deb.1’ saved [27320700/27320700]
rs191919:~/workspace/sample_app (elastic-again) $ sudo dpkg -i elasticsearch-1.7.0.deb
Selecting previously unselected package elasticsearch.
(Reading database ... 125961 files and directories currently installed.)
Preparing to unpack elasticsearch-1.7.0.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (1.7.0) ...
Setting up elasticsearch (1.7.0) ...
Processing triggers for ureadahead (0.100.0-16) ...
rs191919:~/workspace/sample_app (elastic-again) $ sudo update-rc.d elasticsearch defaults 95 10
System start/stop links for /etc/init.d/elasticsearch already exist.
rs191919:~/workspace/sample_app (elastic-again) $ sudo /etc/init.d/elasticsearch start
* Starting Elasticsearch Server sysctl: setting key "vm.max_map_count": Read-only file system
rs191919:~/workspace/sample_app (elastic-again) $ curl http://localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused
rs191919:~/workspace/sample_app (elastic-again) $

Not able to connect to cassandra from host machine

I have configured a Cassandra node on my MacBook Pro using Docker as follows:
VBoxManage modifyvm "default" --natpf1 "tcp-port7191,tcp,,7191,,7191"
VBoxManage modifyvm "default" --natpf1 "tcp-port7000,tcp,,7000,,7000"
VBoxManage modifyvm "default" --natpf1 "tcp-port7001,tcp,,7001,,7001"
VBoxManage modifyvm "default" --natpf1 "tcp-port9160,tcp,,7160,,7160"
VBoxManage modifyvm "default" --natpf1 "tcp-port9042,tcp,,9042,,9042"
(restart machine)
docker run --name c1 -v /Users/MyProjects/scripts/:/script -d cassandra:latest -p "7191:7191" -p "7000:7000" -p "7001:7001" -p "9160:9160" -p "9042:9042"
I can easily do
docker exec -it c1 cqlsh
It says:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
Now I get the IP address of my VirtualBox VM using
docker-machine env default
I can see the IP address 192.168.99.100.
But when I run my Java program to connect to the same Cassandra instance using the IP address above, I get an error:
00:00:56.611 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/192.168.99.100:9042-1, inFlight=0, closed=false] Error connecting to /192.168.99.100:9042 (Connection refused: /192.168.99.100:9042)
00:00:56.615 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Host.STATES - Defuncting Connection[/192.168.99.100:9042-1, inFlight=0, closed=false] because: [/192.168.99.100] Cannot connect
00:00:56.616 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/192.168.99.100:9042-1, inFlight=0, closed=true] closing connection
00:00:56.617 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Host.STATES - [/192.168.99.100:9042] preventing new connections for the next 1000 ms
00:00:56.617 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Host.STATES - [/192.168.99.100:9042] Connection[/192.168.99.100:9042-1, inFlight=0, closed=true] failed, remaining = 0
00:00:56.624 [run-main-0] DEBUG c.d.driver.core.ControlConnection - [Control connection] error on /192.168.99.100:9042 connection, no more host to try
com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100] Cannot connect
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:157) ~[cassandra-driver-core-3.0.0.jar:na]
at com.datastax.driver.core.Connection$1.operationComplete(Connection.java:140) ~[cassandra-driver-core-3.0.0.jar:na]
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:276) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:292) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) ~[netty-common-4.0.33.Final.jar:4.0.33.Final]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_79]
Caused by: java.net.ConnectException: Connection refused: /192.168.99.100:9042
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) ~[na:1.7.0_79]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289) ~[netty-transport-4.0.33.Final.jar:4.0.33.Final]
... 6 common frames omitted
00:00:56.625 [run-main-0] DEBUG com.datastax.driver.core.Cluster - Shutting down
[error] (run-main-0) java.lang.ExceptionInInitializerError
java.lang.ExceptionInInitializerError
at com.abhi.connector.CassandraConnector$class.$init$(CassandraConnector.scala:8)
at com.abhi.models.Movies$.<init>(Movies.scala:25)
at com.abhi.models.Movies$.<clinit>(Movies.scala)
at com.abhi.MovieLensDataPreperation$$anonfun$storeInCassandra$1.apply(MovieLensDataPreperation.scala:55)
at com.abhi.MovieLensDataPreperation$$anonfun$storeInCassandra$1.apply(MovieLensDataPreperation.scala:55)
at scala.collection.immutable.List.foreach(List.scala:381)
at com.abhi.MovieLensDataPreperation$.storeInCassandra(MovieLensDataPreperation.scala:55)
at com.abhi.MovieLensDataPreperation$.main(MovieLensDataPreperation.scala:51)
at com.abhi.MovieLensDataPreperation.main(MovieLensDataPreperation.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connect(Cluster.java:283)
at com.abhi.connector.Connector$.<init>(CassandraConnector.scala:22)
at com.abhi.connector.Connector$.<clinit>(CassandraConnector.scala)
at com.abhi.connector.CassandraConnector$class.$init$(CassandraConnector.scala:8)
at com.abhi.models.Movies$.<init>(Movies.scala:25)
at com.abhi.models.Movies$.<clinit>(Movies.scala)
at com.abhi.MovieLensDataPreperation$$anonfun$storeInCassandra$1.apply(MovieLensDataPreperation.scala:55)
at com.abhi.MovieLensDataPreperation$$anonfun$storeInCassandra$1.apply(MovieLensDataPreperation.scala:55)
at scala.collection.immutable.List.foreach(List.scala:381)
at com.abhi.MovieLensDataPreperation$.storeInCassandra(MovieLensDataPreperation.scala:55)
at com.abhi.MovieLensDataPreperation$.main(MovieLensDataPreperation.scala:51)
at com.abhi.MovieLensDataPreperation.main(MovieLensDataPreperation.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
This is my code to configure the Java connection:
Cluster.builder()
.addContactPoints("192.168.99.100").withPort(9042)
.build()
Edit: I also replaced the above IP address with the IP address of docker guest VM... but that did not resolve the issue.
I was able to solve the problem by using CASSANDRA_BROADCAST_ADDRESS when creating the Docker container.
This is the command I used:
docker run --name c1 -v /Users/MyProjects/scripts/:/script -d \
  -p "7191:7191" -p "7000:7000" -p "7001:7001" -p "9160:9160" -p "9042:9042" \
  -e CASSANDRA_BROADCAST_ADDRESS=192.168.99.100 cassandra:latest
Note that the -p and -e flags must come before the image name; in the original command they appeared after cassandra:latest, so Docker passed them as arguments to the container's entrypoint instead of treating them as port mappings.
After creating the container like this, I am able to connect from my Scala application with this code:
val keyspace: KeySpace = new KeySpace("foo")
val cluster =
Cluster.builder()
.addContactPoint("192.168.99.100").withPort(9042)
.build()
cluster.getConfiguration().getSocketOptions().setReadTimeoutMillis(100000);
val session: Session = cluster.connect(keyspace.name)

jenkins jnlp slave agent refusing to listen on port

So I am trying to connect my JNLP slave agent (via Java Web Start) to my master Jenkins machine.
I have port 49187 fixed for the TCP connection of JNLP slave agents, and it is open on the slave machines. As I try to connect, it gets past the handshake but then fails with the following error:
Aug 19, 2014 3:51:10 PM com.youdevise.hudson.slavestatus.SlaveListener call
INFO: Slave-status listener starting
Aug 19, 2014 3:51:10 PM com.youdevise.hudson.slavestatus.SlaveListener$1 run
SEVERE: Could not listen on port
java.net.BindException: Address already in use: JVM_Bind
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(Unknown Source)
at java.net.ServerSocket.bind(Unknown Source)
at java.net.ServerSocket.<init>(Unknown Source)
at java.net.ServerSocket.<init>(Unknown Source)
at com.youdevise.hudson.slavestatus.SocketHTTPListener.waitForConnection
(SlaveListener.java:129)
at com.youdevise.hudson.slavestatus.SlaveListener$1.run(SlaveListener.ja
va:63)
at com.youdevise.hudson.slavestatus.Daemon.go(Daemon.java:16)
at com.youdevise.hudson.slavestatus.SlaveListener.call(SlaveListener.jav
a:83)
at hudson.remoting.UserRequest.perform(UserRequest.java:118)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecut
orService.java:72)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source
)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at hudson.remoting.Engine$1$1.run(Engine.java:58)
at java.lang.Thread.run(Unknown Source)
I have inbound and outbound rules open, but no idea why I am facing this error. Any ideas?
Here's the output for the suggested answer:
[root@ip-10-192-35-89 ~]# netstat -ntpl | grep 49187
tcp 0 0 :::49187 :::* LISTEN 1054/java
[root@ip-10-192-35-89 ~]# ps -ef | grep 1054
jenkins 1054 1 2 Aug14 ? 02:56:02 /etc/alternatives/java -Djava.awt.headless=true -Xmx2048m -XX:MaxPermSize=512m -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=7777 --ajp13Port=8009 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
root 2483 2463 0 19:18 pts/0 00:00:00 grep 1054
From the following lines, it seems some process is already using port 49187:
SEVERE: Could not listen on port
java.net.BindException: Address already in use: JVM_Bind
Just run the following command to fetch the process id:
netstat -ntpl | grep 49187
From the output, take the process ID and then run the following command to see which process is using port 49187. You can then kill that process and try connecting the JNLP slave agent again.
ps -ef | grep PID_from_above_output
For example,
[root@jenkins gc]# netstat -ntpl | grep 1569
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1569/sendmail
[root@jenkins gc]#
[root@jenkins gc]# ps -ef | grep 1569
root 1569 1 0 Jun26 ? 00:01:13 sendmail: accepting connections
root 8083 8059 0 21:35 pts/0 00:00:00 grep 1569
As you can see, the sendmail program (PID 1569) was listening on port 25.
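The two commands above can be combined into one sketch (assumptions: a Linux box with net-tools netstat, whose column layout matches the output shown above: column 4 is the local address, column 7 is PID/program):

```shell
PORT=49187

# Extract the PID of whatever is listening on $PORT from `netstat -ntpl`:
PID=$(netstat -ntpl 2>/dev/null | awk -v p=":$PORT\$" '$4 ~ p {split($7, a, "/"); print a[1]}')

if [ -n "$PID" ]; then
  ps -p "$PID" -o pid,user,cmd   # inspect the process before deciding to kill it
else
  echo "nothing is listening on port $PORT"
fi
```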
When I ran into the issue where it says that the JNLP port is already in use, I tried grepping and doing a netstat on the Jenkins master for our JNLP port, and I never found any process running on that particular port. It seems Jenkins stores the JNLP port information somewhere, but I'm not sure where.
What I did was change the JNLP port to some other port, save the configuration, run the slave command, then update the JNLP port back to the old one and restart the JNLP slave again. It works without any issues now. Apparently the problem occurred when we restarted our Jenkins master.
