I want to use docker eclipse-mosquitto just for communication on a local machine. Which settings do I need for mosquitto.conf to make the mosquitto broker only visible on localhost but not from outside? Since a second mosquitto is running, port 1883 is blocked and I'm using port 1884.
This is what I have:
port 1884
bind_address 127.0.0.1
is visible from outside.
port 1884
bind_address localhost
gives error Error: Address not available.
Binding to docker-ip
port 1884
bind_address 172.17.0.1
gives error Error: Address not available.
What can I do?
Your answer is the wrong approach, you should only really be using --network="host" for things that need to open raw sockets or receive broadcast messages from the local network.
The correct answer is to not use the bind_address option in the mosquitto.conf file and use the docker -p option to do the port mapping correctly (docs).
e.g.
docker run --rm -p 127.0.0.1:1884:1884/tcp eclipse-mosquitto
Here the -p 127.0.0.1:1884:1884 maps port 1884 in the container to port 1884 bound to the loopback ip (127.0.0.1) on the host.
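If it helps, a quick way to confirm the broker really is loopback-only (assuming ss and the mosquitto clients are installed on the host):
ss -tlnp | grep 1884                                   # should show 127.0.0.1:1884, not 0.0.0.0:1884
mosquitto_pub -h 127.0.0.1 -p 1884 -t test -m hello    # succeeds locally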
Ok, solved it myself:
Running docker with the additional option --network="host" and then in mosquitto.conf:
port 1884
bind_address 127.0.0.1
does the job.
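For reference, a run command along these lines should do it with the host network (the config mount path here is the image's default and is an assumption about your setup):
docker run --rm --network=host -v "$(pwd)/mosquitto.conf:/mosquitto/config/mosquitto.conf" eclipse-mosquitto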
I have 2 VMs.
On the first I run:
docker swarm join-token manager
On the second I run the result from this command.
i.e.
docker swarm join --token SWMTKN-1-0wyjx6pp0go18oz9c62cda7d3v5fvrwwb444o33x56kxhzjda8-9uxcepj9pbhggtecds324a06u 192.168.65.3:2377
However, this outputs:
Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.65.3:2377: connect: connection refused"
Any idea what's going wrong?
If it helps I'm spinning up these VMs using Vagrant.
Just add the port to the firewall on the master side:
firewall-cmd --add-port=2377/tcp --permanent
firewall-cmd --reload
Then try docker swarm join again on the second VM or node side.
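If the join still fails, the other standard swarm ports may need to be opened the same way (adjust as needed for your setup):
firewall-cmd --add-port=7946/tcp --permanent
firewall-cmd --add-port=7946/udp --permanent
firewall-cmd --add-port=4789/udp --permanent
firewall-cmd --reload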
I was facing a similar issue and spent a couple of hours figuring out the root cause, so I'm sharing it for those who may hit the same thing.
Environment:
Oracle Cloud + AWS EC2 (2 + 2)
OS: Ubuntu 20.04.2
Docker version: 20.10.8
3 dynamic public IPs + 1 elastic IP
Issues
1. Created two instances on Oracle Cloud at the beginning.
2. Instance A (manager): docker swarm init --advertise-addr succeeded.
3. Instance B (worker): docker swarm join as a worker succeeded.
4. When I tried to promote B to manager, I encountered the error: Unable to connect to remote host: No route to host
5. Mesh routing was not working properly.
Investigation
1. Suspected it was related to the network/firewall/security group/security list.
2. SSHed to server B (worker) and ran telnet <manager> 2377, which failed with the same error: Unable to connect to remote host: No route to host
3. Logged into the Oracle console and added ingress rules under the security list for all the relevant ports:
TCP port 2377 for cluster management communications
TCP and UDP port 7946 for communication among nodes
UDP port 4789 for overlay network traffic
4. Tried again, but telnet still failed with the same error.
5. Checked the OS-level firewall and disabled it:
ufw disable
6. Tried again, but got the same result.
7. Suspected something was wrong with Oracle Cloud, so I decided to try AWS with the same OS and Docker versions.
8. Added a security group allowing all the relevant ports/protocols and disabled ufw.
9. Tested with AWS instances C (leader/manager) + D (worker). It worked, D could be promoted to manager, and mesh routing also worked.
10. Confirmed the issue was specific to Oracle Cloud.
11. Tried to join the Oracle instance (A) to C as a worker. It worked, but A still could not be promoted to manager.
12. Used journalctl -f to investigate the logs and confirmed there were socket timeouts from A/B (Oracle instances) to the AWS instance (C).
13. Looked at A/B again and found iptables rules blocking the requests.
14. Removed all the rules in iptables:
# remove the rules
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
Root Cause
It was caused by a firewall, either at the cloud security/WAF/ACL level or by OS firewall rules, e.g. ufw/iptables.
I had already run firewall-cmd --add-port=2377/tcp --permanent and firewall-cmd --reload on the master side and was still getting the same error.
I did telnet <master ip> 2377 on the worker node and then rebooted the master.
Then it worked fine.
It looks like your docker swarm manager leader is not listening on port 2377. You can check this by running the following command on your swarm manager leader VM. If it is working fine, you will get output similar to this:
[root@host1]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
tilzootjbg7n92n4mnof0orf0 * host1 Ready Active Leader
Furthermore, you can check the listening ports on the leader swarm manager node. It should have TCP port 2377 for cluster management communications and TCP/UDP port 7946 for communication among nodes open.
[root@host1]# netstat -ntulp | grep dockerd
tcp6 0 0 :::2377 :::* LISTEN 2286/dockerd
tcp6 0 0 :::7946 :::* LISTEN 2286/dockerd
udp6 0 0 :::7946 :::* 2286/dockerd
In the second VM, where you are configuring the second swarm manager, you will have to make sure you have connectivity to port 2377 of the leader swarm manager. You can use tools like telnet, wget, or nc to test the connectivity, as shown below:
[root@host2]# telnet <swarm manager leader ip> 2377
Trying 192.168.44.200...
Connected to 192.168.44.200.
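If telnet isn't available, nc can do the same check (same leader IP as above):
nc -zv 192.168.44.200 2377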
In my case I had both Linux and Windows machines. My Windows Docker private network used the same address range as my local network, so the Docker daemon looked for the manager inside its own network when given that address and couldn't reach it.
So I did:
1- go to Docker Desktop app
2- go to Settings
3- go to Resources
4- go to the Network section and change the Docker subnet address (it needs to be different from your local subnet address).
5- Then apply and restart.
6- use the docker join on the worker again.
Note: all these steps are performed on the node where the error appears. Make sure that ports 2377, 7946 and 4789 are open on the master (you can use iptables or ufw); see the sketch below.
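For example, with ufw on the master this might look like the following (a sketch, adjust to your setup):
ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp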
Hope it works for you.
My setup is the following:
Host: Win10
Guest: Ubuntu 15.10 (clean install, only docker and nodejs are added)
Base image: https://hub.docker.com/r/microsoft/aspnet/ 1.0.0-beta8-coreclr
Inside the guest I have installed Docker and created an image (I added a sample webapp, generated with yeoman, on top of the base image above). When I run the image in a container I can successfully ping the container IP from the Linux guest (e.g. 172.17.0.2).
$sudo docker run -d -p 80:5000 --name web myapp
$sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "web"
172.17.0.2
$ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
1 packets transmitted, 1 received, 0% packet loss, time 999ms
$curl 172.17.0.2:80
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection refused
I can also connect to the container and execute commands like ping; however, from the Linux machine (guest in VirtualBox, host for Docker) I cannot access the web app hosted inside the container, as seen above. I tried several approaches like mapping to the host IP addresses etc., but none of them worked. Does anyone have ideas where to start? Could the issue be that Docker is installed inside a VirtualBox machine?
Thank you in advance.
Edit: Here are the logs from the container:
Could not open /etc/lsb_release. OS version will default to the empty string.
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Your command tells Docker to essentially proxy requests from port 80 of the Linux guest to port 5000 of the container. So the curl command you tried doesn't work because you're trying on port 80 on the container, while the container itself has a service listening on port 5000.
To connect to the container directly, you would use (on the Linux guest):
curl 172.17.0.2:5000
To access via the published port on the Linux guest (from your host):
curl (Linux guest IP)
Or (from the Linux guest):
curl localhost
Edit: This will also prove to be problematic:
Now listening on: http://localhost:5000
You'll want your app inside the container to bind to all interfaces (0.0.0.0) so it listens on the container's assigned IP. With localhost it won't be accessible.
You might find this example useful:
https://github.com/aspnet/Home/blob/dev/samples/1.0.0-beta8/HelloWeb/project.json
This line specifies that the app binds to all interfaces (using "*") on port 5004:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
You'll need similar configuration.
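For example, assuming your app has a similar command entry in its project.json, binding to all interfaces on port 5000 (to match the -p 80:5000 mapping) would look something like:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5000"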
I hard-coded a port forwarding in my Vagrantfile and now it collides with another box running on my machine.
I am aware Vagrant can detect port collisions and correct them. But one of the recipes I'm running depends on knowing the port for some other configuration.
Can I programmatically find out which port Vagrant detected as not in use so the recipe can make use of it?
There's no built-in command for this, but if you're using VirtualBox as your provider you can get port information using:
$ VBoxManage showvminfo $(cat .vagrant/machines/default/virtualbox/id) --details --machinereadable | egrep Forwarding
Giving you an output similar to:
Forwarding(0)="ssh,tcp,127.0.0.1,2222,,22"
Forwarding(1)="tcp8080,tcp,,8080,,80"
In the above, port 22 of the VM is forwarded to 2222 of the host, and 80 to 8080.
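If you need just the host port for a given guest port (say 80) in a script or recipe, something like this should work against the output format above:
VBoxManage showvminfo $(cat .vagrant/machines/default/virtualbox/id) --machinereadable \
  | grep '^Forwarding' \
  | sed 's/.*="//; s/"$//' \
  | awk -F, '$6 == 80 { print $4 }'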
The machine name (default in the .vagrant/machines/<name>/virtualbox/id path above) can be found by using vagrant's global-status command:
$ vagrant global-status
id name provider state directory
------------------------------------------------------------------------
78cf051 default virtualbox running /path/to/Vagrantfile
In the example above, the machine name is default.
Install the vagrant-portinfo plugin:
$ vagrant plugin install vagrant-portinfo
$ vagrant portinfo
server1 (84a1587) running
------------------------------------------------
guest: 22 host: 2201
guest: 8080 host: 8083
You'll have to do a bit of grepping to parse the output. Adding programmatic querying of forwarded ports has been on the roadmap in Vagrant for years now, and there's still an open issue discussing it.
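For example, to pull out the host port mapped to guest port 8080 (assuming the output format shown above):
vagrant portinfo | awk '/guest: 8080/ { print $4 }'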
It seems that I've never got this to work in the past. Currently, I KNOW it doesn't work.
But we start up our Java process:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=6002
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
I can telnet to the port, and "something is there" (that is, if I don't start the process, nothing answers, but if I do, it does), but I can not get JConsole to work filling in the IP and port.
Seems like it should be so simple, but no errors, no noise, no nothing. Just doesn't work.
Anyone know the hot tip for this?
I have a solution for this: if your Java process is running on Linux behind a firewall and you want to start JConsole / Java VisualVM / Java Mission Control on Windows on your local machine to connect to the JMX port of your Java process, you need SSH login access to your Linux machine. All communication will be tunneled over the SSH connection.
TIP: This solution works whether or not there is a firewall.
Disadvantage: every time you restart your Java process, you will need to do steps 4 - 9 again.
1. You need the putty-suite for your Windows machine from here:
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
At least the putty.exe
2. Define one free Port on your linux machine:
<jmx-remote-port>
Example:
jmx-remote-port = 15666
3. Add arguments to java process on the linux machine
This must be done exactly like this. If it's done as below, it works for Linux machines behind firewalls (it works because of the -Djava.rmi.server.hostname=localhost argument).
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=<jmx-remote-port>
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=false
-Djava.rmi.server.hostname=localhost
Example:
java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=15666 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=localhost ch.sushicutta.jmxremote.Main
4. Get Process-Id of your Java Process
ps -ef | grep <java-processname>
result ---> <process-id>
Example:
ps -ef | grep ch.sushicutta.jmxremote.Main
result ---> 24321
5. Find arbitrary Port for RMIServer stubs download
The java process opens a new TCP Port on the linux machine, where the RMI Server-Stubs will be available for download. This port also needs to be available via SSH Tunnel to get a connection to the Java Virtual Machine.
This port can be found with netstat -lp; lsof -i also gives hints about which ports have been opened by the java process.
NOTE: This port changes every time the java process is started.
netstat -lp | grep <process-id>
tcp 0 0 *:<jmx-remote-port> *:* LISTEN 24321/java
tcp 0 0 *:<rmi-server-port> *:* LISTEN 24321/java
result ---> <rmi-server-port>
Example:
netstat -lp | grep 24321
tcp 0 0 *:15666 *:* LISTEN 24321/java
tcp 0 0 *:37123 *:* LISTEN 24321/java
result ---> 37123
6. Enable two SSH-Tunnels from your Windows machine with putty
Source port: <jmx-remote-port>
Destination: localhost:<jmx-remote-port>
[x] Local
[x] Auto
Source port: <rmi-server-port>
Destination: localhost:<rmi-server-port>
[x] Local
[x] Auto
Example:
Source port: 15666
Destination: localhost:15666
[x] Local
[x] Auto
Source port: 37123
Destination: localhost:37123
[x] Local
[x] Auto
7. Log in to your Linux machine with Putty with these SSH tunnels enabled.
Leave the Putty session open.
Once you are logged in, Putty will tunnel all these TCP connections to the Linux machine over SSH port 22.
JMX-Port:
Windows machine: localhost:15666 >>> SSH >>> linux machine: localhost:15666
RMIServer-Stub-Port:
Windows Machine: localhost:37123 >>> SSH >>> linux machine: localhost:37123
8. Start JConsole / Java VisualVM / Java Mission Control to connect to your Java Process using the following URL
This works because JConsole / Java VisualVM / Java Mission Control thinks it is connecting to a port on your local Windows machine, but Putty sends the whole payload on port 15666 to your Linux machine.
On the Linux machine the java process answers first and sends back the RMI server port, in this example 37123.
JConsole / Java VisualVM / Java Mission Control then thinks it is connecting to localhost:37123, and Putty forwards the whole payload to the Linux machine.
The java process answers and the connection is open.
[x] Remote Process:
service:jmx:rmi:///jndi/rmi://localhost:<jmx-remote-port>/jmxrmi
Example:
[x] Remote Process:
service:jmx:rmi:///jndi/rmi://localhost:15666/jmxrmi
9. ENJOY #8-]
Adding -Djava.rmi.server.hostname='<host ip>' resolved this problem for me.
Tried with Java 8 and newer versions
This solution works well also with firewalls
1. Add this to your java startup script on remote-host:
-Dcom.sun.management.jmxremote.port=1616
-Dcom.sun.management.jmxremote.rmi.port=1616
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=false
-Djava.rmi.server.hostname=localhost
2. Execute this on your computer.
Windows users:
putty.exe -ssh user@remote-host -L 1616:remote-host:1616
Linux and Mac Users:
ssh user@remote-host -L 1616:remote-host:1616
3. Start jconsole on your computer
jconsole localhost:1616
4. Have fun!
P.S.: in step 2, using ssh with -L you specify that port 1616 on the local (client) host must be forwarded to the remote side. This is an SSH tunnel and helps to avoid firewalls and various network problems.
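In general the option follows this pattern (the destination is resolved from the remote host's point of view):
ssh -L <local_port>:<destination_host>:<destination_port> user@remote-host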
After putting my Google-fu to the test for the last couple of days, I was finally able to get this to work after compiling answers from Stack Overflow and this page http://help.boomi.com/atomsphere/GUID-F787998C-53C8-4662-AA06-8B1D32F9D55B.html.
Reposting from the Dell Boomi page:
To Enable Remote JMX on an Atom
If you want to monitor the status of an Atom, you need to turn on Remote JMX (Java Management Extensions) for the Atom.
Use a text editor to open the <atom_installation_directory>\bin\atom.vmoptions file.
Add the following lines to the file:
-Dcom.sun.management.jmxremote.port=5002
-Dcom.sun.management.jmxremote.rmi.port=5002
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
The one line that I haven't seen any Stack Overflow answer cover is
-Dcom.sun.management.jmxremote.rmi.port=5002
In my case, I was attempting to retrieve Kafka metrics, so I simply changed the above option to match the -Dcom.sun.management.jmxremote.port value. So, without authentication of any kind, the bare minimum config should look like this:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=(jmx remote port)
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.rmi.port=(jmx remote port)
-Djava.rmi.server.hostname=(CNAME|IP Address)
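Put together as a launch command, it would look something like this (port, address and jar name are placeholders):
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.port=5002 \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Dcom.sun.management.jmxremote.rmi.port=5002 \
  -Djava.rmi.server.hostname=203.0.113.10 \
  -jar your-app.jar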
You are probably experiencing an issue with a firewall. The 'problem' is that the port you specify is not the only port used; it uses one or maybe even two more ports for RMI, and those are probably blocked by a firewall.
One of the extra ports will not be known up front if you use the default RMI configuration, so you have to open up a big range of ports, which might not amuse the server administrator.
There is a solution that does not require opening up a lot of ports, however. I've gotten it to work using the combined source snippets and tips from:
http://forums.sun.com/thread.jspa?threadID=5267091 - link doesn't work anymore
http://blogs.oracle.com/jmxetc/entry/connecting_through_firewall_using_jmx
http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
It's even possible to setup an ssh tunnel and still get it to work :-)
Are you running on Linux? Perhaps the management agent is binding to localhost:
http://java.sun.com/j2se/1.5.0/docs/guide/management/faq.html#linux1
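A quick way to check on the server is to look at which address the JMX port is actually bound to (replace 9010 with your port): 127.0.0.1:9010 means localhost only, while 0.0.0.0:9010 or :::9010 means all interfaces.
netstat -ntlp | grep 9010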
Sushicutta's steps 4-7 can be skipped by adding the following line to step 3:
-Dcom.sun.management.jmxremote.rmi.port=<same port as jmx-remote-port>
e.g.
Add to start up parameters:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=12345
-Dcom.sun.management.jmxremote.rmi.port=12345
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.local.only=false
-Djava.rmi.server.hostname=localhost
For the port forwarding, connect using:
ssh -L 12345:localhost:12345 <username>@<host>
If your host is a stepping stone, simply chain the port forward by running the following on the stepping stone after the above:
ssh -L 12345:localhost:12345 <username>@<host2>
Mind that java.rmi.server.hostname=localhost is needed to make sure jmxremote tells the RMI connection to use the tunnel. Otherwise it might try to connect directly and hit the firewall.
PROTIP:
The RMI ports are opened at arbitrary port numbers. If you have a firewall and don't want to open ports 1024-65535 (or use a VPN), then you need to do the following.
You need to fix (as in, have a known number for) the RMI registry and JMX/RMI server ports. You do this by putting a jar file (catalina-jmx-remote.jar, it's in the extras) in the lib dir and configuring a special listener under Server:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener"
rmiRegistryPortPlatform="10001" rmiServerPortPlatform="10002" />
(And of course the usual flags for activating JMX:
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Djava.rmi.server.hostname=<HOSTNAME> \
See: JMX Remote Lifecycle Listener at http://tomcat.apache.org/tomcat-6.0-doc/config/listeners.html
Then you can connect using this horrific URL:
service:jmx:rmi://<hostname>:10002/jndi/rmi://<hostname>:10001/jmxrmi
Check whether your server is behind a firewall. JMX is based on RMI, which opens two ports when it starts. One is the registry port, 1099 by default, which can be specified with the com.sun.management.jmxremote.port option. The other is for data communication and is random, which is what causes problems. The good news is that, since JDK 6, this random port can be specified with the com.sun.management.jmxremote.rmi.port option, as below.
export CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8991 -Dcom.sun.management.jmxremote.rmi.port=8991 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
Getting JMX through the firewall is really hard. The problem is that standard RMI uses a second, randomly assigned port (besides the RMI registry).
We have three solutions that work, but each case needs a different one:
JMX over SSH Tunnel with Socks proxy, uses standard RMI with SSH magic
http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
JMXMP (an alternative to standard RMI) uses only one fixed port, but needs a special jar on both server and client
http://meteatamel.wordpress.com/2012/02/13/jmx-rmi-vs-jmxmp/
Start the JMX server from code; there it is possible to use standard RMI with a fixed second port:
https://issues.apache.org/bugzilla/show_bug.cgi?id=39055
When testing/debugging/diagnosing remote JMX problems, first always try to connect on the same host that contains the MBeanServer (i.e. localhost), to rule out network and other non-JMX specific problems.
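For a local sanity check, either attach by PID on the same host or use the loopback address (the port here is a placeholder):
jconsole <pid-of-java-process>
jconsole localhost:9010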
There are already some great answers here, but there is a slightly simpler approach that I think is worth sharing.
sushicutta's approach is good, but is very manual as you have to get the RMI port every time. Thankfully, we can work around that by using a SOCKS proxy rather than explicitly opening the port tunnels. The downside of this approach is that the JMX app you run on your machine needs to be configurable to use a proxy. For most tools you can do this by adding Java properties, but some apps don't support it.
Steps:
Add the JMX options to the startup script for your remote Java service:
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=8090
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
Set up a SOCKS proxy connection to your remote machine:
ssh -D 9696 user@remotemachine.com
Configure your local Java monitoring app to use the SOCKS proxy (localhost:9696). Note: You can sometimes do this from the command line, i.e.:
jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=9696
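VisualVM should accept the same pass-through flags, e.g.:
jvisualvm -J-DsocksProxyHost=localhost -J-DsocksProxyPort=9696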
The following worked for me (though I think port 2101 did not really contribute to this):
-Dcom.sun.management.jmxremote.port=2100
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.rmi.port=2101
-Djava.rmi.server.hostname=<IP_ADDRESS>OR<HOSTNAME>
I am connecting from a remote machine to a server which has Docker running, and the process is inside the container. Also, I stopped firewalld, but I don't think that was the issue, as I could telnet to 2100 even with the firewall enabled.
Hope it helps.
I am running JConsole/JVisualVM on Windows, connecting to Tomcat running on Linux Red Hat ES3.
Disabling packet filtering using the following command did the trick for me:
/usr/sbin/iptables -I INPUT -s jconsole-host -p tcp --destination-port jmxremote-port -j ACCEPT
where jconsole-host is either the hostname or the address of the host on which JConsole runs, and jmxremote-port is the port number set for com.sun.management.jmxremote.port for remote management.
I'm using boot2docker to run docker containers with Tomcat inside, and I had the same problem. The solution was to:
Add -Djava.rmi.server.hostname=192.168.59.103
Use the same JMX port in host and docker container, for instance: docker run ... -p 9999:9999 .... Using different ports does not work.
You also need to make sure that your machine name resolves to the IP that JMX is binding to: NOT localhost or 127.0.0.1. For me, it helped to put an entry into /etc/hosts that explicitly defines this.
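Putting the two points above together, a run command along these lines should work (the image name is a placeholder, and it assumes the image passes CATALINA_OPTS through to the JVM, as the standard Tomcat start scripts do):
docker run -p 9999:9999 \
  -e CATALINA_OPTS="-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=9999 \
      -Dcom.sun.management.jmxremote.rmi.port=9999 \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Djava.rmi.server.hostname=192.168.59.103" \
  my-tomcat-image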
Getting JMX through the firewall isn't that hard at all. There is one small catch: you have to forward both your configured JMX port (e.g. 9010) and one of the dynamic ports it listens on (on my machine it was > 30000).
These are the steps that worked for me (debian behind firewall on the server side, reached over VPN from my local Mac):
1. Check the server IP:
hostname -i
2. Use these JVM params:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=[jmx port]
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=[server ip from step 1]
3. Run the application.
4. Find the PID of the running java process.
5. Check all ports used by JMX/RMI:
netstat -lp | grep [pid from step 4]
6. Open all ports from step 5 on the firewall (see the sketch below).
Voila.
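For step 6, something along these lines on the server should do it (use your JMX port and the RMI port found in step 5):
iptables -I INPUT -p tcp --dport [jmx port] -j ACCEPT
iptables -I INPUT -p tcp --dport [rmi port from step 5] -j ACCEPT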
In order to make a contribution, this is what I did on CentOS 6.4 for Tomcat 6.
Shut down the iptables service:
service iptables stop
Add the following line to tomcat6.conf
CATALINA_OPTS="${CATALINA_OPTS} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8085 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=[host_ip]"
This way I was able to connect from another PC using JConsole.
I'm trying to use JMC to run Flight Recorder (JFR) to profile NiFi on a remote server that doesn't offer a graphical environment in which to run JMC.
Based on the other answers given here, and after much trial and error, here is what I'm supplying to the JVM (conf/bootstrap.conf) when I launch NiFi:
java.arg.90=-Dcom.sun.management.jmxremote=true
java.arg.91=-Dcom.sun.management.jmxremote.port=9098
java.arg.92=-Dcom.sun.management.jmxremote.rmi.port=9098
java.arg.93=-Dcom.sun.management.jmxremote.authenticate=false
java.arg.94=-Dcom.sun.management.jmxremote.ssl=false
java.arg.95=-Dcom.sun.management.jmxremote.local.only=false
java.arg.96=-Djava.rmi.server.hostname=10.10.10.92 (the IP address of my server running NiFi)
I did put this in /etc/hosts, though I doubt it's needed:
10.10.10.92 localhost
Then, upon launching JMC, I create a remote connection with these properties:
Host: 10.10.10.92
Port: 9098
User: (nothing)
Password: (ibid)
Incidentally, if I click the Custom JMX service URL, I see:
service:jmx:rmi:///jndi/rmi://10.10.10.92:9098/jmxrmi
This finally did it for me.