I'm trying to profile a remote Tomcat and I'm able to connect to it with JConsole, but fail with VisualVM. I set up a proxy with ssh:
ssh -luser -D 9898 -Nf example.com
And with these configurations in tomcat7.conf:
-Dcom.sun.management.jmxremote=true \
-Dcom.sun.management.jmxremote.port=3333 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Djava.rmi.server.hostname=example.com
With these options JConsole works perfectly, but VisualVM doesn't. Is there anything I'm missing?
A year late, but in case anybody else might find it useful. This was not a problem with Java 7, but I started seeing this with Java 8. Ever since I've been using the below two commands to successfully connect to both jconsole and jvisualvm:
jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=<SOCKS_PORT> service:jmx:rmi:///jndi/rmi://<REMOTE_HOST>:<JMX_PORT>/jmxrmi
jvisualvm -J-DsocksProxyHost=localhost -J-DsocksProxyPort=<SOCKS_PORT> --openjmx <REMOTE_HOST>:<JMX_PORT>
In your case SOCKS_PORT would be 9898 and JMX_PORT would be 3333.
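Putting the two together for the setup in the question (SOCKS proxy on 9898, JMX on 3333, host example.com, all taken from above), the full sequence would look roughly like this:

```shell
# Open a SOCKS proxy on local port 9898 through example.com (same as the question)
ssh -luser -D 9898 -Nf example.com

# Point jvisualvm at the remote JMX endpoint through that proxy
jvisualvm -J-DsocksProxyHost=localhost -J-DsocksProxyPort=9898 \
    --openjmx example.com:3333
```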
I am trying to set up a Sawtooth network as in the following tutorial.
I use the following docker-compose.yaml file, as instructed in the tutorial, to create a Sawtooth network of 5 nodes using the PBFT consensus engine.
The problem is that once I try to check whether peering has occurred on the network by submitting a peers query to the REST API on the first node from the shell container, I get a connection refused answer:
curl: (7) Failed to connect to sawtooth-rest-api-default-0 port 8008: Connection refused
Connectivity among the containers seems to be working fine (I have checked with ping from inside the containers).
I suspect that the problem stems from the following line of the docker-compose.yaml file:
sawtooth-validator -vv \
--endpoint tcp://validator-0:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000
and more specifically the --bind option. I noticed that eth0 is not resolved properly to the IP of the container network, but instead to the loopback:
[terminal output for validator 0]
Do you believe that this could be the problem or is there something else I might have overlooked?
Thank you
Looks like the moment I post something here the answer magically reveals itself.
The backslash characters were not interpreted correctly, so the --bind option was not taken into account and the bind address defaulted to the loopback interface.
What I did to fix it is either put the whole command on a single line or use a double backslash.
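As an illustration (a sketch, not the exact file from the tutorial), the single-line fix amounts to collapsing the command in docker-compose.yaml so no backslash continuations are involved:

```yaml
# Before: a multi-line command whose backslashes were not interpreted,
# so effectively only "sawtooth-validator -vv" ran and --bind fell back
# to the loopback. After: the same options on one line.
command: sawtooth-validator -vv --endpoint tcp://validator-0:8800 --bind component:tcp://eth0:4004 --bind consensus:tcp://eth0:5050 --bind network:tcp://eth0:8800 --scheduler parallel --peering static --maximum-peer-connectivity 10000
```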
I am building an SCTP-supporting application with Erlang and I stumbled upon some problems likely related to my machine (I tried the same code on another machine and it works just fine). I am using Ubuntu 22.04. When I try gen_sctp:open(...), it returns {error,eprotonosupport}, which after some research turns out to mean "The protocol type or the specified protocol is not supported within this domain."
I tried:
sudo apt-get install libsctp-dev lksctp-tools
sctp_darn -H 0 -P 2500 -l
sctp_darn -H 0 -P 2600 -h 127.0.0.1 -p 2500 -s
And it seems to work just fine.
After:
lynis audit system | grep sctp
It returns:
* Determine if protocol 'sctp' is really needed on this system [NETW-3200]
So it seems to be enabled. What am I missing? (port is 3868)
Edit:
The port is open. I tried with ufw and iptables, both for all protocols and solely for SCTP. It didn't work.
Edit 2:
So after setting up two VMs, Ubuntu 20.04 and Ubuntu 22.04, everything seems to work as expected. I guess I have messed something up with my system.
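For anyone hitting the same eprotonosupport on a machine where the userland tools work: it can help to confirm that the kernel side of SCTP is actually available. A quick check with standard Linux commands (exact output varies by distribution):

```shell
# Is the sctp protocol registered with the kernel?
grep -i sctp /proc/net/protocols

# Is the sctp module loaded? If not, load it.
lsmod | grep sctp
sudo modprobe sctp
```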
I installed Ubuntu on an older laptop. Now there is Docker with Portainer running, and I want to access Portainer from my main PC in the same network. When I try to connect to Portainer on the laptop where it is running (not via the localhost address), it works fine. But when I try to connect from my PC, I get a timeout. Windows diagnostics says: "resource is online but isn't responding to connection attempts". How can I open Portainer to my local network? Or is this a problem with Ubuntu?
First, check that you have an OpenSSH server running for SSH access. Disable the firewall in the terminal:
sudo ufw disable
Check whether your network card is running under the name eth0:
ifconfig
If not, change it by following the steps below.
Use netplan, which is the default these days; the file is /etc/netplan/00-installer-config.yaml. But before editing it you need to get the serial/MAC address.
Find the target devices mac/hw address using the lshw command:
lshw -C network
You'll see some output which looks like:
root@ys:/etc# lshw -C network
*-network
description: Ethernet interface
physical id: 2
logical name: eth0
serial: dc:a6:32:e8:23:19
size: 1Gbit/s
capacity: 1Gbit/s
capabilities: ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bcmgenet driverversion=5.8.0-1015-raspi duplex=full ip=192.168.0.112 link=yes multicast=yes port=MII speed=1Gbit/s
So then you take the serial
dc:a6:32:e8:23:19
Note the set-name option.
This works for the wifi section as well.
If you are using cable, you can delete everything and add the example below, changing only the serial ("MAC") to yours: sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      match:
        macaddress: <YOUR MAC ID HERE>
      set-name: eth0
Then, to test this config, run:
netplan try
When you're happy with it:
netplan apply
Reboot your Ubuntu machine.
After the restart, stop the Portainer container:
sudo docker stop portainer
Remove the Portainer container:
sudo docker rm portainer
Now run it again on the latest version:
docker run -d -p 8000:8000 -p 9000:9000 \
--name=portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:2.13.1
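Rather than disabling the firewall entirely, an alternative sketch is to open just the published ports and confirm something is listening on them (ports taken from the docker run command above):

```shell
# Allow Portainer's published ports through ufw instead of disabling it
sudo ufw allow 8000/tcp
sudo ufw allow 9000/tcp

# Confirm the host is actually listening on those ports
sudo ss -tlnp | grep -E ':8000|:9000'
```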
I have newly installed Jenkins on my Amazon-AWS server. I changed the Jenkins port to 8081 and started it. I verified that the Jenkins is running on port 8081.
[ec2-user@ip-172-31-28-247 ~]$ ps -eaf | grep 8081
jenkins 1370 1 0 03:42 ? 00:00:11 /etc/alternatives/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --daemon --httpPort=8081 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
ec2-user 1611 1585 0 04:34 pts/1 00:00:00 grep --color=auto 8081
[ec2-user@ip-172-31-28-247 ~]$
However when I try to access the Jenkins using this url:
http://ec2-54-214-126-0.us-west-2.compute.amazonaws.com:8081
it says that
This site can’t be reached ec2-54-214-126-0.us-west-2.compute.amazonaws.com took too long to respond.
On the same server I am running Tomcat on port 8080, which I can access, and it shows the Tomcat homepage.
http://ec2-54-214-126-0.us-west-2.compute.amazonaws.com:8080/
I am new to Jenkins.
Check the guide "Set Up a Jenkins Build Server -- Quickly create a build server for continuous integration (CI) on AWS"
It has a section about a security group for the Amazon EC2 instance, which acts as a
firewall that controls the traffic allowed to reach one or more EC2 instances.
Maybe 8081 is not part of that security group.
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
In the left-hand navigation bar, choose Security Groups.
Click Add Rule, and then choose Custom TCP Rule from the Type list.
Under Port Range enter 8081.
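The same rule can also be added from the AWS CLI; the security group ID below is a placeholder you'd replace with the group attached to your instance:

```shell
# Open TCP 8081 to the world on a (placeholder) security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8081 --cidr 0.0.0.0/0
```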
So there are a lot of posts around this subject, but none of which seems to help.
I have an application running on a wildfly server inside a docker container.
And for some reason I cannot connect my remote debugger to it.
So, it is a wildfly 11 server that has been started with this command:
/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -c standalone.xml --debug 9999;
And in my standalone.xml I have this:
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
The console output seems promising:
Listening for transport dt_socket at address: 9999
I can even access the admin console with the credentials admin:admin on localhost:9990/console
However, IntelliJ refuses to connect... I've created a remote JBoss Server configuration that, in the Server tab, points to localhost with management port 9990.
And in the startup/connection tab I've entered 9999 as remote socket port.
The docker image has exposed the ports 9999 and 9990, and the docker-compose file binds those ports as is.
Even with all of this IntelliJ throws this message when trying to connect:
Error running 'remote':
Unable to open debugger port (localhost:9999): java.io.IOException "handshake failed - connection prematurally closed"
followed by
Error running 'remote':
Unable to connect to the localhost:9990, reason:
com.intellij.javaee.process.common.WrappedException: java.io.IOException: java.net.ConnectException: WFLYPRT0053: Could not connect to remote+http://localhost:9990. The connection failed
I'm completely lost as to what the issue might be...
An interesting addition: after IntelliJ fails, if I invalidate caches and restart, then WildFly reprints the message saying that it is listening on port 9999.
In case someone else in the future comes to this thread with the same issue, I found this solution here:
https://github.com/jboss-dockerfiles/wildfly/issues/91#issuecomment-450192272
Basically, apart from the --debug parameter, you also need to pass *:8787
Dockerfile:
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0", "--debug", "*:8787"]
docker-compose:
ports:
- "8080:8080"
- "8787:8787"
- "9990:9990"
command: /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 --debug *:8787
I have not tested the docker-compose solution, as my solution was on dockerfile.
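One quick way to check that the JVM debug agent is actually reachable on the mapped port, before involving IntelliJ at all, is to attach with jdb from the JDK:

```shell
# If the handshake succeeds you get a jdb prompt; quit with "exit".
# 8787 is the debug port from the Dockerfile/docker-compose above.
jdb -attach localhost:8787
```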
Not sure if this can be seen as an answer since it goes around the problem.
But the way I solved this was by adding a "pure" remote configuration in IntelliJ instead of a JBoss remote one. This means that it won't automagically deploy, but I'm fine with that.