AppDynamics monitoring with AMQ 7.0.1 - jmx

I am using the ActiveMQ extension for AppDynamics. It is a good start. With JMXRemote enabled (in artemis.profile) it works fine, but I want to connect from localhost. JMX is enabled by default for localhost in AMQ; the AMQ management console uses JMX internally and it works without JMXRemote enabled. What service URL does Jolokia use internally to connect over JMX from localhost? I have tried the following URL:
serviceUrl: "service:jmx:rmi:///jndi/rmi://:1099/jmxrmi"

The first step is to add a username and password in the etc/users.properties file. For most purposes, it is ok to just
use the default settings provided out of the box. For this, just uncomment the following line:
admin=admin,admin,manager,viewer,Operator, Maintainer, Deployer, Auditor, Administrator, SuperUser
Then, you must bypass credential checks on BrokerViewMBean by adding it to the whitelist ACL configuration. You can do so by replacing this line:
org.apache.activemq.Broker;getBrokerVersion=bypass
with this:
org.apache.activemq.Broker=bypass
In addition to being the correct approach, this also lets you change several configuration options (e.g. port, listen address) simply by editing the file org.apache.karaf.management.cfg in the broker's etc directory.
Keep in mind that JMX access goes through a different JMX connector root in this case: it uses karaf-root instead of the jmxrmi root used by the older method. It also uses port 1099 by default, instead of 1616.
Therefore, the URI should be:
service:jmx:rmi:///jndi/rmi://<host>:<port>/karaf-root
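For reference, the JMX-related properties in etc/org.apache.karaf.management.cfg typically look something like the sketch below. Treat the exact property names and defaults as assumptions to verify against your broker version rather than a definitive listing:
# etc/org.apache.karaf.management.cfg (sketch; verify names against your version)
rmiRegistryHost = 0.0.0.0
rmiRegistryPort = 1099
rmiServerHost = 0.0.0.0
rmiServerPort = 44444
serviceUrl = service:jmx:rmi://0.0.0.0:${rmiServerPort}/jndi/rmi://0.0.0.0:${rmiRegistryPort}/karaf-root
Changing rmiRegistryPort (and the host values) in this file is what moves the connector away from the defaults mentioned above.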

Related

Cannot enable basic auth on Windows-Exporter to secure node between Windows and Prometheus

As a test environment for monitoring the status of Windows servers (CPU, disk usage, memory, network, etc.), I have set up two test nodes with windows_exporter configured on custom port 15000.
Next, I created proper jobs for each separate Windows instance and built a dashboard in Grafana.
The problem is that I want to secure the nodes so that only the Prometheus server can access the exporter output, while all other computers on the same network are denied access to the node's metrics endpoint.
I have tried installing windows_exporter with the setting:
msiexec /i windows_exporter-0.19.0-amd64.msi LISTEN_PORT="15000" EXTRA_FLAGS="--web.config.file=C:\Configuration\web.yml"
I also tried various combinations of " and ' quoting on the command line for the EXTRA_FLAGS parameter, yet they seem to be ignored. The only parameter that works is the change of listen port.
I have followed instructions provided at https://prometheus.io/docs/guides/basic-auth/ to set up basic auth.
Web.yml looks like this:
basic_auth:
  username: 'scrapper'
  password: '$2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC'
However, after installation with msiexec, the Windows service entry for windows_exporter has no web.config.file entry:
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --telemetry.addr 0.0.0.0:15000
I have tried to edit the service entry with the sc command, but it broke the node completely, forcing me to roll back to unprotected access to the node.
Does basic auth work on windows_exporter the same way as on node_exporter for Linux?
Or is there another way to secure access to the node's exposed data without having to install IIS?
I have never worked with the exporter on Windows, but on Linux your web.yml configuration file should be as follows:
basic_auth_users:
  <string>: <secret>
like this:
basic_auth_users:
  scrapper: $2a$14$AWpxyT1KcRPSE07IfmqTqOZznpMfGwxHP8uPVQV8G0qdjggND3hgC
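On the Prometheus side, the scrape job must then send the matching plain-text password; only the bcrypt hash goes into web.yml. A minimal sketch, where the host names and the plain-text password are placeholders:
scrape_configs:
  - job_name: 'windows'
    basic_auth:
      username: scrapper
      password: <plain-text password whose bcrypt hash is in web.yml>
    static_configs:
      - targets: ['windows-node-1:15000', 'windows-node-2:15000']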

Localhost write permission on mosquitto topics

I'm using mosquitto for some IoT projects. I can use ACL files to easily add authentication based on write and read access. But is there any way to enable readwrite for a localhost connection and read-only for connections from outside (public IP)?
I don't see any reason to open write access to outside connections even with a password, and passwordless access would be easier for local services.
Not with the built-in username/password + ACL scheme.
Mosquitto has a plugin interface for authentication/authorisation so you may be able to use that to build what you want.
The other option is to run two brokers: set one up with read/write access, listening only on localhost, then bridge it to the other broker, where the anonymous user is set up as read-only and a dedicated user exists for the bridge to use.
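A rough sketch of that two-broker layout, using made-up ports, paths, and a hypothetical bridge-user account (the listener, ACL, and bridge settings are standard mosquitto.conf options, but check them against your mosquitto version):
# internal broker (sketch): local clients get full access, everything is bridged out
listener 1884 127.0.0.1
allow_anonymous true
connection to-public
address 127.0.0.1:1883
remote_username bridge-user
remote_password secret
topic # out 0
# public broker (sketch): anonymous clients are read-only, bridge-user may publish
listener 1883
allow_anonymous true
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/public.acl
# public.acl (sketch)
topic read #
user bridge-user
topic readwrite #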

Apache Artemis queue monitoring with Zabbix

I'd like to keep track of data that might be stuck in Apache Artemis queues and I'd like to leverage its JMX management abilities together with our Zabbix instance.
What steps do I need to take in order to successfully connect Zabbix to Artemis via JMX? The ones mentioned in https://activemq.apache.org/artemis/docs/latest/management.html are not quite clear to me.
I had to disable the internal connector and go the other way around by adding this to the artemis.profile file:
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.authenticate=false"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.ssl=false"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.port=1099"
JAVA_ARGS="$JAVA_ARGS -Dcom.sun.management.jmxremote.rmi.port=1098"
JAVA_ARGS="$JAVA_ARGS -Djava.rmi.server.hostname=edimq-broker-master-az1.dc01.clouedi.local"
However, this way it's anything but secure, I know.
As the documentation states, you need to add this to your management.xml:
<connector connector-port="1099"/>
This will expose a JMX connector on localhost, so if you want to be able to access it remotely from another machine on your network (e.g. your Zabbix instance) then you should do something like:
<connector connector-port="1099" connector-host="myhost" />
Also, if you have multiple IP addresses on the machine hosting the broker you'll want to set this system property in the JAVA_ARGS variable in artemis.profile:
-Djava.rmi.server.hostname=myhost
Then point your Zabbix instance at the broker using a url like:
service:jmx:rmi:///jndi/rmi://myhost:1099/jmxrmi
You can see this in action by running the jmx example shipped with Artemis in the examples/features/standard/ directory. Just navigate into that directory and run mvn verify. Running the example will create a broker instance, start the broker instance, and run the client, all automatically. After the example runs you can go into the target/server0 directory and look at all the configuration files to compare them to your own. You can also start the broker independently of the example if you wish (by running ./artemis run from the target/server0/bin directory). Once the broker is running you should be able to connect to it with JConsole without a problem, using a JMX URL like this:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
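If you want to sanity-check what Zabbix will be able to poll, a small standalone Java sketch like the one below connects over the same URL and lists the Artemis MBeans; the queue MBeans it prints (whose naming differs between Artemis versions) expose attributes such as MessageCount that you can then reference in your Zabbix JMX items. The URL host and port are taken from the answer above and are assumptions for your environment.
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ArtemisJmxCheck {
    public static void main(String[] args) throws Exception {
        // Same JMX service URL that Zabbix would use.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // List every MBean registered under the default Artemis JMX domain.
            Set<ObjectName> names = mbs.queryNames(new ObjectName("org.apache.activemq.artemis:*"), null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        }
    }
}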

WebSocket connection failure. Due to security constraints in your web browser

Today I downloaded neo4j-community-3.2.0 on Windows. When I start the server, I get a problem in the browser. I ran into the same problem in neo4j-community-3.1.2 and solved it by ticking the "Do not use Bolt" option in the settings. But in neo4j-community-3.2.0 I can't see the "Do not use Bolt" option, and I don't know what to do.
N/A: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
This happens because the browser is trying (under the hood) to also access the bolt port, which uses an unsigned certificate.
You probably allowed the browser to access the SSL port 7474 by accepting the unsigned certificate as an exception in your browser (and if you didn't, you should, in order to make it work).
The url was:
https://[neo4j_host]:7474
Do the same for the bolt certificate, allow it as an exception for url:
https://[neo4j_host]:7687
I ran into the same problem trying to use Neo4j Community Edition on an AWS Ubuntu 16.04 instance. The key thing that solved it was to open port 7687 (the bolt port) in the AWS security group settings.
Found this based on https://stackoverflow.com/a/45234105/1529646
Thus, full answer is:
Make sure to configure Neo4j correctly, i.e. uncomment the line dbms.connectors.default_listen_address=0.0.0.0 AND the line dbms.connector.bolt.listen_address=:7687
Open ports 7474 AND 7687 in the AWS security group settings.
In the lower left corner of the browser, open the gear (settings) pane and select "Do not use Bolt".
Open your ${NEO4J_HOME}/conf/neo4j.conf file and edit the Bolt settings. It is just a matter of uncommenting this line: dbms.connector.bolt.address=0.0.0.0:7687
Change the version of Neo4j
Check your JDK version, use JDK1.8
Adding another option that worked for me: if your Bolt tls_level is set to REQUIRED and you are not using an SSL certificate, you need to change it to OPTIONAL to get this working.
If you are using Neo4J Community Edition (ver 3.5.1 - in my case) from AWS Marketplace, you need to change the configuration in:
/etc/neo4j/pre-neo4j.sh
Change this line:
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=REQUIRED}"
to
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=OPTIONAL}"
You can find more about Neo4j connector configuration options here. According to the docs, bolt.tls_level should default to OPTIONAL. I'm not really sure what exactly changed it to REQUIRED in my case, or whether it came that way from the AWS Marketplace.
Assuming you have valid certs and placed them under the correct certificates directory:
dbms.ssl.policy.bolt.client_auth=NONE
Version 4.0. Took it from this article.
I shared my full ssl config on this other answer.
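For context, a Neo4j 4.x Bolt SSL policy in neo4j.conf typically looks roughly like the sketch below; the directory and file names are placeholders, and this is not the configuration from the linked answer:
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
dbms.ssl.policy.bolt.private_key=private.key
dbms.ssl.policy.bolt.public_certificate=public.crt
dbms.ssl.policy.bolt.client_auth=NONE
dbms.connector.bolt.tls_level=OPTIONAL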
I had the same error. I'm new to Neo4j, so take this with a grain of salt: my solution didn't match the ideas above, but thanks, as they did lead me to the right "water".
I went into the conf file and noticed that the same port number appeared in several places. (Previously, the Neo4j Desktop had constantly been telling me it needed to update the port numbers; I never checked to verify, but they would be #, #+1 and #+2. That didn't work, yet it kept happening again and again.) Now, after checking the conf file myself, I noticed that the number was the same for all three port requirements for Bolt. I tried changing that and it didn't work either, but maybe it was important in what did:
In the folder where the specific database is housed, named "..neo4jdatabases/[GUID Value]", there were two directories titled "/installation-3.4.0" and "...1". I removed the ".0" one, restarted things, and IT WORKED.
So, either there should NOT be two versions under the same database collection, OR that is true AND you need the three ports to be the same.
A final note for any Neo4j experts who actually know what they're doing: I have three databases running, two without issue. This occurred AFTER I was messing around trying to see how PowerShell might be useful. I'm not sure if that is related, but the other databases have worked fine. This db is the original playground/sandbox I'd had since the beginning, and I'm not 100% sure whether I made the version update before or after creating the other two databases. HTH.
I'm using a Windows trial version on a Windows 10 machine; the current Neo4j version is 3.4.1.
I do love what I see so far with Neo4j, BTW!
Please specify the correct Bolt port in the Connect URL textbox. If you are using the service port, then specify the service port in place of the Bolt port.
I finally resolved it by replacing the Bolt port with the service port inside k8s.
user: neo4j
password: neo4j
I resolved this error by replacing port 7687 with node port 30033 in Neo4j, and then it worked fine.
I was facing the same issue with Neo4j version 4 installed on an Ubuntu 18 EC2 instance. The workaround that did the trick for me was to replace the 0.0.0.0 entries in /etc/neo4j/neo4j.conf with the actual private IP of my instance.
Following are the lines where the replace happened:
dbms.default_listen_address=172.X.X.232
dbms.connector.bolt.address=172.X.X.232:7687
After restarting the DB, the Connect URL used when accessing from the browser should also use the private IP instead of localhost.

Rails app call APIs using proxy

I have subscribed to an API service which provides access based on static IP (For both Live and Testing).
Since my development area's ISP doesn't provide a static IP, I have enabled API access for my staging machine's IP, which is static. I installed Squid and set up a proxy server on my staging server so that I can use it as a proxy and make calls to the API while I develop.
I am using a Mac for development, and the Networking > Proxy settings don't apply system-wide (e.g. Terminal). Because of this, I was using trial versions of MacProxy and Proxifier (proxy clients), and everything was working fine until the trials expired. Are there any free alternatives to these for Mac?
I tried to set up a proxy by creating an SSH SOCKS proxy and setting http_proxy="xxx" in the terminal. When I check the terminal's IP after setting it, using curl ipecho.net/plain ; echo, it shows the proper IP, but when I run the local Rails development server and it tries to access the API, the call is rejected with an invalid IP error (it shows the non-proxied IP).
A free alternative that might solve your problem is a project on GitHub:
sshuttle (read me)
It forwards TCP and DNS requests to a remote SSH server.
The most basic use of sshuttle looks like this:
./sshuttle -r username@sshserver 0.0.0.0/0 -vv
To tunnel all traffic you might do:
./sshuttle --dns -vr ssh_server 0/0
There are also helper functions available here, which can simplify some of the commands.
The system-level proxy settings aren't used by Ruby applications. Typically this is a code-level option passed to the library you are using to make connections.
If you want Savon to use a proxy then you need to pass this to Savon when you create the client:
client = Savon.client(proxy: "http://example.org", ...)
If this call is being made inside a gem, then unless that gem already provides that option, you would need to fork it to add the option.
The gem you mention seems to already implement this: its configuration class has a proxy attribute that appears to be passed through to Savon.
