I'm trying to deploy GridGain Web Console 2020.03.01 on RHEL7 x86_64 with Docker, following the documentation here.
However, I get a 404 Not Found error when accessing the http://localhost:3000/swagger-ui.html page, which is used as a healthcheck. The backend logs show no errors. The last version I'm able to get containers running with is 2019.12.02 (which in fact refuses to show a connected cluster, but that's another issue). Starting with 2020.01.00, all backend healthchecks fail. That looks suspicious, considering that the 2020.01.00 release notes include updates to io.springfox and swagger-ui-dist.
Besides that, the 2020.03.01 release notes say that the Console's default port has changed to 8008, but the server still starts on 3000.
Has anyone had any luck deploying the dockerized Web Console?
The Web Console consists of a backend and a frontend. The backend starts on port 3000, which is what gets printed in the log, while the frontend indeed starts on port 8008 - and that is most probably the one you want to use.
The docker-compose.yml given on the documentation site maps the container's port 8008 to the host's port 80; feel free to replace that with whatever port you want.
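For illustration, the relevant part of such a compose file would look something like this (the service and image names here are placeholders, not taken from the actual documentation):
frontend:
  image: gridgain/web-console-frontend:2020.03.01
  ports:
    - "80:8008"  # host port 80 -> container port 8008; change 80 to any free host port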
Regarding the healthcheck, it has now changed to the /health endpoint.
Swagger was removed in 2020.01.00 due to security concerns (the same GG-26726 issue mentioned in the release notes). You are right to be suspicious; I'll ask the right people to update the release notes and the docs - sorry about the confusion, and thanks for pointing the issue out. Swagger was supposed to be an internal feature for the Web Console (WC) developer team only.
As you pointed out, starting with 2020.01.00 the Swagger-based health check won't work. Internally, the WC team uses dockerize to wait for the backend to start; here's an example from our E2E test suite compose file:
entrypoint: dockerize -wait http://backend:3000/health -timeout 2m -wait-retry-interval 5s node ./index.js --target=${TARGET:-on-premise}
This might work for you too, with some adaptation. You will most likely also have to remove the "healthcheck" sections from docker-compose.yml, or modify them - if the http://backend:3000/health URL can indeed serve as a direct replacement for the old http://localhost:3000/swagger-ui.html URL, which I am not sure about.
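For example, a reworked healthcheck section could be a sketch like this, assuming the backend image ships curl and that /health really is a drop-in replacement:
backend:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
    interval: 30s
    timeout: 10s
    retries: 5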
I have been using Kafka Connect in my work setup for a while now and it works perfectly fine.
Recently I thought of dabbling with a few connectors of my own in my Docker-based Kafka cluster with just one broker (ubuntu:18.04 with Kafka installed) and a separate node acting as a client for deploying connector apps.
Here is the problem:
Once my broker is up and running, I log in to the client node (with no broker running, just the vanilla Kafka installation) and set up the classpath to point to my connector libraries. I also set the KAFKA_LOG4J_OPTS environment variable to point to the location of the log file to generate, with debug mode enabled.
So every time I start the Kafka worker using the command:
nohup /opt//bin/connect-distributed /opt//config/connect-distributed.properties > /dev/null 2>&1 &
the connector starts running, but I don't see the log file getting generated.
I have tried several changes, but nothing has worked.
QUESTIONS:
Does this mean that connect-distributed.sh doesn't generate the log file after reading the KAFKA_LOG4J_OPTS variable? And if it does, could someone explain how?
NOTE:
(I have already debugged the connect-distributed.sh script and tried the options where daemon mode is included and not included. By default, if KAFKA_LOG4J_OPTS is not provided, it uses the connect-log4j.properties file in the config directory - but even then, no log file is getting generated.)
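For reference, this is roughly how I'm wiring it up (the paths below are placeholders, not my actual ones) - KAFKA_LOG4J_OPTS points Log4j at a properties file, and that file defines a file appender, since as far as I can tell the stock connect-log4j.properties only defines a console appender:
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/my-connect-log4j.properties"
# my-connect-log4j.properties: route everything at DEBUG to a file appender
log4j.rootLogger=DEBUG, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/var/log/kafka/connect-worker.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c:%L)%n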
OBSERVATION:
Only when I start ZooKeeper/a broker on the client node is the provided KAFKA_LOG4J_OPTS value picked up, and logs start getting generated - but nothing related to the Kafka connector. I have already verified the connectivity between the client and the broker using kafkacat.
The interesting part is:
I follow the same process at my workplace, and logs start getting generated every time the worker (connect-distributed.sh) is started, but I haven't been able to replicate that behavior in my own setup, and I have no clue what I am missing here.
Could someone provide some reasoning? This is really driving me mad.
I am trying to mirror a Kafka topic from a provider in a container on an EC2 instance to a consumer, and messages are not coming through. I suspect that I am messing things up with the .properties configs, as this is my first time using MirrorMaker. I may have also messed up by pointing to the wrong ports somewhere along the way.
The would-be provider broker is running in a CentOS container on an EC2 instance. The provider is receiving data from a remote MySQL server through a custom-configured JDBC source connector into a topic called mysql-jdbc-events. The provider is successfully receiving messages.
The would-be consumer is currently on the host EC2 instance, although that will change once it has been successfully tested. I mapped port 12181 of the host to port 2181 of the container (where ZooKeeper is running). I am running the MirrorMaker command from the consumer.
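For context, the container was started with a port mapping along these lines (a sketch - the image name is a placeholder, not my exact command):
docker run -d -p 12181:2181 my-kafka-image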
I ran the command:
./kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config /path/to/config/consumer.properties --producer.config /path/to/config/producer.properties --whitelist "mysql-jdbc-events"
consumer.properties:
# format: host1:port1,host2:port2 ...
zookeeper.connect=(host ip-address):12181
zookeeper.connection.timeout.ms=10000
bootstrap.servers=localhost:9092
# consumer group id
group.id=mirror_group
producer.properties:
# format: host1:port1,host2:port2 ...
zookeeper.connect=(host ip-address):2181
bootstrap.servers=localhost:9092
# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
compression.type=none
I tried both with and without the zookeeper.connect parameter in the producer config, because I found conflicting information about whether it is necessary. I also got a warning to the effect of WARN The configuration 'zookeeper.connect' was supplied but isn't a known config., but I read elsewhere on SO that this could be ignored.
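From what I understand, the newer clients that MirrorMaker uses are configured purely via bootstrap.servers and silently ignore zookeeper.connect (hence the warning), so the minimal shape would be something like this (the hostnames are placeholders for the source and target brokers, not my actual addresses):
# consumer.properties - reads from the source (provider) cluster
bootstrap.servers=source-broker:9092
group.id=mirror_group
# producer.properties - writes to the destination cluster
bootstrap.servers=target-broker:9092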
I did not get any messages populated to the topic at the consumer, but there are messages in the topic at the producer.
If any more information would be helpful, please let me know.
I am also not married to this configuration - if there is a simpler way to keep the JDBC connector in the container and forward the messages to a Kafka instance outside the container, that's good too.
Today I downloaded neo4j-community-3.2.0 on Windows, and when I start the server I hit a problem in the browser. I ran into the same problem with neo4j-community-3.1.2 and had solved it by ticking the "Do not use Bolt" option in the settings. But in neo4j-community-3.2.0 I can't see a "Do not use Bolt" option, and I don't know what to do.
N/A: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver. Please use your browsers development console to determine the root cause of the failure. Common reasons include the database being unavailable, using the wrong connection URL or temporary network problems. If you have enabled encryption, ensure your browser is configured to trust the certificate Neo4j is configured to use. WebSocket readyState is: 3
This happens because the browser is trying (under the hood) to also access the Bolt port, which uses an unsigned certificate.
You probably allowed the browser to access the SSL 7474 port by accepting the unsigned certificate as an exception in your browser (and if you didn't, you should, in order to make it work).
The url was:
https://[neo4j_host]:7474
Do the same for the Bolt certificate: allow it as an exception for the URL:
https://[neo4j_host]:7687
I ran into the same problem trying to use Neo4j Community Edition on an AWS Ubuntu 16.04 instance. The key thing that solved it was to open port 7687 (the bolt port) in the AWS security group settings.
Found this based on https://stackoverflow.com/a/45234105/1529646
Thus, the full answer is:
Make sure to configure Neo4j correctly, i.e. uncomment the line dbms.connectors.default_listen_address=0.0.0.0 AND the line dbms.connector.bolt.listen_address=:7687
Open ports 7474 AND 7687 in the AWS security group settings.
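Put together, the uncommented lines in neo4j.conf look roughly like this (a 3.x-style sketch; exact property names can vary between versions):
# accept non-local connections
dbms.connectors.default_listen_address=0.0.0.0
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=:7687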
In the lower left corner of the browser, open the gear (settings) menu and select "Do not use Bolt".
Open your ${NEO4J_HOME}/conf/neo4j.conf file and edit the Bolt settings. It is just a matter of uncommenting this line: dbms.connector.bolt.address=0.0.0.0:7687
Change the version of Neo4j
Check your JDK version; use JDK 1.8.
Adding another option which worked for me: if your Bolt connector's tls_level is set to REQUIRED, you need to change it to OPTIONAL (if you are not using it with an SSL certificate) to get this working.
If you are using Neo4j Community Edition (ver 3.5.1, in my case) from the AWS Marketplace, you need to change the configuration in:
/etc/neo4j/pre-neo4j.sh
Change this line:
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=REQUIRED}"
to
echo "dbms_connector_bolt_tls_level" "${dbms_connector_bolt_tls_level:=OPTIONAL}"
You can find more about Neo4j connector configuration options here. Ideally, per the docs, bolt.tls_level should default to OPTIONAL. But I'm not really sure what exactly happened in my case that got it changed to REQUIRED, or whether it came that way from the AWS Marketplace.
Assuming you have valid certs and have placed them under the correct certificates directory:
dbms.ssl.policy.bolt.client_auth=NONE
This is for version 4.0; I took it from this article.
I shared my full SSL config in this other answer.
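For context, in 4.0 that setting lives inside a Bolt SSL policy block in neo4j.conf, roughly like this (the directory and file names below are illustrative defaults - adjust to where your certs actually live):
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
dbms.ssl.policy.bolt.private_key=private.key
dbms.ssl.policy.bolt.public_certificate=public.crt
dbms.ssl.policy.bolt.client_auth=NONE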
I had the same error. I'm new to Neo, so take this with a grain of salt, but my solution didn't match the ideas above. Thanks to them, though, as they did lead me to the right "water". So:
I went into the conf file and noticed that the same port number was used everywhere. Previously, the Neo4j Desktop had been constantly telling me it needed to update the port numbers (I never checked to verify, but they'd be #, #+1 and #+2), and that kept happening again and again. Now, checking the conf file myself, I noticed that the number was the same for all three BOLT port settings. I tried fixing that, and it didn't work either... but maybe it mattered for what did:
In the folder where the specific database is housed, named "..neo4jdatabases/[GUID Value]", there were two directories, titled "/installation-3.4.0" and "...1". I removed the ".0" one, restarted things, and IT WORKED.
So either there should NOT be two versions under the same database collection, OR that's true AND you need the three ports to be the same.
A final note for any Neo4j experts who actually know what they're doing: I have three databases running, two without issue. This occurred AFTER I was messing around trying to see how PowerShell might be useful. Not sure if that is related, but the other databases have worked fine... this db, though, is the original playground/sandbox I'd had since the beginning. I'm not 100% sure whether I made the version update before or after creating the other two databases. HTH.
I'm using a Windows trial version on a Windows 10 machine; the current Neo4j version is 3.4.1.
Do love what I see so far with Neo BTW!!!
Please put the correct Bolt port in the Connect URL textbox. If you are using the service port, then put the service port in place of the Bolt port.
I finally resolved it by replacing the Bolt port with the service port inside k8s.
user: neo4j
password: neo4j
I resolved this error by replacing port 7687 with the node port 30033 inside Neo4j,
and then it worked fine.
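For illustration, that corresponds to a NodePort Service roughly like the one below (the names, labels and the 30033 node port are from my setup and are not universal):
apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687        # in-cluster port
      targetPort: 7687  # Bolt port on the Neo4j pod
      nodePort: 30033   # node port - use this in the browser's Connect URL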
I was facing the same issue with Neo4j version 4 installed on an Ubuntu 18 EC2 instance. The workaround that did the trick for me was to replace the 0.0.0.0 entries in /etc/neo4j/neo4j.conf with the actual private IP of my instance.
The following are the lines where the replacement happened:
dbms.default_listen_address=172.X.X.232
dbms.connector.bolt.address=172.X.X.232:7687
After restarting the DB, the Connect URL used when accessing from the browser should also use the private IP instead of localhost.
I'm trying to set up a Cassandra cluster as a test bed, but I got a JMX remote connection error. I seem to have found the answer to my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens behind the scenes transparently, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to pass -Djava.rmi.server.hostname=$IP?
Or what to add to the hosts file? I know that in hosts we normally add "IP alias", but whose IP and alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu v10.04 and Cassandra v0.7.4.
Sudesh
For JMX you need to enable JMX-remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and set or disable passwords.
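For Cassandra specifically, these flags go into conf/cassandra-env.sh via JVM_OPTS, which also covers the -Djava.rmi.server.hostname=$IP part from the FAQ. A sketch, where 192.168.1.42 stands in for the node's address that is reachable from the machine running nodetool (check the JMX port your own cassandra-env.sh defines):
# conf/cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.42"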
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.