How to use neo4j from a docker image on the Google Cloud Platform

I want to run neo4j from the google cloud shell and I have already ssh'd into my project.
Currently I am using the following to run neo4j:
docker run \
--publish=7474:7474 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
neo4j:3.0
The command works and prints the following output:
Starting Neo4j.
2017-12-13 03:22:34.661+0000 INFO ======== Neo4j 3.0.12 ========
2017-12-13 03:22:34.681+0000 INFO No SSL certificate found,
generating a self-signed certificate..
2017-12-13 03:22:35.163+0000 INFO Starting...
2017-12-13 03:22:35.631+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-13 03:22:37.966+0000 INFO Started.
2017-12-13 03:22:39.041+0000 INFO Remote interface available at
http://0.0.0.0:7474/
However, when I follow the link to http://0.0.0.0:7474/, it redirects to something like https://7474-dot-3282369-dot-devshell.appspot.com/?authuser=0 and I get an error:
Error: Could not connect to Cloud Shell on port 7474.
What can I do differently or what additional info would you need? Thank you.

I think you are facing one of the two following issues:
1. If you ssh'd in a different machine and the server is running there
The issue is that you accessed an instance from the Google Cloud Shell and then started the server through Docker there. At that point, I think you unintentionally connected to the Cloud Shell itself on port 7474 by clicking "Web preview" in the same window!
But the server was running on a different machine!
Therefore the Cloud Shell informed you that it is not listening on port 7474. To solve this, retrieve the public/external IP of your instance, create a firewall rule allowing TCP:7474 traffic, and connect to it from any browser at http://ip-of-your-machine:7474.
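For example, with the gcloud CLI (a sketch only; the instance name, zone and rule name below are placeholders):
# external IP of the instance running the container
gcloud compute instances describe my-neo4j-instance --zone us-central1-a \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
# open TCP 7474 (restrict --source-ranges as needed)
gcloud compute firewall-rules create allow-neo4j-browser \
  --allow tcp:7474 --source-ranges 0.0.0.0/0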
2. If you are running the server in the Google Cloud Shell
First of all, you should not run a server in the Google Cloud Shell; it is not a normal virtual machine and you should never rely on it.
That said, I followed your steps exactly:
I accessed the Google Cloud Shell, ran your command, and obtained the very same output, but when I opened the "Web preview" I correctly saw the Neo4j login page.
Thus, I believe that if you were running the server here, you unintentionally stopped it before checking the "Web preview".
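If that is the case, one thing you could try (just a sketch, not part of your original setup) is to run the container detached so it keeps running while you open the "Web preview"; you may also want to publish the Bolt port 7687 that shows up in your startup log:
docker run -d \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
neo4j:3.0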
P.S.
The weird domain name you were redirected to, https://7474-dot-3282369-dot-devshell.appspot.com, points exactly to your Google Cloud Shell #3282369 on port 7474.
You are redirected there automatically when you click a link from the Cloud Shell (since you cannot reach 0.0.0.0 from your own computer).

Related

ClearML SSH port forwarding fileserver not available in WEB Ui

Trying to use clearml-server on my own Ubuntu 18.04.5 machine with SSH port forwarding, and not being able to see my debug samples.
My setup:
ClearML server on hostA
SSH Tunnel connections to access Web App from working machine via localhost:18080
Web App: ssh -N -L 18080:127.0.0.1:8080 user@hostA
Fileserver: ssh -N -L 18081:127.0.0.1:8081 user@hostA
In the Web App, under Task->Results->Debug Samples, the images are still referenced by localhost:8081.
Where can I set the fileserver URL to be localhost:18081 in Web App?
I tried ~/clearml.conf, but this did not work (I think it is for my Python script).
Disclaimer: I'm a member of the ClearML team (formerly Trains)
In ClearML, debug images' URL is registered once they are uploaded to the fileserver. The WebApp doesn't actually decide on the URL for each debug image, but rather obtains it for each debug image from the server. This allows you to potentially upload debug images to a variety of storage targets, ClearML File Server simply being the most convenient, built-in option.
So, the WebApp will always look for localhost:8081 for debug images that have already been uploaded to the fileserver and contain localhost:8081 in their URL.
A possible solution is to simply add another tunnel in the form of ssh -N -L 8081:127.0.0.1:8081 user@hostA.
For future experiments, you can either keep using 8081 (and keep this new tunnel), or change the default fileserver URL in clearml.conf to point to localhost:18081, assuming you're running your experiments from the same machine where the tunnel to 18081 exists.
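For that second option, the change goes in the api section of clearml.conf on the machine running the experiments; a rough sketch, showing only the relevant key and leaving the rest of a real config out:
api {
    # leave web_server / api_server / credentials as they are;
    # point uploads (and therefore the registered debug-image URLs) at the tunnelled port
    files_server: http://localhost:18081
}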

kafka connect in distributed mode is not generating logs specified via log4j properties

I have been using Kafka Connect in my work setup for a while now and it works perfectly fine.
Recently I thought of dabbling with a few connectors of my own in my Docker-based Kafka cluster with just one broker (ubuntu:18.04 with Kafka installed) and a separate node acting as a client for deploying connector apps.
Here is the problem:
Once my broker is up and running, I log in to the client node (no broker running, just the vanilla Kafka installation) and set up the classpath to point to my connector libraries. I also set the KAFKA_LOG4J_OPTS environment variable to point to the log4j configuration for the log file to generate, with debug mode enabled.
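For reference, the variable is set to point at a log4j 1.x properties file, roughly like this (the paths, appender name and pattern below are placeholders, not my actual setup):
export CLASSPATH=/path/to/my/connector/libs/*
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/my/connect-log4j.properties"
# contents of /path/to/my/connect-log4j.properties: a plain file appender with DEBUG level
log4j.rootLogger=DEBUG, connectFile
log4j.appender.connectFile=org.apache.log4j.FileAppender
log4j.appender.connectFile.File=/path/to/my/connect.log
log4j.appender.connectFile.layout=org.apache.log4j.PatternLayout
log4j.appender.connectFile.layout.ConversionPattern=[%d] %p %m (%c:%L)%n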
So every time I start the Kafka Connect worker using the command:
nohup /opt//bin/connect-distributed /opt//config/connect-distributed.properties > /dev/null 2>&1 &
the connector starts running, but I don't see the log file getting generated.
I have tried several changes but nothing works out.
QUESTIONS:
Does this mean that connect-distributed.sh doesn't generate the log file after reading the KAFKA_LOG4J_OPTS variable? And if it does, could someone explain how?
NOTE:
(I have already debugged the connect-distributed.sh script and tried it both with and without daemon mode. By default, if KAFKA_LOG4J_OPTS is not provided, it uses the connect-log4j.properties file in the config directory, but even then no log file is generated.)
OBSERVATION:
Only when I start ZooKeeper/the broker on the client node is the provided KAFKA_LOG4J_OPTS value picked up and logs start getting generated, but nothing related to the Kafka connector. I have already verified the connectivity between the client and the broker using kafkacat.
The interesting part is:
I follow the same process at my workplace and logs start getting generated every time the worker (connect-distributed.sh) is started, but I haven't been able to replicate that behaviour in my own setup. I have no clue what I am missing here.
Could someone provide some reasoning? This is really driving me mad.

Docker cannot access registry from OpenShift

Here is my whole scenario.
I have a RHEL 7.1 vmware image, with the corporate proxy properly configured, accessing stuff over http or https works properly.
Installed docker-engine, and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which will print:
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/ with real values of course.
Pulling and running docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries, and started setting up things as per the walkthrough page on github:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting openshift seems to work as I can:
* login via the openshift cli
* create a new project
* even access the web console
But when I try to create an app in the project (also via the cli):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access (without authentication though) this endpoint via the browser on the VM or via WGET.
Hence I believe Docker fails to pick up the proxy settings. After some searching I also suspect there are iptables settings missing. Referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know if I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property.
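Since the failing request goes to https://registry-1.docker.io over TLS, the daemon also needs HTTPS_PROXY in the same systemd drop-in. A sketch, reusing the placeholder proxy host from the question:
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"
# then reload and restart so the daemon picks up the new environment
sudo systemctl daemon-reload
sudo systemctl restart docker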

Secure gateway between Bluemix CF apps and containers

Can I use Secure-Gateway between my Cloud Foundry apps on Bluemix and my Bluemix docker container database (mongo)? It does not work for me.
Here are the steps I have followed:
upload secure gw client docker image on bluemix
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
run the image with token as a parameter
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
When I look at the logs of the secure-gateway container, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
and the secure-gateway dashboard interface shows that it is connected too.
But then, when I try to add the MongoDB database (also running on my Bluemix at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway service dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is it just that this is not a supported use case?
1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your docker container on Bluemix?
2) From a technical point of view, the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the Docker container running your MongoDB. When the SG client is running, it has a console where you can type commands; you could use something like allow 134.168.18.50:27017 as the command to add the rule.
BTW: There is a demo using the Secure Gateway to connect to a MySQL database running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, then just use https instead of http. It is turned on automatically.

call jmx operation on a local running process

I have a java process on a linux server, which runs with this option: -Dcom.sun.management.jmxremote
So I cannot just connect to this process via jconsole running on my local PC (because neither a port nor the -Dcom.sun.management.jmxremote.ssl=false option is set up).
But still, how can I connect to the application and run some operations on some of its MBeans? Is this possible? I have SSH access to the server and would be able to run it "locally" on the server (but cannot change the options, unfortunately).
According to the JMX documentation, the -Dcom.sun.management.jmxremote option:
Enables the JMX remote agent and local monitoring via JMX connector published on a private
interface used by jconsole. The jconsole tool can use this connector if it is executed by
the same user ID as the user ID that started the agent. No password or access files are
checked for requests coming via this connector.
The naming is a bit unfortunate because it in fact enables only local monitoring.
Since you cannot change the options but can access the server via SSH, the only option is to use X server forwarding (ssh -X ...) and run jconsole (or, better yet, jvisualvm, which has specific optimisations for running remotely).
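In practice that boils down to something like the following (user@server and the PID are placeholders):
ssh -X user@server          # forward X11 so the GUI renders on your local display
jps -l                      # list the JVMs running under your user to find the PID
jconsole <pid>              # attach to the local process; or run jvisualvm instead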
