WSO2 not compatible with Docker

I had to shut down wso2server to test another API management tool on the same machine. The other tool provided a quick setup running on Docker, so I had to install Docker.
Now, when I shut down all the Docker services and start wso2server again, it looks like some services detect the Docker virtual interface's IP (172.17.0.1) instead of using the real IP (10.22.106.101):
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent Mgt Console URL : https://172.17.0.1:9443/carbon/
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Publisher Default Context : https://172.17.0.1:9443/publisher
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Store Default Context : https://172.17.0.1:9443/store
Log from a previous day with expected IP:
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://10.22.106.101:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Publisher Default Context : https://10.22.106.101:9443/publisher {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Store Default Context : https://10.22.106.101:9443/store {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
This doesn't prevent WSO2 from starting, but it does prevent Swagger-UI from working, because it tries to reach services on 172.17.0.1 and times out, since that IP doesn't correspond to anything.
What can I do to have the real IP detected again?

You can set the required IP in carbon.xml:
<!--
Host name or IP address of the machine hosting this server
e.g. www.wso2.org, 192.168.1.10
This is will become part of the End Point Reference of the
services deployed on this server instance.
-->
<!--HostName>www.wso2.org</HostName-->
<!--
Host name to be used for the Carbon management console
-->
<!--MgtHostName>mgt.wso2.org</MgtHostName-->
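For example, to make the server advertise the host's real address, you could uncomment and set both elements (a minimal sketch; 10.22.106.101 is the address from the question, so substitute your own):
<HostName>10.22.106.101</HostName>
<MgtHostName>10.22.106.101</MgtHostName>
Restart wso2server after the change so the new host name is picked up.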
And you may have to replace ${carbon.local.ip} in api-manager.xml with the required IP too.

In addition to the above, you also need to edit the /wso2am-2.0.0/repository/conf/api-manager.xml file and change the URL value of <GatewayEndpoint>, replacing ${carbon.local.ip} with the required IP.
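For instance, the edited element might look like this (a sketch assuming the default NIO transport port placeholders of API-M 2.0.0; check your own file for the exact ports):
<GatewayEndpoint>http://10.22.106.101:${http.nio.port},https://10.22.106.101:${https.nio.port}</GatewayEndpoint>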

Related

WSO2 API Manager in Docker

I am trying to deploy API Manager and Enterprise Integrator using Docker Compose on a cloud server.
Everything works locally when using localhost as the host, but when I deploy it on the cloud server, I cannot access the API Manager using the public IP of the server. The Enterprise Integrator works, though. I've modified some configuration parameters as shown below, but the problem persists:
<APIStore>
<!--GroupingExtractor>org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl</GroupingExtractor-->
<!--This property is used to indicate how we do user name comparison for token generation https://wso2.org/jira/browse/APIMANAGER-2225-->
<CompareCaseInsensitively>true</CompareCaseInsensitively>
<DisplayURL>false</DisplayURL>
<URL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}/store</URL>
<!-- Server URL of the API Store. -->
<ServerURL>https://<PUBLIC IP HERE>:${mgt.transport.https.port}${carbon.context}services/</ServerURL>
I've also whitelisted the said public IP:
"whiteListedHostNames" : ["localhost","PUBLIC IP HERE"]
Meanwhile, please check the reference.

Locator starts on incorrect hostname

I am running the Docker image apachegeode/geode:1.9.0 on an AWS EC2 instance with an Ubuntu 18 AMI. While running the gfsh command to start the locator, I see a garbled hostname.
How do I set the correct hostname while starting the locator so that I can access it from a Java client?
The gfsh command used is as follows:
start locator --name=LocatorOne --log-level=config --J=-Dgemfire.http-service-bind-address=172.17.0.2
The gfsh start locator output is given below:
Starting a Geode Locator in /LocatorOne...
.........
Locator in /LocatorOne on b9e7f469d3b9[10334] as LocatorOne is currently online.
Process ID: 40
Uptime: 12 seconds
Geode Version: 1.9.0
Java Version: 1.8.0_201
Log File: /LocatorOne/LocatorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.http-service-bind-address=172.17.0.2 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /geode/lib/geode-core-1.9.0.jar:/geode/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=b9e7f469d3b9, port=1099]
Cluster configuration service is up and running.
The above-mentioned garbled hostname appears in the Java client code when I try to put a key-value pair into a region.
The http-service-bind-address property specifies the IP address to which the HTTP service will be bound. What you should be using instead is hostname-for-clients, which is the host name or IP address that will be sent to clients so they can connect to this locator.
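For example (a sketch; the host name below is a placeholder, so replace it with your EC2 instance's public DNS name or IP, i.e. something reachable from your Java client):
start locator --name=LocatorOne --log-level=config --hostname-for-clients=ec2-xx-xx-xx-xx.compute-1.amazonaws.com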
Please have a look at the start locator command for further details.
Best regards.

How to access rails server in a remote VM

I set up a Virtual Machine (VM) on OpenStack remotely. The VM is running Red Hat Enterprise Linux (RHEL) 7.
I ssh into the above VM using ssh vm-url, and then I set up a Rails server during that ssh session and get it running using rails server -b vm-url.
Now, I try to access the Rails website from my local Chrome browser by typing the URL vm-url:3000 into Chrome's address bar (the Omnibox), but I get:
This site can’t be reached
10.150.8.101 took too long to respond.
Why can't I access the Rails website? What have I done wrong?
Please correct me if any of the terminology I used is incorrect.
Thank you.
Two things to check:
The IP attached to the VM is public and accessible.
The HTTP port is open to access from outside.
Ports are handled by security groups, which are generally configured while creating the instance. Either add a new security group with the necessary rules or update the existing one with the newly added ports, as in the sketch below.
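For example, with the OpenStack CLI, a rule for the Rails port could look like this (a sketch assuming the instance uses the security group named default and the server listens on the default port 3000):
openstack security group rule create --protocol tcp --dst-port 3000 --remote-ip 0.0.0.0/0 default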

How to use neo4j from a docker image on the Google Cloud Platform

I want to run neo4j from the Google Cloud Shell, and I have already ssh'd into my project.
Currently I am using the following to run neo4j:
docker run \
--publish=7474:7474 \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/logs:/logs \
neo4j:3.0
The command works and prints the following output:
Starting Neo4j.
2017-12-13 03:22:34.661+0000 INFO ======== Neo4j 3.0.12 ========
2017-12-13 03:22:34.681+0000 INFO No SSL certificate found, generating a self-signed certificate..
2017-12-13 03:22:35.163+0000 INFO Starting...
2017-12-13 03:22:35.631+0000 INFO Bolt enabled on 0.0.0.0:7687.
2017-12-13 03:22:37.966+0000 INFO Started.
2017-12-13 03:22:39.041+0000 INFO Remote interface available at http://0.0.0.0:7474/
However, when I follow the link to http://0.0.0.0:7474/, it redirects to something like https://7474-dot-3282369-dot-devshell.appspot.com/?authuser=0 and I get an error:
Error: Could not connect to Cloud Shell on port 7474.
What can I do differently or what additional info would you need? Thank you.
I think you are facing one of the two following issues:
1. If you ssh'd into a different machine and the server is running there
The issue is that you accessed an instance from the Google Cloud Shell and then started the server through Docker. At this point I think that you unintentionally connected to the Cloud Shell on port 7474 by clicking "Web preview" in the same window!
But the server was running on a different machine!
Therefore the Cloud Shell informed you that it is not listening on port 7474. To solve this issue you need to retrieve the public/external IP of your instance, create a firewall rule allowing TCP:7474 traffic (see the sketch below), and connect to it from any browser via http://ip-your-machine:7474.
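A minimal sketch of that rule with the gcloud CLI (the rule name allow-neo4j is arbitrary; narrow --source-ranges if the port should not be open to the whole internet):
gcloud compute firewall-rules create allow-neo4j --allow=tcp:7474 --source-ranges=0.0.0.0/0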
2. If you are running the server in the Google Cloud Shell
First of all, you should not run a server on the Google Cloud Shell; it is not a normal virtual machine and you should never rely on it.
By the way, I followed what you did step by step:
I accessed the Google Cloud Shell, ran your code, and obtained the very same output, but when I opened the "Web preview" I correctly saw the neo4j login page.
Thus, I believe that if you were running the server here, you unintentionally stopped it before checking the "Web preview".
P.S.
The weird domain name you have been redirected to, https://7474-dot-3282369-dot-devshell.appspot.com, points exactly to your Google Cloud Shell #3282369 on port 7474.
You are redirected there automatically when clicking a link from the Cloud Shell (since you cannot reach 0.0.0.0 from your computer).

Secure gateway between Bluemix CF apps and containers

Can I use the Secure Gateway between my Cloud Foundry apps on Bluemix and my Bluemix Docker container database (mongo)? It does not work for me.
Here are the steps I have followed:
Upload the Secure Gateway client Docker image to Bluemix:
docker push registry.ng.bluemix.net/NAMESPACE/secure-gateway-client:latest
Run the image with the token as a parameter:
cf ic run registry.ng.bluemix.net/edevregille/secure-gateway-client:latest GW-ID
When I look at the logs of the secure-gateway container, I get the following:
[INFO] (Client PID 1) Setting log level to INFO
[INFO] (Client PID 1) There are no Access Control List entries, the ACL Deny All flag is set to: true
[INFO] (Client PID 1) The Secure Gateway tunnel is connected
and the secure-gateway dashboard interface shows that it is connected too.
But then, when I try to add the MongoDB database (also running on my Bluemix at 134.168.18.50:27017->27017/tcp) as a destination from the Secure Gateway service dashboard, nothing happens: the destination is not created (it does not appear).
Am I doing something wrong? Or is this just not a supported use case?
1) The Secure Gateway is a service used to integrate resources from a remote (company) data center into Bluemix. Why do you want to use the SG to access your Docker container on Bluemix?
2) From a technical point of view, the scenario described in the question should work. However, you need to add a rule to the access control list (ACL) to allow access to the Docker container with your MongoDB. When the SG client is running, it has a console where you can type commands. You could use something like allow 134.168.18.50:27017 as the command to add the rule, as shown below.
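For example, typed at the client's interactive prompt (a sketch; depending on the client version the command may need the acl prefix):
acl allow 134.168.18.50:27017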
BTW: There is a demo using the Secure Gateway to connect to a MySQL database running in a VM on Bluemix. It shows how to install the SG and add an ACL rule.
Added: If you are looking into how to secure traffic to your Bluemix app, then just use https instead of http. It is turned on automatically.
