Locator starts on incorrect hostname - docker

I am running the Docker image apachegeode/geode:1.9.0 on an AWS EC2 instance with an Ubuntu 18 AMI. When I run the gfsh command to start the locator, the hostname it reports looks garbled.
How do I set the correct hostname when starting the locator so that I can access it from a Java client?
The gfsh command used is as follows:
start locator --name=LocatorOne --log-level=config --J=-Dgemfire.http-service-bind-address=172.17.0.2
The gfsh start locator command output is as follows:
Starting a Geode Locator in /LocatorOne...
.........
Locator in /LocatorOne on b9e7f469d3b9[10334] as LocatorOne is currently online.
Process ID: 40
Uptime: 12 seconds
Geode Version: 1.9.0
Java Version: 1.8.0_201
Log File: /LocatorOne/LocatorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.http-service-bind-address=172.17.0.2 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /geode/lib/geode-core-1.9.0.jar:/geode/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=b9e7f469d3b9, port=1099]
Cluster configuration service is up and running.
The garbled hostname mentioned above also appears in the Java client when I try to put a key-value pair into a region.

The http-service-bind-address property specifies the IP address to which the HTTP service will be bound. What you should be using instead is hostname-for-clients, which is the host name or IP address that will be sent to clients so they can connect to this locator.
Please have a look at the start locator command documentation for further details.
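For example, a minimal sketch (the placeholder must be replaced with your EC2 instance's externally reachable address):
start locator --name=LocatorOne --log-level=config --hostname-for-clients=<ec2-public-dns-or-ip>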
Best regards.

Related

netty dubbojson read timeout

I have two environments, dev and test, both deployed on a k8s cluster.
dev:
RPC framework: Dubbo 2.7.0
protocol: dubbo
JVM version: OpenJDK 1.8
operating system: RedHat 8.5
kernel config:
kernel.sysrq=1
vm.swappiness=10
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
test:
RPC framework: Dubbo 2.7.0
protocol: dubbo
JVM version: OpenJDK 1.8
operating system: RedHat 7.9
kernel config:
kernel.sysrq=1
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
After updating my project's libraries, Netty was still at version 4.1.25. When I deploy to test in a k8s pod, it throws a timeout exception via dubbojson.
However, it works fine on dev.
Also, if I update Netty to version 4.1.71 on RedHat 7.9, the timeout exception is gone.
A tcpdump of Netty 4.1.25 on the test env shows everything is normal, i.e.:
docker container (PSH, ACK) --- provider
provider (PSH, ACK) --- docker container
docker container (ACK) --- provider
The tcpdump shows the data had been received and acknowledged to the provider, but Netty still timed out reading the data.
When I use two servers, one running RedHat 7.9 and the other RedHat 8.5, both with Docker installed, and pull and deploy the problematic image on both, the timeout does not occur anymore.
Can anyone help me?
At first I thought the Netty version might be too old to be compatible with the operating system version, but after testing, that turned out to be wrong.
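For anyone wanting to try the upgrade: a hedged Maven sketch (assuming Maven, and that Netty arrives transitively via Dubbo as netty-all) that pins it to 4.1.71.Final via dependencyManagement:
<dependencyManagement>
  <dependencies>
    <!-- force the transitive Netty dependency to the newer version -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.1.71.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>
Running mvn dependency:tree -Dincludes=io.netty confirms which version actually resolves.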

RabbitMQ Unable to Join Cluster

I am trying to learn how to cluster RabbitMQ nodes, and I am following this tutorial as well as the official documentation.
I have 2 physical machines with RabbitMQ deployed on them through Docker. machine1 (192.168.1.2) is to form the cluster, and machine2 (192.168.1.3) is to join it.
When I attempt to run rabbitmqctl join_cluster rabbit@192.168.1.2 from machine2, it fails with the following message.
Clustering node rabbit@node2.rabbit with rabbit@192.168.1.2
Error: unable to perform an operation on node 'rabbit@192.168.1.2'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@192.168.1.2
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: ['rabbit@192.168.1.2']
rabbit@192.168.1.3:
* connected to epmd (port 4369) on 192.168.1.2
* epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
* TCP connection succeeded but Erlang distribution failed
* suggestion: check if the Erlang cookie is identical for all server nodes and CLI tools
* suggestion: check if all server nodes and CLI tools use consistent hostnames when addressing each other
* suggestion: check if inter-node connections may be configured to use TLS. If so, all nodes and CLI tools must do that
* suggestion: see the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
Current node details:
* node name: 'rabbitmqcli-1352-rabbit@node2.rabbit'
* effective user's home directory: /var/lib/rabbitmq
* Erlang cookie hash: XXXXXXXXXXXXX
The error logs on machine1 show nothing related to such a connection attempt. I have verified the md5sum of the cookies on both Docker containers, and they are exactly the same. So are the permissions.
I assumed perhaps port 4369 wasn't reachable, but it is.
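For reference, I compared the cookies roughly like this (the container names here are placeholders):
docker exec rabbit1 md5sum /var/lib/rabbitmq/.erlang.cookie
docker exec rabbit2 md5sum /var/lib/rabbitmq/.erlang.cookie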
I am unsure what I am doing wrong. Can someone help here?
Additional information:
I am using the rabbitmq:3.8.5-management image. It uses Erlang/OTP 23 [erts-11.0.3].
I have been checking the troubleshooting guide, but I am unsure what seems wrong here. Please let me know if I can provide more information.
Thanks to @NeoAnderson and @José M, I was able to understand what happened.
The containers running RMQ need to be reachable, across the network, via the hostname that Erlang uses within the service. Since the containers' hostnames were not resolvable from a container on the other machine, the clustering failed.
A simple fix is to edit the /etc/hosts file on the containers so that the "leader" node's hostname points to its IP.
I was only using Docker to avoid installing RMQ, not because I thought this was the best way to do it. Alternatively, Docker Swarm or k8s would have provided the right networking for me.
But the root cause was definitely the node-name problem.
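For example, a hedged sketch of that fix using Docker's --add-host flag when starting the container on machine2 (the hostnames node1.rabbit/node2.rabbit and the cookie value are placeholders; the epmd and inter-node ports from the diagnostics above are published so the machines can reach each other):
docker run -d --hostname node2.rabbit --add-host node1.rabbit:192.168.1.2 -p 4369:4369 -p 5672:5672 -p 25672:25672 -e RABBITMQ_ERLANG_COOKIE='samecookievalue' rabbitmq:3.8.5-management
docker exec <container> rabbitmqctl stop_app
docker exec <container> rabbitmqctl join_cluster rabbit@node1.rabbit
docker exec <container> rabbitmqctl start_app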

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which shall load and store files from S3. For development and local testing we use a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions: one for the assisting backend services (mongo, restheart, redis and s3), and one containing the Python-based solution that exposes a REST API and uses the backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as the server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have created a test suite that uses the scality/s3 server from Python running on the host (Windows 10), over the ports forwarded through Vagrant to the scality/s3 server container within the docker-compose group. With endpoint_url set to localhost it works perfectly.
In the error case (when the frontend web service wants to write to S3), the frontend service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with HTTP 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling scality with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')
s3.create_bucket(Bucket='raw-data')  # here the exception is raised
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. That means the host name your frontend uses for the S3 server should be listed there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it is just where your buckets will be created if you don't specify otherwise.
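For illustration, a sketch of the relevant config.json fragment, assuming the frontend reaches the container as s3server (as in the boto3 code above) and us-east-1 as the default region:
"restEndpoints": {
    "localhost": "us-east-1",
    "127.0.0.1": "us-east-1",
    "s3server": "us-east-1"
}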
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure

WSO2 not compatible with Docker

I had to shut down wso2server to test another API management tool on the same machine. The other tool provided a quick setup running on Docker, so I had to install Docker.
Now, when I shut down all the Docker services and start wso2server again, it looks like some services detect the Docker virtual interface IP (172.17.0.1) instead of using the real IP (10.22.106.101):
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent Mgt Console URL : https://172.17.0.1:9443/carbon/
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Publisher Default Context : https://172.17.0.1:9443/publisher
[2016-11-04 16:33:21,452] INFO - CarbonUIServiceComponent API Store Default Context : https://172.17.0.1:9443/store
Log from a previous day with the expected IP:
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL : https://10.22.106.101:9443/carbon/ {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Publisher Default Context : https://10.22.106.101:9443/publisher {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
[2016-09-15 15:38:24,534] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - API Store Default Context : https://10.22.106.101:9443/store {org.wso2.carbon.ui.internal.CarbonUIServiceComponent}
This doesn't prevent WSO2 from starting, but it does prevent Swagger-UI from working, because it tries to reach services on 172.17.0.1 and times out, since that IP doesn't correspond to anything.
What can I do to have the real IP detected again?
You can set the required IP in carbon.xml:
<!--
Host name or IP address of the machine hosting this server
e.g. www.wso2.org, 192.168.1.10
This is will become part of the End Point Reference of the
services deployed on this server instance.
-->
<!--HostName>www.wso2.org</HostName-->
<!--
Host name to be used for the Carbon management console
-->
<!--MgtHostName>mgt.wso2.org</MgtHostName-->
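For example, uncommented with the expected IP from the logs above:
<HostName>10.22.106.101</HostName>
<MgtHostName>10.22.106.101</MgtHostName>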
And you may have to replace ${carbon.local.ip} in api-manager.xml with the required IP too.
In addition to the above, you also need to edit the /wso2am-2.0.0/repository/conf/api-manager.xml file and change the URL value of <GatewayEndpoint>, replacing ${carbon.local.ip}.
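A sketch of what that could look like (ports 8280/8243 are the usual API Manager gateway NIO ports, an assumption for your setup):
<GatewayEndpoint>http://10.22.106.101:8280,https://10.22.106.101:8243</GatewayEndpoint>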

GlassFish as windows service

I am using GlassFish 3.1.2. I want to create a GlassFish service so that each time the system starts, it automatically starts the GlassFish domain.
In the default domain domain1, there is a cluster cluster1 with two instances, instance1 and instance2.
But when I use the command...
asadmin>domain1Service.exe start
It starts the domain, but the clusters are not started. So how can I make a service that also starts the clusters?
Do I have to create a separate service for each instance within a cluster?
We can create a service for a cluster instance in GlassFish.
For that, we have to create a separate service for each instance.
This command is used to create a service for an instance:
asadmin> create-service --nodedir <node-dir-location> <instance-name>
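For example, for the two instances from the question (the node directory path is a placeholder for your installation):
asadmin> create-service --nodedir "C:\glassfish3\glassfish\nodes" instance1
asadmin> create-service --nodedir "C:\glassfish3\glassfish\nodes" instance2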
Thanks,
Gunjan.
It sounds like you are running this on a windows machine, so I would write a batch script (.bat) that executes the appropriate asadmin commands.
asadmin start-domain --user admin --passwordfile adminpassword.txt domain1
asadmin start-cluster --user admin --passwordfile adminpassword.txt cluster1
Then I would set up the service to point at the batch file.
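A hedged sketch of such a script (say startGlassFish.bat; the password file path is a placeholder):
@echo off
rem start the domain first, then the cluster instances
call asadmin start-domain --user admin --passwordfile C:\glassfish\adminpassword.txt domain1
call asadmin start-cluster --user admin --passwordfile C:\glassfish\adminpassword.txt cluster1
One way (among others) to run it at boot is a scheduled task, e.g. schtasks /create /tn StartGlassFish /tr C:\glassfish\startGlassFish.bat /sc onstart /ru SYSTEM.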
