I have many WildFly 8.2 nodes with IP addresses of the form 10.4.0.X. I need to group them into 2 different clusters. Unfortunately, each cluster receives messages from nodes that are not part of it; since all nodes are under 10.4.0., every cluster hears from every node. Here is my mod_cluster configuration in Apache:
# MOD_CLUSTER_ADDS
<IfModule manager_module>
    Listen 10.4.0.1:10001
    ManagerBalancerName testbalancer
    <VirtualHost 10.4.0.1:10001>
        <Location />
            Order deny,allow
            Deny from all
            Allow from 10.4.0.
        </Location>
        KeepAliveTimeout 300
        MaxKeepAliveRequests 0
        #ServerAdvertise on http://10.4.0.1:10001
        AdvertiseFrequency 5
        #AdvertiseSecurityKey secret
        #AdvertiseGroup 224.0.1.105:23364
        EnableMCPMReceive
        <Location /mod_cluster_manager>
            SetHandler mod_cluster-manager
            Order deny,allow
            Deny from all
            Allow from 10.4.0
        </Location>
    </VirtualHost>
</IfModule>
I think this might be a good scenario for running WildFly in domain mode; that way you can configure multiple server groups (sub-clusters) and manage them centrally through a domain controller. A detailed tutorial is available here:
http://blog.akquinet.de/2012/07/19/scalable-ha-clustering-with-jboss-as-7-eap-6/
The tutorial splits multiple server groups into multiple load balancer groups on mod_cluster.
To just configure individual nodes into different load balancer groups for mod_cluster (I have not tried this myself!), you can use the load-balancing-group attribute in the mod-cluster-config:
<subsystem xmlns="urn:jboss:domain:modcluster:1.1">
    <mod-cluster-config advertise-socket="modcluster"
                        balancer="myBalancer" load-balancing-group="myLBGroup"
                        connector="ajp">
        <dynamic-load-provider>
            <load-metric type="busyness"/>
        </dynamic-load-provider>
    </mod-cluster-config>
</subsystem>
ref: https://developer.jboss.org/thread/203907
Actually separating the WildFly instances into separate clusters is another story. There used to be a property "jboss.partition.name", but this has been replaced by defining unique multicast address/port combinations for your cluster partitions within the subnet.
https://developer.jboss.org/thread/177877
So assuming you are using UDP as your JGroups stack, you can change the multicast address using the "-u" command line parameter:
https://docs.jboss.org/author/display/WFLY8/Command+line+parameters#Commandlineparameters-defaultmulticastaddress
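For example (a rough sketch; the multicast addresses below are arbitrary picks, not values taken from your environment), you would start the nodes of each partition with their own address, using the standalone HA profile:

# nodes that should form cluster A
./bin/standalone.sh -c standalone-ha.xml -u 230.0.10.1

# nodes that should form cluster B
./bin/standalone.sh -c standalone-ha.xml -u 230.0.10.2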
An alternative for configuring mod_cluster might be to disable advertising (on both the nodes and in mod_cluster) and use a static configuration in your standalone.xml:
<mod-cluster-config proxy-list="10.0.1.2:6667"/>
That way, the nodes no longer advertise and the assignment to the different mod_cluster Apache proxies is static.
ref: https://developer.jboss.org/thread/218813
ref: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6/html/Administration_and_Configuration_Guide/Configure_the_mod_cluster_Subsystem_to_Use_TCP.html
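Putting that together, a hedged sketch of such a static, advertise-free node configuration could look like this (the proxy address reuses the Apache VirtualHost from your question; the other attribute values are assumptions):

<subsystem xmlns="urn:jboss:domain:modcluster:1.1">
    <mod-cluster-config proxy-list="10.4.0.1:10001" advertise="false" connector="ajp"/>
</subsystem>

On the Apache side you would then make sure ServerAdvertise stays off.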
The new NB-IoT demo modules from O2 that we are testing only accept an IP address as the broker host rather than a URL (mqtt.googleapis.com). If I run a DNS lookup this is fine, but how stable is the IP address associated with mqtt.googleapis.com?
The DNS lookup currently gives me 74.125.201.206.
How long will it remain stable / the same?
stream {
    upstream google_mqtt {
        server mqtt.googleapis.com:8883;
    }

    server {
        listen 8883;
        proxy_pass google_mqtt;
    }
}
Instead of the MQTT hostname I want to insert the IP address.
Why would you want to hard-code the IP address? You are just setting yourself up for it to fail at the moment you can't fix it (e.g. while on vacation).
You shouldn't assume an IP address returned by a DNS query is good for any longer than the TTL value returned with the response.
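You can check that TTL yourself; for example (the answer line below is purely illustrative, the second column is the remaining TTL in seconds):

% dig +noall +answer mqtt.googleapis.com
mqtt.googleapis.com.  300  IN  A  74.125.201.206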
Hostnames are a deliberate abstraction so you don't have to worry about whether the IP address changes, be it due to a failure, maintenance, or load balancing.
Just DON'T hardcode the IP address.
If the module you mentioned REALLY only accepts IP addresses, then you need to raise a bug with the supplier saying this needs fixing, especially as this is a field-deployed device that you probably can't easily update once deployed.
I used symfony 1.4 to create my application.
I'd like to get the IP address of the current server to put it in a SOAP request.
So, how can I get the IP address of the current server?
For most situations, using $_SERVER['SERVER_ADDR'] will work. If that doesn't work, you can try $ip = gethostbyname(gethostname());
If you have access to the $request object and it is a sfWebRequest (typical request from a browser) you can use:
$request->getPathInfoArray()['SERVER_ADDR']
Premise of the following method: your domain name resolves to only one IP address.
Using PHP:
gethostbyname($_SERVER['SERVER_NAME'])
$_SERVER['SERVER_NAME'] will generally return your domain name (whatever server_name / ServerName is configured in Nginx / Apache), which you then pass to gethostbyname().
As for $_SERVER['SERVER_ADDR']: it often returns a LAN IP address (in my case a single cloud server with one domain name and no reverse proxy).
As for gethostname(): in my tests it returns the server's host name (not the domain name you use), so passing it to gethostbyname() also returns a LAN IP.
Alternatively, you can request https://checkip.amazonaws.com/ to get the current public IP.
I am using nginx to proxy to a unicorn upstream running a Ruby on Rails application. I want to be able to limit the total amount of backend resources a single user (IP address) can consume. By backend resources, I mean the number of active requests a user can have running on the upstream unicorn processes at once.
So for example, if an IP address already has 2 writing connections to a particular upstream, I want any further requests to be queued by nginx, until one of the previously open connections is complete. Note that I don't want requests to be dropped - they should just wait until the number of writing connections drops below 2 for the user.
This way, I can ensure that even if one user attempts many requests for a very time consuming action, they don't consume all of the available upstream unicorn workers, and some unicorn workers are still available to service other users.
It seems like ngx_http_limit_conn_module might be able to do this, but the documentation is not clear enough for me to be sure.
Another way to think about the problem is that I want to protect against DoS (but not DDoS, i.e. I only care about DoS from one IP at a time), by making the server appear to any one IP address as if it has the ability to process N simultaneous requests. But in reality the server can process 10*N requests, but I am limiting the simultaneous requests from any one IP to 1/10th of the server's real capacity. Just like a normal server behaves, when the number of simultaneous workers is exceeded requests are queued until previous requests have completed.
You can use the limit_req module:
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
It doesn't limit the number of connections, but it limits requests per second. Just use a large burst to delay requests rather than drop them.
Here's an example.
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=2r/s;
    ...
    server {
        ...
        location / {
            limit_req zone=one burst=50;
        }
    }
}
If you know that the average request processing time is, say, 1 second, then setting the limit to 2r/s means only about two workers can be busy with this particular IP address at once (approximately, of course). If a request takes 0.5 sec to complete, you can set 4r/s.
If you know the time-consuming URLs, you can make use of the limit_req module to enforce 2r/s for the long requests and no limit for the short ones.
http {
    ...
    #
    # Limit the request processing rate per IP address.
    # If the IP address is 127.0.0.1, the limit_req_zone will not count it.
    #
    geo $custom_remote_addr $custom_limit_ip {
        default   $binary_remote_addr;
        127.0.0.1 "";
    }
    limit_req_zone $custom_limit_ip zone=perip:10m rate=2r/s;
    ...
    server {
        ...
        # By default, exempt the request from the rate limit (the key maps to "").
        set $custom_remote_addr 127.0.0.1;
        # If the URI matches a super time-consuming request, limit it to 2r/s.
        if ($uri ~* "^/super-long-requests") {
            set $custom_remote_addr $remote_addr;
        }
        limit_req zone=perip burst=50;
        ...
    }
}
I am developing a chat application.
But right now chatting is possible only with Google, because I only know Google's port number.
xmppClient = [[XMPPClient alloc] init];
[xmppClient addDelegate:self];
// Replace me with the proper domain and port.
// The example below is setup for a typical google talk account.
[xmppClient setDomain:@"talk.google.com"];
[xmppClient setPort:5222];
You can see that Google uses 5222 as the port number.
In the same way I want to set the port number for Yahoo, Windows Messenger & other popular services. How can I get all of these?
(Or is it the case that XMPP is specific to Google?)
Kraken's Openfire Properties Page has the port and domain information you need. Just reuse it and try it with your application.
5222/tcp is the default port for XMPP, but your implementation may have a different one. To find out, you do a DNS SRV query for _xmpp-client._tcp.YOURDOMAIN, where you replace YOURDOMAIN with the domain you're trying to connect to. This will return 0+ records that have hostname/port combinations for how to connect. If you get 0 records back, assume port 5222.
For example, I want to connect to the GoogleTalk server and log in with the account foo@gmail.com. My client performs the lookup, which can be simulated with dig on the command line like this:
% dig +short -t SRV _xmpp-client._tcp.gmail.com.
20 0 5222 talk1.l.google.com.
20 0 5222 talk4.l.google.com.
5 0 5222 talk.l.google.com.
20 0 5222 talk3.l.google.com.
20 0 5222 talk2.l.google.com.
The result with the lowest priority number is 5 0 5222 talk.l.google.com., which means you open a TCP connection to talk.l.google.com on port 5222.
To make SRV queries from code, check out this answer, which relies on DNSServiceQueryRecord.
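If you are not tied to Apple's APIs, the same lookup can also be sketched with the third-party dnspython package (just an illustration, not part of the XMPP framework shown above):

import dns.resolver  # third-party package "dnspython"

# SRV lookup for the domain you want to log in to
answers = dns.resolver.resolve("_xmpp-client._tcp.gmail.com", "SRV")

# lowest priority first; higher weight preferred within the same priority
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(rr.priority, rr.weight, rr.port, str(rr.target))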
5222 is the default port for XMPP, but your implementation may have a different one. To find out, you do a DNS SRV query for _xmpp-client._tcp.DOMAIN_NAME, where you replace DOMAIN_NAME with the domain you're trying to connect to (e.g. gmail.com, google.com, yahoo.com). This will return 0+ records that have hostname/port combinations for how to connect. If you get 0 records back, assume port 5222.
I want to check my server connection to know whether it's available or not, so I can inform the user.
So how do I send a packet or message to the server (it's not a SQL server; it's a server that hosts some services)?
Thanks in advance.
With all the possibilities for firewalls blocking ICMP packets or specific ports, the only way to guarantee that a service is running is to do something that uses that service.
For instance, if it were a JDBC server, you could execute a non-destructive SQL query, such as select * from sysibm.sysdummy1 for DB2. If it's a HTTP server, you could create a GET packet for index.htm.
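For example, a minimal end-to-end check of an HTTP service might look like this in Python (the URL is a placeholder; point it at a cheap, non-destructive page of your own service):

import urllib.request

def service_is_up(url, timeout=3):
    # True only if the service itself answered the request without an error status.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

print(service_is_up("http://your-server.example.com/index.htm"))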
If you actually have control over the service, it's a simple matter to create a special sub-service to handle these requests (such as you send through a CHECK packet and get back an OKAY response).
That way, you avoid all the possible firewall issues and the test is a true end-to-end one. PINGs and traceroutes will be able to tell if you can get to the machine (firewalls permitting) but they won't tell you if your service is functioning.
Take this from someone who's had to battle the network gods in a corporate environment where machines are locked up as tight as the proverbial fishes ...
If you can open a port but don't want to use ping (I don't know why, but hey), you could use something like this:
import socket

host = ''        # listen on all interfaces
port = 55555

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(1)

while True:
    try:
        clientsock, clientaddr = s.accept()
        clientsock.sendall(b'alive')   # reply and hang up
        clientsock.close()
    except OSError:
        pass
which is nothing more than a simple Python socket server listening on port 55555 and replying "alive".
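A matching client-side check could then look like this (host and port are assumptions, mirroring the listener above):

import socket

def server_alive(host, port=55555, timeout=3):
    # True if the probe server above answers with b'alive' within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.recv(16) == b'alive'
    except OSError:
        return False

print(server_alive("your-server.example.com"))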