We have an app that invokes various remote methods on MBeans using MBeanServerConnection.invoke.
Occasionally one of these methods hangs.
Is there any way to put a timeout on the call, so that it returns with an exception if it takes too long?
Or do I have to move all those calls into separate threads so they don't lock up the UI and require killing the app?
See http://weblogs.java.net/blog/emcmanus/archive/2007/05/making_a_jmx_co.html
===== Update =====
I was thinking about this when I first responded, but I was on my mobile and couldn't type much on it.
This is really an RMI problem, and unless you use a different protocol, there's not much you can do, except, as you say, move all those calls into separate threads so they don't lock up the UI.
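For the threading route, here is a minimal sketch (my own illustration, not from the linked post) of wrapping MBeanServerConnection.invoke in a worker thread so the caller can give up after a timeout; note the hung RMI call itself may still linger in the background:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import java.util.concurrent.*;

public class TimedInvoker {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Invoke the MBean operation on a pooled thread and wait at most timeoutMillis for the result.
    public Object invokeWithTimeout(MBeanServerConnection conn, ObjectName name,
                                    String operation, Object[] params, String[] signature,
                                    long timeoutMillis) throws Exception {
        Future<Object> future = pool.submit(() -> conn.invoke(name, operation, params, signature));
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupts the worker; the remote call is not actually aborted
            throw e;
        }
    }
}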
But... if you have the option of fiddling with the target server and you can customize the connecting client, you have at least one option: customize the JMXConnectorServer on your target servers.
The standard JMXConnectorServer implementation is the RMIConnectorServer. Part of its specification is that when you create a new instance using any of the constructors (like RMIConnectorServer(JMXServiceURL url, Map environment)), the environment map can contain a key/value pair where the key is RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE and the value is an RMIClientSocketFactory. Therefore, you can specify a socket factory like this:
RMIClientSocketFactory clientSocketFactory = new RMIClientSocketFactory() {
    public Socket createSocket(String host, int port) throws IOException {
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000); // read timeout in ms
        return s;
    }
};
This factory creates a Socket and then sets its SO_TIMEOUT using setSoTimeout, so once the client is connected through this socket, blocking reads (and therefore remote calls waiting for a response) will time out after 3000 ms. Note that SO_TIMEOUT does not bound the connect itself, which happens in the Socket constructor.
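If the connect time matters too, a variant (my assumption, not part of the original answer) is to open an unconnected Socket and use connect(SocketAddress, timeout):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.rmi.server.RMIClientSocketFactory;

RMIClientSocketFactory clientSocketFactory = new RMIClientSocketFactory() {
    public Socket createSocket(String host, int port) throws IOException {
        Socket s = new Socket();                              // unconnected socket
        s.connect(new InetSocketAddress(host, port), 3000);   // bound the TCP connect
        s.setSoTimeout(3000);                                 // bound each blocking read
        return s;
    }
};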
You could also check out the JMXMP connector and server in the jmx-optional package of the OpenDMK (the links are to my mavenized copy on GitHub). No built-in solution there either, mind you, but they're super easy to extend, and JMXMP is based on plain TCP sockets rather than RMI, so this type of customization would be trivial.
Cheers.
@Nicholas: The above code is not working. I mean the request does not time out after 3000 ms.
map.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, new RMIClientSocketFactory() {
    @Override
    public Socket createSocket(String host, int port) throws IOException {
        if (logger.isInfoEnabled()) {
            logger.info("JMXManager inside createSocket..." + host + ": port :" + port);
        }
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
});
cs = JMXConnectorServerFactory.newJMXConnectorServer(url,map,mbeanServer);
As I answered on How to set request timeout for JMX Connector, the RMI properties can help you. All the properties are documented on the Oracle site:
http://docs.oracle.com/javase/7/docs/technotes/guides/rmi/sunrmiproperties.html.
For example, -Dsun.rmi.transport.tcp.responseTimeout=60000 is the client-side TCP response timeout (in milliseconds). There are also properties for the connect timeout and for server-side connections.
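A minimal sketch (assuming you control the client JVM's startup) of applying the response timeout programmatically; passing it as a -D option on the command line works the same way:
public class JmxClientBootstrap {
    public static void main(String[] args) throws Exception {
        // Must be set before the RMI runtime first reads it, i.e. before any JMX/RMI connection is made.
        System.setProperty("sun.rmi.transport.tcp.responseTimeout", "60000"); // milliseconds
        // See the Oracle page above for the connect-timeout and server-side properties.
        // ... then build the JMXServiceURL and connect with JMXConnectorFactory as usual ...
    }
}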
I am also not happy that the JMX/RMI/TCP stack hides important settings of the lower-level protocols and does not make them available per connection.
I am using the basic approach as set out in this post to connect from a client docker container to any one of a number of chrome docker containers (in a docker swarm/service, potentially across several servers behind nginx, deployed using CapRover).
In each chrome container I maintain a pool (just a simple array) of browser objects, and direct incoming requests to an appropriate browser as follows (very similar to the linked post):
import http from 'node:http'; // https://nodejs.org/api/http.html
import httpProxy from 'http-proxy'; // https://www.npmjs.com/package/http-proxy
const proxy = new httpProxy.createProxyServer({ ws: true });
// an array (pool) of pre-launched and managed browser objects...
const browsers = [ ... ];
http
  .createServer()
  .on('upgrade', (req, socket, head) => {
    const browser = browsers[Math.floor(Math.random() * browsers.length)]; // in reality I don't just pick a browser at random
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3222);
The above is listening at ws://srv-captain--chrome:3222 (communication is "internal" over the docker network between containers).
Then, in my client container, I connect to the common endpoint ws://srv-captain--chrome:3222 as follows:
import puppeteer from 'puppeteer'; // https://www.npmjs.com/package/puppeteer (using version 17.1.3 at time of posting this)
try {
const browser = await puppeteer.connect({ browserWSEndpoint: 'ws://srv-captain--chrome:3222' });
} catch (err) {
console.error('error connecting to browser', err);
}
This works really well, except that I am getting occasional/inconsistent errors like these when calling puppeteer.connect() in the client container above:
Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
Protocol error (Performance.enable): Target closed.
Almost always, if I simply try to connect again, the connection is made without further error, and at the first attempt.
I have no idea why the error is complaining that the page has been closed or Target closed since, at this point in the process, I'm not attempting to interact with any page, and I know from listening for browser.on('disconnected'...), and also monitoring the chromium processes themselves, that each browser in the array is still working fine... none has crashed.
Any idea what's going on here?
UPDATE after further testing
Of course, in the client container we don't connect to a browser just for the sake of it, like in the above snippet, but to open a page and do some stuff with the page. In practice, in the client container it's more like the following test snippet:
const doIteration = function (i) {
  return new Promise(async (resolve, reject) => {
    // mimic incoming requests coming in at random times over a short period by introducing a random initial delay...
    await new Promise(resolve => setTimeout(resolve, Math.random() * 5000));

    // now actually connect...
    let browser;
    try {
      browser = await puppeteer.connect({ browserWSEndpoint: `ws://srv-captain--chrome:3222?queryParam=loop_${i}` });
    } catch (err) {
      reject(err);
      return;
    }

    // now that we have a browser, open a new page...
    const page = await browser.newPage();

    // do something useful with the page (not shown here) and then close it..
    await page.close();

    // now disconnect (but don't close) the browser...
    browser.disconnect();

    resolve();
  });
};

const promises = [];
for (let i = 0; i < 15; i++) {
  promises.push( doIteration(i) );
}

try {
  await Promise.all(promises);
} catch (err) {
  console.error(`error doing stuff`, err);
}
Each iteration above is being performed multiple times concurrently... I am using Promise.all() on an array of iteration promises to mimic multiple concurrent incoming requests in my production code. The above is enough to reproduce the problem... the error doesn't happen on calling puppeteer.connect() with every iteration, just some.
So there seems to be some sort of interplay between opening/closing a page in one iteration and calling puppeteer.connect() in another, despite closing the page and disconnecting the browser properly in each iteration? This probably also explains the "Most likely the page has been closed" message: there may be some hangover from a page closed in another iteration... though for some reason the error surfaces when calling puppeteer.connect()?
With the use of a pool of browser objects in the browsers array, and a docker swarm having multiple containers on multiple servers, each upgrade message could be received at a different container (which could even be on a different server) and could be routed to a different browser in the browsers array. But I now think that this is a red herring, because in the further testing I narrowed the problem down by routing all requests to browsers[0] and also scaling the service down to just one container... so that the upgrade messages are always handled by the same container on the same server and routed to the same browser... and the problem still occurs.
Full stacktrace for the above-mentioned error:
Error: Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
at CDPSession.send (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Connection.js:281:35)
at EmulationManager.emulateViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/EmulationManager.js:33:73)
at Page.setViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:1776:93)
at Function._create (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:242:24)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Target.page (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Target.js:123:23)
at async Promise.all (index 0)
at async BrowserContext.pages (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Browser.js:577:23)
at async Promise.all (index 0)
As I dug deeper and deeper into this problem, it became more and more apparent that I might not actually be doing anything fundamentally wrong, and that this might just be a bug in puppeteer itself. So I reported it as an issue over on puppeteer... and indeed, it is acknowledged as a bug affecting any version later than 15.5.0 and is being fixed. In the meantime, the workaround is to revert to puppeteer version 15.5.0 and to be careful when calling browser.pages() while concurrent connections are in use, because that call might itself throw an error... but I understand that this too might be something they can/will fix so that browser.pages() is more resilient to concurrent connections.
Is it possible to use the UPNP protocol for automatic port forwarding on the router using ESP8266?
I need to be able to access my ESP8266 module even when I am away from home.
Currently I have configured port forwarding manually in my router settings.
But in the future, for my project to become a commercial product, it needs to do automatic port forwarding, since manual configuration would be a barrier for the average user.
On the internet I found something talking about UPNP on ESP8266, but it was not about port forwarding.
Thank you very much in advance!
You can have a look at my library that I made just for that:
https://github.com/ofekp/TinyUPnP
I have an example for an IoT device (LED lights) within the package; I cannot attach the link due to low reputation.
You can have a look at the example code. All made for ESP8266.
Very simple to use: just call addPortMapping with the port you want to open, as shown in the example.
You have to do this every 36000 (LEASE_DURATION) seconds, since UPnP is a lease-based protocol.
Declare:
unsigned long lastUpdateTime = 0;
TinyUPnP *tinyUPnP = new TinyUPnP(-1); // -1 means blocking, preferably, use a timeout value (ms)
Setup:
if (tinyUPnP->addPortMapping(WiFi.localIP(), LISTEN_PORT, RULE_PROTOCOL_TCP, LEASE_DURATION, FRIENDLY_NAME)) {
lastUpdateTime = millis();
}
Loop:
// update UPnP port mapping rule if needed
if ((millis() - lastUpdateTime) > (long) (0.8D * (double) (LEASE_DURATION * 1000.0))) {
    Serial.print("UPnP rule is about to be revoked, renewing lease");
    if (tinyUPnP->addPortMapping(WiFi.localIP(), LISTEN_PORT, RULE_PROTOCOL_TCP, LEASE_DURATION, FRIENDLY_NAME)) {
        lastUpdateTime = millis();
    }
}
I only checked it with my D-Link router.
To anyone interested in how the library works:
It sends an M-SEARCH message to the UPnP UDP multicast address.
The gateway router will respond with a message including an HTTP header called Location.
Location is a link to an XML file containing the IGD (Internet Gateway Device) API in order to create the needed calls which will add the new port mapping to your gateway router.
One of the services that is depicted in the XML is <serviceType>urn:schemas-upnp-org:service:WANPPPConnection:1</serviceType> which is what the library is looking for.
That service will include an eventSubURL tag, which is a link to your router's IGD API. (The base URL is also depicted in the same file under the tag URLBase.)
Using the base URL and the WANPPPConnection link you can issue an HTTP query to the router that will add the UPnP rule.
As a side note, the service depicted in the XML also includes a SCPDURL tag, which is a link to another XML file that depicts the commands available for the service and their parameters. The package skips this stage, as I assumed the query would be similar for many routers; that may very well not be the case, though, so it is up to you to check.
From this stage the package issues the service command using an HTTP query to the router. The actual query can be seen quite clearly in the code, but for anyone interested:
Headers:
"POST " + <link to service command from XML> + " HTTP/1.1"
"Content-Type: text/xml; charset=\"utf-8\""
"SOAPAction: \"urn:schemas-upnp-org:service:WANPPPConnection:1#AddPortMapping\""
"Content-Length: " + body.length()
Body:
"<?xml version=\"1.0\"?>\r\n"
"<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">\r\n"
"<s:Body>\r\n"
"<u:AddPortMapping xmlns:u=\"urn:schemas-upnp-org:service:WANPPPConnection:1\">\r\n"
" <NewRemoteHost></NewRemoteHost>\r\n"
" <NewExternalPort>" + String(rulePort) + "</NewExternalPort>\r\n"
" <NewProtocol>" + ruleProtocol + "</NewProtocol>\r\n"
" <NewInternalPort>" + String(rulePort) + "</NewInternalPort>\r\n"
" <NewInternalClient>" + ipAddressToString(ruleIP) + "</NewInternalClient>\r\n"
" <NewEnabled>1</NewEnabled>\r\n"
" <NewPortMappingDescription>" + ruleFriendlyName + "</NewPortMappingDescription>\r\n"
" <NewLeaseDuration>" + String(ruleLeaseDuration) + "</NewLeaseDuration>\r\n"
"</u:AddPortMapping>\r\n"
"</s:Body>\r\n"
"</s:Envelope>\r\n";
I hope this helps.
I don't see why not. UPnP implements multiple profiles; the one you are interested in is named IGD (Internet Gateway Device), which most home routers implement to allow client applications on the local network (e.g. Skype, uTorrent, etc.) to map ports on the router's NAT.
UPnP works over IP multicast to discover and announce devices implementing UPnP services over the address 239.255.255.250. Devices interested in such announcements subscribe to this multicast group and listen on port 1900. In fact, UPnP does not itself provide a discovery mechanism, but relies on a protocol called SSDP (Simple Service Discovery Protocol) to discover hosts on the local network.
All that's needed is a UDP socket bound to the aforementioned address and port to subscribe and publish messages on your home multicast group. You'd use an implementation of SSDP to discover your router; once you have discovered it, you can send commands using UPnP wrapped in SOAP envelopes.
There are many implementations of the UPnP IGD profile in POSIX C which you may reuse and port to the ESP8266 (e.g. MiniUPnP, gupnp-igd).
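Purely as an illustration of the discovery step described above (a sketch in plain Java for a desktop JVM, not ESP8266 code), this sends the SSDP M-SEARCH and prints the response whose LOCATION header points at the IGD description XML:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SsdpDiscover {
    public static void main(String[] args) throws Exception {
        String msearch =
                "M-SEARCH * HTTP/1.1\r\n" +
                "HOST: 239.255.255.250:1900\r\n" +
                "MAN: \"ssdp:discover\"\r\n" +
                "MX: 2\r\n" +
                "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n\r\n";
        byte[] payload = msearch.getBytes(StandardCharsets.US_ASCII);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000); // give up if no device answers
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("239.255.255.250"), 1900));
            byte[] buf = new byte[2048];
            DatagramPacket response = new DatagramPacket(buf, buf.length);
            socket.receive(response); // the LOCATION header of this reply is the IGD description URL
            System.out.println(new String(response.getData(), 0, response.getLength(), StandardCharsets.US_ASCII));
        }
    }
}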
I'm basically trying to get the percentage of time a task is taking to display to the user on the screen in an overlay template.
I have a service that is calculating the process percentage:
def progressCalculation(requestsToSend, requestsSent, requestsFailed, progressPercentage) {
    progressPercentage = 100 / requestsToSend * (requestsSent + requestsFailed)
    progressPercentage = Math.round(progressPercentage * 1) / 1
    MyController upCont = new MyController()
    upCont.progress(progressReport.progressPercentage)
}
this continues to send progressReport.progressPercentage to the controller:
def progress(progressData) {
    int statusToView = progressData
    if (statusToView % 5 == 0) {
        [statusToView: statusToView]
    }
}
I have created a src/groovy file that is using websockets from here: https://github.com/vahidhedayati/grails-websocket-example/blob/master/README.md
My connection is working, but I need to show the percentage on the view using the websocket.
@OnMessage
public String handleMessage(String message) {
    message = MyController.progressPercentage
    String replyMessage = "echo " + message
    return replyMessage
}
Now what I'm trying to do here is return the progressPercentage value from the controller to the src/groovy file, so that my view can be continually updated with the latest property value whilst the task is completing.
MyController upCont = new MyController() seriously?
It is a good idea to move the code that hosts and modifies the progressPercentage variable to the service layer and access it through a service rather than a controller:
myService.progressPercentage rather than MyController.progressPercentage
Also, you must inject myService, not instantiate it as myService = new MyService(). Services are singletons; you cannot instantiate them like this. They are managed by the Spring container.
Actually, if you do MyController upCont = new MyController() and then try to access a property of upCont, you will get this beautiful error message:
java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
I put those instructions together, so if I can help you in any way, do let me know.
Websockets require as much frontend work as backend work. So to get the data back via websockets you need to expand the JavaScript as well as expand the backend websocket code that sends that information to the JavaScript.
So if you had a button on the frontend gsp that rather than was a typical
You can take a look at some of my plugins that already do this. There is a ping/pong that happens discreetly in https://github.com/vahidhedayati/jssh: if the user defines the websocket connection within the taglib, it triggers a pong that the frontend JavaScript receives, and it sends back a ping - and they continue doing this.
Here is another example, which is probably what you need to use.
This is the result coming back from the websocket:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L411
which, when received, updates this span or div id:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L213
So you need to get your websocket to send the value back in some JSON format; your frontend JavaScript picks up the JSON message and, if it follows a certain convention, looks for the value and updates a div on the frontend.
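For example, a bare-bones sketch in plain Java (JSR-356), not the plugin's own classes, with a hypothetical shared progress holder standing in for your Grails service; the frontend JavaScript would JSON.parse the message and write the value into the div:
import java.io.IOException;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/progress")
public class ProgressEndpoint {

    // Hypothetical shared holder; in Grails this would be a value read from an injected service.
    public static volatile int progressPercentage = 0;

    @OnMessage
    public void handleMessage(String message, Session session) throws IOException {
        // Reply with a small JSON payload the frontend can parse and use to update a div.
        session.getBasicRemote().sendText("{\"progress\": " + progressPercentage + "}");
    }
}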
There is a good video I have done on wschat which shows updating the frontend using the websocket client/server. It may help you understand it better:
https://www.youtube.com/watch?v=xagMYM9n3l0 or https://www.youtube.com/watch?v=zAySkzNid3E
(I'm unsure which one it was in.)
E2A: it will need to be a service:
https://github.com/vahidhedayati/grails-wschat-plugin/blob/master/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L63 - the few lines after that register those services in the websocket endpoint. Now, going back through the history of the code, or if you follow onMessage to verifyAction: you will need to send something from the frontend, or, when a connection is made, send a message to the frontend: https://github.com/vahidhedayati/grails-wschat-plugin/blob/75590bf10ea040c18548377dedc716fdab2aa820/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L148. You can use userSession to directly message the person making the socket connection. On the web page, use JavaScript to parse the JSON and update the div as mentioned above.
I am currently working on a client-server application using Netty. Some of the clients are not going to do anything until they receive a message. I have read the API and can't find a way to do this. I mean, I could have in.readLine() in main so it won't end, but it doesn't feel right. I could also have endless loops, but I don't think that's the right way either.
The question here is: is there a way to keep the socket bound for incoming messages, just like the server does, while letting the main method end?
public void run() {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChatClientInitializer());
        Channel channel = bootstrap.connect(host, port).sync().channel();
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Inserte su nombre");
        String nombre = in.readLine();
        MyClientChannel canal = new MyClientChannel(channel, nombre);
        canal.write("SM", nombre);
        in.readLine();
See that at the end I had to write in.readLine() so the program wouldn't end and the handler would still be up for incoming messages.
The easiest thing to do would be to replace:
in.readLine();
With:
channel.closeFuture().await();
When the connection to the server is disconnected, the client will terminate.
You will also want to spend some time defining your client's life-cycle, so that the channel's state doesn't affect when your application is running and when it's not.
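A minimal sketch of how the client's run() could look with that change (class names taken from the question; the finally block is my addition so the JVM can exit cleanly once the channel closes):
public void run() throws InterruptedException {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChatClientInitializer());
        Channel channel = bootstrap.connect(host, port).sync().channel();
        // ... send the initial name/"SM" message as in the question ...
        channel.closeFuture().await();   // block here until the connection is closed
    } finally {
        group.shutdownGracefully();      // release the event loop threads
    }
}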
My JMS client connects to WMQ through JNDI. The initial context factory used is com.ibm.mq.jms.context.WMQInitialContextFactory.
Currently, on the WMQ side, there's a queue manager called TestMgr. Under this queue manager I created two channels: one is PLAIN.CHL, which does not specify an SSL CipherSpec; the other is SSL.CHL, which is configured with SSL CipherSpec RC4_MD5_US and SSL Authentication set to Optional.
I have created a key store for the queue manager using the IBM Key Management tool. The path of the key db is [wmq_home]\qmgrs\TestMgr\ssl\key.
For channel PLAIN.CHL, I defined a queue connection factory like:
DEF QCF(PlainQCF) QMANAGER(TestMgr) CHANNEL(PLAIN.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client)
And under the SSL channel SSL.CHL, I defined a queue connection factory like:
DEF QCF(SSLQCF) QMANAGER(TestMgr) CHANNEL(SSL.CHL) HOST(192.168.66.23) PORT(1414) TRANSPORT(client) SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
Now I can only create a connection using PlainQCF; looking up the SSL queue connection factory fails. My code looks like:
Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/SSL.CHL");
Context ctx = new InitialContext( environment );
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("SSLQCF");
qcf.createConnection();
....
Am I missing some context properties when looking up the SSL connection factory? I also found that the code hangs on the line new InitialContext(environment) for a long time, almost 5 minutes, and then I get a CC=2;RC=2009;AMQ9208... error.
Any suggestion would be appreciated. Is it true that an SSL channel can't be connected to via JNDI?
@T.Rob, thanks very much for your reply. But we still want to use WMQInitialContextFactory, so I'm afraid I still need to find a solution for this.
I just defined the connection factory once. The displayed info for the SSL queue connection factory is:
InitCtx> DISPLAY QCF(SSLQCF)
ASYNCEXCEPTION(ALL)
CCSID(819)
CHANNEL(SSL.CHL)
CLIENTRECONNECTOPTIONS(ASDEF)
CLIENTRECONNECTTIMEOUT(1800)
COMPHDR(NONE )
COMPMSG(NONE )
CONNECTIONNAMELIST(192.168.66.23(1414))
CONNOPT(STANDARD)
FAILIFQUIESCE(YES)
HOSTNAME(192.168.66.23)
LOCALADDRESS()
MAPNAMESTYLE(STANDARD)
MSGBATCHSZ(10)
MSGRETENTION(YES)
POLLINGINT(5000)
PORT(1414)
PROVIDERVERSION(UNSPECIFIED)
QMANAGER(TestMgr)
RESCANINT(5000)
SENDCHECKCOUNT(0)
SHARECONVALLOWED(YES)
SSLCIPHERSUITE(SSL_RSA_WITH_RC4_128_MD5)
SSLFIPSREQUIRED(NO)
SSLRESETCOUNT(0)
SYNCPOINTALLGETS(NO)
TARGCLIENTMATCHING(YES)
TEMPMODEL(SYSTEM.DEFAULT.MODEL.QUEUE)
TEMPQPREFIX()
TRANSPORT(CLIENT)
USECONNPOOLING(YES)
VERSION(7)
WILDCARDFORMAT(TOPIC_ONLY)
The JNDI provider should be fine, because I can look up the plain connection factory successfully. Also, for my client app, I extracted the cert from the key store created for the MQ server and imported it into the trust store (cacerts) of my JRE under the alias ibmwebspheremqtestmgr.
You are correct: with the 2009 error there are some log entries:
=================================================================
4/20/2012 20:24:27 - Process(13768.3) User(MUSR_MQADMIN) Program(amqzmur0.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ6287: WebSphere MQ V7.1.0.0 (p000-L111019).
EXPLANATION:
WebSphere MQ system information:
Host Info :- Windows Server 2003, Build 3790: SP2 (MQ Windows 32-bit)
Installation :- C:\IBM\WebSphereMQ (mqenv)
Version :- 7.1.0.0 (p000-L111019)
ACTION:
None.
-------------------------------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(xxxx_host of my MQ) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9639: Remote channel 'SSL.CHL' did not specify a CipherSpec.
EXPLANATION:
Remote channel 'SSL.CHL' did not specify a CipherSpec when the local channel
expected one to be specified.
The remote host is 'xxx_host of my app (192.168.66.25)'.
The channel did not start.
ACTION:
Change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to
specify a CipherSpec so that both ends of the channel have matching
CipherSpecs.
----- amqcccxa.c : 3817 -------------------------------------------------------
4/20/2012 20:24:27 - Process(7348.116) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(my app host) Installation(mqenv)
VRMF(7.1.0.0) QMgr(TestMgr)
AMQ9999: Channel 'SSL.CHL' to host 'xxx_host of my app (192.168.66.25)' ended
abnormally.
====================================================================
I am also confused by the error log. My app is staged on a machine different from my MQ server, but the log says to change the remote channel 'SSL.CHL' on host 'xxx_host of my app (192.168.66.25)' to specify a CipherSpec so that both ends of the channel have matching CipherSpecs. How can I change the channel cipher spec on my app host?
Updates on MQEnvironment, replying to the comments:
The value of MQEnvironment.sslCipherSuite is null, so it throws a NullPointerException when I put it in the env hashtable. I tried another approach, environment.put(MQC.SSL_CIPHER_SUITE_PROPERTY, "SSL_RSA_WITH_RC4_128_MD5"), and it still failed with the 2009 error.
For the JMSAdmin tool, I had changed the config to use WMQInitialContextFactory. The configuration (JMSAdmin.config) looks like:
INITIAL_CONTEXT_FACTORY=com.ibm.mq.jms.context.WMQInitialContextFactory
PROVIDER_URL=192.168.66.23:1414/SYSTEM.DEF.SVRCONN
The rest of the configuration is left as default.
Kindly note that here I use the default channel SYSTEM.DEF.SVRCONN so that I can log on to the admin console. If I change the channel to the SSL one, SSL.CHL, I also can't log on to the admin console. The error here is just like the one in my client app.
One more clarification: in my client, the following code can connect to the queue manager TestMgr successfully through channel SSL.CHL.
MQConnectionFactory factory = new MQConnectionFactory();
factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
factory.setQueueManager("TestMgr");
factory.setSSLCipherSuite("SSL_RSA_WITH_RC4_128_MD5");
factory.setPort(1414);
factory.setHostName("192.168.66.23");
factory.setChannel("SSL.CHL");
MQConnection connection = (MQConnection) factory.createConnection();
And now the problem is just as you said: the initial context fails to connect to the queue manager through the SSL channel. The option you provided (use the plain channel for the initial context and the SSL channel for the connection factory) works too. But I still want to know how to get the initial context working with the SSL channel. Thanks very much for your patience; your updates will be appreciated.
I never really liked com.ibm.mq.jms.context.WMQInitialContextFactory very much. It stores the managed objects on a queue, so in order to look up the connectionFactory, which tells JMS how to connect to the QMgr, it is first necessary to connect to the QMgr to make the JNDI call. Therefore, before you can debug the SSL connection, you need to know whether the underlying JNDI provider is working.
If you want to skip the MQ-based JNDI provider and just use the filesystem, see the updated version of Bobby Woolf's article here. If you want to continue with com.ibm.mq.jms.context.WMQInitialContextFactory, read on but be prepared to provide more configuration info.
When you run the JMSAdmin tool, do you display the objects after creating them? For example, here is one of my JMSAdmin.bat scripts:
# Connection Factory for Client mode
# Delete the Connection Factory if it exists
DELETE CF(JMSDEMOCF)
# Define the Connection Factory
DEFINE CF(JMSDEMOCF) +
SYNCPOINTALLGETS(YES) +
SSLCIPHERSUITE(NULL_SHA) +
TRAN(client) +
HOST(127.0.0.1) CHAN(SSL.SVRCONN) PORT(1414) +
QMGR( )
# Display the resulting definition
DISPLAY CF(JMSDEMOCF)
This deletes the object (because JMSAdmin doesn't have a define with replace option) then defines the object, then displays it. Do you in fact see both objects defined? Can you connect and interactively display them both? Can you update your question with the contents displayed?
If so, then what does the JNDI provider configuration look like with each sample program? The 2009 indicates that there is at least a connection to the QMgr being made, so it is important to determine whether the thing suffering the broken connection is your app or the JNDI provider. Diagnosing that requires the config info you are using for the JNDI provider and whether it is the same in the working and failing cases. If not, how do they differ?
Once you know whether it's the app or the JNDI provider that is causing the problem (or switch to another JNDI provider that doesn't require an MQ connection such as the filesystem initial context) then it will be possible to determine the next steps.
The article linked above has samples of code and managed object scripts that use a filesystem JNDI provider. You may notice my scripts pasted in above use the same QMgr name. That's because I wrote that part of the article. When I want to switch to SSL using those same samples, I just update the connectionFactory to point to the SSL channel and it works.
Here are the other bits from the sample that I've modified:
java -Djavax.net.debug=ssl ^
-Djavax.net.ssl.trustStore=key2.jks ^
-Djavax.net.ssl.keyStore=key2.jks ^
-Djavax.net.ssl.keyStorePassword=???????? ^
-Djavax.net.ssl.trustStorePassword=???????? ^
-cp "%CLASSPATH%" ^
com.ibm.examples.JMSDemo -pub -topic JMSDEMOPubTopic %*
Note: The ^ is Windows version of line continuation.
Then if there are problems, I follow the debugging scenario I described in this SO answer. Note that the app will require a truststore, even if you have SSLCAUTH(OPTIONAL) on your channel. This is because the app must always validate the QMgr's certificate, even if the app does not present its own certificate. In my case I was using SSLCAUTH(REQUIRED) so my app needed both a keystore and a truststore. Your question mentions that the QMgr has a keystore but does not say what you did for the application.
Finally, a 2009 will usually generate an entry in the QMgr error logs. If you continue to get the problem, please update your question with those log entries.
UPDATE:
Responding to the comments: the JMSAdmin tool is part of the WMQ package. However, WMQ comes with jars for the filesystem context and LDAP context. The WMQInitialContextFactory is optional and is delivered as SupportPac ME01. When using WMQInitialContextFactory with the JMSAdmin tool (or the JMSAdmin GUI or WMQ Explorer) it is necessary to configure the PROVIDER_URL with the host, port and channel. For example:
PROVIDER_URL: <Hostname>:<port>/<SVRCONN Channel Name>
192.168.66.23:1414/SSL.SVRCONN
So after reviewing your post again, I realized that you did provide the config info for WMQInitialContextFactory. I was looking for a JMSAdmin.config file, but you have it in the environment hash table. And that is where the problem is. You are attempting to use the SSL channel for both the WMQInitialContextFactory and the connection factory. This is what is causing the lookup to fail. The WMQInitialContextFactory first makes a Java connection to the QMgr in order to look in the queue to obtain the administered objects such as the QCF. In order to do that, it needs to know the ciphersuite that the channel is set up for so it can negotiate the handshake. Right now, the *only* place that ciphersuite is recorded is in the QCF definition.
Try adding the following line:
environment.put(MQEnvironment.sslCipherSuite, "SSL_RSA_WITH_RC4_128_MD5");
As per this Infocenter page, that should tell the context factory classes which ciphersuite to use. Of course, they also need to know where the trust store is (and possibly the keystore, if the channel has SSLCAUTH(REQUIRED) set), so you still need to get those values into the environment. You can use the command-line variables or try loading them into the environment using code. You'll need both -Djavax.net.ssl.trustStore=key2.jks and -Djavax.net.ssl.trustStorePassword=????????.
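As a sketch of how that might be wired up in code (my interpretation, with placeholder paths/passwords; the questioner's update also shows MQC.SSL_CIPHER_SUITE_PROPERTY being tried as the environment key):
// JSSE needs to find the trust store before the SSL handshake is attempted
System.setProperty("javax.net.ssl.trustStore", "key2.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "????????");

// One way to make the ciphersuite known to the MQ client classes is the static
// field on com.ibm.mq.MQEnvironment (set it before creating the initial context)
MQEnvironment.sslCipherSuite = "SSL_RSA_WITH_RC4_128_MD5";

Hashtable environment = new Hashtable();
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.mq.jms.context.WMQInitialContextFactory");
environment.put(Context.PROVIDER_URL, "192.168.66.23:1414/SSL.CHL");
Context ctx = new InitialContext(environment);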
The other option is to continue to use the plaintext channel for the WMQInitialContextFactory and the SSL channel for the application. If the plaintext channel has an MCAUSER for a non-privileged user ID, it can be restricted to only connect to the QMgr and access the queue that contains the administered objects. With those restrictions, anyone will be able to read the administered objects using that channel but not the application queues or administrative queues.