How to test Connection Pooling in NodeJS using MongoDB Database?

Instead of having our app wait around for a request before connecting to the database, we're going to have it connect when the application starts, and we're going to give ourselves a pool of connections to draw from as and when we need them.
Here we're using the node-mongodb-native driver, which, like most available MongoDB drivers, has an option you can use to set the size of your connection pool. For this driver it's called poolSize, and it has a default value of 5. We can make use of the poolSize option by creating a database connection variable in advance, and letting the driver allocate available spaces as new connection requests come in:
// The MongoDB driver and assert module used throughout the snippets below
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// This is a global variable we'll use for handling the MongoDB client
var mongodb;

// Connection URL
var url = '[connectionString]';

// Create the db connection
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  mongodb = db;
});
To change the size of the connection pool from the default, we can pass poolSize in as an option:
// Create the database connection
MongoClient.connect(url, {
  poolSize: 10
  // other options can go here
}, function(err, db) {
  assert.equal(null, err);
  mongodb = db;
});
Now we have a connection ready and waiting. To use our new connection, we just need to make use of our new global variable, mongodb, when a request is made:
// Use the connect method to connect to the server when the page is requested
app.get('/', function(request, response) {
  mongodb.listCollections({}).toArray(function(err, collections) {
    assert.equal(null, err);
    collections.forEach(function(collection) {
      console.log(collection);
    });
  });
  response.send('See console for a list of available collections');
});

Related

Protocol error when calling puppeteer.connect()

I am using the basic approach as set out in this post to connect from a client docker container to any one of a number of chrome docker containers (in a docker swarm/service, potentially across several servers behind nginx, deployed using CapRover).
In each chrome container I maintain a pool (just a simple array) of browser objects, and direct incoming requests to an appropriate browser as follows (very similar to the linked post):
import http from 'node:http'; // https://nodejs.org/api/http.html
import httpProxy from 'http-proxy'; // https://www.npmjs.com/package/http-proxy

const proxy = httpProxy.createProxyServer({ ws: true });

// an array (pool) of pre-launched and managed browser objects...
const browsers = [ ... ];

http
  .createServer()
  .on('upgrade', (req, socket, head) => {
    const browser = browsers[Math.floor(Math.random() * browsers.length)]; // in reality I don't just pick a browser at random
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3222);
The above is listening at ws://srv-captain--chrome:3222 (communication is "internal" over the docker network between containers).
Then, in my client container, I connect to the common endpoint ws://srv-captain--chrome:3222 as follows:
import puppeteer from 'puppeteer'; // https://www.npmjs.com/package/puppeteer (using version 17.1.3 at time of posting this)

try {
  const browser = await puppeteer.connect({ browserWSEndpoint: 'ws://srv-captain--chrome:3222' });
} catch (err) {
  console.error('error connecting to browser', err);
}
This works really well, except that I am getting occasional/inconsistent errors like these when calling puppeteer.connect() in the client container above:
Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
Protocol error (Performance.enable): Target closed.
Almost always, if I simply try to connect again, the connection is made without further error, and at the first attempt.
I have no idea why the error is complaining that the page has been closed or Target closed since, at this point in the process, I'm not attempting to interact with any page, and I know from listening for browser.on('disconnected'...), and also monitoring the chromium processes themselves, that each browser in the array is still working fine... none has crashed.
Any idea what's going on here?
UPDATE after further testing
Of course, in the client container we don't connect to a browser just for the sake of it, like in the above snippet, but to open a page and do some stuff with the page. In practice, in the client container it's more like the following test snippet:
const doIteration = function (i) {
  return new Promise(async (resolve, reject) => {
    // mimic incoming requests coming in at random times over a short period by introducing a random initial delay...
    await new Promise(resolve => setTimeout(resolve, Math.random() * 5000));
    // now actually connect...
    let browser;
    try {
      browser = await puppeteer.connect({ browserWSEndpoint: `ws://srv-captain--chrome:3222?queryParam=loop_${i}` });
    } catch (err) {
      reject(err);
      return;
    }
    // now that we have a browser, open a new page...
    const page = await browser.newPage();
    // do something useful with the page (not shown here) and then close it...
    await page.close();
    // now disconnect (but don't close) the browser...
    browser.disconnect();
    resolve();
  });
};

const promises = [];
for (let i = 0; i < 15; i++) {
  promises.push(doIteration(i));
}

try {
  await Promise.all(promises);
} catch (err) {
  console.error(`error doing stuff`, err);
}
Each iteration above is performed multiple times concurrently; I am using Promise.all() on an array of iteration promises to mimic multiple concurrent incoming requests in my production code. The above is enough to reproduce the problem... the error doesn't happen on every iteration's call to puppeteer.connect(), just some.
So there seems to be some sort of interplay between opening/closing a page in one iteration and calling puppeteer.connect() in another, despite each iteration closing its page and disconnecting its browser properly. That would also explain the Most likely the page has been closed part of the error message, if there is some hangover relating to a page closed in another iteration... though for some reason the error surfaces when calling puppeteer.connect().
With the use of a pool of browser objects in the browsers array, and a docker swarm having multiple containers on multiple servers, each upgrade message could be received at a different container (which could even be on a different server) and could be routed to a different browser in the browsers array. But I now think that this is a red herring, because in the further testing I narrowed the problem down by routing all requests to browsers[0] and also scaling the service down to just one container... so that the upgrade messages are always handled by the same container on the same server and routed to the same browser... and the problem still occurs.
Full stacktrace for the above-mentioned error:
Error: Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
    at CDPSession.send (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Connection.js:281:35)
    at EmulationManager.emulateViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/EmulationManager.js:33:73)
    at Page.setViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:1776:93)
    at Function._create (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:242:24)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async Target.page (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Target.js:123:23)
    at async Promise.all (index 0)
    at async BrowserContext.pages (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Browser.js:577:23)
    at async Promise.all (index 0)
As I dug deeper and deeper into this problem, it became more and more apparent that I might not actually be doing anything fundamentally wrong, and that this might just be a bug in puppeteer itself. So I reported it as an issue over on puppeteer... and indeed, it is acknowledged as a bug for any version later than 15.5.0, and is being fixed. In the meantime, the workaround is to revert to puppeteer version 15.5.0 and to be careful when calling browser.pages() when concurrent connections are being used, because that might itself throw an error... but I understand that this too might be something they can/will fix so that browser.pages() is more resilient to the presence of concurrent connections.
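To illustrate the "be careful" part, a defensive wrapper around browser.pages() might look something like this (a sketch only, not part of the official fix; the function name and retry count are arbitrary):
// Hypothetical helper: browser.pages() can throw while other connections are
// concurrently opening/closing pages, so retry a few times before giving up
async function listPagesSafely(browser, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await browser.pages();
    } catch (err) {
      console.warn(`browser.pages() failed on attempt ${i + 1} of ${attempts}:`, err.message);
    }
  }
  return []; // give up and report no pages rather than crashing the caller
}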

Possible to publish from Rails through Socket that runs on Node?

Let's say I run a server with node.js and I attach Socket.io to it.
Now I can emit from it, and let's say my front-end app can talk to it.
Can I use Rails to emit to the same front-end app through that Socket server I ran with node.js?
You won't be able to emit directly from Rails to the same socket.
What you can do is set up inter-process communication (IPC) between Rails & node.js, and when a message is received from Rails, emit that message to the front end.
Here's an example using unix sockets, which requires that your Rails & node.js apps run on the same machine.
const net = require('net');
const server = require('http').createServer();
const io = require('socket.io')(server);

server.listen(3000); // Socket.io server

const socketName = '/tmp/ipc.sock';

const unix = net.createServer(connection => {
  connection.on('data', data => {
    // data may be a JSON with a room name & message
    // If data is big enough, you may need to buffer data
    // and emit on `end`
    io.emit('some-event', data.toString()); // Emit data
    // connection.write('something'); if you want to send data back to rails
    connection.end();
  });
});

unix.listen(socketName, () => {
  console.log(`Socket started at ${socketName}`);
});
I don't know any Ruby, but now you will need to write to /tmp/ipc.sock from the Rails side.
It should look something like this (again, I don't know Ruby):
require 'socket'
socket = UNIXSocket.new("/tmp/ipc.sock")
socket.puts('some data')
# or maybe socket.write('some data')
Unix sockets are just one of many ways to handle IPC; you could also use Redis, RabbitMQ, or whatever you like or feel comfortable with. A Redis-based variant is sketched below.
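For comparison, here is a minimal sketch of the same bridge using Redis pub/sub instead of a unix socket (assuming the redis npm package v4 API and a Redis server both apps can reach; the channel name rails-events is made up):
// Node.js side: forward anything published on a Redis channel to Socket.io clients
const { createClient } = require('redis'); // https://www.npmjs.com/package/redis
const server = require('http').createServer();
const io = require('socket.io')(server);
server.listen(3000);

(async () => {
  const subscriber = createClient(); // assumes Redis on localhost:6379
  await subscriber.connect();
  // Rails would PUBLISH to this channel (e.g. via the redis gem)
  await subscriber.subscribe('rails-events', (message) => {
    io.emit('some-event', message);
  });
})();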

MQTT+Mosquitto+Javascript in windows

I am new to MQTT, so can someone help me connect MQTT with Mosquitto using JavaScript? I am using the code below, but it gives this error:
Connection failed: AMQJS0007E Socket error:undefined.
My code is:
<script type='text/javascript' src='jquery-1.10.1.js'></script>
<script type='text/javascript' src="mqttws31.js"></script>
<script type='text/javascript'>
  var client = new Messaging.Client("ns.testingindia.tld", 1883, "myclientid_" + parseInt(Math.random() * 100, 10));

  // Gets called if the websocket/mqtt connection gets disconnected for any reason
  client.onConnectionLost = function (responseObject) {
    // Depending on your scenario you could implement a reconnect logic here
    alert("connection lost: " + responseObject.errorMessage);
  };

  // Gets called whenever you receive a message for your subscriptions
  client.onMessageArrived = function (message) {
    // Do something with the push message you received
    $('#messages').append('Topic: ' + message.destinationName + ' | ' + message.payloadString + '');
  };

  // Connect options
  var options = {
    timeout: 3,
    // Gets called if the connection has successfully been established
    onSuccess: function () {
      alert("Connected");
    },
    // Gets called if the connection could not be established
    onFailure: function (message) {
      document.write("Connection failed: " + message.errorMessage);
      alert("Connection failed: " + message.errorMessage);
    }
  };

  // Connect to the broker (the reported error comes from the onFailure callback above)
  client.connect(options);

  // Creates a new Messaging.Message object and sends it to the MQTT broker
  var publish = function (payload, topic, qos) {
    // Send your message (also possible to serialize it as JSON or protobuf or just use a string, no limitations)
    var message = new Messaging.Message(payload);
    message.destinationName = topic;
    message.qos = qos;
    client.send(message);
  };
</script>
You are connecting to port 1883 which is the default MQTT port. I assume you mean to use Websockets, and that would typically be configured on a different port number. If the broker you're using has Websocket support, ensure you connect to the correct port with Messaging.Client().
If you're using the Mosquitto broker, you'll need version 1.4 from its bitbucket repository for Websocket support, but note that Mosquitto 1.4 hasn't yet been released.
A quick way to test that your broker isn't causing the problem is to connect to broker.mqttdashboard.com on port 8000. If that doesn't work, my next guess is that you have just Mosquitto installed and no websockets server, which you need if you want to use JS to connect directly to the broker over the web.
Another, quicker way to get up and running now is to download HiveMQ (the trial version supports 25 connections); it has an MQTT broker with websockets built in, will run on Windows, and will be up and running in 5 minutes.
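For example, a minimal connection against that public test broker over websockets might look like this (a sketch only; the host and port come from the suggestion above, and options is the connect-options object already defined in the question):
// Point the client at a websocket-enabled broker/port, then actually connect
var client = new Messaging.Client("broker.mqttdashboard.com", 8000, "myclientid_" + parseInt(Math.random() * 100, 10));
client.connect(options);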
Which version of Mosquitto are you using?
The current release version (1.3.4) does not natively support Websockets (the next version will).
You can use something like lighttpd with mod_websockets to supply websocket support (instructions for Linux are linked from http://test.mosquitto.org/ws.html), or you can build a new version of Mosquitto from the head of the source tree.

Socket.io-1.0 get clients from namespace

I work with socket.io 1.0 and maybe my approach is wrong.
Actually, I open a namespace server side with
var nsp = io.of('/myNamespace');
And clients connect with
var socket = io.connect('http://localhost/myNamespace');
I can start communication without problems.
Server side I catch signals with
nsp.on('connection', function(socket){
  socket.on('disconnect', function(){
    // problem here
  });
});
In the disconnect handler I would like to disconnect all sockets connected to my namespace, so I tried
for (var myParticipantID in io.sockets.adapter.nsp.connected) {
  io.sockets.adapter.nsp.connected[myParticipantID].disconnect();
}
but it doesn't work... I don't get an error, but the clients stay connected.
I tried with
io.sockets.nsp.clients();
but that throws an error since socket.io 1.0.
I don't want to create a room, but maybe that's my mistake?
Thanks for your help,
MagicDenver
If it helps somebody: I work with Node.js, so I created a value
app.set(idNameSpace, []);
and push each socket into it when there is a new connection.
You should use the io.of(namespace) function to get connected clients.
for (var id in io.of('/namespace').connected) {
  var s = io.of('/namespace').connected[id];
  s.disconnect();
}
If you don't know the namespace and you are in a socket.on statement, you can use socket.nsp.connected instead of io.of('/namespace').connected
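For instance, applied to the handler from the question, it might look like this (a sketch against the socket.io 1.x API, where each namespace exposes a connected map of its sockets):
nsp.on('connection', function(socket){
  socket.on('disconnect', function(){
    // disconnect every remaining socket in this namespace
    for (var id in socket.nsp.connected) {
      socket.nsp.connected[id].disconnect();
    }
  });
});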

MBeanServerConnection.invoke hangs forever

We have an app that invokes various remote methods on MBeans using MBeanServerConnection.invoke.
Occasionally one of these methods hangs.
Is there any way to put a timeout on the call, so that it will return with an exception if the call takes too long?
Or do I have to move all those calls into separate threads so they don't lock up the UI and require killing the app?
See http://weblogs.java.net/blog/emcmanus/archive/2007/05/making_a_jmx_co.html
===== Update =====
I was thinking about this stuff when I first responded, but I was on my mobile and I can't type worth a damn on it.....
This is really an RMI problem, and unless you use a different protocol, there's not much you can do, except, as you say, move all those calls into separate threads so they don't lock up the UI.
But.... if you have the option of fiddling with the target server and you can customize the connecting client, you have at least one option, which is to customize the JMXConnectorServer on your target servers.
The standard JMXConnectorServer implementation is the RMIConnectorServer. Part of its specification is that when you create a new instance using any of the constructors (like RMIConnectorServer(JMXServiceURL url, Map environment)), the environment map can contain a key/value pair where the key is RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE and the value is an RMIClientSocketFactory. Therefore, you can specify a socket factory like this:
RMIClientSocketFactory clientSocketFactory = new RMIClientSocketFactory() {
    public Socket createSocket(String host, int port) throws IOException {
        // Create the socket and give it a 3000 ms SO_TIMEOUT
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
};
This factory creates a Socket and then sets its SO_TIMEOUT using setSoTimeout, so when the client connects using this socket, all operations, including connecting, will time out after 3000 ms.
You could also check out the JMXMP connector and server in the jmx-optional package of the OpenDMK (the links are to my mavenized versions on GitHub). No built-in solution, mind you, but they're super easy to extend, and JMXMP is simple TCP socket based rather than RMI, so this type of customization would be trivial.
Cheers.
@Nicholas: The above code is not working. I mean the request is not timing out after 3000 ms.
map.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, new RMIClientSocketFactory() {
    @Override
    public Socket createSocket(String host, int port) throws IOException {
        if (logger.isInfoEnabled()) {
            logger.info("JMXManager inside createSocket..." + host + ": port :" + port);
        }
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
});
cs = JMXConnectorServerFactory.newJMXConnectorServer(url, map, mbeanServer);
As I answered on How to set request timeout for JMX Connector, the RMI properties can help you. All the properties are documented on the Oracle site: http://docs.oracle.com/javase/7/docs/technotes/guides/rmi/sunrmiproperties.html
For example: -Dsun.rmi.transport.tcp.responseTimeout=60000 is a client side tcp response timeout. There are also properties for connect timeout and for server side connections.
I am also not happy about how the JMX/RMI/TCP stack hides important settings of the lower-level protocols and does not make them available per connection.
