Simperium bucket ready response time experiences

I have measured Simperium response time for loading a bucket (containing 20-30 small items) with the following code:
var simperium = new Simperium(simperiumAppID, { token: simperiumAuthData.access_token });
console.time("simperiumBucketInit");
var bucket = simperium.bucket("main");
bucket.on('error', function (errortype) {
    console.log("got error for bucket: " + errortype);
});
bucket.on('ready', function () {
    console.timeEnd("simperiumBucketInit");
});
bucket.start();
The bucket generally loads in 1.5-3 seconds, which is a bit long but acceptable. Sometimes, however, it takes 20-30 seconds, and once it took more than 5 minutes.
Is this a global phenomenon?
I am using the free tier. Do the paid tiers have better performance?

Performance problems in the past week (9/1 - 9/6) were likely caused by database issues we've had. The servers have since been upgraded and performance should be much smoother.
Another possible issue is the connection to the server. The Simperium JS library includes and uses SockJS. It tries to establish a WebSocket connection, but sometimes this can fail, depending on network/firewall/browser. In those cases it should fall back to regular HTTP polling, but this failover can take up to 20 seconds.
It is possible to pass options directly to SockJS to control the connection behavior and force the use of polling:
var simperium = new Simperium(simperiumAppID, {
    token: "",
    sockjs: {
        protocols_whitelist: ['xdr-polling', 'xhr-polling', 'iframe-xhr-polling', 'jsonp-polling']
    }
});
Regarding free vs paid tiers: paid/production apps have dedicated resources separate from free apps, so they should have more consistent performance.

Related

How do I use Event Store DB client without continued memory usage growth?

I am using the Event Store client for .NET and I am struggling to find the correct way to use it. When I register the client as a singleton in the .NET dependency injection and run my application over an extended period of time, memory usage grows continuously with each subscription.
I create and register the client in the following way. A full minimal application that experiences the problem can be found here.
var esdbConnectionString = configuration.GetValue("ESDB_CONNECTION_STRING", "esdb://admin:changeit@localhost:2113?tls=false");
var eventStoreClientSettings = EventStoreClientSettings.Create(esdbConnectionString);
var eventStoreClient = new EventStoreClient(eventStoreClientSettings);
services.AddSingleton(eventStoreClient);
My application has a high number of short streams over an extended period of time.
To reproduce:
1. Register EventStoreClient as a singleton, as recommended in the documentation.
2. Subscribe to a very high number of streams over an extended time.
3. Cancel the CancellationToken passed into the stream subscription and let it be garbage collected.
4. Watch the memory usage of the service grow.
How I am creating and subscribing to streams:
var streamName = CreateStreamName();
var payload = new PingEvent { StreamNr = _currentStreamNumber };
var eventData = new EventData(Uuid.NewUuid(), typeof(PingEvent).Name, EventSerialization.SerializeEventData(payload));
await _client.AppendToStreamAsync(streamName, StreamState.Any, new[] { eventData });

var streamCancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(30));
await _client.SubscribeToStreamAsync(streamName, FromStream.Start, async (sub, evnt, token) =>
{
    if (evnt.Event.EventType == "PongEvent")
    {
        _previousStreamIsDone = true;
        streamCancellationTokenSource.Cancel();
    }
},
cancellationToken: streamCancellationTokenSource.Token);
Approaches attempted
Registering as Transient or Scoped
If I register the client as Transient or Scoped in .NET DI, it throws thousands of exceptions internally and causes multiple problems.
Manually handling lifetime of client
Using a singleton service that manages the lifetime of the client, I have attempted to periodically dispose of the client and create a new one, ensuring that only one instance of the client exists at a time. This results in the same problem as registering the service as Transient or Scoped.
I am using version 22.0.0 of the Event Store client in .NET 6 against Event Store Database 21.10.0. The problem happens both when running on Windows and on the standard aspnet:6.0 Linux Docker container.
By inspecting the results of these dotnet-dumps, the memory growth seems to be happening inside the HashSet of ActiveCalls in the gRPC client.
I am hoping to find a way of using the client that does not lead to memory growth.
In your reproduction the leaked calls are coming from the extra read that you are issuing while processing an event received on the subscription.
There is an open issue (https://github.com/EventStore/EventStore-Client-Dotnet/issues/219) at the moment to deal with this better, but currently if you issue a read but don't consume all the events and don't cancel the read, then the call remains open. In your case this is happening if the slave has managed to reply Pong before the master has issued the read that results from receiving its own Ping in the subscription. That read will then contain the Ping and the Pong, only the Ping is read, and the call remains open.
For now, if you cancel those reads by passing the cancellation token that you are cancelling into the ReadStreamAsync call in ReadFromStartOfStreamToEnd, it should resolve your problem.
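For illustration, a minimal sketch of that fix, reusing the names from the question's snippet (how you enumerate the result is up to your helper):

// Sketch: pass the token that you cancel into the read as well, so the
// underlying gRPC call is torn down instead of lingering in ActiveCalls.
var result = _client.ReadStreamAsync(
    Direction.Forwards,
    streamName,
    StreamPosition.Start,
    cancellationToken: streamCancellationTokenSource.Token);

await foreach (var resolvedEvent in result)
{
    // process events; when the token is cancelled, the read is aborted
    // and the call is released
}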
In case it's helpful for you, you can see the number of Current Calls live rather than waiting a long time to see the effect on memory:
dotnet-counters monitor --counters "Grpc.Net.Client" -p <processid>

Deleted Scheduled Messages still sending

I am building a slack application that will schedule a message when someone posts a specific type of workflow in a channel.
It will schedule a message, and if someone from a specific group of users replies before it has sent, it will delete the scheduled message.
Unfortunately these messages are still sending, even though the list of scheduled messages is empty and the response when deleting the message is a successful one. I am also deleting the message within the 60-second limit that is noted on the API.
Scheduling the message gives me a success response, and if I use the list scheduled messages I get:
[
  {
    id: 'MESSAGE_ID',
    channel_id: 'CHANNEL_ID',
    post_at: 1620428096, // 2 minutes in the future for testing
    date_created: 1620428026,
    text: 'thread_ts: 1620428024.001300'
  }
]
Canceling the message:
async function cancelScheduledMessage(scheduled_message_id) {
  const response = await slackApi.post("/chat.deleteScheduledMessage", {
    channel: SLACK_CHANNEL,
    scheduled_message_id
  });
  return response.data;
}
response.data returns { "ok": true }
If I use the list scheduled messages API to retrieve what is scheduled, I get an empty array: [].
However, the message will still send to the thread.
Is there something I am missing? I have the proper scopes set up and the API calls appear to be working.
If it helps, I am using AWS Lambda, and DynamoDB to store/retrieve the thread_ts and message IDs.
Thanks all.
For messages due in 5 minutes or less, chat.deleteScheduledMessage has a bug (as of November 2021) [1]: although the API call may return ok, the actual message will still be delivered.
Note that for messages within 60 seconds, this API does return a proper error code, as described in the documentation [2]. For the range between 60 seconds and roughly 5 minutes, the API call returns ok but fails behind the scenes.
Until this bug is fixed, the only workaround is to delete only messages scheduled 5 minutes (the exact threshold may vary, according to Slack) or more in the future. This is not ideal and may not be feasible for some applications; a sketch of such a guard follows the references below.
[1] Private communication with Slack support.
[2] https://api.slack.com/methods/chat.deleteScheduledMessage
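For example, a guard along these lines (a sketch reusing the question's slackApi helper and SLACK_CHANNEL; the 5-minute threshold is approximate):

const SAFE_WINDOW_SECONDS = 5 * 60; // approximate; Slack may vary the threshold

async function cancelScheduledMessageIfSafe(scheduled_message_id, post_at) {
  const secondsUntilPost = post_at - Math.floor(Date.now() / 1000);
  if (secondsUntilPost < SAFE_WINDOW_SECONDS) {
    // Deletion may report ok yet still deliver; treat as not cancellable.
    return { ok: false, reason: "too_close_to_post_at" };
  }
  const response = await slackApi.post("/chat.deleteScheduledMessage", {
    channel: SLACK_CHANNEL,
    scheduled_message_id
  });
  return response.data;
}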

Managing Server-Sent Events with a Service Worker

I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to keep an EventSource open so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving the control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working; see https://github.com/akc42/pi_record.git (master branch).
Then I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread. Again it was almost working (see the workers branch of the above repository), but that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the Server-Sent Events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser, which may or may not try to grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. I have never seen any mention of approaches 2 and 3 below, but it seems to me that one of these two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
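A minimal sketch of that bookkeeping (hypothetical shape, names invented, and not tested against the background behaviour I am worried about) might be:

// In the service worker: remember the port of whichever tab grabbed control.
let controllingPort = null;

self.addEventListener('message', (event) => {
  if (event.data !== 'take-control') return;
  const port = event.ports[0];
  if (controllingPort !== null) {
    port.postMessage({ granted: false }); // simulate a failed attempt to take control
    return;
  }
  controllingPort = port;
  port.postMessage({ granted: true });
  port.onmessage = (e) => {              // further messages arrive on the retained port
    if (e.data === 'release-control') controllingPort = null;
  };
});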
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control), which I can then use to do Clients.get(clientId).postMessage() (or Clients.matchAll when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
    const requestURL = new URL(event.request.url);
    if (/^\/api\//.test(requestURL.pathname)) {
        event.respondWith(fetch(event.request)); // all api requests are a direct pass through
    } else if (/^\/service\//.test(requestURL.pathname)) {
        /*
          process these like message passing, with one extra url to say the client is going away
        */
        if (urlRecognised) {
            event.respondWith(new Response('OK', { status: 200 }));
        } else {
            event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, { status: 404 }));
        }
    } else {
        event.respondWith((async () => {
            const cache = await caches.open('recorder');
            const cachedResponse = await cache.match(event.request);
            const networkResponsePromise = fetch(event.request);
            event.waitUntil((async () => {
                const networkResponse = await networkResponsePromise;
                await cache.put(event.request, networkResponse.clone());
            })());
            // Return the cached response if we have one, otherwise the network response.
            return cachedResponse || networkResponsePromise;
        })());
    }
});
The top of the fetch event just passes the standard api requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those not supported).
The second section matches the phantom URLs of the form /service/something.
The last section is taken from Jake Archibald's offline cookbook and tries to use the cache, but updates the cache in the background if any of the static files have changed.
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but actually try to simulate a Server-Sent Event stream with one URL.
I'm thinking the code will be more like
...
} else if (/^\/service\//.test(requestURL.pathname)) {
    const stream = new TransformStream();
    const writer = stream.writable.getWriter();
    event.respondWith((async () => {
        const streamFinishedPromise = new Promise(async (resolve, reject) => {
            try {
                while (true) writer.write(await nextMessageFromServerSideEventStream());
            } catch (e) {
                writer.close();
                resolve();
            }
        });
        event.waitUntil(streamFinishedPromise); // keep the worker alive until the stream eventually closes
        return new Response(stream.readable, { status: 200 }); // probably need event-stream headers too
    })());
}
I am thinking that Approach 2 could be the simplest given where I am now, but I am concerned that when searching for how to use service workers I can find nothing that discusses this phantom URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 message channel close when the browser is moved to the background on an iPad? And in Approach 3, how do you really keep a response channel open, and does it get closed when the browser moves to the background?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of time it takes to process an event. Although event.waitUntil can prolong that, the only reference I can find to how long says the browser is still at liberty to cancel it if it appears it might never settle. I can't imagine that over a period of several hours it won't get cancelled. So an EventSource will close, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes and find some other mechanism to release resources held on behalf of the client.
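One way to do that (a hypothetical sketch, not code from the repository) is a server-side lease per controlling client that lapses unless renewed, decoupling control from the SSE connection staying open:

// Hypothetical Node.js-style lease sketch; all names invented.
const LEASE_MS = 5 * 60 * 1000; // matches the 5-minute renewal mentioned above

let control = null; // { clientId, timer }

function takeControl(clientId) {
  if (control && control.clientId !== clientId) return false; // someone else holds control
  renewControl(clientId);
  return true;
}

function renewControl(clientId) {
  if (control) clearTimeout(control.timer);
  control = {
    clientId,
    timer: setTimeout(() => { control = null; }, LEASE_MS) // release if not renewed
  };
}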

Testing for off-line in a Worklight app

What is the best way to test if a Worklight app is off-line?
After I use the WL.Device.startAcquisition( ... ) api to start stuff off, I am currently using:
WL.Device.Geo.acquirePosition(function (pos) {
    console.log("***** Acquired position ***** " + JSON.stringify(pos));
}, function (error) {
    console.log("***** Unable to acquire position ***** " + error.code + ' : ' + error.message);
    // call a method to asynchronously periodicallyCheckIfOnline( ... );
}, { timeout: 5000 });
And if I determine that I am offline, I then use the watchPosition api to periodically test for a new connection.
navHandle = navigator.geolocation.watchPosition(onSuccess, onError, { timeout: 5000 });
Once I get the connection back I then clear the watch.
navigator.geolocation.clearWatch(navHandle);
Is this the best way of doing it, or are there better Worklight APIs to use for this?
Note: I am trying to test this in a Mobile Browser Simulator scenario, hence the short timeouts.
On startup, use something like
WL.Client.connect({ onSuccess: onConnectSuccess, onFailure: onConnectFailure, timeout: number_of_ms });
to check if you have initial connectivity.
To detect any further changes in connectivity, you can use the
WL.Client.setHeartBeatInterval(number_of_s) API.
This will 'ping' the Worklight server every number_of_s seconds and fire the WL.Events.WORKLIGHT_IS_DISCONNECTED and WL.Events.WORKLIGHT_IS_CONNECTED events, to which you attach callbacks, as described in the reading-worthy tutorial linked to by @Leandro David.
NOTE: if you need to use the network to transfer heavy data, do a double check: once you know you have connectivity to the Worklight server, use the WL.Device.getNetworkInfo API to check the connection quality before sending/receiving data.
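For illustration, such a check might look like the sketch below. The exact field names returned by WL.Device.getNetworkInfo vary between Worklight versions, so treat them as assumptions, and transferHeavyData/deferTransfer are hypothetical application functions:

WL.Device.getNetworkInfo(function (networkInfo) {
    // field names here are version-dependent assumptions
    if (networkInfo.isNetworkConnected === "true" &&
        networkInfo.networkConnectionType === "WIFI") {
        transferHeavyData(); // hypothetical: safe to move the heavy payload now
    } else {
        deferTransfer();     // hypothetical: wait for a better connection
    }
});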
There is a tutorial about dealing with online/offline mode in the Worklight Getting Started material (the "Working Offline" link):
http://www.ibm.com/developerworks/mobile/worklight/getting-started.html#GS_work_offline
It describes the best way to use the Worklight API to deal with online/offline connectivity.
To summarize, I believe this is the most important part:
Active detection of connectivity
Connectivity loss can be detected in two locations in your application code:
– Application initialization – WL.Client.init() method, typically called from initOptions.js file
– Adapter procedure invocation – WL.Client.invokeProcedure() method
– To add connectivity failure detection in either location, add the onConnectionFailure property and specify a callback function to be invoked if connectivity fails:
var wlInitOptions = {
    onConnectionFailure: function (data) {
        connectionFailure(data);
    },
    // ... other init options
};
or
WL.Client.invokeProcedure(invocationData, {
    onSuccess: successHandlerFunction,
    onConnectionFailure: connectionFailure,
    timeout: 1000
});
Passive detection – Offline and online events
Each time the Worklight framework attempts to access the Worklight Server, it might detect that the application switched from offline to online status or vice versa.
In both cases, JavaScript events are fired:
– WL.Events.WORKLIGHT_IS_DISCONNECTED event is fired when connectivity to the Worklight Server fails
– WL.Events.WORKLIGHT_IS_CONNECTED event is fired when connectivity to the Worklight Server is restored
You can add event listeners to these events and specify the callback functions to handle them.
document.addEventListener(WL.Events.WORKLIGHT_IS_CONNECTED, connectDetected, false);
document.addEventListener(WL.Events.WORKLIGHT_IS_DISCONNECTED, disconnectDetected, false);
Note: WL.Events.WORKLIGHT_IS_DISCONNECTED and WL.Events.WORKLIGHT_IS_CONNECTED are namespace constants, not strings
There are more details available in the tutorial above.

Sending different bodies via the Amazon SES API

I am using the Amazon SES API for sending email to clients. It works very well, but I have to send a different body to each client. When I start to send mails to about 200,000 clients, what should the code below look like? Does it loop 200,000 times, or can I prepare an object and send it once (like an n:n system; now it's 1:n)?
var clientList = new List<String>(); // 200,000 mail addresses
foreach (var to in clientList)
{
    SendEmailRequest email = new SendEmailRequest();
    email.Message = new Message();
    email.Message.Body = new Body();
    email.Message.Body.Html = new Content(bodyhtml);
    email.Message.Subject = new Content(subject);
    email.WithDestination(new Destination() { ToAddresses = new List<String>() { to } })
         .WithSource("mysite@mysite.com")
         .WithReturnPath("mysite@mysite.com");
    SendEmailResponse resp = client.SendEmail(email); // that's 1:n
}
SendEmailResponse resp = client.SendEmail(emailList); // that's n:n, but it's a wrong usage
How can I implement the n:n approach with Amazon SES?
The application is ASP.NET MVC 3, so can I use an asynchronous controller? Is that a good idea?
Assuming you have production access for Amazon SES already (see What should I do after I'm finished testing and evaluating Amazon SES?) and a sufficiently increased Sending Quota to send 200,000 mails/day in the first place (see How Amazon SES Sets Sending Limits), the respective limits are documented for the SendEmail action:
The total size of the message cannot exceed 10 MB.
Amazon SES has a limit on the total number of recipients per message: the combined number of To:, CC: and BCC: email addresses cannot exceed 50. If you need to send an email message to a larger audience, you can divide your recipient list into groups of 50 or fewer, and then call Amazon SES repeatedly to send the message to each group. [emphasis mine]
Please note: It is strictly recommended to use Bcc: only for this kind of mass mailing operation, else your users will see their mail addresses exposed to each other and I can guarantee they won't be amused at all!
So you could prepare mails with 50 Bcc: recipients at a time, dropping the outbound mail count for your use case to about 4,000, which is a considerable improvement already. However, please note a respective AWS Team response to Increase sending limit, and question on FAQ:
if you're sending to multiple ISPs [...], I would recommend sending to one address at a time since certain ISPs are sensitive about multiple addresses on the BCC: line in large quantities. [emphasis mine]
Whether or not this warning applies depends on your use case as usual (e.g. you might be able to shard the mails by ISP etc.).
Doing it asynchronously is fine and likely useful, but you need to ensure you stay within your Maximum Send Rate (mails/second) limit as well. These limits are visible in the SES tab of the AWS Management Console, but are of course available via the API as well (see Monitoring Your Sending Limits for details).
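Putting that together, a rough sketch in the style of the question's snippet (assuming the same SDK version; the Thread.Sleep is a crude stand-in for proper throttling, and note that Bcc batching only helps where the body is shared; per-client bodies still need one call each):

const int batchSize = 50;
for (int i = 0; i < clientList.Count; i += batchSize)
{
    // take the next group of up to 50 recipients and put them on the Bcc: line
    var batch = clientList.Skip(i).Take(batchSize).ToList();
    SendEmailRequest email = new SendEmailRequest();
    email.Message = new Message();
    email.Message.Body = new Body();
    email.Message.Body.Html = new Content(bodyhtml);
    email.Message.Subject = new Content(subject);
    email.WithDestination(new Destination() { BccAddresses = batch })
         .WithSource("mysite@mysite.com")
         .WithReturnPath("mysite@mysite.com");
    client.SendEmail(email);
    System.Threading.Thread.Sleep(1000); // crude pacing to respect the send rate
}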
