I use Gatling to send data to ActiveMQ. The payload is generated in a separate method, and the response should also be validated. However, how can I access the session data within the checks, e.g.
check(bodyString.is()) or simpleCheck(...)? I have also thought about storing the current payload in a separate global variable, but I don't know if that is the right approach. My code's setup looks like this at the moment:
val scn = scenario("Example ActiveMQ Scenario")
.exec(jms("Test").requestReply
.queue(...)
.textMessage{ session => val message = createPayload(); session.set("payload", payload); message}
.check(simpleCheck{message => customCheck(message, ?????? )})) //access stored payload value, alternative: check(bodystring.is(?????)
def customCheck(m: Message, string: String) = {
// check logic goes here
}
Disclaimer: I'm providing the example in Java, as you don't seem to be a Scala developer, so Java is probably a better fit for you (the Java DSL has been supported since Gatling 3.7).
The way you want to do things can't possibly work.
.textMessage(session -> {
String message = createPayload();
session.set("payload", payload);
return message;
}
)
As explained in the documentation, Session is immutable, so in a function that's supposed to return the payload, you can't also return a new Session.
What you would have to do is first store the payload in the session, then fetch it:
.exec(session -> session.set("payload", createPayload()))
...
.textMessage("#{payload}")
Regarding writing your check, simpleCheck doesn't have access to the Session. You have to use check(bodyString.is()) and pass a function to is, again as explained in the documentation.
Say I have a service worker that populates the cache with the following working code when it's installed:
async function install() {
console.debug("SW: Installing ...");
const cache = await caches.open(CACHE_VERSION);
await cache.addAll(CACHE_ASSETS);
console.log("SW: Installed");
}
async function handleInstall(event) {
event.waitUntil(install());
}
self.addEventListener("install", handleInstall);
When it performs cache.addAll(), will the browser use its own internal cache, or will it always download the content from the site? This is important because, if one creates a new service worker release and there are new static assets, the old versions may end up being cached by the service worker.
If not, then I guess one still has to use hashed/versioned names for static assets, something I was hoping service workers would make unnecessary.
cache.addAll()'s behavior is described in the service worker specification, but here's a more concise summary:
For each item in the parameter array, if it's a string and not a Request, construct a new Request using that string as input.
Perform fetch() on each request and get a response.
As long as the response has an ok status, call cache.put() to add the response to the cache, using the request as the key.
To answer your question, the most relevant step is step 1, as that determines what kind of Request is passed to fetch(). If you just pass in a string, a lot of defaults will be used when implicitly constructing the Request. If you want more control over what's fetch()ed, you should explicitly create a Request yourself and pass that to cache.addAll() instead of passing in strings.
For instance, this is how you'd explicitly set the cache mode on all the requests to 'reload', which always skips the browser's normal HTTP cache and goes to the network for a response:
// Define your list of URLs somewhere...
const URLS = ['/one.css', '/two.js', '/three.js', '...'];
// Later...
const requests = URLS.map((url) => new Request(url, {cache: 'reload'}));
await cache.addAll(requests);
I am trying to create a websocket server and client in my iOS app, which I successfully managed to do with the help of the sample implementation here (https://github.com/apple/swift-nio/tree/master/Sources/NIOWebSocketServer) - so the current working situation is: I run the websocket server when the app launches, and then I load the client in a webview which can connect to it.
Now my problem is that I want my server to be a secured websocket server (basically, to connect to the websocket server from an HTTPS HTML page).
I am new to network programming and the swift-nio documentation is lacking, to say the least. As far as I understand, I could use (https://github.com/apple/swift-nio-transport-services).
I found this thread which is exactly what I need - https://github.com/apple/swift-nio-transport-services/issues/39 - I could disable the TLS authentication, as I don't care about it in my use case as long as I can get the websocket connected.
So my question is: how do I extend my client (https://github.com/apple/swift-nio/tree/master/Sources/NIOWebSocketClient) and server (https://github.com/apple/swift-nio/tree/master/Sources/NIOWebSocketServer) to use swift-nio-transport-services?
I could add the NIOSSLContext and so on, but I think I need to add the EventLoopGroup and new bootstrap methods. I know the answer is right there... but I just cannot seem to pinpoint it.
Any pointer would be appreciated.
Thanks.
To translate a simple NIO Server to a NIOTransportServices one, you need to make the following changes:
Add a dependency on NIOTransportServices to your server.
Change MultiThreadedEventLoopGroup to NIOTSEventLoopGroup.
Change ClientBootstrap to NIOTSConnectionBootstrap.
Change ServerBootstrap to NIOTSListenerBootstrap.
Build and run your code.
Some ChannelOptions don’t work in NIOTransportServices, but most do: the easiest way to confirm that things are behaving properly is to quickly test the common flow.
This doesn’t add any extra functionality to your application, but it does give you the same functionality using the iOS APIs.
To add TLS to either NIOTSConnectionBootstrap or NIOTSListenerBootstrap, you use the .tlsOptions function. For example:
NIOTSListenerBootstrap(group: group)
.tlsOptions(myTLSOptions())
Configuring a NWProtocolTLS.Options is a somewhat tricky thing to do. You need to obtain a SecIdentity, which requires interacting with the keychain. Quinn has discussed this somewhat here.
Once you have a SecIdentity, you can use it like so:
func myTLSOptions() -> NWProtocolTLS.Options {
let options = NWProtocolTLS.Options()
let yourSecIdentity = // you have to implement something here
sec_protocol_options_set_local_identity(options.securityProtocolOptions, sec_identity_create(yourSecIdentity)!)
return options
}
Once you have that code written, everything should go smoothly!
As an extension, if you wanted to secure a NIO server on Linux, you can do so using swift-nio-ssl. This has separate configuration as the keychain APIs are not available, and so you do a lot more loading of keys and certificates from files.
I needed a secure websocket without using SecIdentity or NIOTransportServices, so based on @Lukasa's hint about swift-nio-ssl I cobbled together an example that appears to work correctly.
I don't know if it's correct, but I'm putting it here in case someone else can benefit. Error handling and aborting when the trys fail are left out for brevity.
let configuration = TLSConfiguration.forServer(certificateChain: try! NIOSSLCertificate.fromPEMFile("/path/to/your/tlsCert.pem").map { .certificate($0) }, privateKey: .file("/path/to/your/tlsKey.pem"))
let sslContext = try! NIOSSLContext(configuration: configuration)
let upgradePipelineHandler: (Channel, HTTPRequestHead) -> EventLoopFuture<Void> = { channel, req in
WebSocket.server(on: channel) { ws in
ws.send("You have connected to WebSocket")
ws.onText { ws, string in
print("Received text: \(string)")
}
ws.onBinary { ws, buffer in
// We don't accept any Binary data
}
ws.onClose.whenSuccess { value in
print("onClose")
}
}
}
self.eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 2)
let port: Int = 5759
let promise = self.eventLoopGroup!.next().makePromise(of: String.self)
// Keep a reference to the bound channel so it can be closed later
let server = try! ServerBootstrap(group: self.eventLoopGroup!)
// Specify backlog and enable SO_REUSEADDR for the server itself
.serverChannelOption(ChannelOptions.backlog, value: 256)
.serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
.childChannelInitializer { channel in
let handler = NIOSSLServerHandler(context: sslContext)
_ = channel.pipeline.addHandler(handler)
let webSocket = NIOWebSocketServerUpgrader(
shouldUpgrade: { channel, req in
return channel.eventLoop.makeSucceededFuture([:])
},
upgradePipelineHandler: upgradePipelineHandler
)
return channel.pipeline.configureHTTPServerPipeline(
withServerUpgrade: (
upgraders: [webSocket],
completionHandler: { ctx in
// complete
})
)
}.bind(host: "0.0.0.0", port: port).wait()
_ = try! promise.futureResult.wait()
try! server.close(mode: .all).wait()
My requirement is: I have an API which provides user data. In Apostrophe CMS I need to access the user data from all the layouts (header, main, footer).
I can see global.data, which is available everywhere in the templates. Likewise, I need a hook which will call the API and store the response data in Apostrophe's global.data.
Please let me know if you need further information.
You could hit that API on every page render:
// index.js of some apostrophe module
// You should `npm install request-promise` first
const request = require('request-promise');
module.exports = {
construct: function(self, options) {
self.on('apostrophe-pages:beforeSend', async function(req) {
const apiInfo = await request('http://some-api.com/something');
req.data.apiInfo = apiInfo;
// now in your templates you can access `data.apiInfo`
});
}
}
But this will hit that API on every single request, which will of course slow your site down. So I would recommend that you cache the information for some period of time, for instance along the lines of the sketch below.
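A minimal sketch of one way to do that, keeping the response in an in-memory variable with a time-to-live (the five-minute TTL is arbitrary; the some-api.com URL and the data.apiInfo property are just placeholders carried over from the example above):
// index.js of some apostrophe module
// You should `npm install request-promise` first
const request = require('request-promise');
module.exports = {
  construct: function(self, options) {
    // Cached API response and the time it was fetched, kept in module scope
    let cached = null;
    let cachedAt = 0;
    const TTL = 5 * 60 * 1000; // refetch after five minutes (arbitrary)
    self.on('apostrophe-pages:beforeSend', async function(req) {
      if (!cached || (Date.now() - cachedAt) > TTL) {
        cached = await request('http://some-api.com/something');
        cachedAt = Date.now();
      }
      // templates can still access `data.apiInfo`, but the API is only
      // hit once per TTL window instead of on every render
      req.data.apiInfo = cached;
    });
  }
};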
I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, GitHub, StackOverflow questions). The most helpful is the ServiceWorker Cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or redirecting the user to an offline page until the browser can reconnect.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe and code don't seem to work on their own, as they don't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":
"The FetchEvent for [the form URL] resulted in a network error response:
the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
//Determine the root for the routes. I.e, if the Service Worker URL is http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
var tokens = (self.location + '').split('/');
tokens[tokens.length - 1] = '';
return tokens.join('/');
})();
//By using Mozilla’s ServiceWorkerWare we can quickly set up some routes for a virtual server. It is convenient to review the virtual server recipe before reading this.
var worker = new ServiceWorkerWare();
//So here is the idea. We will check if we are online or not. In case we are not online, enqueue the request and provide a fake response.
//Else, flush the queue and let the new request reach the network.
//This function factory does exactly that.
function tryOrFallback(fakeResponse) {
//Return a handler that…
return function(req, res) {
//If offline, enqueue and answer with the fake response.
if (!navigator.onLine) {
console.log('No network availability, enqueuing');
return enqueue(req).then(function() {
//As the fake response will be reused but Response objects are one use only, we need to clone it each time we use it.
return fakeResponse.clone();
});
}
//If online, flush the queue and answer from network.
console.log('Network available! Flushing queue.');
return flushQueue().then(function() {
return fetch(req);
});
};
}
//A fake response for when there is no connection. A real implementation could have cached the last collection of updates and kept a local model. For simplicity, that is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
JSON.stringify([{
text: 'You are offline.',
author: 'Oxford Brookes University',
id: 1,
isSticky: true
}]),
{ headers: { 'Content-Type': 'application/json' } }
)));
//For deletion, let’s simulate that all went OK. Notice we are omitting the body of the response; trying to add a body to a response with a 204 (deleted) status throws an error.
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
status: 204
})));
//Creation is another story. We cannot reach the server, so we cannot get the id for the new updates.
//No problem: just say we accept the creation and we will process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
status: 202
})));
//Start the service worker.
worker.init();
//By using Mozilla’s localforage db wrapper, we can count on a fast setup for a versatile key-value database. We use it to store the queue of deferred requests.
//Enqueue consists of adding a request to the list. Due to the limitations of IndexedDB, Request and Response objects cannot be saved, so we need an alternative representation.
//This is why we call serialize().
function enqueue(request) {
return serialize(request).then(function(serialized) {
//Return the chain so enqueue() only resolves once the item is stored.
return localforage.getItem('queue').then(function(queue) {
/* eslint no-param-reassign: 0 */
queue = queue || [];
queue.push(serialized);
return localforage.setItem('queue', queue).then(function() {
console.log(serialized.method, serialized.url, 'enqueued!');
});
});
});
}
//Flush is a little more complicated. It consists of getting the elements of the queue in order and sending each one, keeping track of the requests not yet sent.
//Before sending a request we need to recreate it from the alternative representation stored in IndexedDB.
function flushQueue() {
//Get the queue
return localforage.getItem('queue').then(function(queue) {
/* eslint no-param-reassign: 0 */
queue = queue || [];
//If empty, nothing to do!
if (!queue.length) {
return Promise.resolve();
}
//Else, send the requests in order…
console.log('Sending ', queue.length, ' requests...');
return sendInOrder(queue).then(function() {
//Requires error handling. Actually, this is assuming all the requests in queue are a success when reaching the Network.
// So it should empty the queue step by step, only popping from the queue if the request completes with success.
return localforage.setItem('queue', []);
});
});
}
//Send the requests inside the queue in order. Waiting for the current before sending the next one.
function sendInOrder(requests) {
//The reduce() chains one promise per serialized request, not allowing progress to the next one until the current one completes.
var sending = requests.reduce(function(prevPromise, serialized) {
console.log('Sending', serialized.method, serialized.url);
return prevPromise.then(function() {
return deserialize(serialized).then(function(request) {
return fetch(request);
});
});
}, Promise.resolve());
return sending;
}
//Serialize is a little bit convoluted because headers is not a simple object.
function serialize(request) {
var headers = {};
//for (... of ...) is ES6 notation, but the browsers that support Service Workers support it as well, and it is the only way of retrieving all the headers.
for (var entry of request.headers.entries()) {
headers[entry[0]] = entry[1];
}
var serialized = {
url: request.url,
headers: headers,
method: request.method,
mode: request.mode,
credentials: request.credentials,
cache: request.cache,
redirect: request.redirect,
referrer: request.referrer
};
//Only if the method is not GET or HEAD is the request allowed to have a body.
if (request.method !== 'GET' && request.method !== 'HEAD') {
return request.clone().text().then(function(body) {
serialized.body = body;
return Promise.resolve(serialized);
});
}
return Promise.resolve(serialized);
}
//Compared, deserialize is pretty simple.
function deserialize(data) {
return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';
// On install, cache some resources.
self.addEventListener('install', function(evt) {
console.log('The service worker is being installed.');
// Ask the service worker to keep installing until the returning promise
// resolves.
evt.waitUntil(precache());
});
// On fetch, use cache only strategy.
self.addEventListener('fetch', function(evt) {
console.log('The service worker is serving the asset.');
evt.respondWith(fromCache(evt.request));
});
// Open a cache and use `addAll()` with an array of assets to add all of them
// to the cache. Return a promise resolving when all the assets are added.
function precache() {
return caches.open(CACHE).then(function (cache) {
return cache.addAll([
'/js/lib/ServiceWorkerWare.js',
'/js/lib/localforage.js',
'/js/settings.js'
]);
});
}
// Open the cache where the assets were stored and search for the requested
// resource. Notice that in case of no matching, the promise still resolves
// but it does with `undefined` as value.
function fromCache(request) {
return caches.open(CACHE).then(function (cache) {
return cache.match(request).then(function (matching) {
return matching || Promise.reject('no-match');
});
});
}
Here is the error message I am getting in Chrome when I go offline:
(A similar error occurred in Firefox - it falls over at line 409 of ServiceWorkerWare.js)
ServiceWorkerWare.prototype.executeMiddleware = function (middleware,
request) {
var response = this.runMiddleware(middleware, 0, request, null);
response.catch(function (error) { console.error(error); });
return response;
};
This is a little more advanced than beginner level, but you will need to detect when you are offline or in a "lie-fi" state (a connection that looks alive but isn't usable). Instead of POSTing data to an API or endpoint, you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus Safari...
So maybe a good strategy is to persist your data in IndexedDB and, when you can connect (background sync fires an event for this), POST the data, as in the sketch below. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (that will level out very soon).
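A minimal sketch of that pattern, assuming a hypothetical saveToIndexedDB() / sendQueuedForms() pair of helpers (not shown) that write the form data to IndexedDB and later POST the queued entries, and an arbitrary 'sync-forms' tag:
// In the page: store the form data locally, then ask for a background sync.
async function saveAndSync(formData) {
  await saveToIndexedDB(formData); // hypothetical helper
  const registration = await navigator.serviceWorker.ready;
  if ('sync' in registration) {
    await registration.sync.register('sync-forms');
  } else {
    await sendQueuedForms(); // no Background Sync support: try to send right away
  }
}

// In the service worker: the sync event fires once connectivity comes back.
self.addEventListener('sync', function(event) {
  if (event.tag === 'sync-forms') {
    event.waitUntil(sendQueuedForms()); // hypothetical helper that POSTs the queue
  }
});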
As always, design your code as a progressive enhancement; it can be tricky, but it is worth it in the end.
Service Workers tend to cache the static HTML, CSS, JavaScript, and image files.
I need to use PouchDB and sync it with CouchDB
Why CouchDB?
CouchDB is a NoSQL database consisting of a number of documents created with JSON.
It has versioning (each document has a _rev property with the last modified date).
It can be synchronised with PouchDB, a local JavaScript implementation that stores data in the browser using IndexedDB. This allows us to create offline applications (see the sketch below).
The two databases are both “master” copies of the data.
PouchDB is a local JavaScript implementation of CouchDB.
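As a partial sketch of what that sync could look like (the 'forms' database name and the http://localhost:5984/forms CouchDB URL are placeholders, and PouchDB is assumed to be loaded on the page):
// Local database, stored by PouchDB in IndexedDB.
var localDB = new PouchDB('forms');
// Remote CouchDB database (placeholder URL).
var remoteDB = new PouchDB('http://localhost:5984/forms');
// Two-way, continuous replication that retries when connectivity returns.
localDB.sync(remoteDB, { live: true, retry: true })
  .on('change', function(info) { console.log('Synced:', info); })
  .on('error', function(err) { console.error('Sync error:', err); });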
I still need a better answer than my partial notes towards a solution!
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understood it better. It caches the form data, and loads it on the page for the user to see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script is still failing when offline, however, as it isn't caching the library files. (Update to follow when I figure out caching)
Just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. In this, you can start or stop service workers, view console messages, and see the resources used by the service worker.