I have two subdomains: https://abc.xxxx.com and https://xyz.xxxx.com. So my questions:
1) Is it possible to register a service worker for
https://xyz.xxxx.com from https://abc.xxxx.com? If yes, how?
2) If abc.xxxx.com is served over insecure http:, is there any way to register
a service worker for https://xyz.xxxx.com from http://abc.xxxx.com, e.g. in an iframe or something similar?
This is a real situation I am facing with my multiple subdomains. Any help appreciated. Thanks in advance.
Here are some general answers that I think should address the various points you raise in your question:
Each registered service worker has an associated scope, which dictates the set of web pages that the service worker can control. The scope of a service worker is a URL, and that URL must have the same origin as the page that registers the service worker, and must be either a URL that corresponds to the same path level as the page or a path that's one or more levels down. The default scope corresponds to the same path level as the location of the service worker script. Because of this restriction, it's not possible to call navigator.serviceWorker.register(...) from a page on one (sub-)domain and end up with a service worker that controls pages on another (sub-)domain.
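To make that concrete, here is a minimal sketch (the paths and file names are placeholders):
// On https://abc.xxxx.com, registering a script hosted on the same origin works;
// the default scope is the path level of the script (here, /app/).
navigator.serviceWorker.register('/app/sw.js')
    .then(function(reg) { console.log('Controls pages under', reg.scope); })
    .catch(function(err) { console.error('Registration failed:', err); });

// A cross-origin script URL is rejected outright:
// navigator.serviceWorker.register('https://xyz.xxxx.com/sw.js'); // rejected with a SecurityError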
There are restrictions in place to prevent you from throwing an https: <iframe> on an http: page and using that to register a service worker. See DOMException when registering service worker inside an https iframe
Though I don't know that it's directly related to your question, explicitly calling fetch() for an http: resource within your service worker code will result in a failure in current versions of Chrome, since mixed-content fetch()s are not allowed within a service worker. I don't know if things are 100% settled on that front, and this open bug is still relevant.
If you have pages that live on both abc.123.com and xyz.123.com and you want both sets of pages to be controlled by a service worker, then you need to have two separate service worker registrations. Each registration needs to be for a copy of your service worker JS file that's hosted on the respective domain corresponding to the top-level page, and all pages and service worker scripts need to be accessed via https:.
That being said, you can kick off a service worker registration for a different domain by including a cross-domain <iframe> on a page, but both the host page and the <iframe> need to be served via https:. The normal service worker scoping restrictions apply, so if, for example, you want to register a service worker for the other domain that will cover the entire https://other-domain.com/ scope, you need to make sure that the location of the service worker script being registered is at the top-level, e.g. https://other-domain.com/service-worker.js, not at https://other-domain.com/path/to/service-worker.js. This is the approach used by, for example, the AMP project via the <amp-install-serviceworker> element.
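As a rough sketch of that iframe approach (the file names are hypothetical):
<!-- On a page served from https://abc.xxxx.com (must itself be https:) -->
<iframe src="https://xyz.xxxx.com/register-sw.html" style="display:none"></iframe>

<!-- https://xyz.xxxx.com/register-sw.html registers its own-origin worker -->
<script>
  // The script sits at the top level, so the registration can claim
  // the whole https://xyz.xxxx.com/ scope.
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js', { scope: '/' });
  }
</script>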
Service Worker scripts must be hosted at the same origin (protocol + domain name + port). Each subdomain is considered a different origin, so you will need to register a service worker for each one. Each of these workers will have its own cache and scope.
Try using nginx proxy_pass. This worked for me.
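Presumably the idea is to serve the other subdomain's worker script from the registering origin so the same-origin rule is satisfied; a hypothetical nginx sketch (server names and paths are assumptions, not from the original answer):
# Expose xyz's service worker on abc's origin via a reverse proxy.
server {
    listen 443 ssl;
    server_name abc.xxxx.com;

    location = /service-worker.js {
        proxy_pass https://xyz.xxxx.com/service-worker.js;
    }
}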
My bad, I misunderstood a bit. Well, here's the code:
if ('serviceWorker' in navigator) {
    if (window.location.pathname != '/') {
        // Register with the API
        if (!navigator.serviceWorker.controller) navigator.serviceWorker.register('/service-worker', { scope: '/' });
        // Once registration is complete
        navigator.serviceWorker.ready.then(function(serviceWorkerRegistration) {
            // Get the subscription
            serviceWorkerRegistration.pushManager.getSubscription().then(function(subscription) {
                // Enable the user to alter the subscription
                // (jQuery selector for enabling whatever you use to subscribe).removeAttr("disabled");
                // Set it to already subscribed if that is so
                if (subscription) {
                    // Code for showing the user that they're already subscribed
                }
            });
        });
    }
} else {
    console.warn('Service workers aren\'t supported in this browser.');
}
Then here's the event handling for your subscribe/unsubscribe:
// Subscribe or unsubscribe to the ServiceWorker
$(document.body).on('change', /*selector*/, function() {
    // New state is checked, so we subscribe
    if ($(this).prop('checked')) {
        navigator.serviceWorker.ready.then(function(serviceWorkerRegistration) {
            serviceWorkerRegistration.pushManager.subscribe()
                .then(function(subscription) {
                    // The subscription was successful
                    console.log('subscription successful'); //subscription.subscriptionId
                    // Save in DB - this is important because
                    $.post($('#basePath').val() + 'settings/ajax-SW-sub/', {id: subscription.subscriptionId}, function(data) {
                        //console.log(data);
                    }, 'json');
                }).catch(function(e) {
                    if (Notification.permission === 'denied') {
                        // The user denied the notification permission, which
                        // means we failed to subscribe and the user will need
                        // to manually change the notification permission to
                        // subscribe to push messages
                        console.warn('Permission for Notifications was denied');
                    } else {
                        // A problem occurred with the subscription; common reasons
                        // include network errors, and lacking gcm_sender_id and/or
                        // gcm_user_visible_only in the manifest.
                        console.error('Unable to subscribe to push.', e);
                    }
                });
        });
    // New state is unchecked, so we unsubscribe
    } else {
        $('.js-enable-sub-test').parent().removeClass('checked');
        // Get the subscription
        navigator.serviceWorker.ready.then(function(reg) {
            reg.pushManager.getSubscription().then(function(subscription) {
                // Unregister in the DB
                $.post($('#basePath').val() + 'settings/ajax-SW-unsub/', {id: subscription.subscriptionId}, function(data) {
                    //console.log(data);
                }, 'json');
                // Remove the subscription from Google's servers
                subscription.unsubscribe().then(function(successful) {
                    // You've successfully unsubscribed
                    console.log('unsubscribe successful');
                }).catch(function(e) {
                    // Unsubscription failed
                    console.log('unsubscribe failed', e);
                });
            });
        });
    }
});
After that you need to register an account on the Google Developer Console and register a project for something like *.xxxx.com. Then you need a proper manifest JSON with gcm_sender_id and gcm_user_visible_only.
You need to create a key for both server and browser applications; there's more info on that on this page:
https://developers.google.com/web/updates/2015/03/push-notificatons-on-the-open-web?hl=en
The one for browser applications goes in your manifest JSON.
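For reference, a minimal manifest sketch (the sender ID below is a placeholder; use the one from your own Google project):
{
  "name": "My App",
  "short_name": "App",
  "gcm_sender_id": "123456789012",
  "gcm_user_visible_only": true
}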
Then to send out push notifications you'll be using something like this:
function addSWmessage($args) {
    $output = false;
    if (!isset($args['expiration']) || $args['expiration'] == '') {
        $args['expiration'] = date('Y-m-d H:i:s', strtotime('+7 days', time()));
    }
    $sql = sprintf("INSERT INTO `serviceworker_messages` SET title = '%s', body = '%s', imageurl = '%s', linkurl = '%s', hash = '%s', expiration = '%s'",
        parent::escape_string($args['title']),
        parent::escape_string($args['text']),
        parent::escape_string($args['imageurl']),
        parent::escape_string($args['linkurl']),
        parent::escape_string(md5(uniqid('******************', true))),
        parent::escape_string($args['expiration']));
    if ($id = parent::insert($sql)) {
        $output = $id;
    }
    return $output;
}
function pushSWmessage($args) {
    // $args['messageid'] $args['userids'][]
    foreach ($args['userids'] as $val) {
        $sql = sprintf("SELECT messages_mobile, messages FROM `users_serviceworker_hash` WHERE users_id = '%s'",
            parent::escape_string($val));
        if ($row = parent::queryOne($sql)) {
            $m1 = json_decode($row['messages'], true);
            $m1[] = $args['messageid'];
            $m2 = json_decode($row['messages_mobile'], true);
            $m2[] = $args['messageid'];
            $sql = sprintf("UPDATE `users_serviceworker_hash` SET messages = '%s', messages_mobile = '%s' WHERE users_id = '%s'",
                parent::escape_string(json_encode($m1)),
                parent::escape_string(json_encode($m2)),
                parent::escape_string($val)); // $val is already the user ID here
            parent::insert($sql);
        }
    }
    $sql = sprintf("SELECT subscriptionID, users_id FROM `users_serviceworker_subscriptions`");
    if ($rows = parent::query($sql)) {
        foreach ($rows as $val) {
            if (in_array($val['users_id'], $args['userids'])) {
                $registrationIds[] = $val['subscriptionID'];
            }
        }
        if (isset($registrationIds) && !empty($registrationIds)) {
            // Prep the bundle
            $msg = array(
                'message'    => '!',
                'title'      => '!',
                'subtitle'   => '!',
                'tickerText' => '!',
                'vibrate'    => 1,
                'sound'      => 1,
                'largeIcon'  => '!',
                'smallIcon'  => '!'
            );
            $headers = array(
                'Authorization: key='.SW_API_ACCESS_KEY,
                'Content-Type: application/json'
            );
            $fields = array(
                'registration_ids' => $registrationIds,
                'data'             => $msg
            );
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, 'https://android.googleapis.com/gcm/send');
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
            curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($fields));
            curl_exec($ch);
            curl_close($ch);
        }
    }
}
And no, I don't know what issue you've been having, but this works for me with multiple subdomains. :)
When a browser requests an image from the server, the call is picked up by an API controller in the back end. There, an authorization check must be done before returning the image, in order to check whether the request is allowed or not.
So I need to add the authorization header, and when searching for the best solution I found this article: https://www.twelve21.io/how-to-access-images-securely-with-oauth-2-0/ and I was mostly interested in solution number 4, which uses a Service Worker.
I made my own implementation: I registered a service worker:
if ('serviceWorker' in navigator) {
    console.log("serviceWorker active");
    window.addEventListener('load', onLoad);
}
else {
    console.log("serviceWorker not active");
}

function onLoad() {
    console.log("onLoad is called");
    var scope = {
        scope: '/api/imagesgateway/'
    };
    navigator.serviceWorker.register('/Scripts/ServiceWorker/imageInterceptor.js', scope)
        .then(registration => console.log("ServiceWorker registration successful with scope: ", registration.scope))
        .catch(error => console.error("ServiceWorker registration failed: ", error));
}
and this is in my imageInterceptor:
self.addEventListener('fetch', event => {
    console.log("fetch event triggered");
    event.respondWith(
        fetch(event.request, {
            mode: 'cors',
            credentials: 'include',
            header: {
                'Authorization': 'Bearer ...'
            }
        })
    );
});
When I run my application, I see in my console that the registration seems to execute successfully, as the console.logs are printed (serviceWorker active, onLoad is called, and successful registration with the correct scope: https://localhost:44332/api/imagesgateway/).
But when I load an image (https://localhost:44332/api/imagesgateway/...) via the gateway, I still get a 400, and when I put a breakpoint on the back end I see that the authentication header is still null. Also, I don't see the "fetch event triggered" message in my console. In another article it is stated that I can see the registered service workers via this setting: chrome://inspect/#service-workers, but I don't see my worker there either.
My question is: why isn't the authorization header added? Is it because, although the registration seems to go successfully, that isn't actually the case, and that's also why I don't see the worker in inspect#service-workers?
You're not seeing "fetch event triggered" in the browser console because your Service Worker script isn't allowed to intercept the image requests. This is because your Service Worker script is located in a directory outside the scope of the requests you're interested in.
In order to intercept requests for resources under /api/imagesgateway/, the SW script needs to be located in /, /api/, or /api/imagesgateway/. It cannot be located in /some/other/directory/service-worker.js.
This is also the reason that your Service Worker registers successfully! There is no problem in registering the SW. The problem lies in what it can do.
More info: Understanding Service Worker scope
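As a sketch of the two usual fixes (the first path is taken from the question; the Service-Worker-Allowed header is a standard mechanism, though the answer above doesn't mention it):
// Option 1: host the worker inside the scope it needs to control.
navigator.serviceWorker.register('/api/imagesgateway/imageInterceptor.js', {
    scope: '/api/imagesgateway/'
});

// Option 2: keep the script at /Scripts/ServiceWorker/ and have the server
// send this HTTP header on the script response, which relaxes the default
// maximum-scope restriction:
//   Service-Worker-Allowed: /api/imagesgateway/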
I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, Github, StackOverflow questions). The most helpful is the ServiceWorkers cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or redirecting the user to an offline page until the browser can reconnect to the internet.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe and code doesn't seem to work on its own as it doesn't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":
"The FetchEvent for [the form URL] resulted in a network error response:
the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');

// Determine the root for the routes. I.e., if the Service Worker URL is http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
    var tokens = (self.location + '').split('/');
    tokens[tokens.length - 1] = '';
    return tokens.join('/');
})();
// By using Mozilla’s ServiceWorkerWare we can quickly set up some routes for a virtual server. It is convenient you review the virtual server recipe before seeing this.
var worker = new ServiceWorkerWare();

// So here is the idea. We will check if we are online or not. If we are not online, enqueue the request and provide a fake response.
// Else, flush the queue and let the new request reach the network.
// This function factory does exactly that.
function tryOrFallback(fakeResponse) {
    // Return a handler that…
    return function(req, res) {
        // If offline, enqueue and answer with the fake response.
        if (!navigator.onLine) {
            console.log('No network availability, enqueuing');
            return enqueue(req).then(function() {
                // As the fake response will be reused but Response objects are one use only, we need to clone it each time we use it.
                return fakeResponse.clone();
            });
        }
        // If online, flush the queue and answer from network.
        console.log('Network available! Flushing queue.');
        return flushQueue().then(function() {
            return fetch(req);
        });
    };
}
// A fake response with a joke for when there is no connection. A real implementation could have cached the last collection of updates and kept a local model. For simplicity, that is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
    JSON.stringify([{
        text: 'You are offline.',
        author: 'Oxford Brookes University',
        id: 1,
        isSticky: true
    }]),
    { headers: { 'Content-Type': 'application/json' } }
)));

// For deletion, let's simulate that all went OK. Notice we are omitting the body of the response (trying to add a body with a 204 "deleted" status throws an error).
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
    status: 204
})));

// Creation is another story. We cannot reach the server, so we cannot get the id for the new updates.
// No problem: just say we accept the creation and we will process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
    status: 202
})));

// Start the service worker.
worker.init();
// By using Mozilla’s localforage DB wrapper, we can count on a fast setup for a versatile key-value database. We use it to store the queue of deferred requests.

// Enqueue consists of adding a request to the list. Due to the limitations of IndexedDB, Request and Response objects can not be saved, so we need an alternative representation.
// This is why we call serialize().
function enqueue(request) {
    return serialize(request).then(function(serialized) {
        // Return the chain so the caller waits for the queue write to finish.
        return localforage.getItem('queue').then(function(queue) {
            /* eslint no-param-reassign: 0 */
            queue = queue || [];
            queue.push(serialized);
            return localforage.setItem('queue', queue).then(function() {
                console.log(serialized.method, serialized.url, 'enqueued!');
            });
        });
    });
}
// Flush is a little more complicated. It consists of getting the elements of the queue in order and sending each one, keeping track of requests not yet sent.
// Before sending a request we need to recreate it from the alternative representation stored in IndexedDB.
function flushQueue() {
    // Get the queue
    return localforage.getItem('queue').then(function(queue) {
        /* eslint no-param-reassign: 0 */
        queue = queue || [];
        // If empty, nothing to do!
        if (!queue.length) {
            return Promise.resolve();
        }
        // Else, send the requests in order…
        console.log('Sending ', queue.length, ' requests...');
        return sendInOrder(queue).then(function() {
            // Requires error handling. Actually, this assumes all the requests in the queue succeed when reaching the network.
            // It should really empty the queue step by step, only popping from the queue when a request completes successfully.
            return localforage.setItem('queue', []);
        });
    });
}
// Send the requests inside the queue in order, waiting for the current one before sending the next.
function sendInOrder(requests) {
    // The reduce() chains one promise per serialized request, not allowing progress to the next one until the current one completes.
    var sending = requests.reduce(function(prevPromise, serialized) {
        console.log('Sending', serialized.method, serialized.url);
        return prevPromise.then(function() {
            return deserialize(serialized).then(function(request) {
                return fetch(request);
            });
        });
    }, Promise.resolve());
    return sending;
}
// Serialize is a little convoluted because headers is not a simple object.
function serialize(request) {
    var headers = {};
    // for(... of ...) is ES6 notation, but current browsers supporting SW support it as well, and it is the only way of retrieving all the headers.
    for (var entry of request.headers.entries()) {
        headers[entry[0]] = entry[1];
    }
    var serialized = {
        url: request.url,
        headers: headers,
        method: request.method,
        mode: request.mode,
        credentials: request.credentials,
        cache: request.cache,
        redirect: request.redirect,
        referrer: request.referrer
    };
    // Only if the method is not GET or HEAD is the request allowed to have a body.
    if (request.method !== 'GET' && request.method !== 'HEAD') {
        return request.clone().text().then(function(body) {
            serialized.body = body;
            return Promise.resolve(serialized);
        });
    }
    return Promise.resolve(serialized);
}

// Compared to serialize, deserialize is pretty simple.
function deserialize(data) {
    return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';

// On install, cache some resources.
self.addEventListener('install', function(evt) {
    console.log('The service worker is being installed.');
    // Ask the service worker to keep installing until the returned promise
    // resolves.
    evt.waitUntil(precache());
});

// On fetch, use a cache-only strategy.
self.addEventListener('fetch', function(evt) {
    console.log('The service worker is serving the asset.');
    evt.respondWith(fromCache(evt.request));
});

// Open a cache and use `addAll()` with an array of assets to add all of them
// to the cache. Return a promise resolving when all the assets are added.
function precache() {
    return caches.open(CACHE).then(function(cache) {
        return cache.addAll([
            '/js/lib/ServiceWorkerWare.js',
            '/js/lib/localforage.js',
            '/js/settings.js'
        ]);
    });
}

// Open the cache where the assets were stored and search for the requested
// resource. Notice that in case of no matching, the promise still resolves,
// but it does so with `undefined` as value.
function fromCache(request) {
    return caches.open(CACHE).then(function(cache) {
        return cache.match(request).then(function(matching) {
            return matching || Promise.reject('no-match');
        });
    });
}
Here is the error message I am getting in Chrome when I go offline:
(A similar error occurred in Firefox - it falls over at line 409 of ServiceWorkerWare.js)
ServiceWorkerWare.prototype.executeMiddleware = function (middleware, request) {
    var response = this.runMiddleware(middleware, 0, request, null);
    response.catch(function (error) { console.error(error); });
    return response;
};
This is a little more advanced than beginner level, but you will need to detect when you are offline or in a "lie-fi" state, and instead of POSTing data to an API or endpoint you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus Safari…
So maybe a good strategy is to persist your data in IndexedDB, and when you can connect (background sync fires an event for this) you then POST the data. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (that will level out very soon). A sketch of the pattern follows below.
As always, design your code to be a progressive enhancement, which can be tricky, but worth it in the end.
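A rough sketch of that queue-and-sync pattern, assuming a hypothetical idbQueue helper for IndexedDB and reusing the flushQueue() idea from the recipe code above:
// In the page: queue the form data, then ask for a one-off background sync.
async function saveForLater(formData) {
    await idbQueue.push(formData);                   // hypothetical IndexedDB helper
    const reg = await navigator.serviceWorker.ready;
    if ('sync' in reg) {
        await reg.sync.register('flush-form-queue'); // fires when connectivity returns
    } else {
        await flushQueue();                          // no Background Sync: just try now
    }
}

// In the service worker: replay the queue when the sync event fires.
self.addEventListener('sync', function(event) {
    if (event.tag === 'flush-form-queue') {
        event.waitUntil(flushQueue()); // POST each queued item, then clear the queue
    }
});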
Service Workers tend to cache the static HTML, CSS, JavaScript, and image files.
I need to use PouchDB and sync it with CouchDB.
Why CouchDB?
CouchDB is a NoSQL database consisting of a number of documents created with JSON.
It has versioning (each document has a _rev property with the last modified date).
It can be synchronised with PouchDB, a local JavaScript application that stores data in local storage via the browser using IndexedDB. This allows us to create offline applications.
The two databases are both "master" copies of the data.
PouchDB is a local JavaScript implementation of CouchDB.
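For what it's worth, the sync itself is short. A minimal sketch, assuming a local database named 'forms' and a placeholder CouchDB URL:
// Local PouchDB database (backed by IndexedDB) that replicates both ways with CouchDB.
var db = new PouchDB('forms');
db.sync('https://couch.example.com/forms', { live: true, retry: true })
    .on('change', function(info) { console.log('replicated', info); })
    .on('error', function(err) { console.error('sync error', err); });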
I still need a better answer than my partial notes towards a solution!
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understood it better. It caches the form data, and loads it on the page for the user to see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script is still failing when offline, however, as it isn't caching the library files. (Update to follow when I figure out caching)
Just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. In this, you can start or stop service workers, view console messages, and the resources used by the service worker.
To use Events API for Slack App development, there is a setting for "Events API Request URLs" as described in doc:
In the Events API, your Events API Request URL is the target location
where all the events your application is subscribed to will be
delivered, regardless of the team or event type.
There is a UI for changing the URL "manually" at api.slack.com, under the
"Event Subscriptions" section in settings. There is also a url_verification event that fires after changing the Request URL, described here.
My question - Is there an API call (method) so I can update the endpoint (Request URL) from my server code?
For example, in Facebook API there is a call named subscriptions where I can change webhook URL after initial setup - link
Making a POST request with the callback_url, verify_token, and object
fields will reactivate the subscription.
P.S. For background: this is needed for development using an outbound tunnel with a dynamic endpoint URL, e.g. ngrok's free subscription. By the way, ngrok is referenced in the sample "onboarding" app by Slack here.
Update. I checked Microsoft Bot Framework, and they seem to use RTM (Real Time Messaging) for Slack, which doesn't require a Request URL setup, rather than the Events API. At the same time, for Facebook they (MS Bot) instruct me to manually put their generated URL into the webhook settings of an FB app, so there is no automation there.
Since this question was originally asked, Slack has introduced app manifests, which enable API calls to change app configurations. This can be used to update URLs and other parameters, or create/delete apps.
At the time of writing, the manifest / manifest API is in beta:
Beta API — this API is in beta, and is subject to change without the usual notice period for changes.
so this answer might not exactly fit the latest syntax as they make changes.
A programmatic workflow might look as follows:
Pull a 'template' manifest from an existing version of the application, with most of the settings as intended (scopes, name, etc.)
Change parts of the manifest to meet the needs of development
Verify the manifest
Update a slack app or create a new one for testing
API List
Basic API list
Export a manifest as JSON: apps.manifest.export
Validate a manifest JSON: apps.manifest.validate
Update an existing app: apps.manifest.update
Create a new app from manifest: apps.manifest.create
Delete an app: apps.manifest.delete
Most of these API requests are Tier 1 requests, so only on the order of 1+ per minute.
API Access
You'll need to create and maintain "App Configuration Tokens". They're created in the "Your Apps" dashboard. More info about them here.
Example NodeJS Code
const axios = require('axios');

// Change these values:
const TEMPLATE_APP_ID = 'ABC1234XYZ';
const PUBLIC_URL = 'https://www.example.com/my/endpoint';

let access = {
    slackConfigToken: "xoxe.xoxp-1-MYTOKEN",
    slackConfigRefreshToken: "xoxe-1-MYREFRESHTOKEN",
    slackConfigTokenExp: 1648550283
};

// Helpers ------------------------------------------------------------------------------------------------------

// Get a new access token with the refresh token
async function refreshTokens() {
    let response = await axios.get(`https://slack.com/api/tooling.tokens.rotate?refresh_token=${access.slackConfigRefreshToken}`);
    if (response.data.ok === true) {
        access.slackConfigToken = response.data.token;
        access.slackConfigRefreshToken = response.data.refresh_token;
        access.slackConfigTokenExp = response.data.exp;
        console.log(access);
    } else {
        console.error('> [error] The token could not be refreshed. Visit https://api.slack.com/apps and generate tokens.');
        process.exit(1);
    }
}

// Get an app manifest from an existing Slack app
async function getManifest(applicationID) {
    const config = { headers: { Authorization: `Bearer ${access.slackConfigToken}` } };
    let response = await axios.get(`https://slack.com/api/apps.manifest.export?app_id=${applicationID}`, config);
    if (response.data.ok === true) return response.data.manifest;
    else {
        console.error('> [error] Could not get manifest:', response.data.error);
        process.exit(1);
    }
}

// Create a Slack application with the given manifest
async function createDevApp(manifest) {
    const config = { headers: { Authorization: `Bearer ${access.slackConfigToken}` } };
    let response = await axios.get(`https://slack.com/api/apps.manifest.create?manifest=${encodeURIComponent(JSON.stringify(manifest))}`, config);
    if (response.data.ok === true) return response.data;
    else {
        console.error('> [error] Could not create app:', response.data.error);
        process.exit(1);
    }
}

// Verify that a manifest is valid
async function verifyManifest(manifest) {
    const config = { headers: { Authorization: `Bearer ${access.slackConfigToken}` } };
    let response = await axios.get(`https://slack.com/api/apps.manifest.validate?manifest=${encodeURIComponent(JSON.stringify(manifest))}`, config);
    if (response.data.ok !== true) {
        console.error('> [error] Manifest did not verify:', response.data.error);
        process.exit(1);
    }
}

// Main ---------------------------------------------------------------------------------------------------------

async function main() {
    // [1] Check token expiration time ------------
    if (access.slackConfigTokenExp < Math.floor(new Date().getTime() / 1000))
        // Token has expired. Refresh it.
        await refreshTokens();

    // [2] Load a manifest from an existing Slack app to use as a template ------------
    const templateManifest = await getManifest(TEMPLATE_APP_ID);

    // [3] Update URLs and data in the template ------------
    let devApp = { name: 'Review App', slashCommand: '/myslashcommand' };
    templateManifest.settings.interactivity.request_url = `${PUBLIC_URL}/slack/events`;
    templateManifest.settings.interactivity.message_menu_options_url = `${PUBLIC_URL}/slack/events`;
    templateManifest.features.slash_commands[0].url = `${PUBLIC_URL}/slack/events`;
    templateManifest.oauth_config.redirect_urls[0] = `${PUBLIC_URL}/slack/oauth_redirect`;
    templateManifest.settings.event_subscriptions.request_url = `${PUBLIC_URL}/slack/events`;
    templateManifest.display_information.name = devApp.name;
    templateManifest.features.bot_user.display_name = devApp.name;
    templateManifest.features.slash_commands[0].command = devApp.slashCommand;

    // [4] Verify that the manifest is still valid ------------
    await verifyManifest(templateManifest);

    // [5] Create our new Slack dev application ------------
    devApp.data = await createDevApp(templateManifest);
    console.log(devApp);
}

main();
Hope this helps anyone else looking to update Slack applications programmatically.
No, such a method does not exist in the official documentation. There might be an unofficial method - there are quite a few of them actually - but personally I doubt it.
But you don't need this feature for developing Slack apps. Just simulate the POST calls from Slack on your local dev machine with a script and then do a final test together with Slack on your webserver on the Internet.
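For example, a rough sketch of such a simulation (the local endpoint and payload are made up for illustration; a real script would replay payloads modeled on Slack's documented event format):
// Replay a canned Events API delivery against the local dev server.
const axios = require('axios');

axios.post('http://localhost:3000/slack/events', {
    token: 'test-token',
    type: 'event_callback',
    event: { type: 'message', text: 'hello', channel: 'C123456' }
}).then(res => console.log('local handler answered:', res.status));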
I am looking to create APNS (Apple Push Notification Service), where the server will be sending notifications to the iOS devices.
I am able to make the push notifications work via PHP using the SAME device token and the SAME certificate, however, I would like to send notifications via Node JS instead of PHP.
I have the following valid files/certificates to help me get started:
cert.pem
key.pem
aps_development.cer
cert.p12
key.p12
ck.pem
I've been looking through several resources/links such as:
https://github.com/argon/node-apn
How to implement APNS notifications through nodejs?
After doing so, I was able to come up with the following sample code, where PASSWORD stands for the password of the key.pem and TOKEN stands for my device's token:
var apn = require("apn");
var path = require('path');
try {
var options = {
cert: path.join(__dirname, 'cert.pem'), // Certificate file path
key: path.join(__dirname, 'key.pem'), // Key file path
passphrase: '<PASSWORD>', // A passphrase for the Key file
ca: path.join(__dirname, 'aps_development.cer'),// String or Buffer of CA data to use for the TLS connection
production:false,
gateway: 'gateway.sandbox.push.apple.com', // gateway address
port: 2195, // gateway port
enhanced: true // enable enhanced format
};
var apnConnection = new apn.Connection(options);
var myDevice = new apn.Device("<TOKEN>");
var note = new apn.Notification();
note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.
note.badge = 3;
note.sound = "ping.aiff";
note.alert = "You have a new message";
note.payload = {'msgFrom': 'Alex'};
note.device = myDevice;
apnConnection.pushNotification(note);
process.stdout.write("******* EXECUTED WITHOUT ERRORS************ :");
} catch (ex) {
process.stdout.write("ERROR :"+ex);
}
I get no errors when executing this code, but the problem is that no notification is received on my iOS device. I have also tried setting ca: null and debug: true in the options, but the same thing happens.
Again, when I use the ck.pem and device token that I have with PHP, it works, but I'm not able to make it work in Node JS. PLEASE HELP!!
Thank you so much!
You are probably running into the asynchronous nature of NodeJS itself. I use the same node-apn module with great success. But you don't just call it directly like you're used to in PHP - that's a synchronous model that doesn't map from PHP->Node. Your process is exiting before anything can actually happen - the apnConnection.pushNotification(note); is an asynchronous call that just barely gets started before your script returns/exits.
As noted in the node-apn docs you probably want to "listen for" additional events on apnConnection. Here's an excerpt of code that I use to log out various events that are occurring on the connection after it's created:
// We were unable to initialize the APN layer - most likely a cert issue.
connection.on('error', function(error) {
    console.error('APNS: Initialization error', error);
});

// A submission action has completed. This just means the message was submitted, not actually delivered.
connection.on('completed', function(a) {
    console.log('APNS: Completed sending', a);
});

// A message has been transmitted.
connection.on('transmitted', function(notification, device) {
    console.log('APNS: Successfully transmitted message');
});

// There was a problem sending a message.
connection.on('transmissionError', function(errorCode, notification, device) {
    var deviceToken = device.toString('hex').toUpperCase();
    if (errorCode === 8) {
        console.log('APNS: Transmission error -- invalid token', errorCode, deviceToken);
        // Do something with deviceToken here - delete it from the database?
    } else {
        console.error('APNS: Transmission error', errorCode, deviceToken);
    }
});

connection.on('connected', function() {
    console.log('APNS: Connected');
});

connection.on('timeout', function() {
    console.error('APNS: Connection timeout');
});

connection.on('disconnected', function() {
    console.error('APNS: Lost connection');
});

connection.on('socketError', console.log);
Equally important, you need to make sure your script STAYS RUNNING while the async requests are being processed. Most of the time, as you build a bigger and bigger service, you're going to end up with some kind of event loop that does this, and frameworks like ActionHero, ExpressJS, Sails, etc. will do this for you.
In the meantime, you can confirm it with this super-crude loop, which just forces the process to stay running until you hit CTRL+C:
setInterval(function() {
    console.log('Waiting for events...');
}, 5000);
I will explain it with simple code.
First install the apn module with this command: npm install apn.
Then require that module in your code:
var apn = require('apn');
let service = new apn.Provider({
    cert: "apns.pem",
    key: "p12Cert.pem",
    passphrase: "123456",
    production: true // use this when your application is in production; it isn't needed for development
});
Here is the heart of the notification:
let note = new apn.Notification({
    payload: {
        "staffid": admins[j]._id,
        "schoolid": admins[j].schoolid,
        "prgmid": resultt.programid
    },
    category: "Billing",
    alert: "Fee payment is pending for your approval",
    sound: "ping.aiff",
    topic: "com.xxx.yyy", // this is the bundle name of your application; this key is needed for production
    contentAvailable: 1   // this key is also needed for production
});

console.log(`Sending: ${note.compile()} to ${ios}`);
service.send(note, ios).then(result => { // the ios variable holds an array of device tokens to which the notification is sent
    console.log("sent:", result.sent.length);
    console.log("failed:", result.failed.length);
    console.log(result.failed);
});
service.shutdown();
In the payload you can send data with custom keys. I hope it helps.
I'm working on a web application and I went through the necessary steps to enable HTML5 App Cache for my initial login page. My goal is to cache all the images, css and js to improve the performance while online browsing, i'm not planning on offline browsing.
My initial page consists of a login form with only one input tag for entering the username and a submit button to process the information as a POST request. The submitted information is validated on the server, and if there's a problem the initial page is shown again (which is the scenario I'm currently testing).
I'm using the browser's developer tools for debugging, and everything works fine for the initial request (a GET request made by typing the URL in the browser); the resources listed in the manifest file are properly cached. But when the same page is shown again as the result of a POST request, I notice that all the elements (images, css, js) that were previously cached are fetched from the server again.
Does this mean that HTML5 App Cache only works for GET requests?
Per http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html#the-application-cache-selection-algorithm it appears to me that only GET is allowed.
In modern browsers (which support offline HTML), GET request URLs can usually be made long enough to supply the necessary data to get back the data you need, and POST is not supposed to be used for requests which are idempotent (non-changing) anyway. So the application should probably be architected to allow GET requests for the kind of data that is useful offline, and to inform the user that they will need to log in before the content can be sent to them for full offline use (and you could use offline events to tell them that they haven't yet gone through the necessary process).
I'm having exactly the same problem, so I wrote a wrapper for POST ajax calls. The idea is that when you try to POST, it will first make a GET request to a simple ping.php, and only if that succeeds will it then make the POST.
Here is how it looks in a Backbone view:
var BaseView = Backbone.View.extend({
    ajax: function(options) {
        var that = this,
            originalPost = null;

        // defaults
        options.type = options.type || 'POST';
        options.dataType = options.dataType || 'json';

        if (!options.forcePost && options.type.toUpperCase() === 'POST') {
            originalPost = {
                url: options.url,
                data: options.data
            };
            options.type = 'GET';
            options.url = 'ping.php';
            options.data = null;
        }

        // wrap success
        var success = options.success;
        options.success = function(resp) {
            if (resp && resp._noNetwork) {
                if (options.offline) {
                    options.offline();
                } else {
                    alert('No network connection');
                }
                return;
            }
            if (originalPost) {
                options.url = originalPost.url;
                options.data = originalPost.data;
                options.type = 'POST';
                options.success = success;
                options.forcePost = true;
                that.ajax(options);
            } else {
                if (success) {
                    success(resp);
                }
            }
        };

        $.ajax(options);
    }
});
var MyView = BaseView.extend({
    myMethod: function() {
        this.ajax({
            url: 'register.php',
            type: 'POST',
            data: {
                'username': 'sample',
                'email': 'sample@sample.com'
            },
            success: function() {
                alert('You registered :)');
            },
            offline: function() {
                alert('Sorry, you can not register while offline :(');
            }
        });
    }
});
Have something like this in your manifest:
NETWORK:
*
FALLBACK:
ping.php no-network.json
register.php no-network.json
The file ping.php is as simple as:
<?php die('{}') ?>
And no-network.json looks like this:
{"_noNetwork":true}
And there you go: before any POST it will first try a GET to ping.php and call offline() if you are offline.
Hope this helps ;)