Planner/Task endpoint not certain when PATCH data gets updated - microsoft-graph-api

I'm trying to use the Planner endpoint on v1.0 of the Graph API. My main goal is to update the status of a task, deciding whether it is 'completed' or 'to do'. The first thing I do is get all of my own tasks, using the endpoint below:
https://graph.microsoft.com/v1.0/me/planner/tasks
function plannerCompleteTask(id, etag) {
    var specialEtag = etag.replace(/\\/g, "");
    var deferred = $q.defer();
    var endpoint = config.baseGraphApiUrl + 'planner/tasks/' + id;
    var data = {
        "percentComplete": "100"
        //"completedDateTime": "2018-02-15T07:56:25.7951905Z",
    };
    var configRest = {
        headers: {
            "content-type": "application/json",
            "If-Match": specialEtag
        }
    };
    $http.patch(endpoint, data, configRest).then(function (result) {
        console.log('log code', result);
        deferred.resolve(result.status);
    });
    return deferred.promise;
}
I then send the PATCH request built by the function above. This returns a 204 status with no content.
If I rerun the request with "percentComplete": 0 in the body, I get an error back (a conflict).
Also, if I try to log the response I get back from the AJAX call, it doesn't give me anything, as if no error information is being sent back. I need this because I have to reload the data in my application, but right now my code runs before the changes on the Graph are completed, even though the call returns a 204 status.
So I have no way to tell when the call fails or when it has actually finished. Has anyone faced this issue before?
Thanks for reading and any help would be much appreciated. Cheers!

I think what you are looking for is the Prefer header. If you provide the Prefer header with the value return=representation in your PATCH request, the result of the PATCH will be the final task data, including the new etag, with a 200 status code, instead of the default behavior of returning a 204 "no content" status code.
Write operations in Planner are asynchronous, so when possible you should always update your local data based on the results of write operations made with the Prefer header, instead of reading the data again.
In your requests, since you are reading the data before the task update is complete, you are essentially updating the same state of the task to be completed and not completed at the same time, which is the reason for the conflict.
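For example, here is a minimal sketch of the function from the question with the Prefer header added (config.baseGraphApiUrl and the AngularJS $http/$q services are taken from the question):
function plannerCompleteTask(id, etag) {
    var endpoint = config.baseGraphApiUrl + 'planner/tasks/' + id;
    var configRest = {
        headers: {
            "Content-Type": "application/json",
            "If-Match": etag,
            // Ask Planner to return the updated task instead of 204 No Content
            "Prefer": "return=representation"
        }
    };
    return $http.patch(endpoint, { "percentComplete": 100 }, configRest)
        .then(function (result) {
            // result.data is the updated task, including its new @odata.etag,
            // so the local model can be refreshed without a second read
            return result.data;
        }, function (error) {
            // Surface failures (e.g. a conflict from a stale etag) to the caller
            return $q.reject(error);
        });
}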

Related

1-3 'SyntaxError: Unexpected end of JSON input' errors occurring per minute when streaming tweets in Node

Using Express and Node.js, I'm calling the Twitter streaming API via the needle npm package to pull tweets related to keywords. The streaming is functional and I am successfully pulling tweets using the following (simplified) code:
const needle = require('needle');

const TOKEN = '…'; // My Token
const streamURL = 'https://api.twitter.com/2/tweets/search/stream';

function streamTweets() {
    const stream = needle.get(streamURL, {
        headers: {
            Authorization: `Bearer ${TOKEN}`
        }
    });
    stream.on('data', (data) => {
        try {
            const json = JSON.parse(data); // This line appears to be causing my error
            const text = json.data.text;
        } catch (error) {
            console.log(error);
        }
    });
}
However, no matter which search term I use (and the subsequent large or small volume of tweets coming through), the catch block will consistently log 1-3 errors per minute, which look like this:
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at PassThrough.<anonymous> (C:\Users\danie\OneDrive\Documents\Personal-Projects\twitter-program\server.js:56:31)
at PassThrough.emit (events.js:315:20)
at addChunk (internal/streams/readable.js:309:12)
at readableAddChunk (internal/streams/readable.js:284:9)
at PassThrough.Readable.push (internal/streams/readable.js:223:10)
at PassThrough.Transform.push (internal/streams/transform.js:166:32)
at PassThrough.afterTransform (internal/streams/transform.js:101:10)
at PassThrough._transform (internal/streams/passthrough.js:46:3)
at PassThrough.Transform._read (internal/streams/transform.js:205:10)
I've seen previous advice which says that data can be fired in multiple chunks, and to push the chunks to an array, i.e. something like the following:
let chunks = [];
stream.on('data', (dataChunk) => {
    chunks.push(dataChunk);
}).on('end', () => {
    // combine chunks to create JSON object
});
But this didn't work either (it may have been my implementation, but I don't think so), and now I'm wondering if it's perhaps an error with the Twitter API, because most of the tweet objects do come through correctly. I should note that the streamTweets() function above is called from an async function, and I am also wondering whether that has something to do with it.
Has anyone else encountered this error? Or does anyone have any idea how I might fix it? Ideally I'd like 100% of the tweets to stream correctly.
Thanks in advance!
For future readers, this error is triggered by Twitter's heartbeat message that is sent every 20 seconds. Per the documentation:
The endpoint provides a 20-second keep alive heartbeat (it will look like a new line character).
Adding a guard against parsing the empty string will prevent the JSON parsing error.
if (data === "")
    return;
An empty string is invalid JSON, hence the emitted error.
Now, acknowledging that the heartbeat exists, it may be beneficial to add read_timeout: 20 * 1000 to the needle request options to avoid a stalled program receiving no data at all, be that due to a local network outage, a DNS miss, etc.
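Putting both pieces together, a minimal sketch of the handler from the question with the guard and the timeout folded in (the stream setup and TOKEN come from the question; trim() is used so the newline heartbeat itself is caught):
const stream = needle.get(streamURL, {
    headers: { Authorization: `Bearer ${TOKEN}` },
    read_timeout: 20 * 1000 // fail if not even a heartbeat arrives for 20s
});

stream.on('data', (data) => {
    const text = data.toString();
    // Twitter's keep-alive heartbeat arrives as a bare newline every ~20 seconds
    if (text.trim() === '') return;
    try {
        const json = JSON.parse(text);
        console.log(json.data.text);
    } catch (error) {
        console.error(error);
    }
});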

Service worker to save form data when browser is offline

I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, Github, StackOverflow questions). The most helpful is the ServiceWorkers cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or to redirecting the user to an offline page until the browser can reconnect.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe code doesn't seem to work on its own, as it doesn't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":
"The FetchEvent for [the form URL] resulted in a network error response:
the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');

//Determine the root for the routes. I.e. if the Service Worker URL is http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
    var tokens = (self.location + '').split('/');
    tokens[tokens.length - 1] = '';
    return tokens.join('/');
})();
//By using Mozilla’s ServiceWorkerWare we can quickly set up some routes for a virtual server. It is convenient to review the virtual server recipe before seeing this.
var worker = new ServiceWorkerWare();

//So here is the idea: we will check whether we are online or not. In case we are not online, enqueue the request and provide a fake response.
//Else, flush the queue and let the new request reach the network.
//This function factory does exactly that.
function tryOrFallback(fakeResponse) {
    //Return a handler that…
    return function(req, res) {
        //If offline, enqueue and answer with the fake response.
        if (!navigator.onLine) {
            console.log('No network availability, enqueuing');
            return enqueue(req).then(function() {
                //As the fake response will be reused but Response objects are one use only, we need to clone it each time we use it.
                return fakeResponse.clone();
            });
        }
        //If online, flush the queue and answer from network.
        console.log('Network available! Flushing queue.');
        return flushQueue().then(function() {
            return fetch(req);
        });
    };
}
//A fake response with a joke for when there is no connection. A real implementation could have cached the last collection of updates and kept a local model. For simplicity, it is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
    JSON.stringify([{
        text: 'You are offline.',
        author: 'Oxford Brookes University',
        id: 1,
        isSticky: true
    }]),
    { headers: { 'Content-Type': 'application/json' } }
)));

//For deletion, let’s simulate that all went OK. Notice we are omitting the body of the response: adding a body to a response with a 204 (deleted) status throws an error, so the init dictionary goes in the second argument with a null body.
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
    status: 204
})));

//Creation is another story. We cannot reach the server, so we cannot get the id for the new updates.
//No problem: just say we accept the creation and we will process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
    status: 202
})));

//Start the service worker.
worker.init();
//By using Mozilla’s localforage db wrapper, we can count on a fast setup for a versatile key-value database. We use it to store the queue of deferred requests.

//Enqueue consists of adding a request to the list. Due to the limitations of IndexedDB, Request and Response objects cannot be saved, so we need an alternative representation.
//This is why we call serialize().
function enqueue(request) {
    return serialize(request).then(function(serialized) {
        //Note the return: without it, the promise would resolve before the queue was actually written.
        return localforage.getItem('queue').then(function(queue) {
            /* eslint no-param-reassign: 0 */
            queue = queue || [];
            queue.push(serialized);
            return localforage.setItem('queue', queue).then(function() {
                console.log(serialized.method, serialized.url, 'enqueued!');
            });
        });
    });
}
//Flush is a little more complicated. It consists of getting the elements of the queue in order and sending each one, keeping track of the requests not yet sent.
//Before sending a request we need to recreate it from the alternative representation stored in IndexedDB.
function flushQueue() {
    //Get the queue
    return localforage.getItem('queue').then(function(queue) {
        /* eslint no-param-reassign: 0 */
        queue = queue || [];
        //If empty, nothing to do!
        if (!queue.length) {
            return Promise.resolve();
        }
        //Else, send the requests in order…
        console.log('Sending ', queue.length, ' requests...');
        return sendInOrder(queue).then(function() {
            //Requires error handling: this assumes all the requests in the queue succeed on reaching the network.
            //It should really empty the queue step by step, only popping a request once it completes successfully.
            return localforage.setItem('queue', []);
        });
    });
}
//Send the requests inside the queue in order, waiting for the current one to finish before sending the next.
function sendInOrder(requests) {
    //The reduce() chains one promise per serialized request, not allowing progress to the next one until the current one completes.
    var sending = requests.reduce(function(prevPromise, serialized) {
        console.log('Sending', serialized.method, serialized.url);
        return prevPromise.then(function() {
            return deserialize(serialized).then(function(request) {
                return fetch(request);
            });
        });
    }, Promise.resolve());
    return sending;
}
//Serialize is a little bit convoluted because headers is not a simple object.
function serialize(request) {
    var headers = {};
    //for(... of ...) is ES6 notation, but current browsers that support service workers support this notation as well, and it is the only way of retrieving all the headers.
    for (var entry of request.headers.entries()) {
        headers[entry[0]] = entry[1];
    }
    var serialized = {
        url: request.url,
        headers: headers,
        method: request.method,
        mode: request.mode,
        credentials: request.credentials,
        cache: request.cache,
        redirect: request.redirect,
        referrer: request.referrer
    };
    //Only if the method is not GET or HEAD is the request allowed to have a body.
    if (request.method !== 'GET' && request.method !== 'HEAD') {
        return request.clone().text().then(function(body) {
            serialized.body = body;
            return Promise.resolve(serialized);
        });
    }
    return Promise.resolve(serialized);
}
//Compared to serialize, deserialize is pretty simple.
function deserialize(data) {
    return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';

// On install, cache some resources.
self.addEventListener('install', function(evt) {
    console.log('The service worker is being installed.');
    // Ask the service worker to keep installing until the returned promise resolves.
    evt.waitUntil(precache());
});

// On fetch, use the cache-only strategy.
self.addEventListener('fetch', function(evt) {
    console.log('The service worker is serving the asset.');
    evt.respondWith(fromCache(evt.request));
});

// Open a cache and use `addAll()` with an array of assets to add all of them
// to the cache. Return a promise that resolves when all the assets are added.
function precache() {
    return caches.open(CACHE).then(function (cache) {
        return cache.addAll([
            '/js/lib/ServiceWorkerWare.js',
            '/js/lib/localforage.js',
            '/js/settings.js'
        ]);
    });
}

// Open the cache where the assets were stored and search for the requested
// resource. Notice that in case of no match, the promise still resolves,
// but with `undefined` as its value.
function fromCache(request) {
    return caches.open(CACHE).then(function (cache) {
        return cache.match(request).then(function (matching) {
            return matching || Promise.reject('no-match');
        });
    });
}
Here is where the error is thrown in Chrome when I go offline (a similar error occurred in Firefox; it falls over at line 409 of ServiceWorkerWare.js):
ServiceWorkerWare.prototype.executeMiddleware = function (middleware, request) {
    var response = this.runMiddleware(middleware, 0, request, null);
    response.catch(function (error) { console.error(error); });
    return response;
};
This is a little more advanced than beginner level, but you will need to detect when you are offline or in a "lie-fi" state (a connection that appears to be up but is not actually usable). Instead of POSTing data to an API or endpoint, you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus Safari…
So maybe a good strategy is to persist your data in IndexedDB and, when you can connect (Background Sync fires an event for this), POST the data, as in the sketch below. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (that will level out very soon).
As always, design your code as a progressive enhancement, which can be tricky, but worth it in the end.
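A rough sketch of that pattern, assuming hypothetical saveToOutbox() and flushOutbox() helpers that write queued form data to IndexedDB and POST each queued item, respectively:
// In the page: queue the data locally, then request a background sync.
async function submitForm(data) {
    await saveToOutbox(data); // hypothetical helper: persist to IndexedDB
    const reg = await navigator.serviceWorker.ready;
    if ('sync' in reg) {
        await reg.sync.register('form-sync'); // Background Sync, where supported
    } else {
        await flushOutbox(); // fallback for browsers without Background Sync
    }
}

// In the service worker: POST the queued data when connectivity returns.
self.addEventListener('sync', (event) => {
    if (event.tag === 'form-sync') {
        event.waitUntil(flushOutbox()); // hypothetical helper: POST each queued item
    }
});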
Service workers tend to cache the static HTML, CSS, JavaScript, and image files. For the form data itself, I need to use PouchDB and sync it with CouchDB.
Why CouchDB?
- CouchDB is a NoSQL database consisting of a number of documents created with JSON.
- It has versioning (each document has a _rev property identifying its current revision).
- It can be synchronised with PouchDB, a local JavaScript implementation of CouchDB that stores data in the browser via IndexedDB. This allows us to create offline applications.
- The two databases are both “master” copies of the data.
I still need a better answer than my partial notes towards a solution! A minimal sync setup is sketched below.
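For instance, a minimal PouchDB/CouchDB sync sketch (the database name and server URL are placeholders):
// Local database, stored in the browser via IndexedDB.
const localDB = new PouchDB('forms');
// Remote CouchDB "master" copy.
const remoteDB = new PouchDB('https://couch.example.com/forms');

// Save form data locally; this works even while offline.
function saveForm(data) {
    return localDB.post(data);
}

// Two-way, continuous sync; retry resumes replication when connectivity returns.
localDB.sync(remoteDB, { live: true, retry: true })
    .on('error', console.error);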
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understand it better. It caches the form data and loads it on the page so the user can see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script is still failing when offline, however, as it isn't caching the library files. (Update to follow when I figure out caching)
Just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. There you can start or stop service workers, view console messages, and see the resources used by each service worker.

Unable to batch delete playlistitems?

I'm attempting to batch delete YouTube videos (playlistItems) from a created playlist. I have an array of playlistItem ids which I loop through, passing each id as a parameter to the function below.
I get a status 204 response on every single delete... however, the videos aren't all actually deleted. Only 2 or 3 (if I'm lucky) are. If, for example, I try to batch delete 10 videos in a playlist, only 2 or 3 of them might actually be deleted. I can keep re-running my batch delete until, slowly but surely, all the items are deleted, but that is hardly ideal and an unnecessary waste of quota.
I'm wondering if there's a limit to how quickly I'm allowed to delete playlistItems? And if so, why am I receiving a status 204 when the delete is clearly failing?
function deleteVideo(id) {
    var url = "https://www.googleapis.com/youtube/v3/playlistItems?id=";
    url += id;
    url += "&key=" + oauth2Provider.getApiKey();
    return $http({
        method: "DELETE",
        url: url,
        headers: {
            "Authorization": "Bearer " + oauth2Provider.getToken()
        }
    }).then(function successCallback(response) {
        console.log(response);
        return response;
    }, function errorCallback(response) {
        console.log(response);
    });
}
If this is ever helpful to anyone: I believe I later found out that you can't run all the deletes at once asynchronously. You can kick off the chain of deletes asynchronously, but each delete must complete before you start the next one in the chain (see the sketch below).
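A minimal sketch of such a chain using the deleteVideo() function from the question and AngularJS's $q, so each delete waits for the previous one to resolve:
function deleteVideosInOrder(ids) {
    // reduce() builds a promise chain: delete ids[0], then ids[1], and so on.
    return ids.reduce(function (chain, id) {
        return chain.then(function () {
            return deleteVideo(id);
        });
    }, $q.when()); // $q.when() is AngularJS's equivalent of Promise.resolve()
}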

difference between fetching page and file in serviceworker

event.respondWith(caches.match(event.request).then(function (response) {
    if (response) {
        return response;
    }
    //return fetch(event.request, { credentials: 'include' });
    //event.respondWith(fetch(event.request, { credentials: 'include' }));
}));
This is common code for handling requests via service workers: if the URL is in the cache, return the cached response; otherwise fetch it from the server.
My doubt is about the two commented lines; we need to use one of them to fetch the response.
When I use event.respondWith(fetch(event.request, { credentials: 'include' })) for fetching a page, I get the following error:
DOMException: Failed to execute 'respondWith' on 'FetchEvent': The fetch event has already been responded to.
But the page is finally rendered, so the browser does ultimately fetch the response. When I use the same line for fetching an image, however, I get the same error, and on top of that the image is not fetched.
If I use the second option, return fetch(event.request, { credentials: 'include' });, it works fine for both the image and the page.
I am not able to figure out the reason for the error, or why it behaves differently for a file and a page.
My other doubt is whether I actually need the credentials parameter here. I added it because most of the implementations I saw on the web use it, but I have observed that the request object already has a credentials property, and it is not always 'include'; sometimes it is 'same-origin' too.
So could it be that I am overriding the actual credentials value by adding it? If not, then there is no difference in including it or not; it does not matter. But if it is the other way around, then we should not overwrite the credentials value, which could have bad side effects.
You already have a call to event.respondWith; you don't need to call it twice.
Your first call is going to use the promise returned by:
caches.match(event.request).then(function(response) {
    if (response) {
        return response;
    }
    return fetch(event.request, { credentials: 'include' });
})
This promise resolves to:
- the cached response, if the request is in the cache;
- the promise returned by the call to fetch, otherwise.
The promise returned by fetch will resolve to a response, which is then going to be used by respondWith.
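Put together, the whole handler would look like this (a sketch based on the question's code, with a single respondWith call):
self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.match(event.request).then(function (response) {
            // Serve from the cache when possible, otherwise go to the network.
            return response || fetch(event.request, { credentials: 'include' });
        })
    );
});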

How to create a bigquery table and import from cloud storage using the ruby api

I'm trying to create a table on BigQuery. I have a single dataset and need to use the API to add a table and import data (json.tar.gz) from Cloud Storage. I need to be able to use the Ruby client to automate the whole process. I have two questions:
1. I have read the docs and tried to get it to upload (code below), but have not been successful and have absolutely no idea what I'm doing wrong. Could somebody please enlighten me or point me in the right direction?
2. Once I make the request, how do I know when the job has actually finished? From the API, I presume I'm meant to use a jobs.get request? Having not completed the first part, I have been unable to look at this aspect.
This is my code below.
config = {
  'configuration' => {
    'load' => {
      'sourceUris' => ["gs://person-bucket/person_json.tar.gz"],
      'schema' => {
        'fields' => [
          { 'name' => 'person_id', 'type' => 'integer' },
          { 'name' => 'person_name', 'type' => 'string' },
          { 'name' => 'logged_in_at', 'type' => 'timestamp' }
        ]
      },
      'destinationTable' => {
        'projectId' => "XXXXXXXXX",
        'datasetId' => "personDataset",
        'tableId' => "person"
      },
      'createDisposition' => 'CREATE_IF_NEEDED',
      'maxBadRecords' => 10
    }
  },
  'jobReference' => { 'projectId' => 'XXXXXXXXX' }
}

multipart_boundary = "xxx"
body = "--#{multipart_boundary}\n"
body += "Content-Type: application/json; charset=UTF-8\n\n"
body += "#{config.to_json}\n"
body += "--#{multipart_boundary}\n"
body += "Content-Type: application/octet-stream\n\n"
body += "--#{multipart_boundary}--\n"

param_hash = { :api_method => bigquery.jobs.insert }
param_hash[:parameters] = { 'projectId' => 'XXXXXXXX' }
param_hash[:body] = body
param_hash[:headers] = { 'Content-Type' => "multipart/related; boundary=#{multipart_boundary}" }

result = @client.execute(param_hash)
puts JSON.parse(result.response.header)
I get the following error:
{"error"=>{"errors"=>[{"domain"=>"global", "reason"=>"wrongUrlForUpload", "message"=>"Uploads must be sent to the upload URL. Re-send this request to https://www.googleapis.com/upload/bigquery/v2/projects/XXXXXXXX/jobs"}], "code"=>400, "message"=>"Uploads must be sent to the upload URL. Re-send this request to https://www.googleapis.com/upload/bigquery/v2/projects/XXXXXXXX/jobs"}}
From the request header, it appears to be going to the same URI the error says it should go to, and I am quite at a loss for how to proceed. Any help would be much appreciated.
Thank you and have a great day!
Since this is a "media upload" request, there is a slightly different protocol for making the request. The ruby doc here http://rubydoc.info/github/google/google-api-ruby-client/file/README.md#Media_Upload describes it in more detail. I'd use resumable upload rather than multipart because it is simpler.
Yes, as you suspected, the way to know when it is done is to do a jobs.get() to look up the status of the running job. The job id will be returned in the response from jobs.insert(). If you want more control, you can pass your own job id, so that in the event that the jobs.insert() call returns an error you can find out whether the job actually started.
Thank you for that; question resolved. Please see here:
How to import a json from a file on cloud storage to Bigquery
I think that the line of code in the docs for the resumable uploads section (http://rubydoc.info/github/google/google-api-ruby-client/file/README.md#Media_Upload) should read:
result = client.execute(:api_method => drive.files.insert,
Otherwise, this line will throw an error with 'result' undefined:
upload = result.resumable_upload
