Parse Cloud Code object.save() not working

I'm new to Parse and Cloud Code. I'm trying to initialize a new user's data via a Cloud Code function. The function queries my "initial items" objects (223 items in total), then loops through each one, creating a new item for the user. Inside the loop, the new item's fields are set, and each iteration ends by calling save(). The function seems to work; however, instead of saving 223 new items, only 9 are created. I placed a log statement after the save to check whether the loop was in fact iterating through all 223 items, and it is. Below is my Cloud Code and log.
Any thoughts as to why only 9 items are being saved? The 9 items that are saved look fine.
Parse.Cloud.define("initializeNewUser", function(request, response) {
    var InitialItemsObject = Parse.Object.extend("InitialParseItems");
    var itemsQuery = new Parse.Query(InitialItemsObject);
    itemsQuery.limit(500);
    itemsQuery.find().then(function(results) {
        console.log('Found ' + results.length + ' Items.');
        var user = request.user;
        var ACL = new Parse.ACL(user);
        for (var i = 0; i < results.length; i++) {
            var defaultItem = results[i];
            var item = new Parse.Object("Items");
            item.set('itemID', defaultItem.get('itemID'));
            item.set('author', user);
            item.set('groupID', defaultItem.get('groupID'));
            item.set('itemName', defaultItem.get('itemName'));
            item.set('itemNote', defaultItem.get('itemNote'));
            item.set('itemChecked', defaultItem.get('itemChecked'));
            item.set('itemIsFavorite', defaultItem.get('itemIsFavorite'));
            item.set('itemSelected', defaultItem.get('itemSelected'));
            item.set('itemStruckOut', defaultItem.get('itemStruckOut'));
            item.set('manualSortOrder', defaultItem.get('manualSortOrder'));
            item.set('productID', defaultItem.get('productID'));
            item.set('itemTimestamp', defaultItem.get('itemTimestamp'));
            item.setACL(ACL);
            item.save();
            //console.log(defaultItem.get('itemName') + ' saved.');
        }
        // success has been moved inside the callback for query.find()
        console.log('Successfully initialize ' + results.length + ' Items.');
        response.success(results.length);
    },
    function(error) {
        // Make sure to catch any errors, otherwise you may see a "success/error not called" error in Cloud Code.
        console.log('Failed to initialize Items. Error code: ' + error.code + ' ' + error.message);
        response.error("Failed to initialize Items. Error code: " + error.code + ": " + error.message);
    });
});
I2015-07-17T15:16:38.661Z]v22 Ran cloud function initializeNewUser for user va0TTGwOk7 with:
Input: {}
Result: 223
I2015-07-17T15:16:38.930Z]Found 223 Items.
I2015-07-17T15:16:39.141Z]Successfully initialize 223 Items.

OK, I have a few comments.
1) On line 7 you set user with request.user. I'm not sure how you are sending your data, but I've always used request.params.[insert item name].
2) The next big thing is that by using then() you are using promises. At the end of every then, you should return a promise. That means you wouldn't write item.save() but return item.save(). Saving is a promise too, so you will need to return those. Here is the general promise pattern:
somePromise.then(function(a) {
    return promiseA;
}).then(function(b) {
    return promiseB;
}).then(function(c) {
    response.success(c);
}, function(error) {
    response.error(error);
});
3) When you are saving lots of items, you either have to use saveAll() or build an array of promises and then wait on them all at once. A good rule of thumb is to use a promise array whenever you are saving lots of things in a loop. The "Promises in Parallel" section of the Cloud Code developer guide shows the correct format: https://parse.com/docs/js/guide#promises-promises-in-parallel
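Applied to the function in the question, a minimal sketch using saveAll could look like this (the class and field names are taken from the question; the field list is abbreviated and the error message is an assumption):

itemsQuery.find().then(function(results) {
    var user = request.user;
    var acl = new Parse.ACL(user);
    // Build all the new items first instead of saving them one at a time.
    var items = results.map(function(defaultItem) {
        var item = new Parse.Object("Items");
        item.set('itemID', defaultItem.get('itemID'));
        item.set('author', user);
        item.set('itemName', defaultItem.get('itemName'));
        // ... copy the remaining fields the same way ...
        item.setACL(acl);
        return item;
    });
    // saveAll returns a single promise; returning it keeps the chain alive,
    // so the function does not report success before the saves finish.
    return Parse.Object.saveAll(items);
}).then(function(savedItems) {
    response.success(savedItems.length);
}, function(error) {
    response.error("Failed to initialize Items: " + error.message);
});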

Related

Twilio Video - switch media devices option not working

I am using Twilio with twilio-video v2 (beta 2), building on the master branch of this repo: https://github.com/twilio/video-quickstart-js
I managed to display the media select and push the devices into it, but when I try to updateVideoDevice I get an error:
updateVideoDevice error TypeError: track must be a LocalAudioTrack, LocalVideoTrack, LocalDataTrack, or MediaStreamTrack
at Object.INVALID_TYPE (index.js:30952)
at Object.validateLocalTrack (index.js:31469)
at LocalParticipant.unpublishTrack (index.js:17047)
at index.js:17096
at Array.reduce (<anonymous>)
at LocalParticipant.unpublishTracks (index.js:17095)
at index.js:36056
My updateVideoDevice function is the following:
function updateVideoDevice(event) {
    const select = event.target;
    const localParticipant = room.localParticipant;
    if (select.value !== '') {
        Video.createLocalVideoTrack({
            deviceId: { exact: select.value }
        }).then(function(localVideoTrack) {
            const tracks = Array.from(localParticipant.videoTracks.values());
            localParticipant.unpublishTracks(tracks);
            log(localParticipant.identity + " removed track: " + tracks[0].kind);
            detachTracks(tracks);
            localParticipant.publishTrack(localVideoTrack);
            log(localParticipant.identity + " added track: " + localVideoTrack.kind);
            const previewContainer = document.getElementById('local-media');
            attachTracks([localVideoTrack], previewContainer);
        }).catch(error => {
            console.error('updateVideoDevice error', error);
        });
    }
}
Can anyone explain what I am doing wrong?
Twilio developer evangelist here.
This looks to be a breaking change between Twilio Video JS v1 and v2. In the v2 documentation, localParticipant.videoTracks returns a Map of <Track.SID, LocalVideoTrackPublication>. Calling .values() on that map returns an iterator of LocalVideoTrackPublications, which is then turned into an array using Array.from.
The issue is that you then pass that array of LocalVideoTrackPublications to localParticipant.unpublishTracks(tracks);, which causes the error because unpublishTracks expects an array of LocalTracks, not LocalVideoTrackPublications.
You could fix this by mapping over the publications and returning the track property:
const tracks = Array.from(localParticipant.videoTracks.values())
    .map(publication => publication.track);
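In context, the then callback from your function would become something like this (a sketch; select, detachTracks, and attachTracks come from the quickstart code in the question):

Video.createLocalVideoTrack({
    deviceId: { exact: select.value }
}).then(function(localVideoTrack) {
    // Publications wrap the tracks; unwrap them before unpublishing.
    const tracks = Array.from(localParticipant.videoTracks.values())
        .map(publication => publication.track);
    localParticipant.unpublishTracks(tracks);
    detachTracks(tracks);
    localParticipant.publishTrack(localVideoTrack);
    attachTracks([localVideoTrack], document.getElementById('local-media'));
});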
Let me know if that helps.

In a Service Worker install listener, why does fetch return TypeError: Failed to fetch?

Using the sw-precache utility, I created a service worker that works if I refresh the page. Initially, though, the fetch call returns TypeError: Failed to fetch on one file, which seems to cancel the rest. A refresh gets a few more files; refresh a few more times until all are fetched, and then the service worker works fine. Each URL loads fine every time, even with the cache buster. All the calls are straight GET requests to https://localhost:9001. There are 17 files in total.
Here is a code snippet:
self.addEventListener('install', function(event) {
    event.waitUntil(
        caches.open(cacheName).then(function(cache) {
            return setOfCachedUrls(cache).then(function(cachedUrls) {
                return Promise.all(
                    Array.from(urlsToCacheKeys.values()).map(function(cacheKey) {
                        // If we don't have a key matching url in the cache already, add it.
                        if (!cachedUrls.has(cacheKey)) {
                            var request = new Request(cacheKey, {credentials: 'same-origin'});
                            return fetch(request).then(function(response) {
                                // Bail out of installation unless we get back a 200 OK for
                                // every request.
                                if (!response.ok) {
                                    throw new Error('Request for ' + cacheKey + ' returned a ' +
                                        'response with status ' + response.status);
                                }
                                return cleanResponse(response).then(function(responseToCache) {
                                    return cache.put(cacheKey, responseToCache);
                                });
                            }).catch(function(e) {
                                throw new Error('fetch error: ' + e);
                            });
                        }
                    })
                );
            });
        }).then(function() {
        })
    );
});
I tried adding keepalive: true and headers: {"Access-Control-Allow-Origin": "*"} to the Request, but got the same result.
The requests seem to fail very quickly, yet I have seen these requests take a little more than a second to respond. Perhaps there is a way to set a timeout?
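Since fetch has no built-in timeout option, one workaround I could try is racing the fetch against a timer. A minimal sketch (the 5-second limit is an arbitrary assumption, and the losing fetch is not actually aborted, it is just no longer awaited):

function fetchWithTimeout(request, ms) {
    // Reject if the fetch does not settle within the given time.
    var timer = new Promise(function(resolve, reject) {
        setTimeout(function() {
            reject(new Error('fetch timed out after ' + ms + 'ms'));
        }, ms);
    });
    return Promise.race([fetch(request), timer]);
}

// Usage inside the install handler:
// return fetchWithTimeout(request, 5000).then(function(response) { ... });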
In the Chrome DevTools network tab, the failed attempts show in red with a (canceled) status and the little service worker icon to the left.
It seems to work fine in Firefox; Chrome is the one having trouble.

Modifying list and sending it to front end sends original list

I couldn't phrase my title better, but here is what's happening: I fetch a list when the page opens, but one member of it should be removed based on something the user does on the page.
I'm using a ui:repeat in the front end, and the getter method of this list is:
public List<Employee> getAvailableLoaders() {
    if (availableLoaderList != null) {
        List<Employee> availableLoaderListWithoutDriver = new ArrayList<>(availableLoaderList);
        if (selectedDriver != null) {
            logInfo("SelectedDriver is: " + selectedDriver.getName());
            logInfo("List's size before removal is: " + availableLoaderListWithoutDriver.size());
            logInfo("Removal was successful? " + availableLoaderListWithoutDriver.remove(selectedDriver));
            logInfo("List's size after removal is: " + availableLoaderListWithoutDriver.size());
        }
        return availableLoaderListWithoutDriver;
    }
    return null;
}
And my logs say:
Info: SelectedDriver is: [Driver name]
Info: List's size before removal is: 5
Info: Removal was successful? true
Info: List's size after removal is: 4
But the page still shows a list with 5 members anyway, the driver included. I know the getter is being called with the correct info, and I can tell when the component that shows it is rendered, because I'm watching the logs.
Any explanation, or a suggestion on how I should do this? Or am I just making a really stupid mistake?

iOS Swift: program app to delete things from Parse after x amount of time?

I'm creating an iOS app using Parse and Swift. I want a user's post to delete itself from the server after x amount of time. Is it possible to do this?
Similar to how Snapchat stories disappear after 24 hours.
I was thinking that within the app I would make posts visible only if they were posted within the allotted time frame. That stops people from seeing old posts. I understand that I would then need something called Cloud Code to delete the posts. Is this correct, and how would I go about doing it?
You can query through, get the createdAt date of the Parse object, compare it to the current time, and then delete the object if it's overdue.
If they are user posts, you can put the query wherever you load the user posts. Once anyone tries to load data that is too old, it will be deleted and no one will see it.
The best way is to use Cloud Jobs.
Sometimes you want to execute long-running functions, and you don't want to wait for the response. Cloud Jobs are meant for exactly that. Read more: https://docs.parseplatform.org/cloudcode/guide/#cloud-jobs
Example:
// REMOVE A MOMENT VIDEO AFTER 24 HOURS
Parse.Cloud.job("remove", function (request, status) {
    // Compute the cut-off: 24 hours before now.
    var intervalOfTime = 24 * 60 * 60 * 1000; // 24 hours in milliseconds
    var queryDate = new Date(Date.now() - intervalOfTime);
    // Query Moments created more than 24 hours ago.
    var query = new Parse.Query("Moments");
    query.lessThanOrEqualTo("createdAt", queryDate);
    // each() walks every matching object, unaffected by the default query limit.
    query.each(function (moment) {
        return moment.destroy({ useMasterKey: true }).then(function (deleted) {
            console.log("Successfully deleted: " + deleted.id);
        });
    }).then(function () {
        status.success("Expired Moments removed.");
    }, function (error) {
        console.log("Error: " + error.code + " - " + error.message);
        status.error("Failed to remove expired Moments.");
    });
});

Timeout Notification for Asynchronous Request

I am sending SPARQL queries as asynchronous requests to a SPARQL endpoint, currently DBpedia, using the dotNetRDF library. While simpler queries usually work, more complex queries sometimes result in timeouts.
I am looking for a way to handle the timeouts by capturing some event when they occur.
I am sending my queries by using one of the asynchronous QueryWithResultSet overloads of the SparqlRemoteEndpoint class.
As described for SparqlResultsCallback, the state object will be replaced with an AsyncError instance if the asynchronous request failed. This does indicate that there was a timeout; however, it seems to do so only 10 minutes after the request was sent. When my timeout is, for example, 30 seconds, I would like to know 30 seconds later whether the request was successful. (35 seconds are OK, too, but you get the idea.)
Here is a sample application that sends two requests, the first of which is very simple and likely to succeed within the timeout (here set to 120 seconds), while the second one is rather complex and may easily fail on DBpedia:
using System;
using System.Collections.Concurrent;
using VDS.RDF;
using VDS.RDF.Query;

public class TestTimeout
{
    private static string FormatResults(SparqlResultSet results, object state)
    {
        var result = new System.Text.StringBuilder();
        result.AppendLine(DateTime.Now.ToLongTimeString());
        var asyncError = state as AsyncError;
        if (asyncError != null) {
            result.AppendLine(asyncError.State.ToString());
            result.AppendLine(asyncError.Error.ToString());
        } else {
            result.AppendLine(state.ToString());
        }
        if (results == null) {
            result.AppendLine("results == null");
        } else {
            result.AppendLine("results.Count == " + results.Count.ToString());
        }
        return result.ToString();
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("Launched ...");
        Console.WriteLine(DateTime.Now.ToLongTimeString());
        var output = new BlockingCollection<string>();
        var ep = new SparqlRemoteEndpoint(new Uri("http://dbpedia.org/sparql"));
        ep.Timeout = 120;
        Console.WriteLine("Server == " + ep.Uri.AbsoluteUri);
        Console.WriteLine("HTTP Method == " + ep.HttpMode);
        Console.WriteLine("Timeout == " + ep.Timeout.ToString());

        string query = "SELECT DISTINCT ?a\n"
                     + "WHERE {\n"
                     + " ?a <http://www.w3.org/2000/01/rdf-schema#label> ?b.\n"
                     + "}\n"
                     + "LIMIT 10\n";
        ep.QueryWithResultSet(query,
            (results, state) => {
                output.Add(FormatResults(results, state));
            },
            "Query 1");

        query = "SELECT DISTINCT ?v5 ?v8\n"
              + "WHERE {\n"
              + " {\n"
              + " SELECT DISTINCT ?v5\n"
              + " WHERE {\n"
              + " ?v6 ?v5 ?v7.\n"
              + " FILTER(regex(str(?v5), \"[/#]c[^/#]*$\", \"i\")).\n"
              + " }\n"
              + " OFFSET 0\n"
              + " LIMIT 20\n"
              + " }.\n"
              + " OPTIONAL {\n"
              + " ?v5 <http://www.w3.org/2000/01/rdf-schema#label> ?v8.\n"
              + " FILTER(lang(?v8) = \"en\").\n"
              + " }.\n"
              + "}\n"
              + "ORDER BY str(?v5)\n";
        ep.QueryWithResultSet(query,
            (results, state) => {
                output.Add(FormatResults(results, state));
            },
            "Query 2");

        Console.WriteLine("Queries sent.");
        Console.WriteLine(DateTime.Now.ToLongTimeString());
        Console.WriteLine();

        string result = output.Take();
        Console.WriteLine(result);
        result = output.Take();
        Console.WriteLine(result);
        Console.ReadLine();
    }
}
When I run this, I reproducibly get an output like the following:
13:13:23
Server == http://dbpedia.org/sparql
HTTP Method == GET
Timeout == 120
Queries sent.
13:13:25
13:13:25
Query 1
results.Count == 10
13:23:25
Query 2
VDS.RDF.Query.RdfQueryException: A HTTP error occurred while making an asynchronous query, see inner exception for details ---> System.Net.WebException: The remote server returned an error: (504) Gateway Timeout.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at VDS.RDF.Query.SparqlRemoteEndpoint.<>c__DisplayClass13.<QueryWithResultSet>b__11(IAsyncResult innerResult)
   --- End of inner exception stack trace ---
results == null
Obviously, the exact times will differ, but the crucial point is that the error for the second query is received approximately 10 minutes after the request was sent, nowhere near the 2 minutes set as the timeout.
Am I using dotNetRDF incorrectly here, or is it intentional that I have to run an additional timer to measure the timeout myself and react on my own if no response has been received by then?
No, you are not using dotNetRDF incorrectly; rather, there appears to be a bug in that the timeouts set on an endpoint don't get honoured when running queries asynchronously. This has been filed as CORE-393.
By the way, even with this bug fixed you won't necessarily get a hard timeout at the set value. Essentially, the value you set for the Timeout property of the SparqlRemoteEndpoint instance is used to set the Timeout property of the underlying .NET HttpWebRequest. The documentation for HttpWebRequest.Timeout states the following:
Gets or sets the time-out value in milliseconds for the GetResponse and GetRequestStream methods.
So you could wait up to the time-out to make the connection to POST the query, and then up to the time-out again to start receiving a response. Once you start receiving a response, the timeout becomes irrelevant and is not respected by the code that processes the response.
Therefore, if you want a hard timeout, you are better off implementing it yourself. Longer term this may be something we can add to dotNetRDF, but it is more complex to implement than simply fixing the bug about the timeout not being honoured for the HTTP request.
