I'm using the Google Docs API. How can I know when a file resource changes?
I don't want to repeatedly poll for data.
Firstly, you should be using the Google Drive API, which has replaced the Google Documents List API.
Right now, there is no notification system, but it is a feature we are working on. Polling the changes feed every few minutes is not too bad, but admittedly it is a poor fit for some situations.
You can't get a callback, I'm afraid. To avoid polling the file data, however, you could poll the metadata only and test the md5 checksum field to determine when the file has changed.
The md5 checksum appears as docs$md5Checksum (JSON) or docs:md5Checksum (XML) in the document list entry for the file; in the Drive API it is the md5Checksum property of the file resource.
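If it helps, here is a rough sketch of that metadata-only polling in Ruby using the google-apis-drive_v3 gem (the file id, poll interval and authorization are placeholders; note that md5Checksum is only populated for binary content, so for native Google Docs files you would compare modifiedTime or version instead):

    # Poll only the file's metadata and compare checksums between runs.
    require 'google/apis/drive_v3'

    drive = Google::Apis::DriveV3::DriveService.new
    drive.authorization = authorization  # whatever OAuth credential you already use

    last_checksum = nil
    loop do
      file = drive.get_file('FILE_ID', fields: 'md5Checksum, modifiedTime')
      if file.md5_checksum != last_checksum
        last_checksum = file.md5_checksum
        puts "File changed (modified at #{file.modified_time})"
        # ...only now fetch the actual content...
      end
      sleep 300  # every five minutes
    end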
We have an iOS app which uses Firebase as its database. We don't have a server configured for the application; we are inserting, updating and getting data (records) from Firebase storage (which behaves like a server). My requirement is that I need to get one week's records in Excel or PDF format and send the file over email. Can we automate this process once every week? Is there any possibility of executing some script in the Firebase console to automate this process?
Thanks in advance.
This post asks multiple questions. In the future, try to stick to one per post, as multiple questions can make answers very long and convoluted.
1) My requirement is that I need to get one week's records in Excel or PDF format
No, Firebase does not offer that as a direct option. Firebase is a JSON database, and an export (from the Realtime Database) will be JSON-formatted text.
It's pretty straightforward to get the data you want via a query against the Realtime Database or a query against Firestore, and then export it from your app into whatever format you like.
Note that Excel, Numbers.app, OpenOffice and other spreadsheet apps can read comma-delimited files easily, so that may be an option instead of creating a specific file type. There are also a number of JSON 'converters' available.
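If you end up doing that export server-side, here is a rough sketch in plain Ruby against the Realtime Database REST API (the database URL, records node, field names and auth token are all assumptions about your schema, and ordering by a child key also needs an ".indexOn": "timestamp" rule); the same query-then-write pattern applies inside a Cloud Function:

    # Pull one week of records over the REST API and write them out as CSV.
    require 'net/http'
    require 'json'
    require 'csv'

    db_url   = 'https://YOUR-PROJECT.firebaseio.com'
    week_ago = ((Time.now - 7 * 24 * 3600).to_f * 1000).to_i  # ms, like Firebase timestamps

    uri = URI("#{db_url}/records.json")
    uri.query = URI.encode_www_form(
      orderBy: '"timestamp"',
      startAt: week_ago,
      auth:    'YOUR_DATABASE_SECRET_OR_TOKEN'
    )

    records = JSON.parse(Net::HTTP.get(uri)) || {}

    CSV.open('weekly_report.csv', 'w') do |csv|
      csv << %w[id timestamp message]
      records.each do |id, rec|
        csv << [id, rec['timestamp'], rec['message']]
      end
    end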
Lastly, Firestore exports are supported through its managed export options, which would let the data be exported to BigQuery, for example.
2) Can we automate this process once every week?
Yes, you can automate tasks to run on a regular basis via a cron schedule and Firebase Cloud Functions.
There's a great Firebase blog post on Scheduling Cloud Functions for Firebase (cron), additional reading in the Firebase guide Schedule functions, and, for reference, general information about cron.
I implemented a chat app in Swift using the Firebase Realtime Database, where a user can send images, emojis and text. Now I have a requirement to export the chat, i.e. get a backup of the conversation with media and text, the way WhatsApp does. Please help me solve this.
While Firebase offers a backup for the Realtime Database, this doesn't fit your needs here, since you'll want a per-user export of the data.
Since this is specific to your application, you will have to code it yourself, just like the good folks at WhatsApp have probably done. It should be a matter of iterating over all data sources for the user, getting the data through the relevant API (which you're already using to display that data), and then writing it to a local file. You can do this either client-side in your Swift code, server-side on a server you may already have, or using Cloud Functions.
If you're looking for some inspiration for the latter, there is a sample repository that shows how to clean up a user's data, based on a set of wipe-out rules. You'll need to significantly modify this example though, so I'm not convinced this will be less work than rolling your own from scratch.
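To make the "iterate and write to a file" idea concrete, here is a very rough server-side sketch in plain Ruby against the Realtime Database REST API (used only to keep the sketch short; the same loop translates to Swift or a Cloud Function). The /messages/<uid> layout, field names and token are assumptions about your schema:

    require 'net/http'
    require 'json'

    def export_chat(db_url, uid, token)
      uri = URI("#{db_url}/messages/#{uid}.json")
      uri.query = URI.encode_www_form(auth: token)
      messages = JSON.parse(Net::HTTP.get(uri)) || {}

      File.open("chat_#{uid}.txt", 'w') do |out|
        messages.sort_by { |_id, m| m['timestamp'].to_i }.each do |_id, m|
          time = Time.at(m['timestamp'].to_i / 1000)
          if m['imageUrl']
            # For media, export the storage URL (or download it alongside the file).
            out.puts "#{time} <image: #{m['imageUrl']}>"
          else
            out.puts "#{time} #{m['text']}"
          end
        end
      end
    end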
I can't seem to find information on how to print files through Google Apps Script.
I found an answer on this website, but it doesn't work; I think that's because it's 3 years old and Apps Script has had some changes since:
google app script print button
You can't print directly from GAS (I've searched far and wide), as described in the post that you linked. However, it is possible to create a downloadable document with the desired content and then pass that blob off to the client for downloading (depending on whether you're still working in GAS or a web app environment).
Personally, I felt it was a headache to deal with how each browser handles downloading blobs. Therefore I usually go one of two routes: provide a public download link from the Drive of the script's owner and push that to the user, OR just email the document to the user and let their email client handle downloading the blob.
Alternatively, if this is an add-on, you can make it so that the document is saved directly to the user's own Drive and just inform them where it will be located.
I'm working with the very frustrating Bing Ads API (SOAP), and while I've successfully executed the majority of the SOAP requests I need, the last one is giving me trouble.
The team there tells me that to get an ad campaign's stats (clicks, impressions, conversions, etc.) I need to request that a report be generated (passing it the parameters), then take the reportID from the response and "poll" the report with another SOAP request, which yields a download URL for a zip file.
I've successfully done all of the above, and the download URL (which is only good for 5 minutes) looks like this:
https://download.api.bingads.microsoft.com/ReportDownload/Download.aspx?q=k471B%2fhtf62jwhaelHhu0EqMSfWCvWSpOOBRu76%2bUC%2bgATLEobf%2bMYiVKX0CBOr52d95ViPXJeKbvAbnb%2bSK%2bGumYlSYQT80kTtt5waa5z%2fmbeXT%2fPFqde95DFR1%2b4yQgekl5T6gKipbMFcQJOn5aGYmtI1ALcREIwJRA%2bi%2b3jOE55Cl69TAzBOUWvB73NAKX6S0Y7zF%2bERnSu7TJnJfmqHopWihGtkeMzoqqwsJVgVDEKz84RrPPaDOs2pxg3qE%2bLSrEwu2cpa7bP%2f9t%2fjUVtIgiZMbMjzSf73VnAUSpYNz
When I go to that URL, it starts to download a zip file that, once unzipped, does contain the XML I need to parse for the users of the web app I'm creating.
My question is: what is the best way to get at that XML consistently within the app? This really seems like an arduous approach for the app to take, considering all of the above would have to happen every time a user loads the Bing page or changes the date range, but they tell me it's the only way to do it.
The path I've been heading down is to get the report with HTTParty and then unzip it with RubyZip (I've been unsuccessful so far because of "undefined conversion error" issues), but I'm unsure what to do from there. Store it in a database (maybe temporarily)?
Any help would be greatly appreciated.
If there is no better way to use the API, cache the results (in your db or on the filesystem) and refresh the data using a rake task that will run periodically. If you do this, consider adding an option for the user to request an immediate refresh.
Use a background task to download the zip file and then process it; something like delayed_job or resque could be used to start the job with the URL.
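To sketch what that job might look like (assuming the httparty and rubyzip gems; BingReportJob and BingReport are made-up names), something along these lines gets you past the unzip step. The "undefined conversion" errors usually come from treating the zipped body as UTF-8 text instead of raw bytes:

    require 'httparty'
    require 'zip'
    require 'stringio'

    class BingReportJob
      # Enqueue with e.g. Delayed::Job.enqueue(BingReportJob.new(download_url)),
      # or adapt the body of #perform into a resque job.
      def initialize(download_url)
        @download_url = download_url
      end

      def perform
        body = HTTParty.get(@download_url).body  # keep as raw bytes, don't re-encode

        Zip::File.open_buffer(StringIO.new(body)) do |zip|
          zip.each do |entry|
            next unless entry.name.downcase.end_with?('.xml')
            xml = entry.get_input_stream.read
            # Parse the XML with your library of choice and cache the rows you need
            # (e.g. in a BingReport model), so page loads and date-range changes read
            # from your own database instead of re-running the whole SOAP dance.
          end
        end
      end
    end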
I'm using the RSS library so I can parse Atom and RSS feeds in Ruby and Rails and store them in a model.
I've looked at the standard RSS library, but is there a library that will auto-detect that there is a new RSS feed so I can update my database?
What is the best practice for triggering an instruction in order to store the new RSS feed?
Should I use threads to handle that problem? Is it going to be slow?
Thank you for your help.
OK, here's the deal.
If you want a really fast feed parser, go for Feedzirra. It does not work on Windows. http://github.com/pauldix/feedzirra
Autodiscovery?
- There's truffle-hog if you don't want to do GET redirects. http://github.com/pauldix/truffle-hog
- There's feedbag if you want to do GET redirects to find feeds from given URLs. This is slower, though. http://github.com/damog/feedbag
Feedzirra is the best bet if you want to poll for new entries in your feed. But if you want a more non-polling solution to your problem, then I would suggest going through the PubSubHubbub spec. While parsing your feeds, make sure they are PubSubHubbub-enabled by checking for the link tag. If it points to pubsubhubbub.appspot.com or any other PubSubHubbub-enabled hub, then just subscribe to the feed by sending a subscription request to the hub. You can then define an endpoint in your app which will in turn receive updated entry pings for your feed subscription from the hub. Just read the raw POST data and store it in your database. Stats are that 95% of Blogger blogs are PubSubHubbub-enabled. That is a lot of data in your hands already. :)
If you are polling for changes, then you should check the Last-Modified or ETag header rather than parsing the entire feed again. That saves you from wasting resources. Feedzirra takes care of this for you.
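A minimal polling loop following Feedzirra's API (the feed URL, interval and the storage call are placeholders):

    require 'feedzirra'

    feed = Feedzirra::Feed.fetch_and_parse('http://example.com/atom.xml')

    loop do
      sleep 15 * 60  # respect the feed's TTL rather than hammering it
      # update re-fetches using the stored etag/last-modified headers and exposes
      # anything unseen in new_entries, so you never re-parse the whole history.
      feed = Feedzirra::Feed.update(feed)
      next unless feed.updated?

      feed.new_entries.each do |entry|
        puts "#{entry.title} (#{entry.url})"
        # e.g. Post.find_or_create_by_url(entry.url, :title => entry.title)
      end
    end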
I am not sure what you mean by "auto-detect" a new feed?
Are you looking for code that can discover when someone creates a new feed on a site? Or, do you mean discover when an existing feed has a new article?
The first is tough because your code needs to know what site to look at, so it needs some sort of auto-discovery of sites with new feeds. Searching Google for "new rss feeds" doesn't return anything that looks useful, at least not on the first page. If you, or your users, know of a new site, then you can have an interface to add new sites to search. Then you grab the page at that URL, look for the RSS/Atom auto-discovery links, and go from there. Auto-discovery links can open a can of worms because of duplicate content being served using different protocols (RDF, RSS and Atom), so you have to determine which to use, or handle multiple feeds with alternate content listed.
If you mean you want to discover when an existing feed has new articles, then you have to keep track of the last time your code looked at the feed, and the last article that was seen, then retrieve the feed and see if any articles were not in your list of previously seen articles. Your code needs to be sensitive to the time-to-live information in a lot of feeds too. Hitting the feed every fifteen minutes when they update once a week is bad form. Most aggregation code can do those things already but you might need to configure a database and tell the code how to find it.
Generally, for this sort of task I set up a crontab entry on a production Linux or Unix system and fire off the job periodically, looking in the database for feeds whose last-run-time plus the stored time-to-live value is in the past.
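As a rough shape for that setup (a Rails-style rake task fired from cron; the class and column names are invented for illustration):

    # crontab entry, e.g. check every ten minutes which feeds are actually due:
    #   */10 * * * * cd /srv/myapp && bundle exec rake feeds:poll
    namespace :feeds do
      task :poll => :environment do
        Feed.find_each do |feed|
          due_at = feed.last_fetched_at + feed.ttl_minutes * 60 if feed.last_fetched_at
          next if due_at && due_at > Time.now
          FeedFetcher.fetch(feed)  # whatever actually retrieves and parses the feed
          feed.update_attribute(:last_fetched_at, Time.now)
        end
      end
    end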
Does that help any?
A very easy solution is to use dynamic attribute-based finders.
When you are filling your model with RSS feed data, instead of Model.create(...) use Model.find_or_create_by_column(value, :other_column => other_value).
You can specify a date as the unique value, or the RSS message title ... (whatever you want).
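For example, keyed on the entry's URL so that re-running the import never duplicates rows (the Post model and its columns are just examples; entries is whatever your parser returned):

    entries.each do |entry|
      Post.find_or_create_by_url(entry.url,
        :title        => entry.title,
        :published_at => entry.published)
    end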
I think this is pretty easy. You can set up a cron task to fill your model once per hour, for example. Only new entries will be added.
There is no way to get an "event" when the RSS feed is updated without downloading the whole feed again.