My iOS application fetches photos, tags, and comments from a web server. I want it to fetch only changed or newly added data, not the same data again and again.
I use SDWebImage for the pictures, but the text content comes from SQL queries.
How can I tell whether the result of a SQL query has changed or not? What kind of technique should I use?
Is there a third-party library for client-side SQL caching?
Technically this is not an iOS-specific question. You should always query with the last queried timestamp.
Like:
SELECT * FROM comments WHERE last_modified > last_queried_timestamp;
The last_modified field stores the timestamp of the row's last modification (or of its creation), and the last_queried_timestamp parameter is the time you last queried the server.
This way you will not get the same changes twice (unless you want to).
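As a minimal sketch of the server-side idea, here it is in Python with sqlite3; the table, column, and file names are hypothetical, matching the query above:

import sqlite3

conn = sqlite3.connect("app.db")

def fetch_changes(last_queried_timestamp):
    # Return only the rows modified since the last successful fetch.
    cur = conn.execute(
        "SELECT * FROM comments WHERE last_modified > ?",
        (last_queried_timestamp,),
    )
    return cur.fetchall()

# The client persists the timestamp of each successful fetch and sends
# it with the next request, so unchanged rows are never re-sent.
rows = fetch_changes("2013-01-01T00:00:00+00:00")
# ...apply rows locally, then store the current time as the new
# last_queried_timestamp for the next request.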
I am trying to extract a large amount of details out of our Eloqua system using its API, and I got this API to work perfectly for single IDs: https://docs.oracle.com/en/cloud/saas/marketing/eloqua-rest-api/op-api-rest-1.0-data-contact-id-get.html
The problem is that I need to run this for a large number of IDs, and it would take far too many calls to cover the entire population. Are there any bulk APIs that can extract all of the following details out of Eloqua/Contact for the entire population? I don't see any in that page's documentation under the Bulk section that meet this need.
contactid, company, employees, company_revenue, business_phone, email_address, web_domain, date_created, date_modified, address_1, address_2, city, state_or_province, zip_or_postal_code, mobile_phone, first_name, last_name, title
It's a multi-step process with the Bulk API, typically in the following fashion:
Get a list of the current internal field names - useful for creating your export definition.
Create an export definition and POST it to the contact exports endpoint. There is a useful example on the documentation page; you do not need a filter criteria. Store the export ID somewhere.
Using your export definition ID, create a sync. It will gather the data in the background and prepare it for you. Take note of the sync ID provided in the initial response.
Check on the sync status with your sync ID. It should only take a couple of minutes - and there is a callback URL option in the previous step as well, if you don't want to keep polling.
Once your data is ready, use that sync ID and request the data. Depending on how many rows were retrieved, you might need to paginate through the results using the offset query param. By default it will give you JSON, but I usually choose CSV (specify it in the request header).
If you need updated data, feel free to create a new sync using the same export definition ID; you do not need to create a new export definition each time. A rough sketch of the whole flow follows.
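Here is a Python sketch of those steps using the requests library; the base URL, credentials, and field statements are placeholders that depend on your pod and instance (field statements can be discovered via the contact fields endpoint), so treat this as an outline rather than a drop-in implementation:

import time
import requests

BASE = "https://secure.p01.eloqua.com/api/bulk/2.0"  # hypothetical pod
AUTH = ("SiteName\\user.name", "password")           # placeholder credentials

# 1. Create an export definition (no filter criteria needed).
export = requests.post(BASE + "/contacts/exports", auth=AUTH, json={
    "name": "Full contact export",
    "fields": {
        "ContactID": "{{Contact.Id}}",
        "EmailAddress": "{{Contact.Field(C_EmailAddress)}}",
        "FirstName": "{{Contact.Field(C_FirstName)}}",
        # ...remaining fields, as listed by the contact fields endpoint
    },
}).json()

# 2. Create a sync pointing at the stored export definition URI.
sync = requests.post(BASE + "/syncs", auth=AUTH,
                     json={"syncedInstanceUri": export["uri"]}).json()

# 3. Poll the sync status (or use the callback URL option instead).
while True:
    status = requests.get(BASE + sync["uri"], auth=AUTH).json()
    if status["status"] in ("success", "error", "warning"):
        break
    time.sleep(10)

# 4. Page through the staged data using the offset query param.
offset, limit = 0, 1000
while True:
    page = requests.get(BASE + sync["uri"] + "/data", auth=AUTH,
                        params={"offset": offset, "limit": limit}).json()
    for row in page.get("items", []):
        print(row)
    if not page.get("hasMore"):
        break
    offset += limit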
I need to refresh data in a TFDQuery which is in cached updates mode.
To simplify my problem, let's suppose my MS Access database is composed of two tables that I have to join:
LABTEST(id_test, dat_test, id_client, sample_typ)
SAMPLEType(id, SampleName)
In the Delphi application, I am using a TFDConnection and one TFDQuery (with CachedUpdates enabled) that joins the two tables with this script:
"SELECT T.id_test, T.dat_test, T.id_client, T.sample_typ, S.SampleName
FROM LABTEST T
left JOIN SAMPLEType S ON T.sample_typ = S.id"
In my application I also use a DBGrid to show the result of the query, and a button to edit the sample_typ field, like this:
qr.Edit;
qr.FieldByName('sample_typ').AsString := ce2.Text;
qr.Post;
Editing the sample_typ field works fine, but the corresponding SampleName field in the grid does not change after the update; in fact, it is not refreshed at all.
The problem is this: if I refresh the query, an exception is raised: "Cannot refresh dataset. Cached updates must be committed or cancelled and batch mode terminated before refreshing."
If I commit the updates, the data will be sent to the database, and I don't want that; I need to keep the data in the cache until the end of the operation.
Likewise, if I turn off cached updates, the grid refreshes, but the data will be sent to the database after qr.Post, which I also don't want.
I need to refresh the data in the cache. What is the solution?
Thanks in advance.
The issue comes down to the fact that you haven't told your UI that there is any dependency between the two fields. It clearly can't redo the join itself without resubmitting the query, so if you don't want to send the updates and reload, you have a problem.
It's not clear exactly what you are trying to do, but these two ideas may help you.
If you are not going to edit the fields in the SAMPLEType table (S), then load the values from that table into a lookup table. You can load this into a TFDMemTable, using an adapter which loads from a query. Your UI controls can then show the value based on the values looked up in your local TFDMemTable. Depending on the UI control, this might be a lookup field or some such (see the sketch after these two ideas).
You may also be able to store your main data in a TFDMemTable with an adapter - you can specify different TFDCommands to read the whole recordset and to refresh, update, insert, and delete a record. The TFDCommands can act on multiple tables for joined recordsets like this. That would automatically refresh the individual record for you when you post it.
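Setting the Delphi specifics aside, the first idea is easy to see in miniature; here is a language-agnostic sketch in Python, with invented values standing in for the SAMPLEType rows:

# Load the small SAMPLEType table once into a local lookup dictionary
# (id -> SampleName); the values here are made up for illustration.
sample_types = {1: "Blood", 2: "Serum"}

# The main recordset keeps only the foreign key; the display name is
# resolved locally, so editing sample_typ needs no server round trip.
row = {"id_test": 10, "sample_typ": 1}

def display_name(row):
    return sample_types.get(row["sample_typ"], "<unknown>")

row["sample_typ"] = 2        # the edit stays in the local cache
print(display_name(row))     # the grid re-reads the lookup: "Serum"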
I want to know how to retrieve the history of tracked fields in Odoo 11, so I can use it later for statistics and perhaps display some of the significant changes in graphs.
I know the changes get displayed in the chatter below the record and that this is related to mail.thread, but I don't know whether there's a way to get that information for other manipulations, or where it is located in the database.
Changes to tracked fields are stored in the mail.tracking.value model. You can review the table structure and methods in core/addons/mail/models/mail_tracking_value.py (in Odoo 11.0).
You can view the messages directly, to review some of the data, by going to Settings > Technical > Email > Messages and filtering on Tracking values "is set".
The model is very basic, but you should be able to work with the message values to get your report/history data by sorting on the mail_message_id's date and time.
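For example, from an Odoo 11 shell, something along these lines should pull the tracked changes for a single record; the model and ID are hypothetical, and the field names are taken from mail_tracking_value.py, so double-check them against your version:

# `env` is the Odoo environment provided by `odoo shell`.
record_model, record_id = 'res.partner', 42  # hypothetical record

messages = env['mail.message'].search(
    [('model', '=', record_model), ('res_id', '=', record_id)],
    order='date asc')

for message in messages:
    for tracking in message.sudo().tracking_value_ids:
        # Char values shown here; other types use e.g. old_value_integer.
        print(message.date, tracking.field,
              tracking.old_value_char, '->', tracking.new_value_char)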
I have a datetime value which comes from the API in this format: 2015-07-07T17:30:00+00:00. I simply want to split it into its date and time values at this point. I am not using an Active Record model, and I prefer not to use an SQL database if I can avoid it.
The way I have set up the app means that the value is "stored" like this in my view: @search.dining_date_and_time
I have tried two approaches to solving this problem:
Manually, based on this previous Stack Overflow question from 2012: Using multiple input fields for one attribute - but the error I get is that the attribute is nil, even though I used a try.
Using this gem, https://github.com/ccallebs/split_date_time, which is a bit more recent and seems to be a more elegant solution. But after closely following the docs, I get an error saying my Search model is not initialized and there is no such method: undefined method 'dining_date' for #<Search not initialized>
This happens when I instead put @search.dining_date in the view, which seems to be the equivalent of the docs' example (it's not that clear). The docs also say the method will be automatically generated.
Do I need to alter my model so I receive the data from the API in another way, i.e. not get the variable back as @search.dining_date_and_time from the Search model, for any of this to work?
Do I need an Active Record model so that before_filter or before_save logic works, so I can (re)concatenate after splitting and the data is sent back to the API in a format it understands? Can I avoid this? It seems overkill to restructure the whole app and add a full database just to split and join date/time values as needed.
Happy to provide further details and code snippets if required.
As I am not using a conventional Rails database like MySQL, SQLite, or PostgreSQL, I found that the best solution was to use this jQuery dateFormat plugin, https://github.com/phstc/jquery-dateFormat, to split the date and time values for display when I get the data back from the API.
The GitHub docs are not very extensive, but once I put the library file in my Rails JavaScript assets folder, I just had to write a few lines of jQuery to get the result and format I wanted:
$(function() {
  var rawDateTime = $('#searchDiningDateTime').html();
  // console.log(rawDateTime);
  var cleanDate = $.format.date(rawDateTime, "ddd, dd/MM/yyyy");
  // console.log(cleanDate);
  $('#searchDiningDateTime').html(cleanDate);
  var cleanTime = $.format.date(rawDateTime, "HH:mm");
  // console.log(cleanTime);
  $('#searchTime').html(cleanTime);
});
Next challenge: rejoin the values on submit, so the API can read the data via a valid request/response. (The values can't stay split like this when sent to the remote service.)
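As a language-neutral illustration of the split and the planned rejoin, here is the same round trip in Python (3.7+); the display formats mirror the jQuery ones above:

from datetime import datetime

raw = "2015-07-07T17:30:00+00:00"          # value as delivered by the API
dt = datetime.fromisoformat(raw)

# Split for display, mirroring the jQuery formats above.
clean_date = dt.strftime("%a, %d/%m/%Y")   # "Tue, 07/07/2015"
clean_time = dt.strftime("%H:%M")          # "17:30"

# Rejoin on submit: parse the display values back together and emit
# ISO 8601 so the remote service receives the format it expects.
rejoined = datetime.strptime(
    clean_date + " " + clean_time, "%a, %d/%m/%Y %H:%M"
).isoformat() + "+00:00"
print(rejoined)                            # "2015-07-07T17:30:00+00:00"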
I'm creating an app that uses Core Data to store information from a web server. When there's an internet connection, the app checks whether there are any changes in the entries and updates them. Now I'm wondering which is the best way to go about it. Each entry in my database has a last-updated timestamp. Which of these two will be more efficient:
Go through all entries and check the timestamp to see which entries need to be updated.
Delete the whole entity and re-download everything again.
Sorry if this seems like an obvious question, and thanks!
I'd say option 1 would be more efficient: there is rarely a case where downloading everything (especially with a large database and large amounts of data) beats downloading only the parts you need.
I recently did something similar.
I solved the problem by assigning a unique ID and a global 'updated' timestamp, and by thinking in terms of 'delta' changes.
To explain better: I have a global 'latest update' variable stored in the user preferences, with a default value of 01/01/2010.
This is roughly my JSON service:
response: {
  metadata: { latestUpdate: 2013... etc. },
  entities: { .... }
}
Then, this is what's going on:
Pass the 'latest update' to the web service and retrieve a list of entities.
Update the Core Data store.
If everything went fine with Core Data, the latestUpdate from the service metadata becomes my new 'latest update' variable stored in user preferences.
That's it. I am only retrieving the changes I need, and of course the web service is structured to deliver a proper list. In other words: a web service backed by a database can deal with this matter quite well, leaving the iPhone to be a 'simple client' only. A rough sketch of the flow follows below.
But I have to say that for small amounts of data, it is still quite performant (and less bug-prone) to download the whole list on each request.
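Here is that delta flow sketched in Python, with a made-up URL and response keys matching the JSON shape above:

import requests

# Stands in for the 'latest update' value kept in user preferences.
prefs = {"latest_update": "2010-01-01T00:00:00+00:00"}

def save_to_store(entity):
    print("updating", entity)   # placeholder for the Core Data update

def sync():
    # 1. Ask the service only for entities changed since the last sync.
    resp = requests.get("https://example.com/api/entities",
                        params={"since": prefs["latest_update"]}).json()

    # 2. Apply the delta to the local store.
    for entity in resp["entities"]:
        save_to_store(entity)

    # 3. Only after the store update succeeds, advance the marker.
    prefs["latest_update"] = resp["metadata"]["latestUpdate"]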
As per our discussion in the comments above, you can model your Core Data entity entries with version control like this:
CoreDataEntityPerson:
  name : String
  name_version : int
  image : BinaryData
  image_version : int
You can now model the server XML in the following way:
<person>
  <name>michael</name>
  <name_version>1</name_version>
  <image>string_converted_imageData</image>
  <image_version>1</image_version>
</person>
Now you can follow these steps:
When the response arrives and you parse it, you initially create a new object from the entity and fill in its data directly.
The next time an entry is updated on the server, its version count is increased by 1 and stored.
E.g. let's say the name michael is now changed to abraham; the version count of name_version on the server then becomes 2.
This updated version count will come back in the response data.
Now, while storing the data in the same object: if the version count is the same, the update of that entry can be skipped; if the version count has changed, that entry needs to be updated.
This way you can efficiently check each entry and perform updates only on the changed entries.
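A minimal sketch of that per-field version check in Python; the names follow the person entity above, with local as the stored object and incoming as the parsed response:

VERSIONED_FIELDS = ["name", "image"]

def merge(local, incoming):
    for field in VERSIONED_FIELDS:
        version_key = field + "_version"
        # Only entries whose version count changed are written.
        if incoming[version_key] > local.get(version_key, 0):
            local[field] = incoming[field]
            local[version_key] = incoming[version_key]
    return local

local = {"name": "michael", "name_version": 1,
         "image": "imgdata", "image_version": 1}
incoming = {"name": "abraham", "name_version": 2,
            "image": "imgdata", "image_version": 1}
print(merge(local, incoming))   # only the name entry is updated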
Advice:
The above approach works best when you're dealing with large amounts of data being updated.
For simple text entries on an object, simply overwriting the data on all entries is efficient enough, and it also keeps the response model simple.