I have a big table with 70 rows and 10 columns. It's a table of Saturated Water and Steam that shows different values for different cases. For example, when the temperature is 1 the pressure is 0.006571 and the volume is 0.001000; when the temperature is 2, the pressure is 0.007060 and the volume is 0.001000, and so on. But there aren't only temperature, pressure, and volume columns; there are more. When, for instance, I select temperature from the picker list and enter 2 into the textbox below it, I need to populate the corresponding values of the other physical properties into the other textboxes in the view (in this particular case, 0.007060 for p and 0.001000 for v). I'm thinking of implementing Core Data for keeping the data, but on the other hand I think SQLite might be more suitable. Either way, I don't know and would like to get your advice.
It sounds to me like this is static data (physical properties of water don't change much), so you may not even need a database, and as Stephen noted, it is not a lot of data.
For a start, you can just save your data in a plist file in your app resources, read it in at launch (or on opening of the relevant screen) and go from there.
Later, if optimization is needed, you can move to Core Data.
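Here's a minimal sketch of the plist approach in Swift, assuming a bundled SteamTable.plist whose root is an array of dictionaries keyed "t", "p" and "v" (the file name and keys are hypothetical; use whatever matches your table):

import Foundation

// A sketch only: load a bundled plist of steam-table rows and look one up
// by temperature. "SteamTable.plist" and the keys "t"/"p"/"v" are assumed.
struct SteamRow {
    let temperature: Double
    let pressure: Double
    let volume: Double
}

func loadSteamTable() -> [SteamRow] {
    guard let url = Bundle.main.url(forResource: "SteamTable", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let plist = try? PropertyListSerialization.propertyList(from: data, format: nil),
          let rows = plist as? [[String: Double]]
    else { return [] }
    return rows.map { SteamRow(temperature: $0["t"] ?? 0,
                               pressure: $0["p"] ?? 0,
                               volume: $0["v"] ?? 0) }
}

// When the user picks "temperature" and types 2, find the matching row
// and fill the other textboxes from it:
let table = loadSteamTable()
if let row = table.first(where: { $0.temperature == 2 }) {
    print(row.pressure, row.volume)   // 0.007060, 0.001000 from the question
}

A linear search is fine at 70 rows; if lookups ever become hot, key a dictionary by temperature instead.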
Seventy rows and ten columns is not very much data, even for an iOS device. (My app often has thousands of rows, for example.)
I would stick with whatever is easiest to develop, which will probably be Core Data.
My goal is to render time-series data from set locations on a map. Essentially, I have about 30 predefined (static) locations in Switzerland from which I will be receiving real-time data. The data itself is relatively simple, just the signal/noise ratio of the signal we're receiving, which should be updated every few seconds or every minute. I am using InfluxDB as my database. Are there any specific setups I should be using for this kind of visualization?
My first question is: is it best to use the worldmap panel or the geomap panel at this time? I seem to be finding more information/documentation on the worldmap panel, even though I have also read that geomap is (or at least will be) its replacement.
Second, I assume that since I'm using time-series data, I should be using the Time-Series format and not the Table format. However, I have not been able to render any data points using the time-series feature, even by following the simplest of examples in your documentation. The best I can do is use the Table feature and internally remove previous points from my database at every iteration (so that multiple points aren't rendered at the same time for each location). Here are two screenshots: one where I'm able to render data on the geomap using the Table format, and one where, after switching to Time-Series format, the points are no longer there (note that I have the same problem with the Worldmap panel as well).
I'm able to render data using the Table method:
...but not using time series:
Thanks for any help!
For rendering time-series data on the geomap, you must convert your lat/long fields to a single geohash field. You'll have to do that prior to inserting the lat/longs into InfluxDB.
See this answer.
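If you'd rather do the conversion yourself, here's a sketch of the standard geohash encoding algorithm in Swift (the function name and the precision of 9 are arbitrary choices; any geohash library does the same job):

import Foundation

// Sketch of standard geohash encoding: interleave longitude/latitude
// range-halving bits and emit one base32 character per 5 bits.
// Precision 9 gives roughly 5 m cells.
let base32 = Array("0123456789bcdefghjkmnpqrstuvwxyz")

func geohash(latitude: Double, longitude: Double, precision: Int = 9) -> String {
    var latRange = (-90.0, 90.0)
    var lonRange = (-180.0, 180.0)
    var hash = ""
    var bits = 0
    var charIndex = 0
    var evenBit = true  // even bits encode longitude, odd bits latitude

    while hash.count < precision {
        if evenBit {
            let mid = (lonRange.0 + lonRange.1) / 2
            if longitude >= mid { charIndex = charIndex << 1 | 1; lonRange.0 = mid }
            else                { charIndex = charIndex << 1;     lonRange.1 = mid }
        } else {
            let mid = (latRange.0 + latRange.1) / 2
            if latitude >= mid  { charIndex = charIndex << 1 | 1; latRange.0 = mid }
            else                { charIndex = charIndex << 1;     latRange.1 = mid }
        }
        evenBit.toggle()
        bits += 1
        if bits == 5 {  // every 5 bits make one base32 character
            hash.append(base32[charIndex])
            bits = 0
            charIndex = 0
        }
    }
    return hash
}

// Example: a point in Bern, Switzerland encodes to a hash starting with "u0".
// print(geohash(latitude: 46.948, longitude: 7.447))

Store the resulting string on each point so the panel can read it as its geohash location field.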
I wish to store time-series data with versioning. By versioning, I mean that I might have a metric energy_mwh with tag meter_id=123 and a fieldset something like this: time=2016-01-01 10:00, mwh=20.50, read-time=2016-01-01 20:15. If I re-read the meter at a later time, I want to keep both the new and the old version of the meter reading. Later, when I query the data, I will mostly just be interested in the mwh value with the highest read-time for any given time. If I query over a range of times, the read-time is going to vary.
I am thinking of using InfluxDB or some other time series database with a similar data model.
Is there a right way of doing this? I believe that I must keep read-time as a tag, not a field, or I will lose the older version of the data. I guess that is the answer, but it doesn't feel right to me to have what I see as a piece of data (read-time) sitting in an identifier, specifically a tag. Am I on the right track?
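To illustrate what I mean, in line protocol the two versions of the same reading would then be distinct points, because their tag sets differ (the second read-time and value here are made up):

energy_mwh,meter_id=123,read_time=2016-01-01T20:15:00Z mwh=20.50 1451642400000000000
energy_mwh,meter_id=123,read_time=2016-01-02T09:00:00Z mwh=20.45 1451642400000000000

Both share the point timestamp 2016-01-01 10:00 UTC, so a query at that time can pick the version with the highest read-time.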
I'm working on an app that connects to a MySQL backend. It's a little similar to Snapchat in that once the current user gets the pics from the users they follow and views them, they can never see those pics again. However, I can't just delete the pics from the database; the user who uploaded the pic still needs to see them. So I've come up with an interesting design and I want to know whether it's good or not.
When uploading the pic, I would also create a MySQL event that would run exactly one day after the pic was uploaded, deleting it. If people are uploading pics all the time, events would be created all the time. How does this affect the MySQL database? Is this even scalable?
No, not scalable: deleting single records is quick, but as your volume increases you will run into trouble. You do, however, have a classic case for partitioning:
CREATE TABLE your_images (insert_date DATE, some_image BLOB, some_owner INT)
ENGINE=InnoDB /* ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4 */
PARTITION BY RANGE COLUMNS (insert_date) (
    PARTITION p01 VALUES LESS THAN ('2015-07-02'),  -- partitions must be in ascending date order
    PARTITION p02 VALUES LESS THAN ('2015-07-03'),
    PARTITION p0x VALUES LESS THAN (ETC),
    PARTITION p0n VALUES LESS THAN (MAXVALUE)
);
You can then insert just as you are used to, drop old partitions once per day (using one event for all your data), and create new partitions, also once per day (in the same event that drops the old ones).
To make certain a photo lives for at least 24 hours, the partition cleanup has to run with a one-day delay (so clean up the day before yesterday, not yesterday itself).
A date filter in the query that fetches the image from the database is still needed, to prevent images older than a day from being displayed.
I'm using a NSMutableDictionary to cache high scores that I pull from Game Center (storing scores by GC rank as key). The pulling happens as soon as the user views that line in a tableview. If there are a million rows and the user views them all, that would mean that the cache fills up to a million rows...
OK, in practice I guess I'll be happy if a million people play my game, but still, to be on the safe side, I'd like to limit the number of rows that go into the NSMutableDictionary.
Has anyone got a simple approach here? Maybe a structure other than a dictionary would be useful. My idea was to remove the oldest entries from the dictionary, the ones that are out of the current table view.
Have you taken a look at NSCache? ・゜゚・:.。..。.:*・'(゚▽゚)'・:.。. .。.:・゜゚・*
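A minimal sketch in Swift: NSCache evicts entries on its own under memory pressure, and countLimit caps how many rows it holds (the 1,000 here is an arbitrary example, not a recommendation):

import Foundation

// Cache Game Center scores by rank; NSCache handles eviction for you.
let scoreCache = NSCache<NSNumber, NSString>()
scoreCache.countLimit = 1_000  // arbitrary cap on cached rows

func cacheScore(_ score: String, forRank rank: Int) {
    scoreCache.setObject(score as NSString, forKey: rank as NSNumber)
}

func cachedScore(forRank rank: Int) -> String? {
    return scoreCache.object(forKey: rank as NSNumber).map { $0 as String }
}

Unlike NSMutableDictionary, NSCache may also drop entries in response to memory warnings, which is exactly the behaviour you want for a scrolling leaderboard.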
I have a lot of logfile data that I want to display dynamic graphs from, for basically arbitrary time periods, optionally filtered or aggregated by different columns (that I could pregenerate). I'm wondering about the best way to store the data in a database and access it for displaying charts, when:
the time resolution should be variable from one second to a year
there are entries that span several 'time buckets', e.g. a connection might have been open for a few days and I want to count and display the user for every hour she was connected, not just in the hour 'slot' the connection was created or finished
Are there best practices, or tools/plugins for rails that help handle this kind and amount of data? Are there maybe database engines specifically tailored towards this, or having helpful functions (e.g. CouchDB indexes)?
EDIT: I'm looking for a scalable way to handle this data and access pattern. Things we considered: Run a query for each bucket, merge in app - probably way too slow. GROUP BY timestamp/granularity - does not count connections correctly. Preprocessing data into rows by smallest granularity and downsampling on query - probably the best way.
I think you can use MySQL timestamps for this.
The way I solved it in the end was to pre-process the data into per-minute buckets, so there's one row for every event and minute. That makes it easy and fast enough to select, and it yields correct results. To get a different granularity, you can do integer arithmetic on the timestamp columns: select floor(timestamp/factor)*factor and group by floor(timestamp/factor)*factor.
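In case the arithmetic is unclear, a trivial illustration (in Swift, with 3600 for hourly buckets):

let factor = 3600                    // bucket size in seconds (hourly)
let ts = 1_435_708_812               // example epoch timestamp
let bucket = (ts / factor) * factor  // integer division truncates,
                                     // rounding down to the bucket start
print(bucket)                        // 1435708800 = 2015-07-01 00:00:00 UTC

Every timestamp in the same hour maps to the same bucket value, which is what makes the group-by work.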