Big SQLite database in iOS - ios

In my iOS app I have an SQLite database with more than 800,000 rows, containing more or less only text. At the moment my SELECT statement takes more than a minute to run. Is that normal, or am I doing something wrong?
Is there any possibility to speed that task up?
My problem is that I can't do it with a web service or anything similar. It has to run right on the iPad without any connection to the internet.
Any idea is highly appreciated.

It's difficult to offer advice in the abstract, but in general:
Measure. Find where and why the bottlenecks are.
Improve the code / database schema.
Go to 1.
The best tool for (1) is SQLite's EXPLAIN QUERY PLAN feature. This tells you how SQLite is going to get your results. If the query is taking over a minute, I think you'll see lots of full table scans.
The answer for (2) depends on what you find in (1). You might be able to improve the query; you might need an index (or several); see the sketch below.
Point (3) is important. Don't guess at what makes the best performance improvements. Measure.
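For concreteness, here is a minimal sketch of step (1) using the sqlite3 C API from Swift. The notes table, title column, and index name are invented for illustration; substitute your own slow query:

```swift
import SQLite3

// Prefix the slow query with EXPLAIN QUERY PLAN and print what SQLite
// intends to do before you run the real thing.
func printQueryPlan(db: OpaquePointer?) {
    let sql = "EXPLAIN QUERY PLAN SELECT * FROM notes WHERE title = ?"
    var stmt: OpaquePointer?
    guard sqlite3_prepare_v2(db, sql, -1, &stmt, nil) == SQLITE_OK else { return }
    defer { sqlite3_finalize(stmt) }
    while sqlite3_step(stmt) == SQLITE_ROW {
        // Column 3 holds the human-readable plan, e.g. "SCAN notes"
        // (a full table scan) vs "SEARCH notes USING INDEX ...".
        if let detail = sqlite3_column_text(stmt, 3) {
            print(String(cString: detail))
        }
    }
}

// If the plan reports a SCAN, an index on the filtered column is the
// usual fix for step (2):
// sqlite3_exec(db, "CREATE INDEX idx_notes_title ON notes(title)", nil, nil, nil)
```

At 800,000 rows, turning a SCAN into an indexed SEARCH is typically the difference between minutes and milliseconds for an equality lookup.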

Related

Trying to make a search engine for issues

Our company has a lot of issue data stored in a database. We want to create a search engine so that people can check how issues were previously dealt with. We cannot use any 3rd-party API, as the data is sensitive and we want to keep it in house. Right now the approach is as follows:
Clean up the data, then use Doc2Vec to represent each issue as a vector.
Find the 5 closest issues using some distance metric.
The problem is that the results are not at all useful. Most of the data consists of one-liners and some issue descriptions, with spelling mistakes, stack traces, and other noise.
Is this the right approach, or should we switch to something else?
Right now we are testing on 200K records.
Thanks for the help.

Best way to loop through indexDb using ydn-db

I'm using ydn-db as a shim for mobile dev but am experiencing some poor performance with iOS and record retrieval.
My question is, what is the best way to loop over a data store?
Right now I use
db.values(), setting the limit to the number of items in the list (usually about 200, but it has a hard limit of 100).
Anyhow was just wondering what the best way to go about looping over the results might be.
I have read the docs, and while they are extensive they are also confusing, hence why I'm posting here.
Anyhow, any help would be appreciated.
The best way to loop through is db.open. It iterates one record at a time and is very memory efficient.
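For illustration, a rough sketch of that pattern; the 'items' store name is invented, and you should check the ydn-db docs for the exact iterator types and open() signature:

```js
// Stream records one at a time instead of materializing up to the
// db.values() limit in memory.
var iter = new ydn.db.ValueIterator('items');
db.open(function(cursor) {
  var record = cursor.getValue();
  // process `record` here; the callback is invoked once per record
}, iter, 'readonly');
```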

Current road's speed limit

Is it possible to get the current speed limit of a road? I'm not sure if this would be done using the 'Maps' app, as I don't think it holds speed limit data.
Is anyone able to point me in the right direction?
There is no functionality in MapKit for retrieving speed limits.
You may be able to get it by querying Google directly, or by querying some other third-party speed-limit database. However, it's unlikely that you'd find a service offering that kind of data for free.
Figure out where you're going to get the data from, and then someone can likely help with how you'd retrieve it.
I have found an open data project that might be of help to you as well as many others:
http://en.wikipedia.org/wiki/Wikispeedia

iOS - 50MB file with 1 million entries

I have a text-delimited file with 1 million entries, e.g.
A;B;C;D;E;F;G......n
I'd like to create an iOS app that will allow the user to sort and filter columns.
What is the best way to store this info locally? Will this be too much for iOS to handle?
Would I be better off creating a web service?
Thanks
Use sqlite3. Do proper indexing.
If your data is read-only, performance will be far more manageable. If you're updating the data very frequently, you may run into performance issues.
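As a rough sketch (the entries table, its columns, and the index are invented for illustration), you would import the delimited file once into an indexed SQLite table, wrapping the inserts in a single transaction so the million-row import stays fast:

```swift
import SQLite3

// SQLITE_TRANSIENT tells SQLite to copy bound strings before we move on.
let SQLITE_TRANSIENT = unsafeBitCast(-1, to: sqlite3_destructor_type.self)

func importRows(db: OpaquePointer?, rows: [[String]]) {
    // Index the column(s) the user will sort and filter on most often.
    sqlite3_exec(db,
        """
        CREATE TABLE IF NOT EXISTS entries (a TEXT, b TEXT, c TEXT);
        CREATE INDEX IF NOT EXISTS idx_entries_a ON entries(a);
        """, nil, nil, nil)

    // One big transaction is dramatically faster than one commit per row.
    sqlite3_exec(db, "BEGIN", nil, nil, nil)
    var stmt: OpaquePointer?
    sqlite3_prepare_v2(db, "INSERT INTO entries (a, b, c) VALUES (?, ?, ?)",
                       -1, &stmt, nil)
    for row in rows {
        for (i, field) in row.prefix(3).enumerated() {
            sqlite3_bind_text(stmt, Int32(i + 1), field, -1, SQLITE_TRANSIENT)
        }
        sqlite3_step(stmt)
        sqlite3_reset(stmt)
    }
    sqlite3_finalize(stmt)
    sqlite3_exec(db, "COMMIT", nil, nil, nil)
}
```

With read-only data and the right indexes in place, sorting and filtering a million short rows should be manageable on recent hardware.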
Make sure to experiment with different generations of hardware and restrict the app to only the devices that can support it.
It should be simple enough to experiment. I've had no trouble with database sizes of ~8 MB on 2nd generation iPhones/iPod touches (~10-20K entries). The latest generation of hardware and software should be able to support much more; I'm not sure how much more.
Make sure to experiment and report your findings here.

Best 3rd Party Resume Parser Tool [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
We are working on a hiring application and need the ability to easily parse resumes. Before trying to build one, I was wondering what resume parsing tools are available out there, and which is the best one, in your opinion? We need to be able to parse both Word and TXT files.
I suggest looking at some AI tools. Three that I'm aware of are
ALEX
Sovren
Resume Mirror
I think all the products handle Word, txt, and pdf along with a bunch of other document types. Although I've never used it, I've heard unfavorable things about Resume Mirror's accuracy and customer support. I'm a contract recruiter and have used both Sovren's and Hireability's parsers in different ATS's. From my view I thought Hireability did a better job, with Sovren it seemed like I was always fixing errors. And when there was a goof with Hire's I gave it to my ATS vendor and it seemed like it was fixed pretty quickly. Good luck.
Don't try to build one unless you want to dedicate your life to it. Don't re-invent wheels!
We build and sell a recruitment system. I did a long evaluation a few years ago and went for Daxtra - the other one in the frame was Burning Glass but I got the impression that Daxtra did non-US resumes better.
Anyway, we're re-evaluating it. Some parts it does brilliantly (name, address, phone numbers, work history) as long as the resume is culturally OK. But if it's not, then it fails. What do I mean? Well, if the resume has as its first line:
Name: Sun Yat Sen
then Daxtra is smart enough to figure out that Sun Yat Sen is the guy's name. (Girl's?)
But if it has as the first line:
Sun Yat Sen
It can't figure it out.
On the other hand if the first line is
Johnny Rotten
then Daxtra works out his name.
Also, it works really well on UK addresses, fairly well on Australian addresses, crashes and burns on Indonesian addresses. That said, we've just parsed 35,000 Indonesian resumes relatively well - CERTAINLY far better than not doing it at all, or doing it manually!
On Skilling: I reckon if someone really tried to make the Skills section work then it would take 3 man-months or so and it would work really well.
Summary: Don't write it yourself, do some really good research on real resumes that you want parsing and dive in.
The key thing is: Don't expect any tool to be anywhere near 100% accurate - but it's a lot better than not having it.
Neil
FWIW I just ran 650 international resumes through Rchilli and found the accuracy to be very poor. Names & addresses were mangled and the detail fields were hit and miss.
This was a mix of pdfs & Word docs, primarily from Europe & Asia.
I have seen a lot of resumes in PDF format. Are you sure you don't care about them?
I'd recommend something simple:
Download Google Desktop Search or a similar tool (e.g. Copernic)
Drop the files in a directory
Point the index tool to that directory, and punch in your search terms
You may want to have a look at eGrabber and RChilli; these are two of the best tools out in the market.
I was wondering if anyone has updated this list. It seems all the answers are from 2010, almost 3 years old.
We integrated RChilli and found no flaws; the support is the best, and the product is easy to use.
We tested RChilli, Hireability, and Daxtra. Sovren never responded to our emails.
Integration was smooth, and their support was the best of the bunch.
