Create an interactive map in iOS [closed]

I need to create an interactive map in iOS.
I have to build something like the Expo app: an image or a map view in the background, with the way drawn between 3-4 points.
I don't know whether to use Google Maps, Apple Maps, plain images, or something else.

Developing a spatially aware application is no trivial matter on any platform. It will require careful planning and architecture design UP FRONT or you'll find yourself doing a lot of "extreme programming" (tons of refactoring). In order to develop a spatially aware application you will need several items:
A familiarity with a map API. Apple's MapKit API is fine, but there are others, such as Mapbox, which offer additional services such as offline caching, custom basemaps, etc.
A custom basemap: The basemap you're seeing here is certainly a custom job and probably not open source, so you'll need to come up with one of your own. Unfortunately, every map API has a different approach to this, so you'll need to do some research to determine the right solution for your API.
Map features: You'll need to understand how to add features to your map. Some APIs call these Annotations, while others simply call them Features (like ESRI). In either case, you will need to generate your own feature geometry using the Core Location API and whatever components the map API utilizes. You will also need to create custom graphics for these annotations, unless you can find something suitable in the public domain. If you intend to add polylines (for directions) or polygons (to highlight an area), you will also need to define your own custom symbology (line color, width, fill colors, etc.). Again, not every API uses the term symbology to describe these details, but hopefully you get the idea. (See the first sketch after this list.)
Data storage: You'll need to decide how you're going to store and retrieve data for the map view. You can store everything online in a custom web service. You could also use something like the Parse API if you don't have the resources for your own web service. Alternatively, you could store everything locally in a SQLite database or using Core Data. In either case, you will need a plan for querying the location data in an efficient manner. SQLite supports R*Tree indices, which are a good way to store a geometry's bounding box (envelope) information, but you still need to roll your own INSERT and SELECT queries (see the second sketch below). Most likely you'll need to come up with some combination of the two.
Learn the language: Overall, you absolutely must learn the language of the map APIs. It's vital that you are familiar with the language of spatially aware applications, including the fundamentals of location technology, if you intend to be successful in this project. I would suggest beginning to do some research into the iOS MapKit API, and maybe an open source solution like Mapbox. Learning GeoJSON isn't a bad idea even if you don't intend to use it in your app. It is very simple and could help you learn a lot about spatial technology very quickly.
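To make the map-features item concrete, here is a minimal Swift sketch of annotations, a polyline, and custom symbology on an MKMapView. The coordinates and titles are made-up examples; adapt them to your own points.

import MapKit
import UIKit

class ExpoMapViewController: UIViewController, MKMapViewDelegate {
    let mapView = MKMapView()

    // Hypothetical stops; replace with your own 3-4 points.
    let stops = [
        CLLocationCoordinate2D(latitude: 52.520, longitude: 13.405),
        CLLocationCoordinate2D(latitude: 52.521, longitude: 13.413),
        CLLocationCoordinate2D(latitude: 52.523, longitude: 13.420)
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.frame = view.bounds
        mapView.delegate = self
        view.addSubview(mapView)

        // Map features: one annotation (pin) per stop.
        for (index, coordinate) in stops.enumerated() {
            let pin = MKPointAnnotation()
            pin.coordinate = coordinate
            pin.title = "Stop \(index + 1)"
            mapView.addAnnotation(pin)
        }

        // The "way" between the points, as a straight-line polyline overlay.
        let route = MKPolyline(coordinates: stops, count: stops.count)
        mapView.addOverlay(route)
        mapView.setVisibleMapRect(route.boundingMapRect,
                                  edgePadding: UIEdgeInsets(top: 40, left: 40, bottom: 40, right: 40),
                                  animated: false)
    }

    // Symbology: line color and width for the polyline overlay.
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        guard let polyline = overlay as? MKPolyline else { return MKOverlayRenderer(overlay: overlay) }
        let renderer = MKPolylineRenderer(polyline: polyline)
        renderer.strokeColor = .systemBlue
        renderer.lineWidth = 3
        return renderer
    }
}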
As you can see, there's a LOT going on in a spatially aware application, and this list is just a starting point. I am not trying to dissuade you from your goal, but just be aware that this isn't a "drag and drop" sort of project.
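And to ground the data-storage item, here is a minimal sketch, in Swift, of hand-rolled INSERT and SELECT queries against a SQLite R*Tree index. The database path, table layout, and coordinates are my own assumptions, not anything prescribed.

import SQLite3

var db: OpaquePointer?
sqlite3_open("places.sqlite", &db)

// Virtual R*Tree table: id plus the min/max longitude and latitude that
// describe each geometry's bounding box (envelope).
sqlite3_exec(db, """
    CREATE VIRTUAL TABLE IF NOT EXISTS place_index
    USING rtree(id, min_lon, max_lon, min_lat, max_lat);
    """, nil, nil, nil)

// Hand-rolled INSERT: a point feature has a degenerate envelope (min == max).
sqlite3_exec(db, "INSERT INTO place_index VALUES (1, 13.405, 13.405, 52.520, 52.520);",
             nil, nil, nil)

// Hand-rolled SELECT: ids of all places whose envelope intersects a map rect.
var stmt: OpaquePointer?
sqlite3_prepare_v2(db, """
    SELECT id FROM place_index
    WHERE max_lon >= ? AND min_lon <= ? AND max_lat >= ? AND min_lat <= ?;
    """, -1, &stmt, nil)
sqlite3_bind_double(stmt, 1, 13.40)  // west edge of the visible rect
sqlite3_bind_double(stmt, 2, 13.42)  // east edge
sqlite3_bind_double(stmt, 3, 52.51)  // south edge
sqlite3_bind_double(stmt, 4, 52.53)  // north edge
while sqlite3_step(stmt) == SQLITE_ROW {
    print("visible place id:", sqlite3_column_int64(stmt, 0))
}
sqlite3_finalize(stmt)
sqlite3_close(db)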

Related

If Apple's multiple frameworks span across each of these layers, why is this iOS layers image stacked in this order? [closed]

While researching iOS, I've seen this image brought up numerous times by instructors. However, the Apple frameworks list is shown as if you're free to pick and choose whichever framework or classes you'd like to use, without necessarily chiseling down through each layer to reach a specific feature. For example, I was under the impression that in order to use Bluetooth services I needed to start with a UIKit object such as UIButton and utilize an object from each layer before being allowed to use Bluetooth capability, but this doesn't appear to be the process when building an app. So my question is: why is this image so important, and what message is it attempting to relay about the multiple frameworks provided by Apple?
To expand on @Baglan's comment: Each "layer" is a level of abstraction, but none is comprehensive, and you're free to walk up and down the ladders of abstraction as you please.
Layers add abstraction
True, there's no UIKit API (or other API on the highest "layer") that encompasses Bluetooth connectivity. But that's because Bluetooth itself is a lower level technology. There's no usefulness in UIKit presenting an API that's conceptually the same as that offered by CoreBluetooth.
On the other hand, what UIKit and other high-level frameworks do is offer further abstractions from underlying technology where there are use cases for such abstraction. So, even though Apple Pencil and the Siri Remote (for Apple TV) use Bluetooth, UIKit offers APIs that abstract away the details of Bluetooth to let you work with those devices in terms that are more relevant to your user and to the rest of your app.
Layers aren't comprehensive
So, if all you want to do is talk Bluetooth, use the CoreBluetooth framework. If all you want to do is work with audio, use the CoreAudio framework, etc.
Having layers of abstraction doesn't mean that you have to wrap and push through all of them to get to what you need — it means you can drop down to whichever level of abstraction for the work you want to perform.
I was under the impression that in order to use Bluetooth services I needed to start with a UIKit object such as UIButton and utilize an object from each layer
In a sense, your impression is actually correct. An iOS app needs some form of UI, and UIKit is the place to do that. So you start with views and buttons, and the code you write for things like responding to button presses can use objects from lower layers as needed for the task at hand.
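To make that concrete, here is a minimal Swift sketch (the button geometry and screen setup are arbitrary assumptions): a UIKit button at the top layer whose action drops straight down to Core Bluetooth, with no intermediate layers involved.

import UIKit
import CoreBluetooth

class ScanViewController: UIViewController, CBCentralManagerDelegate {
    var central: CBCentralManager?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Top layer: a plain UIKit button.
        let button = UIButton(type: .system)
        button.setTitle("Scan for peripherals", for: .normal)
        button.frame = CGRect(x: 40, y: 120, width: 240, height: 44)
        button.addTarget(self, action: #selector(startScan), for: .touchUpInside)
        view.addSubview(button)
    }

    @objc func startScan() {
        // Lower layer: instantiate Core Bluetooth directly from the button action.
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: nil)
        }
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        print("Found:", peripheral.name ?? "unknown peripheral")
    }
}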
"Layers" is just one possible roadmap
Moreover, the idea of "layers" is itself an abstraction. With over a hundred developer-accessible frameworks on Apple's platforms, it helps to have some way to think about them in useful clusters. The schemes that Apple tends to use for this usually rely on the idea of layers of abstraction.
But there's more than one such scheme. For example, check out the framework list at https://developer.apple.com/reference/ — here you'll find six clusters that mix the "layers" metaphor with some more task-oriented ideas: App Frameworks, Graphics and Games, App Services, Media and Web, Developer Tools, and System. Notably, "App Frameworks" isn't just "high level" stuff — together, UIKit, Foundation, and/or Swift Standard Library are the basic tools that everyone needs to make an app, and all the other clusters are for more specific (or lower level) tasks.

Human annotation tool for corpora in NLP [closed]

I am trying to build my own training corpus for Named Entity Recognition, but I don't know if there is already an existing tool for this or if I have to implement one myself.
Basically, what I need to do is take a corpus and manually tag it word by word, which is pretty tedious, but it has to be done.
Can anyone tell me if there is already an existing one and where to get it?
I had a good experience working with BRAT.
GATE is also a very comprehensive annotation tool, but it has a steeper learning curve.
We had a nice experience using DataTurks. They provide a nice, intuitive UI that lets you add collaborators and offers insights into your data, a leaderboard for annotators, and some other funky features.
https://dataturks.com
For online annotation of a text or HTML corpus of relatively short documents, I also recommend BRAT. You will have to go under the hood of the Python web application if you want to do anything custom. It also failed to work for me on large HTML documents (100 or so pages).
I have also used stand-alone apps:
Protege + Knowtator: a bit cumbersome to set up and use, but it works.
GATE: also cumbersome, and it somewhat works. Back up your annotations at regular intervals, as you might get surprised by a stacktrace that wipes or corrupts your annotated corpus (which is just serialized Java objects).
If you are dealing with PDF documents, we built a web-based PDF Annotation Tool: NOTA. It accepts anything printed to PDF, including scans. We do commercial OCR on our end to recover text from images. There is a REST API to create color-coded annotation schemas and pre-populate documents with annotations, as well as a REST API for exporting formatted text and annotation offsets. There is also a JS API you can use to customize any annotation workflows, add metadata to annotations, etc. Relationships are not supported out of the box. Large documents, 200+ pages are supported. Email us and we can give you an API key to try it out. Details and documentation links can be found here. It is free for small research projects.
I co-develop the web-based text annotation tool tagtog.net.
There is nothing to install, and you can define the types of entities you want to annotate. Additionally, you can annotate relationships, document labels, and much more. You can upload your documents in many different formats, including PDF or Markdown, and annotate together with your team collaboratively. We have put great care into making the interface easy and beautiful.
You can start right away with a free account. Also, I would be happy to help you with any doubt or issue you may have; just ping me or write us an email at the address shown on the website, tagtog.net.
Our annotation tool Prodigy is very scriptable, and is designed for active learning. It integrates especially well with our NLP library spaCy.
We've paid particular attention to the Named Entity Recognition (NER) annotation workflows, as entity recognition can otherwise be very slow. I have a tutorial video on this:
https://www.youtube.com/watch?v=l4scwf8KeIA
There is this tool called Dataturks. It is super simple to use, a fully online NLP annotation tool, so I can easily push my teammates to complete datasets for our projects.
Try TagEditor. It is a desktop application designed to annotate text for training with the spaCy library. You can tag named entities, dependencies, parts of speech, and text categories, and print a JSON file.

MapKit. Getting nearby places from a server and possibly caching them (e.g. for offline use)

I am developing an iOS 5 app which I want to communicate with a server providing information about the nearby places for a given location: place locations and annotations. I want to use MapKit to populate my map with this information.
I didn't find any straightforward information regarding the following questions:
Does MapKit have tiles functionality (the Google Maps way) out of the box, and if not, do I need to build it myself?
What is the best practice for retrieving place information (marker positions and annotations) from a server?
Is it possible to cache this information so a user can see the nearby places of "his city" in offline mode?
Actually, questions 2 and 3 are interrelated: they both address the problem of not retrieving information (locations + annotations) multiple times when it is already on the map.
Hopefully I am not overlooking something obvious here.
Thanks!
Update 1: (Regarding places, not maps.) More specifically, I am interested in how I should create "hand-crafted" logical tiles for regions containing the places I fetch from the server, so that they would not require refetching when the user scrolls the map. I know I can dive into implementing this functionality myself. For example, should I write the just-fetched places to local storage using Core Data immediately after fetching them, or organize some queue? And how do I know when I need to request a specific region from the server versus just fetching local data that is already on the device? I just want to know: are there any recommended approaches or best practices? Hopefully I have stated this clearly.
Update 2: I am wondering about best practices here (links, examples) so as not to start building all this (points 2+3) from scratch. Are there any frameworks encapsulating this, or good tutorials?
@Stanislaw - We have implemented the functionality you describe in an app called PreventConnect for one of our clients. The client already had some data stored in a Google Fusion table. We extended their existing solution by adding another Google Fusion table which stores the geocoordinates for a number of locations. All this being said, to answer your questions...
1) The map portion itself, the tiles and whatnot, is pretty much out of the box, but you'll need to do some coding to get zoom extents, pin drops, annotations, and things like that working the way you expect them to work.
2) We found the Google Fusion solution to be quite effective. If you don't want to use Google Fusion there are other cloud database providers like StackMob, database.com, and many others. Google is free and they have an iOS SDK that makes communicating with Google Fusion pretty simple.
3) Absolutely! We cache much of the data in a Core Data store locally on the device. This greatly improves performance and responsiveness.
Time to write a solid answer to this question of mine (I could have written it a year ago, but it somehow slipped my mind).
Does MapKit have tiles functionality (the Google Maps way) out of the box, and if not, do I need to build it myself?
The answer is yes: MapKit does have it. The keywords here are overlays (MKOverlay, MKOverlayView and others). See my other answer.
See also:
WWDC 2010 Session: Customizing Maps with Overlays,
Apple-WWDC10-TileMap.
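As a minimal modern sketch of the overlay approach in Swift (note two assumptions: MKTileOverlay arrived in iOS 7, after this question's iOS 5 target, with the WWDC 2010 TileMap sample using MKOverlayView instead, and the tile URL template here is hypothetical):

import MapKit

func installCustomTiles(on mapView: MKMapView) {
    // Draw your own tile set instead of (or on top of) Apple's basemap.
    let tiles = MKTileOverlay(urlTemplate: "https://tiles.example.com/{z}/{x}/{y}.png")
    tiles.canReplaceMapContent = true  // true = replace the basemap entirely
    mapView.addOverlay(tiles, level: .aboveLabels)
}

// In your MKMapViewDelegate:
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    if let tiles = overlay as? MKTileOverlay {
        return MKTileOverlayRenderer(tileOverlay: tiles)
    }
    return MKOverlayRenderer(overlay: overlay)
}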
What is the best practice for retrieving place information (marker positions and annotations) from a server?
Actually, since then I haven't learned a lot about "best practices" - unfortunately, nobody told me about them :( - so instead I will describe "my practices".
First of all, there are two strategies for populating a MapKit map with places:
The first strategy is to populate your map with places on demand: imagine you want to display and see all places nearby (for example, no more than 1 km from the current user location). This approach assumes that you ask your server only for the places within the box you are interested in. It means something like: "if I am in Berlin (and I expect 200 places for Berlin), why should I ever fetch the places from Russia, Japan, ... (10000+ places)?"
This approach relies on the "tiles" functionality that question 1 addresses: Google maps and Apple maps are usually drawn using tiles, so for your "Berlin" portion of the map you rely on the corresponding "Berlin" tiles drawn by MKMapView - you use their dimensions to ask your server only for the places within the "Berlin" box (see my linked answer and the demo app there).
Initially this was the approach I used, and my implementation worked perfectly, but later I was pushed to the second approach (see below) when the problem of clustering appeared.
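A minimal Swift sketch of strategy 1: derive a bounding box from the visible region and ask the server for only the places inside it. The endpoint and its query parameters are hypothetical.

import MapKit

func fetchVisiblePlaces(from mapView: MKMapView) {
    let region = mapView.region
    let minLat = region.center.latitude - region.span.latitudeDelta / 2
    let maxLat = region.center.latitude + region.span.latitudeDelta / 2
    let minLon = region.center.longitude - region.span.longitudeDelta / 2
    let maxLon = region.center.longitude + region.span.longitudeDelta / 2

    var components = URLComponents(string: "https://example.com/api/places")!
    components.queryItems = [
        URLQueryItem(name: "min_lat", value: String(minLat)),
        URLQueryItem(name: "max_lat", value: String(maxLat)),
        URLQueryItem(name: "min_lon", value: String(minLon)),
        URLQueryItem(name: "max_lon", value: String(maxLon))
    ]

    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        guard let data = data else { return }
        // Parse the JSON into annotations and add them on the main queue.
        print("Received \(data.count) bytes of place data")
    }.resume()
}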
The second strategy is to fetch all the places at once (yeah, all 10000+ or more) and then use Core Data to fetch the places needed for the visible portion of the map you are interested in.
The second approach means that during the first run you send your server a request to fetch all places (I have about 2000 in my app). The important point here is that you restrict the fetched fields to only the geo ones you really need for your map: id, latitude, longitude.
This 'fetch-all' fetch has a significant impact on my app's first start time (on "the oldest" iPhone 4, I see nearly 700 ms for the whole fetch + parse-JSON-into-Core-Data process, and extensive benchmarks show me that Core Data and its inserts are the bottleneck), but then you have all the essential geo-info about your places on the device.
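A minimal Swift sketch of strategy 2's local query, assuming the places are already mirrored into Core Data as a Place entity like the one sketched just below:

import CoreData
import MapKit

// Every place is already cached locally, so the visible region is served
// entirely by a predicate on the stored latitude/longitude attributes.
func places(in region: MKCoordinateRegion,
            context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Place")
    request.predicate = NSPredicate(
        format: "latitude BETWEEN {%f, %f} AND longitude BETWEEN {%f, %f}",
        region.center.latitude - region.span.latitudeDelta / 2,
        region.center.latitude + region.span.latitudeDelta / 2,
        region.center.longitude - region.span.longitudeDelta / 2,
        region.center.longitude + region.span.longitudeDelta / 2)
    return try context.fetch(request)
}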
Note that whatever strategy you use, you should fetch these geo-points efficiently:
Imagine a Core Data entity Place which has the following structure (in Objective-C):
@interface Place : NSManagedObject
// Unique identifier ('id' itself is a reserved type name in Objective-C)
@property (nonatomic, strong) NSNumber *placeID;
// Geo info
@property (nonatomic, strong) NSNumber *latitude;
@property (nonatomic, strong) NSNumber *longitude;
// The rest of the "heavy" info
@property (nonatomic, strong) NSString *name;
@property (nonatomic, strong) NSString *shortDescription;
@property (nonatomic, strong) NSString *detailedDescription;
@end
Fetching places efficiently means asking your server for only the geo fields of your place records, to make this mirroring process as fast as possible.
See also this hot topic: Improve process of mirroring server database to a client database via JSON?.
The clustering problem is out of the scope of this question, but it is still very relevant and affects the whole algorithm you use for the process. The only note I will leave here is that all the currently existing clustering solutions require you to use the second strategy: you must have all the places prepared before you run the clustering algorithms that organize your places on a map. It means that if you decide to use clustering, you must use strategy #2.
Relevant links on clustering:
WWDC 2011 Session: Visualizing Information Geographically with MapKit,
How To Efficiently Display Large Amounts of Data on iOS Maps,
kingpin - Open-source clustering solution: performant and easy-to-use.
Is it possible to cache this information so a user can see the nearby places of "his city" in offline mode?
Yes, both strategies do this. The first one caches the places you see on your map: if you have viewed the Berlin portion of the map, you will have the Berlin part cached.
With the second strategy, you will have all the essential geo-information about your places cached and ready to be drawn on a map in offline mode (assuming that MapKit has cached the map images of the regions you browse in offline mode).

HCI challenges of Web 2.0 [closed]

What are the HCI challenges of Web 2.0?
Here are a few more:
Clear privacy options
Facebook has repeatedly changed the way it deals with content ownership and privacy. (See here, here and here.) Aside from the obvious PR gaffes, this has also demonstrated the difficulty users have understanding privacy.
Geeks like us are familiar with ideas of inheritance and groups. Heck, many of us work explicitly with permission structures when dealing with files on *nix systems. To most users though, it's not clear who can see what or why.
Service Interoperability
On the desktop we're used to being able to chain together tools to get the outcome we want. A simple example would be dragging image thumbnails from a file explorer to an image editor. We'd expect that to work, but not on the web.
The Flock browser goes some way to overcome this shortfall, as does the Google Docs web clipboard, but interaction between web services is still a long way off what we expect from the desktop.
Accessibility
Web 1.0 was primarily text based, so the main accessibility issues were easy to fix: stuff like text as images and tables for layout, which both affect screen-readers used by the blind.
As the content of the web gets richer (more images, video and audio), the chances get larger that someone will be excluded from it. Moreover, making video and audio accessible is much harder than making text or images accessible, so it's much less likely to be done.
Lastly, Web 2.0 introduced a whole new problem for accessibility: dynamic content. How should screen-readers (for example) deal with new content appearing on a page after an AJAX query? WAI-ARIA aims to address these issues, but they still require the web-designer to implement them.
Hope this was useful.
There are plenty, as I see it:
Different screen resolutions.
Different hardware capabilities (mobile, touch, desktop, laptop; soon orientation too).
Localized content.
Location-based services.
With HTML5 upcoming: hardware acceleration, native APIs, localStorage, offline support.

Is the Map Rails Kit worth the money?

http://railskits.com/map/
Would you like to launch your own Google Maps mashup? Need a way to easily get data onto a map, but don't want to have to dig through piles of poorly documented Google Maps JavaScript code?
The Map Rails Kit allows you to deploy a map mashup instantly. It abstracts away all the Google Maps implementation details, organizes all the customizations into an easy-to-use config file, and reimplements the map controls, bubbles, and markers so your app looks unique.
Populating your map with markers consists of working with a few simple ActiveRecord models, so it's amazingly easy to get started. Create marker records with titles, bubble content, and location. If you specify just an address for your markers, they will be automatically geocoded for you. You can even add tens of thousands of markers to your maps easily, and they'll dynamically load onto the map only when they are currently in view as your users navigate the map.
The Kit includes all the usability polish that your users would expect in a commercial map mashup. Their current map settings are always saved via session, so when they come back to the page later on, they're right where they left off before. For new visitors, we support hooking into an ip2location service in order to initialize their current position, so they immediately see their current spot on the map and can begin interacting with it.
This Kit was authored by Jacques Crocker.
This is kind of subjective, but I don't find the Google Maps API nearly as daunting as the blurb makes it out to be. I don't think I'd pay half a grand for a wrapper around the Maps API, especially since you can buy a whole book on the topic for like $15 if you find Google's docs lacking.
This guy doesn't even make it clear what it is he's selling. He makes using the Google Maps API with Rails sound more difficult than the entire feature set of Google Maps itself.
There are plenty of other plugins and/or gems available that do more or less the same thing with slightly more effort involved, and the book of course (possibly more than one at this point).
If you want a turnkey solution for stacks of money, .NET or some more commercial platform will have more options. I would avoid using this guy's solutions out of selfishness: if he does well, there will be others with more colorful marketing making such grand solutions, and Google will become clogged with them, leaving us to wade through dozens of spectacular offerings to find the better, albeit less polished (less advertised), open source versions.