I am currently working on an iOS application that is to be used as a museum display. Part of the display concerns a speech by Abraham Lincoln. My client would like me to display the text of the speech with certain words and phrases highlighted. When a user taps a highlighted word, a popup annotation should appear. The tricky part is that they don't want to use digital text: they want to display the words of the speech in the form of an image taken from a facsimile of Lincoln's own handwriting.
I have a good deal of experience in iOS development, and I think I am up to the task of responding to taps on the highlighted words with a simple touchesBegan event handler to get the CGPoint of the touch. However, graphics are not my strong suit. I don't have a good idea of what to do about highlighting the words. I imagine I need to use some kind of filter or mask, but I have never done this in iOS before.
Any help is appreciated (and I'm very generous with upvotes). Thanks to everyone!
If you're going to be building up the composite image anyway, how about rendering the highlighted words as images on buttons placed on top of the background, with the rest of the text baked into the background image?
If there's no button text, just the image (of the word or phrase), then you're sorted.
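If it helps, here's a minimal sketch of that setup, assuming the facsimile is one background image and each highlighted phrase is cropped out as its own image (the asset names and frames below are made up):

```swift
import UIKit

class SpeechViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // Background: the full facsimile image of the speech.
        let background = UIImageView(image: UIImage(named: "speech_facsimile")) // hypothetical asset
        background.frame = view.bounds
        view.addSubview(background)

        // One button per highlighted phrase, showing the cropped handwriting image.
        // Its frame must match where that phrase sits within the background image.
        let phraseButton = UIButton(type: .custom)
        phraseButton.setImage(UIImage(named: "phrase_four_score"), for: .normal) // hypothetical asset
        phraseButton.frame = CGRect(x: 40, y: 120, width: 180, height: 44)       // hypothetical position
        phraseButton.addTarget(self, action: #selector(phraseTapped(_:)), for: .touchUpInside)
        view.addSubview(phraseButton)
    }

    @objc private func phraseTapped(_ sender: UIButton) {
        // Show the popup annotation for this phrase; an alert stands in here.
        let alert = UIAlertController(title: "Annotation",
                                      message: "Commentary for this phrase goes here.",
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        present(alert, animated: true)
    }
}
```

This also gives you the "highlight" for free: the cropped phrase image can be prepared with a tinted background, and the buttons handle hit-testing so you never have to map a raw CGPoint to a word yourself.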
I'd like to know the best way to detect a user's tap on a map label.
The new iOS 15 Maps app allows a tap on e.g. a city's name and then shows information about that city.
I am now wondering if something similar can be done with Mapbox.
I know that there is a mapView.visibleFeatures(in: myRect) function that can help here: I can convert my finger location to a rect and then get all the features inside it.
BUT... my city's label might be, let's say, 200 px wide, so I would need a fairly large rect to catch the point feature behind it. And then I will also get all kinds of other labels that happen to fall inside that rect, maybe even ones that are not visible but are in the dataset.
Is there no way to ask the map what the frontmost element was when I tapped? So that when I tap on the far end of the label, I still get that ONE feature?
I am still using Mapbox v6.3, the latest release before their last major update.
But if it's not possible with that version, an answer about the latest v10.x would also be great.
For v10, this example demonstrates how to identify features near a tap. While the overall example serves a different purpose, its onMapClick function shows how to find a feature and then build an annotation from it.
https://docs.mapbox.com/ios/maps/examples/view-annotation-marker/
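For v6.x, here is a sketch of the rect-based approach you describe, assuming an MGLMapView and that you know the identifier of the label layer you care about ("city-labels" below is hypothetical). It queries a padded rect around the tap and keeps the feature whose coordinate is closest to the tap, which addresses the wide-label problem of getting several candidates back:

```swift
import Mapbox

// Called from a UITapGestureRecognizer attached to the map view.
func handleTap(_ gesture: UITapGestureRecognizer, on mapView: MGLMapView) {
    let point = gesture.location(in: mapView)

    // Pad the tap point into a rect so taps on the far end of a wide label still hit.
    let rect = CGRect(x: point.x - 22, y: point.y - 22, width: 44, height: 44)

    // Restrict the query to the label layer to filter out unrelated features.
    // "city-labels" is a hypothetical style layer identifier.
    let features = mapView.visibleFeatures(in: rect, styleLayerIdentifiers: ["city-labels"])

    // Of all candidates, keep the feature whose anchor is closest to the tap.
    let tapCoordinate = mapView.convert(point, toCoordinateFrom: mapView)
    let closest = features.min { a, b in
        squaredDistance(from: a.coordinate, to: tapCoordinate) <
        squaredDistance(from: b.coordinate, to: tapCoordinate)
    }

    if let name = closest?.attribute(forKey: "name") as? String {
        print("Tapped label: \(name)")
    }
}

// Rough planar distance; good enough to rank nearby candidates.
func squaredDistance(from a: CLLocationCoordinate2D, to b: CLLocationCoordinate2D) -> Double {
    let dLat = a.latitude - b.latitude
    let dLon = a.longitude - b.longitude
    return dLat * dLat + dLon * dLon
}
```

This won't tell you which element was literally frontmost in the rendering, but ranking by distance within a single layer usually narrows it down to that ONE feature in practice.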
I noticed a behaviour of UITextView in iOS where auto-detected links and phone numbers do not appear in the UITextView's tint color, but black and underlined instead.
It appears to be related to other surrounding links. My guess is that iOS tries to auto-detect whole blocks of contact information.
However, I don't want this behaviour in my app. What is this behaviour and, more importantly, how do I override it? I want links and phone numbers to always use the selected tint color. Setting linkTextAttributes does not help either.
All of the following examples were made with a new, empty project containing a single UITextView, with Data Detectors set to Phone Number, Link and Address.
Examples 1–3: (screenshots of the text view; each shows some detected items in blue and others black and underlined)
EDIT: In case I did not word my question clearly enough: I want data detection to happen, but in a consistent way. Every phone number and link should appear blue and not black and underlined.
My actual app pretty much looks like example 1, although with fewer phone numbers. The blue and black mix just doesn't look right.
I've also added a screenshot of a sample project. There's no code involved: open Xcode, create an empty project, drop a UITextView into your storyboard, enter my example text, enable data detectors and run the app.
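Not a confirmed fix, but one workaround sketch: disable the built-in detectors, run NSDataDetector over the text yourself, and add .link attributes manually, so that every match, phone numbers included, is styled by linkTextAttributes. The tel: URL construction below is my assumption about how you'd want phone matches to behave:

```swift
import UIKit

func applyUniformLinks(to textView: UITextView) {
    textView.dataDetectorTypes = []   // turn off the built-in detectors
    textView.isEditable = false       // links only resolve in non-editable text views

    let text = textView.text ?? ""
    let attributed = NSMutableAttributedString(string: text)

    let types: NSTextCheckingResult.CheckingType = [.link, .phoneNumber]
    guard let detector = try? NSDataDetector(types: types.rawValue) else { return }

    let range = NSRange(location: 0, length: (text as NSString).length)
    for match in detector.matches(in: text, options: [], range: range) {
        if let url = match.url {
            attributed.addAttribute(.link, value: url, range: match.range)
        } else if let phone = match.phoneNumber,
                  let url = URL(string: "tel:" + phone.filter { !$0.isWhitespace }) {
            attributed.addAttribute(.link, value: url, range: match.range)
        }
    }

    // Note: setting attributedText resets the font, so reapply it if needed.
    textView.attributedText = attributed

    // Now every detected item is a plain .link and gets the same styling.
    let tint = textView.tintColor ?? UIColor.systemBlue
    textView.linkTextAttributes = [.foregroundColor: tint, .underlineStyle: 0]
}
```

Since all matches become ordinary .link attributes, the inconsistent black-and-underlined rendering of grouped contact information should no longer apply.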
I am quite new to iOS app development, so I could not find a solution or a keyword to search for my problem.
I have been working on a workout tracking app that has a table view of workout exercises, and users can mark each cell as a "finished" or "unfinished" workout.
I'm trying to make an image view where a body image reflects the workout results. For example, if the user finishes chest workouts, the chest part of the body image gets colored light green, and as the user finishes more workouts, the green gets darker.
I have finished all other parts of the app, but I don't know how to color a specific part of an image.
Could anyone suggest where to look, or the keyword I need to search for?
The right way to achieve this is to separate each body part into its own image and display them together so that they form a body.
To tint the images, you can either use a UIImage category or the built-in UIImage tint mode (available since iOS 7).
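Here's a sketch of the built-in approach, assuming one full-size template image per body part (the asset names are hypothetical) and a finished-workout count driving how dark the green gets:

```swift
import UIKit

func makeBodyView() -> UIView {
    let container = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 400))

    // Each body part is its own image, exported at the full canvas size so the
    // parts line up when stacked on top of each other. Asset names are made up.
    for part in ["body_chest", "body_back", "body_legs"] {
        let imageView = UIImageView(frame: container.bounds)
        // .alwaysTemplate makes the image ignore its own colors and take tintColor.
        imageView.image = UIImage(named: part)?.withRenderingMode(.alwaysTemplate)
        imageView.tintColor = .lightGray // default, untrained state
        container.addSubview(imageView)
    }
    return container
}

// Darker green for more finished workouts in that body part.
func updateTint(of imageView: UIImageView, finishedWorkouts: Int) {
    // Clamp progress so the color stays in a sensible range.
    let progress = min(CGFloat(finishedWorkouts) / 10.0, 1.0)
    imageView.tintColor = UIColor(hue: 0.33,                        // green
                                  saturation: 0.8,
                                  brightness: 1.0 - 0.6 * progress, // lighter -> darker
                                  alpha: 1.0)
}
```

Keywords to search for: "UIImage template rendering mode", "UIImageView tintColor", and "image masking" if you later want to derive the parts from a single image instead.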
Before this question is dismissed, let me start by saying I've read the dozens of questions that sound similar. I haven't found anyone who has asked about this specific use case, though, so I'm going to give it a shot.
I would like to create custom images to use (similar to emojis) in a custom keyboard that can be accessed with the globe icon. I understand that I can create a custom keyboard inside my own app, but it will only work within that app. I also understand how the emoji keyboard works.
Is it possible, though, to create a situation where, if two people are using the app, the keyboard could be used to input the custom images (emojis) and have them viewed by a receiving user who ALSO has the app, even if the keyboard is being used outside the custom image keyboard app?
So, basically, there would be a set of images stored within the app; the custom keyboard would reference those images and display one whenever the corresponding keystroke is entered, and the receiving phone could locate the same images stored within its copy of the app to display them (but all of this could happen within the native SMS messaging app, not solely in the custom image keyboard app).
I've researched this a good deal, but can't find a straight answer. Any help or direction would be greatly appreciated!
You could create your own keyboard that inserts special strings that stand for an image. For example, a smiley could be encoded as ".CoolSmiley". When you later want to display that text, you search it for the patterns you recognise. That's how many emoticon systems work: when you type (y), it gets replaced by a thumbs-up image because the string is recognised as a known pattern for a thumbs up. This will, of course, only work inside your app.
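Here is a sketch of the display side, assuming the images ship inside the app and a token-to-asset mapping like the one below (both the tokens and the asset names are made up). It swaps every known token in an incoming message for an NSTextAttachment:

```swift
import UIKit

// Hypothetical mapping from keyboard tokens to bundled image assets.
let tokenImages = [".CoolSmiley": "cool_smiley", ".ThumbUp": "thumb_up"]

func render(_ message: String) -> NSAttributedString {
    let result = NSMutableAttributedString(string: message)

    for (token, assetName) in tokenImages {
        guard let image = UIImage(named: assetName) else { continue }

        // Replace occurrences one at a time; re-searching after each
        // replacement keeps the ranges valid as the string shrinks.
        while true {
            let range = (result.string as NSString).range(of: token)
            if range.location == NSNotFound { break }

            let attachment = NSTextAttachment()
            attachment.image = image
            attachment.bounds = CGRect(x: 0, y: -3, width: 18, height: 18) // roughly line-sized

            result.replaceCharacters(in: range, with: NSAttributedString(attachment: attachment))
        }
    }
    return result
}

// Usage inside your app: label.attributedText = render("Nice one .CoolSmiley")
```

Outside your app (e.g. in the native SMS app), the receiver without your app will just see the raw token text, which is exactly the limitation described above.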
I'm trying to make my app more accessible, but I have a problem. Part of my app consists of pieces of advice, each composed of a UIScrollView with long text. (The text is a UIImage I prepared in Photoshop.)
I would like to make it accessible so that users could listen to all the advice and pause it whenever they want. I thought of using accessibilityLabel, but the text is too long.
Thanks in advance for your help.
Let me preface this by saying I am not an iOS developer, but I am a long-time blind iOS user.
There is no way that I know of to easily pause the reading of text and resume at the exact same spot. According to the documentation, accessibilityLabel is meant to provide accessibility information that can be conveyed in a sentence or less. One option I can think of is to test whether VoiceOver is enabled using UIAccessibilityIsVoiceOverRunning. If it returns true, you could put your text into a text view and display that instead of your UIImage.
A text view will allow a VoiceOver user to read the text by character, word, or line, which is the best option available. If VoiceOver isn't running, your test will return false, the UIImage will be displayed as normal, and the user won't see anything different.
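Here's a sketch of that check, with hypothetical outlet and property names; UIAccessibility.isVoiceOverRunning is the Swift 4.2+ spelling of UIAccessibilityIsVoiceOverRunning():

```swift
import UIKit

class AdviceViewController: UIViewController {
    @IBOutlet weak var adviceImageView: UIImageView! // the Photoshop image (hypothetical outlet)
    let adviceText = "..."                           // the same text the image shows

    override func viewDidLoad() {
        super.viewDidLoad()

        if UIAccessibility.isVoiceOverRunning {
            // Swap the image for real digital text so VoiceOver can read it
            // by character, word, or line, and pause/resume at will.
            let textView = UITextView(frame: adviceImageView.frame)
            textView.text = adviceText
            textView.isEditable = false
            textView.font = .preferredFont(forTextStyle: .body)
            view.addSubview(textView)
            adviceImageView.isHidden = true
        }
        // With VoiceOver off, the image stays visible and nothing changes.
    }
}
```

If users can toggle VoiceOver while the screen is visible, you could also observe UIAccessibility.voiceOverStatusDidChangeNotification and rerun the same swap.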