I'm currently working on an application that downloads posts from a WordPress blog and displays them for the user to view. The obstacle I've run into is that the blog uses a large number of images within its posts, and the wording of the text is quite dependent on having those images in the right place (so they can't all be dumped at the bottom of the view, for example).
The solution I've come up with so far would be to use one of the NSString methods to search the text for every section containing an img tag, pull those out, and split the NSString at those points. Then I would use a bunch of if statements to work out how many sections there were, and lay out the UIScrollView from there.
To me this sounds like a horrible solution, so I'm hoping there is a better one out there someone could recommend.
Thanks!
OK. First, take a look at the DTCoreText framework (it parses HTML-tagged text and can handle img tags).
Second: it's easiest if all the images have the same size (e.g., group or crop the images accordingly). Count the img tags and you know how much space you need.
A good approach is to load the images asynchronously.
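As a rough sketch of that sizing idea, assuming every image in a post has the same known display height (the function and parameter names below are just illustrative, not from any library):

import Foundation
import CoreGraphics

// Naive count of "<img" occurrences in the post HTML; a real HTML
// parse (e.g. via DTCoreText) would be more robust.
func estimatedContentHeight(forPostHTML html: String,
                            textHeight: CGFloat,
                            imageHeight: CGFloat) -> CGFloat {
    let imageCount = html.components(separatedBy: "<img").count - 1
    return textHeight + CGFloat(imageCount) * imageHeight
}

The images themselves can then be fetched asynchronously and dropped into their placeholder frames as they arrive.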
Let me start off by showing the UIImageView I have set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
1. Use the UIBezierPath class to draw the shapes. This would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
2. Crop the highlighted body parts out of the original image and position them over the UIImageView depending on which UIButton is selected. There would only be one image per body part, which is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option for achieving this in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging and about taking up storage space. I'm not concerned about how long it takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
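For what it's worth, here is a minimal sketch of that option-2 overlay idea in Swift; the asset name "liver-highlight" and the outlet are assumptions, not taken from your project:

import UIKit

class BodyDiagramViewController: UIViewController {
    @IBOutlet weak var diagramImageView: UIImageView!

    // Reused for every body part; only its image changes.
    private let highlightView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        highlightView.frame = diagramImageView.bounds
        highlightView.contentMode = .scaleAspectFit
        highlightView.tintColor = UIColor.red.withAlphaComponent(0.4)
        highlightView.isHidden = true
        diagramImageView.addSubview(highlightView)
    }

    // Wire to the button's Touch Down event.
    @IBAction func bodyPartTouchDown(_ sender: UIButton) {
        // .alwaysTemplate uses only the cutout's alpha channel,
        // so the overlay is drawn in tintColor.
        highlightView.image = UIImage(named: "liver-highlight")?
            .withRenderingMode(.alwaysTemplate)
        highlightView.isHidden = false
    }

    // Wire to Touch Up Inside / Touch Up Outside.
    @IBAction func bodyPartTouchUp(_ sender: UIButton) {
        highlightView.isHidden = true
    }
}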
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time it's a lot easier to implement.
For a quick start on tests, see here. Performance tests are covered here.
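A performance test could look something like this (a sketch only; the work inside each measure block is a stand-in for your real option-1 and option-2 code, and "liver-highlight" is a made-up asset name):

import XCTest
import UIKit

class HighlightPerformanceTests: XCTestCase {
    // Option 1: render the body part with UIBezierPath.
    func testBezierPathPerformance() {
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
        measure {
            _ = renderer.image { _ in
                let path = UIBezierPath(ovalIn: CGRect(x: 100, y: 50, width: 100, height: 200))
                UIColor.red.withAlphaComponent(0.4).setFill()
                path.fill()
            }
        }
    }

    // Option 2: load the pre-cropped overlay image.
    func testOverlayImagePerformance() {
        measure {
            _ = UIImage(named: "liver-highlight")
        }
    }
}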
I'm making a tile-based adventure game in iOS. Currently my level data is stored in a 100x100 array. I'm considering two approaches for displaying my level data. The easiest approach would be to make an SKSpriteNode for each tile. However, I'm wondering if an iOS device has enough memory for 10,000 nodes. If not I can always create and delete nodes from the level data as needed.
I know this is meant to work with Tiled, but the code in there might help you optimize what you are looking to do. I have done my best to optimize it for big maps like the one you are making. The big thing to look at is how you are creating textures; I know that has been a big performance killer in the past.
Swift
https://github.com/SpriteKitAlliance/SKATiledMap
Objective-C
https://github.com/SpriteKitAlliance/SKAToolKit
Both are designed to load in a JSON string too, so there is a chance you could still generate random maps without having to use the Tiled editor, as long as you match the expected format.
Also, you may want to consider looking at how culling works in the Objective-C version, as we found recently that removing nodes from their parent has really improved performance on iOS 9.
Hopefully you find some of that helpful and if you have any questions feel free to email me.
Edit
Another option would be to look at object pooling. The core concept is to create only the sprites you need to display and, when you are done with them, store them in a collection of sorts. When you need a new sprite you ask the collection for one, and if it doesn't have one you create a new one.
For example, you need a grass tile, so you ask the pool for one; nothing is waiting to be reused, so it creates one. You might do this to fill a 9 x 7 grid covering your screen. As you move, grass that scrolls off screen gets tossed into the collection, to be used again when a new row comes in and needs grass. This works really well if all you are doing is displaying tiles. It's not so great if tiles have dynamic properties that need to be updated and are unique in nature.
Here is a great link even if it is for Unity =)
https://unity3d.com/learn/tutorials/modules/beginner/live-training-archive/object-pooling
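Translated to SpriteKit, a minimal pool might look like this (a sketch; the class and method names are mine, not from SKAToolKit):

import SpriteKit

final class TilePool {
    // One reuse queue per texture name (grass, water, ...).
    private var pool: [String: [SKSpriteNode]] = [:]

    // Hand back a pooled sprite if one is waiting, otherwise create one.
    func dequeueTile(named textureName: String) -> SKSpriteNode {
        if let tile = pool[textureName]?.popLast() {
            return tile
        }
        return SKSpriteNode(texture: SKTexture(imageNamed: textureName))
    }

    // Call when a tile scrolls off screen so it can be reused later.
    func recycle(_ tile: SKSpriteNode, named textureName: String) {
        tile.removeFromParent()
        pool[textureName, default: []].append(tile)
    }
}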
I'm using Path's FastImageCache library (https://github.com/path/FastImageCache) in order to have pre-resized images cached ready for use in UIImageViews.
To use FIC, you define FICImageFormats, which include a bunch of data, including the image size. For best performance, this image size should be identical to the size of the UIImageView the image will be used in.
This gives rise to a chicken-and-egg sort of problem: should the code that sets up FIC (in the AppDelegate or wherever you do the rest of your basic init work for your app, presumably?) know the sizes of the UIImageViews in the rest of your app? This has the obvious downside of very tight coupling of your app's startup code with UI implementation details.
An alternative is that you could have your UIs implement a protocol that defines a method such as
+(NSArray *)imageFormats;
which would return an array of FICImageFormat objects representing all image formats that would be required by that bit of UI. Then the startup code would only have to know which classes implement that protocol in order to get a full list of image formats required for the app.
This second approach has the downside of potential duplicate FICImageFormats. It would be non-optimal to have two (or more!) image formats for the same image format family that also have the same dimensions. Then you'd be caching the exact same data more than once.
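To illustrate the second approach with de-duplication, here is a sketch in Swift; the protocol name is invented, and I'm assuming FICImageFormat's family and imageSize properties as declared in the FastImageCache headers:

import UIKit
import FastImageCache

protocol ImageFormatProviding {
    // Each piece of UI returns the formats it needs.
    static var imageFormats: [FICImageFormat] { get }
}

func collectImageFormats(from providers: [ImageFormatProviding.Type]) -> [FICImageFormat] {
    var seen = Set<String>()
    var result: [FICImageFormat] = []
    for provider in providers {
        for format in provider.imageFormats {
            // Two formats in the same family with the same dimensions
            // would cache identical data, so keep only the first.
            let key = "\(format.family):\(format.imageSize.width)x\(format.imageSize.height)"
            if seen.insert(key).inserted {
                result.append(format)
            }
        }
    }
    return result
}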
Any other approaches you can think of? Best practices? All thoughts are welcome!
I think what could help here is to start at the design level: agree with the designers on a fixed set of image formats for each image family, and then you have a central list of all your image formats and families in your style guide anyway.
It then doesn't matter so much whether your code is in the AppDelegate or in the view layer, because neither the view layer nor the startup code really determines which formats exist; your style guide does. It's not an implementation detail anymore, but part of an external specification.
I am using the most excellent PHP library ePub to create digital books on the fly from HTML stored in my database.
As these are part of a collection, I am including a cover image for every book. Everything works fine in the code, but depending on the device/software interpreting the ePub, the image may get cut off. I have seen 600x800 pixels as a recommended size, but it still gets cut off (for example, in Aldiko on Android). Is there a standard size recommended in the documentation?
Honestly, I would love a good and readable recommendation for documentation of the ePub format.
So, it seems that Aldiko has the problem, and not the other e-Readers I have tested (Calibre, Overdrive).
After trying various ratios, I found that Aldiko only respects the height:100% style I have called out, and only in the height direction. It doesn't scale the image; it just sets the height at 100% of the screen width. I'm going to chalk this up as a bug in Aldiko and keep the recommended 600x800 ratio for maximum resolution.
Another interesting thing I discovered: the Aldiko reader also didn't recover as well from non-standard HTML. On one of the database entries, a <style> tag inside the <body> disappeared, but the style text did not. This was not the case with the other e-readers.
The best general advice I found on the internet is Preparing Images for Ebooks Project (PIFEP).
I have a set of images and they overlap one another.
So if I were to put them together, I would have a single image larger than any one of them, and it would show me the full picture. For instance, it could be used on a set of images that together make a full panorama.
I can do this by hand, but that's quite a bit of work.
Is there any freely available program out there that does this?
Especially one that is free of charge, has relaxed license restrictions, and works on Unix?
I have tried googling to no avail, and I have checked through ImageMagick's functionality without finding it there.
The main work in functionality like this would be to first figure out how the images need to be put together to make a single coherent image.