UIBezierPath vs. putting a PNG in image assets - iOS

I'm helping create an app that will use images that can be resized (think AutoCAD). Fortunately, I have PaintCode and Illustrator, so it's very easy for me to convert an SVG file into code should I want to.
I converted one image into code and it came out to around 10,000 lines. For speed purposes, is it better to have just a frame with a UIImage inside of it, or to use the 10,000 lines of bezier-path code?

I agree with Sami that benchmarking is the best way to answer the question.
In general, bitmaps tend to be faster but take more storage space. Vector graphics tend to be smaller and resolution-independent, but get slower and slower as complexity goes up. (Bitmap performance, by contrast, is all but independent of image complexity. I say "all but" because some compression formats like JPEG do more work on complex images.)
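If you want numbers for your actual artwork rather than rules of thumb, a playground-style sketch along these lines (UIKit; the single oval is a stand-in for your 10,000 lines of generated drawing code) compares re-rasterizing a path on every draw against blitting a cached bitmap:

```swift
import UIKit

// Time re-rasterizing a path on every draw vs. blitting a cached bitmap.
func benchmark(label: String, iterations: Int = 100, _ block: () -> Void) {
    let start = CFAbsoluteTimeGetCurrent()
    for _ in 0..<iterations { block() }
    let average = (CFAbsoluteTimeGetCurrent() - start) / Double(iterations)
    print("\(label): \(average)s per draw")
}

let size = CGSize(width: 512, height: 512)
let renderer = UIGraphicsImageRenderer(size: size)

// Vector: the path is rasterized from scratch on every iteration.
benchmark(label: "bezier") {
    _ = renderer.image { _ in
        UIColor.red.setFill()
        UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
    }
}

// Bitmap: rasterize once up front, then just draw the cached image.
let cached = renderer.image { _ in
    UIBezierPath(ovalIn: CGRect(origin: .zero, size: size)).fill()
}
benchmark(label: "bitmap") {
    _ = renderer.image { _ in cached.draw(at: .zero) }
}
```

Swap your PaintCode output in for the oval; the expectation from the answer above is that the bezier case slows down as path complexity grows while the bitmap case stays flat.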

Related

Image processing technique for image segmentation

I'm trying to create a model that segments various parts of an aerial image.
I'm using a dataset found in kaggle: https://www.kaggle.com/datasets/bulentsiyah/semantic-drone-dataset
My question is about the right way to treat images for semantic segmentation.
In this case, is it better to simply resize the images (e.g. 6000x4000 down to 256x256 pixels), or to resize them less aggressively and then create patches (e.g. 6000x4000 down to 1024x1024 pixels, then 256x256 patches)?
I think that resizing an image too much may cause a loss of information, but at the same time patching may not guarantee a full view of the image.
I also found a notebook that got 96% accuracy just by resizing, so I'm not sure how to proceed:
https://www.kaggle.com/code/yesa911/aerial-semantic-segmentation-96-acc/notebook
I think there is no single correct answer to this. Depending on the number and size of the areas you want to segment, it seems unlikely you'll get a proper/accurate segmentation at those image sizes. However, if there are only large, easily detectable areas in the image, I would definitely go for the approach without patches, since the patch approach is considerably more complex, with more variables to consider (patch size, overlapping patches, edge treatment). It would save you a lot of implementation time on preprocessing and stitching afterwards.
TL;DR: I would start without patching and, if the result is sufficient, stop there. Otherwise, try the patching approach afterwards.
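For a sense of what the patch pipeline adds, here is a minimal sketch of the tiling half (Swift/Core Graphics purely for illustration; the idea is identical in NumPy). Dropping partial edge tiles, as done here, is exactly the kind of edge-treatment decision mentioned above:

```swift
import CoreGraphics

// Cut a large image into fixed-size square tiles; partial tiles at the
// right/bottom edges are dropped (overlap and padding are the obvious
// alternatives, and each adds more knobs to tune).
func patches(from image: CGImage, tileSize: Int) -> [CGImage] {
    var tiles: [CGImage] = []
    for y in stride(from: 0, through: image.height - tileSize, by: tileSize) {
        for x in stride(from: 0, through: image.width - tileSize, by: tileSize) {
            let rect = CGRect(x: x, y: y, width: tileSize, height: tileSize)
            if let tile = image.cropping(to: rect) {
                tiles.append(tile)
            }
        }
    }
    return tiles
}

// e.g. a 1024x1024 downscale with tileSize 256 yields 16 tiles; stitching
// the predicted masks back together is the inverse (and trickier) step.
```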

Direct2D – Drawing rectangles and circles to large images and saving to disk

My task is to draw a lot of simple geometric figures, like rectangles and circles, onto large black-and-white images (about 4000x6000 pixels) and save the result both to bitmap files and to a binary array representing each pixel as 1 if drawn or 0 otherwise. I was using GDI+ (System.Drawing). Since this took too long, however, I started having a look at Direct2D. I quickly learned how to draw to a Win32 window and thought I could use this to draw to a bitmap instead.
I learned how to load an image and display it here: https://msdn.microsoft.com/de-de/library/windows/desktop/ee719658(v=vs.85).aspx
But I could not find information on how to create a large ID2D1Bitmap and render to it.
How can I create a render target (must it be an ID2D1HwndRenderTarget?) associated with such a newly created (how?) big bitmap, draw rectangles and circles to it, and save it to a file afterwards?
Thank You very much for showing me the right direction,
Jürgen
If I were to do it, I would roll my own code instead of using GDI or DirectX calls. The structure of a binary bitmap is very simple (a packed array of bits), and once you have implemented a function to set a single pixel and one to draw a single run (a horizontal line segment), drawing rectangles and circles comes easily.
If you don't feel comfortable with bit packing, you can work with a byte array instead (one pixel per byte), and convert the whole image in the end.
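To make that concrete, here is a minimal sketch of the byte-per-pixel variant (Swift purely for brevity; the approach translates directly to C++, and the Mask type and its names are mine, not from any library):

```swift
// One byte per pixel, as suggested; pack to bits at the end if needed.
// Rectangles are stacked runs; a filled circle is one run per scanline.
struct Mask {
    let width: Int, height: Int
    var pixels: [UInt8]   // 1 = drawn, 0 = empty

    init(width: Int, height: Int) {
        self.width = width; self.height = height
        pixels = [UInt8](repeating: 0, count: width * height)
    }

    // Draw a horizontal run [x0, x1] on row y, clipped to the image.
    mutating func drawRun(y: Int, x0: Int, x1: Int) {
        guard y >= 0, y < height else { return }
        let lo = max(x0, 0), hi = min(x1, width - 1)
        guard lo <= hi else { return }
        for x in lo...hi { pixels[y * width + x] = 1 }
    }

    mutating func fillRect(x: Int, y: Int, w: Int, h: Int) {
        for row in y..<(y + h) { drawRun(y: row, x0: x, x1: x + w - 1) }
    }

    mutating func fillCircle(cx: Int, cy: Int, r: Int) {
        for dy in -r...r {
            let dx = Int(Double(r * r - dy * dy).squareRoot())
            drawRun(y: cy + dy, x0: cx - dx, x1: cx + dx)
        }
    }
}
```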
Writing the bitmap to a file is also not a big deal once you know about the binary file I/O operations (and you will find many ready-made functions on the Web).
Actually, once you know the layout of the bitmap file format, you don't need Windows at all.
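And the conversion to the 1-bit-per-pixel binary array the task asks for, reusing the Mask type from the sketch above (MSB-first bit order is an assumption; match whatever your consumer expects):

```swift
// Pack one-byte-per-pixel values into a bit array, 8 pixels per byte.
func packBits(_ mask: Mask) -> [UInt8] {
    let count = mask.width * mask.height
    var out = [UInt8](repeating: 0, count: (count + 7) / 8)
    for i in 0..<count where mask.pixels[i] != 0 {
        out[i / 8] |= 0x80 >> UInt8(i % 8)   // MSB-first within each byte
    }
    return out
}
```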

What consumes less memory, an actual image or a drawn image?

I am designing an app and I am creating some images with PaintCode.
Using that program I get the actual code for each image that I create, thus allowing me to choose to insert code or use an actual image. I was wondering what would consume less memory, the image code or an actual PNG?
I know an image's memory consumption is width x height x 4 bytes, but I have no idea whether an image that is generated by code is more memory efficient, less memory efficient, or breaks even.
This decision is particularly important given the different screen resolutions. It's a lot easier to create an image in code and expand it to whatever size I want than to go back to Photoshop every time.
This answer varies from other answers because I have the impression the graphics context is your most common destination -- that you are not always rendering to a discrete bitmap. So for the purposes of typical drawing:
I was wondering what would consume less memory, the image code or an actual PNG?
It's most likely that the code will result in far less memory consumption.
I have no idea whether an image that is generated by code is more memory efficient, less memory efficient or breaks even?
There are a lot of variables and there is no simple equation to tell you which is better for any given input. If it's simple enough to create with a WYSIWYG, it's likely much smaller as code.
If you need to create intermediate rasterizations or layers for a vector-based renderer, then memory will be about equal once you have added the first layer. Typically, one does not (and should not) render each view or layer (not CALayer, btw) to these intermediates, and instead renders directly into the graphics context. When all your views render directly into the graphics context, they write to the same destination.
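As a sketch of what "render directly into the graphics context" means in practice on iOS (the view and its contents are illustrative):

```swift
import UIKit

// The generated drawing code runs inside draw(_:), writing straight into
// the view's backing store; no intermediate bitmap is allocated.
class BadgeView: UIView {
    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()
        UIBezierPath(ovalIn: bounds.insetBy(dx: 2, dy: 2)).fill()
    }
}
```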
With code, you also open yourself to a few other variables which have the potential to add a lot of memory. The effects of font loading and caching can be quite high, and the code generator you use is not going to examine how you could achieve the best caching and sharing of these resources if you find you need to minimize memory consumption.
If your goal is to draw images, you should try to use UIImageView if you possibly can. It's generally the fastest and cheapest way to get an image to the screen, and it's reasonably flexible.
Someone explained it better here.
source
A vector image is almost always smaller in storage than its raster counterpart, except for photographs. In memory, though, they both have to be rasterized if you need to display them, so they will use more or less the same amount of memory.
However, I am highly skeptical of the usefulness of PaintCode; in general it's better to use a standard image format such as .svg or .eps instead of a nonstandard one such as a domain-specific language (DSL) within Objective-C.
It makes no difference at all, provided the final image size (in point dimensions) is the same as the display size (in point dimensions). What is ultimately displayed in your app is, say, a 100x100 bitmap. Those are the same number of bits no matter how they were obtained to start with.
The place where memory gets wasted is from holding on to an image that is much larger (in point dimensions) than it is actually being displayed in the interface.
If I load a 3MB PNG from my app bundle, scale it down to 100x100, draw it in the interface, and let go of the original 3MB PNG, the result is exactly the same amount of memory in the backing store as if I had drawn the content of a 100x100 graphics context from scratch myself using Core Graphics (which is what PaintCode helps you do).
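A sketch of that discipline in code (a hypothetical helper; the point is that only the small backing store survives once the original is released):

```swift
import UIKit

// Rasterize at display size; once the 3MB original is released, the
// 100x100 bitmap is all that remains, exactly as if it had been drawn
// from scratch with Core Graphics.
func displayImage(from original: UIImage, at size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        original.draw(in: CGRect(origin: .zero, size: size))
    }
}
```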

Understanding just what is an image

I suppose the simplest understanding of what a (bitmap) image is would be an array of pixels. After that, it gets pretty technical.
I've been trying to understand the sort of information that an image may provide and have come across a large collection of technical terms like "mipmap", "pitch", "stride", "linear", "depth", as well as other format-specific things.
These seem to pop up across a lot of different formats, so it'd probably be useful to understand what purpose they serve in an image. Looking at the DDS, BMP, PNG, TGA, and JPG documentation has only made it clear that an image is pretty confusing.
Though I searched around for some hours, I couldn't find any nice tutorial-like breakdown of just what an image is and all of its different properties.
The eventual goal would be to take proprietary image formats and convert them to more common formats like DDS or BMP. Or to make up some image format.
Any good readings?
Even your simplified explanation of an image doesn't encompass all the possibilities. For example, an image can be divided into planes, where the red pixel values are all together, followed by the green pixel values, followed by the blue pixel values. Such layouts are uncommon but still possible.
Assuming a simple layout of pixels you must still determine the pixel format. You might have a paletted image where some number of bits (1, 4, or 8) will be an index into a palette or color table which will define the RGB color of the pixel along with the transparency of the pixel (one index will typically be reserved as a transparent pixel). Otherwise the pixel will be 3 or 4 bytes depending on whether a transparency or alpha value is included. The order of the values (R,G,B) or (B,G,R) will depend on the format - Windows bitmaps are B,G,R while everything else will most likely be R,G,B.
The stride is the number of bytes between rows of the image. Windows bitmaps for example will take the width of the image times the number of bytes per pixel and round it up to the next multiple of 4 bytes.
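That rounding rule trips people up, so here it is as a one-liner (Swift, but the arithmetic is universal):

```swift
// Windows BMP stride: row bytes rounded up to a multiple of 4.
// e.g. a 10-pixel-wide, 24-bit image: 10 * 3 = 30 bytes, padded to 32.
func bmpStride(width: Int, bitsPerPixel: Int) -> Int {
    ((width * bitsPerPixel + 31) / 32) * 4
}
```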
I've never heard of DDS, and BMP is only common in the Windows world (and there's a lot more computing in the non-Windows world than you might think). Rather than worry about all of the technical details, why not just use an existing toolkit such as ImageMagick, which can already batch-convert dozens of formats to your one common format?
Unless you're doing specialized work where you need something fancy like HDR (which most image formats don't even support, so most of your sources wouldn't have it in the first place), you're probably best off picking something standard like PNG or JPG. They both have pluses and minuses. You might want to support both, depending on the image.

Increase image size without messing up clarity

Are there libraries, scripts, or other techniques to increase an image's height and width, or do you need a super-high-resolution image to begin with?
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest-quality resampling algorithm that Photoshop (and other graphics software) offers. Generally, it's recommended that you use bicubic smoothing when increasing image size and bicubic sharpening when reducing it. Sharpening can produce an over-sharpened result when you are enlarging, so you need to be careful.
As far as libraries or scripts go, it's difficult to recommend anything without knowing what language you intend to work in. But I can guarantee that an image-processing library including this algorithm already exists for any of the popular languages; I wouldn't advise reimplementing it yourself.
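As one concrete example (Core Graphics on Apple platforms; most imaging libraries expose an equivalent quality setting), a minimal high-quality upscale:

```swift
import UIKit

// Redraw into a larger context with high-quality (bicubic-class)
// interpolation; this smooths rather than invents detail.
func upscale(_ image: UIImage, to size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        ctx.cgContext.interpolationQuality = .high
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```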
Increasing height & width of an image means one of two things:
(i) You are increasing the physical size of the image (i.e. cm or inches) without touching its content.
(ii) You are trying to increase the image's pixel content (i.e. its resolution).
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). Good if you want to look at the image from far away (say, on a really large screen). If you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. This means you start from 10,000 pixels and end up with 40,000... what are you going to put in the 30,000 new pixels? Whatever your answer, you are going to end up with 30,000 invented pixels, and the image you get is going to be either fuzzier, or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighboring pixel values, which amounts to "fuzzier".
Cheers.
