What consumes less memory, an actual image or a drawn image? - ios

I am designing an app and I am creating some images with PaintCode.
Using that program I get the actual code for each image that I create, thus allowing me to choose to insert code or use an actual image. I was wondering what would consume less memory, the image code or an actual PNG?
I know an image's memory consumption is width x height x 4 bytes, but I have no idea whether an image that is generated by code is more memory efficient, less memory efficient, or breaks even.
This decision is particularly important given the different screen resolutions. It's a lot easier to create an image in code and expand it to whatever size I want, rather than go back to Photoshop every time.

This answer differs from the others because I have the impression the graphics context is your most common destination -- that you are not always rendering to a discrete bitmap. So, for the purposes of typical drawing:
I was wondering what would consume less memory, the image code or an actual PNG?
It's most likely that the code will result in far less memory consumption.
I have no idea whether an image that is generated by code is more memory efficient, less memory efficient or breaks even?
There are a lot of variables and there is no simple equation to tell you which is better for any given input. If it's simple enough to create with a WYSIWYG, it's likely much smaller as code.
If you need to create intermediate rasterizations or layers for a vector-based renderer, then memory will be about equal once you have added the first layer. Typically, though, one does not (and should not) render each view or layer (not CALayer, btw) into these intermediates; instead, each renders directly into the graphics context. When all your views render directly into the graphics context, they write to the same destination.
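To make that concrete, here is a minimal sketch (the class name and shape are hypothetical) of drawing directly into the graphics context, the way PaintCode-generated code typically runs: the path is rasterized straight into the view's shared backing store, with no intermediate bitmap.

import UIKit

// Drawing happens inside draw(_:), so the fill goes directly into the
// current graphics context rather than into a private bitmap.
class BadgeView: UIView {
    override func draw(_ rect: CGRect) {
        let path = UIBezierPath(roundedRect: rect.insetBy(dx: 2, dy: 2),
                                cornerRadius: 8)
        UIColor.systemBlue.setFill()
        path.fill()
    }
}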
With code, you also open yourself to a few other variables which have the potential to add a lot of memory. The effects of font loading and caching can be quite high, and the code generator you use is not going to examine how you could achieve the best caching and sharing of these resources if you find you need to minimize memory consumption.

If your goal is to draw images, you should try to use UIImageView if you possibly can. It's generally the fastest and cheapest way to get an image to the screen, and it's reasonably flexible.
Someone explained it better here (source).

A vector image is almost always smaller in storage than its raster counterpart, except for photographs. In memory, though, they both have to be rasterized if you need to display them, so they will use more or less the same amount of memory.
However, I am highly skeptical of the usefulness of PaintCode; in general it's better to use a standard image format such as .svg or .eps, instead of a non-standard format such as a domain-specific language (DSL) within Objective-C.

It makes no difference at all, provided the final image size (in point dimensions) is the same as the display size (in point dimensions). What is ultimately displayed in your app is, say, a 100x100 bitmap. Those are the same number of bits no matter how they were obtained to start with.
The place where memory gets wasted is from holding on to an image that is much larger (in point dimensions) than it is actually being displayed in the interface.
If I load a 3MB PNG from my app bundle, scale it down to 100x100, draw it in the interface, and let go of the original 3MB PNG, the result is exactly the same amount of memory in the backing store as if I had drawn the content of a 100x100 graphics context from scratch myself using Core Graphics (which is what PaintCode helps you do).
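A hedged sketch of both routes with UIGraphicsImageRenderer (the asset name is hypothetical); once the large PNG is released, each leaves behind the same 100x100-point backing store of roughly 100 x 100 x scale^2 x 4 bytes:

import UIKit

let size = CGSize(width: 100, height: 100)
let renderer = UIGraphicsImageRenderer(size: size)

// Route 1: downscale a large bundled PNG, then let the original go.
let big = UIImage(named: "HugeArtwork")!          // stand-in for the 3MB PNG
let scaled = renderer.image { _ in
    big.draw(in: CGRect(origin: .zero, size: size))
}

// Route 2: draw equivalent content from scratch with Core Graphics.
let drawn = renderer.image { ctx in
    UIColor.red.setFill()
    ctx.fill(CGRect(origin: .zero, size: size))
}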

Related

Image processing technique for image segmentation

I'm trying to create a model that segments various parts of an aerial image.
I'm using a dataset found on Kaggle: https://www.kaggle.com/datasets/bulentsiyah/semantic-drone-dataset
My question is about the right way to treat images for semantic segmentation.
In this case, is it better to simply resize the images (e.g. 6000x4000 to 256x256 pixels), or is it better to resize them less and then create patches (e.g. 6000x4000 to 1024x1024 pixels, then patches of 256x256 pixels)?
I think that resizing an image too much may cause a loss of information, but at the same time patching does not guarantee a full view of the image.
I also found a notebook that got 96% accuracy just by resizing, so I'm not sure how to proceed:
https://www.kaggle.com/code/yesa911/aerial-semantic-segmentation-96-acc/notebook
I think there is not one correct answer to this. Depending on the number and size of the areas you want to segment, it seems unlikely that you will get a proper/accurate segmentation with images of that size. However, if the image contains only large, easily detectable areas, I would definitely go for the approach without patches, since the patch approach is considerably more complex, with more variables to consider (size of patches, overlapping patches, edge treatment). It would save you a lot of implementation time for preprocessing and stitching afterwards.
TLDR: I would start without patching and, if the result is sufficient, stop there. Otherwise, try the patching approach afterwards.
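If you do try patching later, the bookkeeping starts with computing tile origins. A minimal sketch, deliberately ignoring overlap and edge padding (two of the variables mentioned above):

// Origins of non-overlapping patch tiles inside a larger image.
func patchOrigins(imageWidth: Int, imageHeight: Int,
                  patch: Int = 256) -> [(x: Int, y: Int)] {
    var origins: [(x: Int, y: Int)] = []
    for y in stride(from: 0, through: imageHeight - patch, by: patch) {
        for x in stride(from: 0, through: imageWidth - patch, by: patch) {
            origins.append((x: x, y: y))
        }
    }
    return origins
}

// A 1024x1024 resize yields 16 patches of 256x256.
print(patchOrigins(imageWidth: 1024, imageHeight: 1024).count)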

Problems understanding quadtrees

I am interested in the data structure "quadtree" and I want to use it for a project of mine. OK, let's make an example:
We have a 3D space where the camera position is locked, but I can rotate the camera. Whenever I rotate my camera to a certain point, a large 2D image (bigger than the frustum) is shown.
Loading the whole image isn't necessary when I can only see 1/4 of it. Does it make sense to use quadtrees here, to load only the parts of the image that are visible to me (when using OpenGL/WebGL)? If so, does each quadtree node have to contain its own vertex buffer and texture, or not?
A quadtree fits well when you need to switch between multiple precision levels on demand; geographical maps with zooming are a good example. If you have tiles with only one level of precision, it should be handier to control their loading/visibility without such a complicated structure: you could just load a low-precision image fast and then load high-precision images on demand.
Also, speaking of your case: 50 MB for a 4k image sounds strange. Compressed DDS/DXT1 or PVRTC textures should take less space (and ordinary JPG/PNG files much less). It is also helpful to determine the lowest applicable image precision in your case, so you don't waste space/traffic without reason.
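To make the trade-off concrete, here is a hedged sketch of a quadtree tile node with visibility-driven loading; textureID and the load callback stand in for whatever handles your GL/WebGL renderer actually uses:

import CoreGraphics

final class QuadNode {
    let region: CGRect            // part of the full image this node covers
    let level: Int                // 0 = whole image; deeper = finer tiles
    var children: [QuadNode] = []
    var textureID: UInt32?        // nil until the tile is actually loaded

    init(region: CGRect, level: Int) {
        self.region = region
        self.level = level
    }

    // Split this node into four equal quadrants.
    func subdivide() {
        let w = region.width / 2
        let h = region.height / 2
        for dy in 0..<2 {
            for dx in 0..<2 {
                let r = CGRect(x: region.minX + CGFloat(dx) * w,
                               y: region.minY + CGFloat(dy) * h,
                               width: w, height: h)
                children.append(QuadNode(region: r, level: level + 1))
            }
        }
    }

    // Visit only the leaf tiles that intersect the visible rectangle.
    func loadVisibleTiles(visible: CGRect, load: (QuadNode) -> Void) {
        guard region.intersects(visible) else { return }
        if children.isEmpty {
            load(self)
        } else {
            for child in children {
                child.loadVisibleTiles(visible: visible, load: load)
            }
        }
    }
}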

UIBezierPath vs putting a PNG in image assets

I'm helping create an app that will use images that can be resized (think AutoCAD). Fortunately, I have PaintCode and Illustrator, so it's very easy for me to convert an SVG file into code should I want to.
I converted one image into code and it came out to around 10,000 lines of code. For speed purposes, is it better to have just a frame with a UIImage inside of it, or to use the 10,000 lines of code filled with Bézier paths?
I agree with Sami that benchmarking is the best way to answer the question.
In general bitmaps tend to be faster but take more storage space. Vector graphics tend to be smaller, and resolution-independent, but get slower and slower as complexity goes up. (Where bitmap performance is all but independent of image complexity. I say "all but" because some compression formats like JPEG do more work on complex images.)
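A rough benchmark sketch along those lines: render both variants offscreen many times and compare wall-clock time. The asset name is hypothetical, and complexShape() is a stand-in for the generated path code:

import UIKit

// Stand-in for the ~10,000 lines of generated drawing code: one path
// with many segments, purely to have something costly to rasterize.
func complexShape() -> UIBezierPath {
    let path = UIBezierPath()
    path.move(to: .zero)
    for i in 1...5_000 {
        path.addLine(to: CGPoint(x: (i * 7) % 512, y: (i * 13) % 512))
    }
    return path
}

// Render a variant offscreen 100 times and print the elapsed time.
func timeRender(_ label: String, _ body: (UIGraphicsImageRendererContext) -> Void) {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 512, height: 512))
    let start = CFAbsoluteTimeGetCurrent()
    for _ in 0..<100 { _ = renderer.image(actions: body) }
    print(label, CFAbsoluteTimeGetCurrent() - start)
}

timeRender("bitmap") { _ in
    UIImage(named: "Drawing")?.draw(at: .zero)   // hypothetical asset
}
timeRender("vector") { _ in
    UIColor.black.setStroke()
    complexShape().stroke()
}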

Convoluting a large filter in GPGPU

I wish to apply a certain 2D filter to 2D images; however, the filter size is huge. Image dimensions are about 2000x2000 and the filter size is about 500x500.
No, I cannot do this in the frequency domain, so FFT is a no-go. I'm aware of standard GPU convolution and the use of shared memory for coalescing memory access; however, shared memory doesn't seem feasible, since the space needed by the filter is large and it would therefore need to be divided, which might prove very complex to write.
Any ideas?
I think you can easily manage filtering for images of that size. You can transfer hundreds of megabytes to video memory, so data of this size is going to work well.
You can use byte matrices to transfer the image data, and then run your filter over them.
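Whichever GPU route you take, a plain CPU reference is useful for validating results on small inputs. A hedged sketch of direct spatial-domain convolution with zero padding; at 2000x2000 with a 500x500 kernel this is roughly 10^12 multiply-adds, which is exactly why the real thing belongs on the GPU:

// Direct 2D convolution (correlation-style indexing), zero-padded edges.
func convolve(image: [[Float]], kernel: [[Float]]) -> [[Float]] {
    let h = image.count, w = image[0].count
    let kh = kernel.count, kw = kernel[0].count
    var out = [[Float]](repeating: [Float](repeating: 0, count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            var acc: Float = 0
            for ky in 0..<kh {
                for kx in 0..<kw {
                    let sy = y + ky - kh / 2
                    let sx = x + kx - kw / 2
                    // Treat samples outside the image as zero.
                    if sy >= 0, sy < h, sx >= 0, sx < w {
                        acc += image[sy][sx] * kernel[ky][kx]
                    }
                }
            }
            out[y][x] = acc
        }
    }
    return out
}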

Increase image size, without messing up clarity

Are there libraries, scripts, or any techniques to increase image size in height and width, or do you need to start with a super-high-resolution image?
Bicubic interpolation is pretty much the best you're going to get when it comes to increasing image size while maintaining as much of the original detail as possible. It's not yet possible to work the actual magic that your question would require.
The Wikipedia link above is a pretty solid reference, but there was a question asked about how it works here on Stack Overflow: How does bicubic interpolation work?
This is the highest quality resampling algorithm that Photoshop (and other graphic software) offers. Generally, it's recommended that you use bicubic smoothing when you're increasing image size, and bicubic sharpening when you're reducing image size. Sharpening can produce an over-sharpened image when you are enlarging an image, so you need to be careful.
As far as libraries or scripts, it's difficult to recommend anything without knowing what language you're intending to do this in. But I can guarantee that there's an image processing library including this algorithm already around for any of the popular languages—I wouldn't advise reimplementing it yourself.
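As one example, on iOS you can ask Core Graphics for its best resampling filter when redrawing into a larger context. Apple does not document the exact algorithm behind .high, so treating it as bicubic-quality is an assumption:

import UIKit

// Redraw an image into a larger context with the highest-quality
// interpolation Core Graphics offers.
func upscale(_ image: UIImage, to size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        ctx.cgContext.interpolationQuality = .high
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}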
Increasing the height & width of an image means one of two things:
i) You are increasing the physical size of the image (i.e. cm or inches) without touching its content.
ii) You are trying to increase the image's pixel content (i.e. its resolution).
So:
(i) has to do with rendering. As the image's physical size goes up, you are drawing larger pixels (the DPI goes down). Good if you want to look at the image from far away (say, on a really large screen); if you look at it from up close, you are going to see mostly large dots.
(ii) is just plainly impossible. Say your image is 100x100 pixels and you want to make it 200x200. You start from 10,000 pixels and end up with 40,000: what are you going to put in the 30,000 new pixels? Whatever your answer, you are going to end up with 30,000 invented pixels, and the image you get is going to be either fuzzier or faker, and usually both. All the techniques that increase an image's size use some sort of average among neighboring pixel values, which amounts to "fuzzier".
Cheers.
