I have a UIImageView with an image in it. I want to draw a line on the image, and when I zoom the image the line should scale along with it.
Any suggestions? Thanks in advance.
You can draw the line with a CAShapeLayer added on top of the image. The shape is a vector built from a UIBezierPath, so it will rescale accordingly when you zoom the image.
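A minimal sketch of that idea (the image view, line endpoints, and colors here are placeholder values for illustration):

```swift
import UIKit

/// Draws a line as a vector shape layer on top of an image view.
/// Because the layer's contents come from a UIBezierPath, the stroke
/// stays crisp when the layer is scaled along with the image.
func addLine(to imageView: UIImageView) {
    let path = UIBezierPath()
    path.move(to: CGPoint(x: 20, y: 20))       // placeholder start point
    path.addLine(to: CGPoint(x: 200, y: 120))  // placeholder end point

    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.strokeColor = UIColor.red.cgColor
    shapeLayer.fillColor = nil
    shapeLayer.lineWidth = 2
    shapeLayer.frame = imageView.bounds
    imageView.layer.addSublayer(shapeLayer)
}
```

If the image view sits inside a UIScrollView, the sublayer zooms with it automatically; if you scale the image some other way, apply the same transform to the shape layer (e.g. `shapeLayer.setAffineTransform(CGAffineTransform(scaleX: zoom, y: zoom))`).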
I have an image that I'm using which is 960x1280. I have coordinates for a rectangle of x:391 y:772, x:574 y:772, x:574 y:870, x:391 y:870 which allows me to put the rectangle in the proper spot IF the image is still 960x1280. Of course, when I'm in Xcode, the screen size is 375x667.
When drawing the rectangle with the above coordinates, the rectangle is no longer visible. If I just use the screen scale of 3 (UIScreen.main.scale), it's not accurate either.
I create a UIImageView with constraints of 0 on all four sides and a content mode of aspect .fit or .fill. How do I determine the scale applied to the image so the rectangle is drawn in the right spot?
Thanks
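For reference, here is a sketch of the aspect-fit math: the view applies the smaller of the two axis scales and centers the result, so image-pixel coordinates map to view coordinates via a scale plus a letterbox offset. The function name and signature are my own:

```swift
import UIKit

/// Maps a point given in image-pixel coordinates (e.g. on a 960x1280 image)
/// into the coordinate space of an aspect-fit image view of the given size.
func convertToViewCoordinates(_ imagePoint: CGPoint,
                              imageSize: CGSize,
                              viewSize: CGSize) -> CGPoint {
    // Aspect fit uses the smaller of the two axis scales so the whole
    // image is visible.
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // The scaled image is centered, leaving equal letterbox offsets.
    let xOffset = (viewSize.width - imageSize.width * scale) / 2
    let yOffset = (viewSize.height - imageSize.height * scale) / 2
    return CGPoint(x: imagePoint.x * scale + xOffset,
                   y: imagePoint.y * scale + yOffset)
}

// Usage with the numbers from the question:
// convertToViewCoordinates(CGPoint(x: 391, y: 772),
//                          imageSize: CGSize(width: 960, height: 1280),
//                          viewSize: CGSize(width: 375, height: 667))
```

For aspect .fill, replace `min` with `max`; the offsets then become negative, cropping the overflow.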
I'm using UIColor's colorWithPatternImage function to set a tiled image on one of my views. The result is a grid of 1 pixel lines all over.
Fig: The clear color grid of lines is the issue.
My intention was to obtain a perfect background using the tiled image.
I first suspected that the image I was using could be faulty, but zooming it to 800% doesn't really show the presence of any transparent one-pixel border anywhere.
Here's the image (#2x version):
Any ideas what it could be related to?
Thanks,
p.
You are doing everything fine; the problem is that your pattern image has a 1-pixel line along the top and another along the left side with a transparent (alpha) color, so you only need to fix the pattern image, simple as that. I have been testing and this is the problem.
I hope this helps you.
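A sketch of trimming that stray transparent edge off the pattern image in code, in case you cannot edit the asset itself (the 1-pixel insets on the top and left match the problem described above; the function name is my own):

```swift
import UIKit

/// Crops the first pixel row and column off a pattern image before it
/// is used as a tiled background, removing a transparent border there.
func trimmedPattern(from image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    // The crop rect is in pixels: drop the first column and first row.
    let rect = CGRect(x: 1, y: 1,
                      width: cgImage.width - 1,
                      height: cgImage.height - 1)
    guard let cropped = cgImage.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cropped,
                   scale: image.scale,
                   orientation: image.imageOrientation)
}

// Usage:
// if let tile = trimmedPattern(from: patternImage) {
//     view.backgroundColor = UIColor(patternImage: tile)
// }
```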
I have a mask image and a normal image, and I want to apply the exact clipping-mask effect and set the resulting image on a UIImageView.
They did this in the PIP Camera app:
https://itunes.apple.com/in/app/hua-zhong-hua-xiang-ji-nice/id521922264?mt=8
I tried this:
How to Mask an UIImageView
exactly what it suggests, but with no luck.
Could anyone give me an idea of how to do that?
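One common Core Graphics approach is to build a CGImage mask from the mask picture and apply it to the photo. A sketch, assuming the mask is a grayscale image without an alpha channel (with image masks, black regions keep the image and white regions are clipped); the function name is my own:

```swift
import UIKit

/// Applies a grayscale mask image to a photo and returns the clipped result.
/// Black areas of the mask keep the photo; white areas become transparent.
func applyMask(to image: UIImage, mask: UIImage) -> UIImage? {
    guard let imageRef = image.cgImage,
          let maskRef = mask.cgImage,
          let dataProvider = maskRef.dataProvider,
          // Rebuild the mask as a true CGImage mask (no alpha channel).
          let imageMask = CGImage(maskWidth: maskRef.width,
                                  height: maskRef.height,
                                  bitsPerComponent: maskRef.bitsPerComponent,
                                  bitsPerPixel: maskRef.bitsPerPixel,
                                  bytesPerRow: maskRef.bytesPerRow,
                                  provider: dataProvider,
                                  decode: nil,
                                  shouldInterpolate: false),
          let masked = imageRef.masking(imageMask)
    else { return nil }
    return UIImage(cgImage: masked)
}

// Usage:
// imageView.image = applyMask(to: photo, mask: maskImage)
```

If your mask uses transparency rather than black/white, a simpler alternative is to set a CALayer whose `contents` is the mask image as the image view's `layer.mask`.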
I am using an OpenLayers 3 map with a single static image displayed as an ImageLayer, like the sample below:
http://openlayers.org/en/v3.2.0/examples/static-image.html
On zooming in, the image gets blurred. Is there any way to remove the blurring and get a sharp, pixelated image?
You cannot achieve that with a static image unless it is way bigger than all the resolutions you want to use on the map.
If you're looking for deep zooming into images, you may want to use Zoomify to create a tile pyramid for your image. See the Zoomify example for how this will look in the browser: http://openlayers.org/en/v3.4.0/examples/zoomify.html.
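A config sketch of swapping the static image for a Zoomify tile source, loosely following the linked example (the tile URL and pixel dimensions are placeholders for your own tile set):

```javascript
var imgWidth = 9911;   // full image width in pixels (placeholder)
var imgHeight = 6100;  // full image height in pixels (placeholder)

var source = new ol.source.Zoomify({
  url: 'http://example.com/tiles/',  // placeholder URL of the tile pyramid
  size: [imgWidth, imgHeight]
});

var map = new ol.Map({
  target: 'map',
  layers: [new ol.layer.Tile({source: source})],
  view: new ol.View({
    // Zoomify tiles live in a pixel-based projection, not geographic coords.
    projection: new ol.proj.Projection({
      code: 'ZOOMIFY',
      units: 'pixels',
      extent: [0, 0, imgWidth, imgHeight]
    }),
    center: [imgWidth / 2, imgHeight / 2],
    zoom: 1
  })
});
```

The tile pyramid itself is generated offline by the Zoomify tool (or any compatible tiler) from your full-resolution image.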
I'm trying to figure out the best way to approach this. I want to take a UIImage, detect whether there are any shapes/blobs of a specific RGB color, find their frames, and crop each into its own image. I've seen a few posts recommending OpenCV, as well as other links similar to this - Link
Here are 2 screenshots of what I'm looking to do. In Example 1 there is a light blue rectangle with some text inside it. I need to detect the blue background and crop the image along the black lines, and the same for the red image below it. This just shows that it doesn't matter what's inside the color blob. Example 2 shows the actual images that will be produced once the 2 color blobs are found and cropped. All images will always be on a white background.
Example 1
Example 2
This question goes way beyond a simple answer. What you will need to do is access the raw pixel data of the image, find every pixel matching that specific color, take the upper, lower, left, and right bounds of the matches to build a frame, and then crop the image to that frame.
Access the color
Get Pixel color of UIImage
Crop the image
Cropping an UIImage
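Combining the two linked techniques, a sketch of the whole pipeline for the single-blob case (the function name, RGB tuple, and tolerance parameter are my own; a real solution would also need connected-component logic to separate multiple blobs):

```swift
import UIKit

/// Scans the raw RGBA pixels of an image, records the bounding box of
/// every pixel near the target color, and crops the image to that box.
func cropBlob(in image: UIImage,
              matching target: (r: UInt8, g: UInt8, b: UInt8),
              tolerance: Int = 20) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height

    // Render into a known RGBA byte layout so indexing is predictable.
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(data: &pixels,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Track the extremes of all matching pixels.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            if abs(Int(pixels[i])     - Int(target.r)) <= tolerance,
               abs(Int(pixels[i + 1]) - Int(target.g)) <= tolerance,
               abs(Int(pixels[i + 2]) - Int(target.b)) <= tolerance {
                minX = min(minX, x); minY = min(minY, y)
                maxX = max(maxX, x); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }  // no match found

    let frame = CGRect(x: minX, y: minY,
                       width: maxX - minX + 1, height: maxY - minY + 1)
    return cgImage.cropping(to: frame).map { UIImage(cgImage: $0) }
}
```

The tolerance is there because anti-aliasing and JPEG compression mean the blob's pixels will rarely match the target color exactly.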