I am using an OpenLayers 3 map with a single static image displayed as an ImageLayer, like the sample below:
http://openlayers.org/en/v3.2.0/examples/static-image.html
On zooming in, the image gets blurred. Is there any way to remove the blurring and get a sharp, pixelated image?
You cannot achieve that with a static image unless the image is much larger than the highest resolution at which you want to display it on the map.
If you're looking for deep zooming into images, you may want to use Zoomify to create a tile pyramid for your image. See the Zoomify example for how this will look in the browser: http://openlayers.org/en/v3.4.0/examples/zoomify.html.
I am using a library called SnapSliderFilters to apply filters to an image. I need the filters to be applied exactly the way it is presented in the library's example.
When I implemented the code exactly as directed on an image captured with a custom camera, swiping sideways to apply a filter also rotates the image by 90 degrees and zooms it in.
In the other case, when I select an image from the gallery that has exactly the same dimensions as the screen, it works perfectly fine.
So I think the cause is the image size, but I don't know where to begin.
Here is how it displays:
I'm trying to figure out the best way to approach this. I'm looking to take a UIImage, detect whether there are any shapes/blobs of a specific RGB color, find their frames, and crop each one into its own image. I've seen a few posts of people recommending OpenCV, as well as other links similar to this - Link
Here are two screenshots of what I'm looking to do. In Example 1 there is a light blue rectangle with some text inside it. I need to detect the blue background and crop the image along the black lines. The same goes for the red image below it; this is just to show that it doesn't matter what's inside the color blob. Example 2 shows the actual images that will be cropped once the two color blobs are found. All images will always be on a white background.
Example 1
Example 2
This question goes way beyond a simple answer. What you will need to do is access the raw pixel data of that image, look for the color, and then build a frame to crop. I would find the upper, left, right, and lower bounds of all pixels matching that specific color, then make a frame out of them to crop the image.
Access the color
Get Pixel color of UIImage
Crop the image
Cropping an UIImage
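To make that concrete, here is a minimal Swift sketch of the bounding-box idea, assuming the blob color is known up front and the background is roughly white; `cropColorBlob`, `target`, and `tolerance` are illustrative names and values, not something from the linked answers:

```swift
import UIKit

// Rough sketch: find the bounding box of all pixels close to `target`
// and return that region cropped out of the image.
// `tolerance` is a per-channel 0-255 threshold you would tune.
func cropColorBlob(in image: UIImage,
                   target: (r: UInt8, g: UInt8, b: UInt8),
                   tolerance: Int = 20) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    // Redraw into a known RGBA8888 bitmap so the byte layout is predictable.
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let raw = context.data else { return nil }
    let pixels = raw.assumingMemoryBound(to: UInt8.self)

    // Find the upper, left, right, and lower bounds of all matching pixels.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            if abs(Int(pixels[i])     - Int(target.r)) <= tolerance,
               abs(Int(pixels[i + 1]) - Int(target.g)) <= tolerance,
               abs(Int(pixels[i + 2]) - Int(target.b)) <= tolerance {
                minX = min(minX, x); maxX = max(maxX, x)
                minY = min(minY, y); maxY = max(maxY, y)
            }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }  // no matching pixels

    // Crop the original image to that bounding frame.
    let frame = CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
    guard let cropped = cgImage.cropping(to: frame) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
```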
My requirement is to fill a specific color on a specific area of an image. The image should be one taken with the iPhone camera or picked from the photo gallery. For example, I could take a picture of myself in a blue shirt, and the app should let me change the color of the shirt to red.
Exactly the functionality of the "Paint bucket" tool in Photoshop.
I found a couple of approaches:
1) Using MASKS with prepared images
color selected part of image on touch
Fill color on specific portion of image?
Scanline Flood Fill Algorithm
https://github.com/Chintan-Dave/UIImageScanlineFloodfill
2) Using GLPaint (actually this is NOT the solution I am after)
My question is:
Is it possible to color a specific area of an image WITHOUT prepared masks, or by generating masks for the image at run time?
The scanline flood fill algorithm does that to a certain extent, but when it comes to real-world images (like selfies) it won't work correctly, will it?
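For reference, here is a rough Swift sketch of the flood-fill idea behind that approach (a plain stack-based 4-way fill rather than the optimized scanline variant in the linked repo), operating on a raw RGBA buffer; the `tolerance` threshold is exactly the part that becomes unreliable on noisy, real-world photos:

```swift
// Sketch of a tolerance-based flood fill on a raw RGBA pixel buffer.
// Assumes (startX, startY) is within bounds; `tolerance` is a placeholder value.
struct RGBA { var r: UInt8, g: UInt8, b: UInt8, a: UInt8 }

func floodFill(pixels: inout [RGBA], width: Int, height: Int,
               startX: Int, startY: Int, fill: RGBA, tolerance: Int) {
    let seed = pixels[startY * width + startX]

    // "Close enough" test: real photos have gradients and noise, so an exact
    // comparison almost never works; this per-channel threshold is the weak spot.
    func matches(_ p: RGBA) -> Bool {
        return abs(Int(p.r) - Int(seed.r)) <= tolerance &&
               abs(Int(p.g) - Int(seed.g)) <= tolerance &&
               abs(Int(p.b) - Int(seed.b)) <= tolerance
    }

    var visited = [Bool](repeating: false, count: width * height)
    var stack = [(startX, startY)]

    while let (x, y) = stack.popLast() {
        guard x >= 0, x < width, y >= 0, y < height else { continue }
        let i = y * width + x
        guard !visited[i], matches(pixels[i]) else { continue }
        visited[i] = true
        pixels[i] = fill
        // Spread to the four neighbours.
        stack.append((x + 1, y)); stack.append((x - 1, y))
        stack.append((x, y + 1)); stack.append((x, y - 1))
    }
}
```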
I have a UIImageView with an image in it. Now I want to draw a line on the image, and when I zoom the image, the line should scale along with it.
Please give me suggestions.
Thanks in advance
You can draw using a CAShapeLayer and put it onto the image. Since the shape is a vector defined by a UIBezierPath, it will re-scale accordingly when you zoom the image.
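A minimal sketch of that suggestion, assuming the image view is the view being zoomed (for example, the zooming view of a UIScrollView); the function name and points are illustrative:

```swift
import UIKit

// Draw a line as a vector CAShapeLayer on top of the image view.
// Because the layer lives in the image view's layer tree, zooming the
// image view transforms the line together with the image.
func addLine(to imageView: UIImageView, from start: CGPoint, to end: CGPoint) {
    let path = UIBezierPath()
    path.move(to: start)
    path.addLine(to: end)

    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.strokeColor = UIColor.red.cgColor
    shapeLayer.lineWidth = 2
    shapeLayer.fillColor = UIColor.clear.cgColor

    imageView.layer.addSublayer(shapeLayer)
}
```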
When I create a PNG in Photoshop at 30x30 px with a clear background and white lines, it looks like poor resolution. Link to a snapshot of the tab bar item: http://yadi.sk/d/9zrBjrjxBFyva
I want smooth lines in this picture. How can I get that?
It looks like you're using a non-retina image on a retina device. Create a 60x60 version and name it the same as the 30x30 version, with the suffix @2x (for example, clock@2x.png). Add this to your project.
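Once both files are in the bundle, loading by base name is enough; a minimal sketch (the "clock" name just follows the example above):

```swift
// With clock.png (30x30) and clock@2x.png (60x60) in the bundle,
// loading by base name lets iOS pick the retina version automatically.
let item = UITabBarItem(title: "Clock",
                        image: UIImage(named: "clock"),
                        tag: 0)
```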
Edit: The graphics you're producing in Photoshop are pixelated and, in the case of the @2x image, very blurry. Since the source images are poor quality, there's no way iOS can magically make them smooth.
Use the shape tool in Photoshop to create crisp, sharp shape paths. Make sure to align the shapes to pixel boundaries for extra crispness.