I would like to extract the letter from the blue images shown below.
Ideally, the result would be a cropped black letter image on a white background as shown below.
There may be OpenCV functions that would enable me to do this effectively.
I think this would be simpler if the original letters weren't white; then it would just be a matter of cropping the image to the extent of the letter and changing the font color to black.
Appreciate any help.
Image example 1
Image example 2
Result
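Roughly what I have in mind, sketched in Python with OpenCV (the file names and the threshold value are only guesses):

import cv2

# Load the blue image containing the white letter (path is a placeholder).
img = cv2.imread("blue_letter.png")

# Isolate the bright (white) letter pixels with a simple threshold.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, letter_mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Crop to the bounding box of the letter pixels.
points = cv2.findNonZero(letter_mask)
x, y, w, h = cv2.boundingRect(points)
cropped = letter_mask[y:y + h, x:x + w]

# The mask is white-on-black; invert it to get a black letter on white.
result = cv2.bitwise_not(cropped)
cv2.imwrite("letter_black_on_white.png", result)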
Related
I want to use a background image for a Kivy button. I have used the following builder string:
<MenuScreen>:
    BoxLayout:
        Button:
            background_normal: './Pictures/my_background.png'
The image is displayed; however, it is much darker than expected and partially inverted. What do I have to do to display my original image unchanged?
The resulting button
Original image
I think your problem is that the two colors in your image are grey and transparent (the areas that appear white are actually transparent). In a Button, the background_normal image is multiplied by the background_color, which is also a shade of grey by default. So what you end up with is the background grey showing through your transparent areas (the white areas above) and your grey appearing everywhere else, so there is not much contrast. See the Button Docs for more information.
You can modify your image (using something like Gimp) to replace the transparent areas with white, and perhaps change the grey colors in your image to black. The distortion of the image is due to Kivy fitting your image to the button size.
Here is your image edited as I described. You should be able to click on it and download it.
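If you would rather make that edit programmatically instead of in Gimp, a rough sketch in Python with Pillow could look like this (the file names and the grey threshold are assumptions):

import numpy as np
from PIL import Image

# Load the RGBA image (path is a placeholder).
img = Image.open("my_background.png").convert("RGBA")

# Composite the image onto an opaque white background so the
# transparent areas become white.
white = Image.new("RGBA", img.size, (255, 255, 255, 255))
flattened = Image.alpha_composite(white, img).convert("RGB")

# Optionally push the grey strokes toward black for more contrast.
arr = np.array(flattened)
grey_pixels = arr.mean(axis=2) < 200   # rough threshold, adjust as needed
arr[grey_pixels] = [0, 0, 0]

Image.fromarray(arr).save("my_background_flat.png")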
I'm trying to figure out the best way to approach this. I want to take a UIImage, detect whether there are any shapes/blobs of a specific RGB color, find their frames, and crop them into their own images. I've seen a few posts recommending OpenCV, as well as other links similar to this - Link
Here are two screenshots of what I'm looking to do. In Example 1 there is the light blue rectangle with some text inside it. I need to detect the blue background and crop the image along the black lines. The same goes for the red image below it; this is just to show that it doesn't matter what's inside the color blob. Example 2 shows the actual images that will be produced once the two color blobs are found and cropped. All images will always be on a white background.
Example 1
Example 2
This question goes way beyond a simple answer. What you will need to do is access the raw pixel data of the image, match the color, and then create a frame to crop. I would find the upper, left, right, and lower bounds of all matches of that specific color, then make a frame out of them to crop the image.
Access the color: Get Pixel color of UIImage
Crop the image: Cropping an UIImage
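The question is about UIImage on iOS, but the bounding-box idea itself can be sketched in Python with OpenCV (the target color, tolerance, and file names below are assumptions):

import cv2
import numpy as np

# Load the screenshot (path is a placeholder).
img = cv2.imread("screenshot.png")

# Build a mask of every pixel close to the target color. The light-blue
# BGR value and the tolerance here are only examples.
target = np.array([235, 206, 135], dtype=np.int16)   # BGR
tolerance = 20
diff = np.abs(img.astype(np.int16) - target)
mask = np.all(diff <= tolerance, axis=2).astype(np.uint8) * 255

# Find the upper/left/right/lower extent of the matches and crop.
points = cv2.findNonZero(mask)
if points is not None:
    x, y, w, h = cv2.boundingRect(points)
    blob = img[y:y + h, x:x + w]
    cv2.imwrite("blob.png", blob)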
My requirement is to fill a specific color on a specific area of an image, where the image is taken from the iPhone camera or the photo gallery. For example, I could take a picture of myself in a blue shirt, and the app should allow me to change the color of the shirt to red.
Exactly the functionality of the "Paint bucket" tool in Photoshop.
I found a couple of approaches:
1) Using MASKS with prepared images
color selected part of image on touch
Fill color on specific portion of image?
Scanline Flood Fill Algorithm
https://github.com/Chintan-Dave/UIImageScanlineFloodfill
2) Using GLPaint (actually, this is NOT the solution I am after)
My question is:
Is it possible to color a specific area of an image WITHOUT having masks, or by generating masks for the image at run time?
The Scanline Flood Fill Algorithm does that to a certain extent, but when it comes to real images (like selfie images), won't it fail to work correctly?
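To illustrate approach 1), here is a rough flood-fill sketch in Python with OpenCV (the seed point, tolerance, and file names are only guesses). The tolerance is exactly what makes real photos hard: lighting and noise break the region up, so the fill either stops too early or leaks out.

import cv2
import numpy as np

# Load a photo (path is a placeholder).
img = cv2.imread("selfie.jpg")

# Seed point inside the region to recolor (e.g. a tap on the shirt);
# the coordinates here are just an example.
seed = (120, 300)

# Flood-fill the connected region around the seed whose colors are
# within +/- 25 of the seed color, painting it red (BGR).
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
cv2.floodFill(img, mask, seed, newVal=(0, 0, 255),
              loDiff=(25, 25, 25), upDiff=(25, 25, 25))

cv2.imwrite("recolored.jpg", img)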
Quick question: I usually colorize my images, but they are only black-and-white image masks. I now have an image with a transparent background, a black part, and other parts that are already colorized.
Is there a way to programmatically colorize only the black part of the image? I tried several kCGBlendModes, but none worked. I wonder if I have to create a new image, use it as a mask for the colorized parts, and then programmatically colorize the rest of the image.
Do you have any idea how to achieve this simply? Thanks.
Take a look at this solution: it is a decent example of how to make a mask of any shape from a black-and-white image.
https://github.com/NickTitle/iOS7-Trans-Blur/blob/master/README.md
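If the goal is just to tint the near-black, opaque pixels while leaving the transparent background and the already colorized parts alone, a rough sketch in Python with Pillow/numpy could look like this (the tint color, threshold, and file names are placeholders):

import numpy as np
from PIL import Image

# Load the partially colorized RGBA image (path is a placeholder).
img = Image.open("partially_colored.png").convert("RGBA")
arr = np.array(img)

# Select pixels that are opaque and close to black, leaving the
# transparent background and the already colorized parts untouched.
rgb, alpha = arr[..., :3], arr[..., 3]
black_pixels = (alpha > 0) & np.all(rgb < 40, axis=2)

# Recolor just those pixels (the tint color is only an example).
arr[black_pixels, :3] = [255, 0, 0]

Image.fromarray(arr).save("colored.png")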
I am developing a book app. Book pages are images; one image is shown on the iPhone screen at a time. Each image contains lines of text (paragraphs). I want to highlight a whole paragraph when it is touched. Is it possible to highlight the paragraph, overlay the whole paragraph, or change the color of that paragraph? Any help, please?
As your book page is an image, the only way you can do it is either to change the image or to add a view on top with a color and an alpha value. With the latter you also overlay the letters, of course, so the effect will blur your text somewhat.
Besides that, recognising the paragraph won't be trivial, depending on your image quality. The same goes for changing the image: if your background is not a clean color, you have to introduce some threshold to decide which colors count as background and should be changed to the highlight color.
For how to change the color of specific pixels in a UIImage, see here:
iPhone : How to change color of particular pixel of a UIImage?
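To sketch the threshold idea in Python with Pillow/numpy (the paragraph rectangle, the background threshold, and the highlight color below are placeholders):

import numpy as np
from PIL import Image

# Load the page image (path is a placeholder).
page = Image.open("page.png").convert("RGB")
arr = np.array(page)

# Rectangle covering the touched paragraph (example coordinates).
top, bottom, left, right = 400, 560, 40, 600
region = arr[top:bottom, left:right]

# Treat near-white pixels as background and swap them for a highlight
# color, so the dark letters are left untouched.
background = np.all(region > 200, axis=2)
region[background] = [255, 240, 150]   # pale yellow highlight

Image.fromarray(arr).save("page_highlighted.png")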