I'm quite new to computer vision and am currently learning the Google Cloud Vision SDK with Go, and right now I have one problem.
So I have an image scanned using the DetectTexts() method. The result was great: all of the text is detected.
However, I don't actually need all of that text, only some of it. Below is the image I used as a sample; what I want to get are the two blocks highlighted in red.
[sample image]
Result:
WE-2
Sam WHO
Time
PM 1:57
SYS
mmHg
mmHg
DIA
mmHg
90
62
82
mmHg
PUL
/MIN
MR AVGA
SET
START
STOP
MEM
I do not know what the best approach is. What I have in mind right now are these two approaches:
split out the regions highlighted in red as new images, then perform an OCR scan on those new images
or, get all of the text, and then use some algorithm (NLP maybe?) to extract the highlighted text.
Can somebody please tell me the correct and best approach to solve this problem?
You mentioned that you are using Go, which unfortunately I don't have any experience with, but I have approached this problem in other languages like Python and C#. What I would recommend is to create an ROI, or Region of Interest. Basically, that means cropping the image to only the highlighted region that you want to detect text from. Like I said, I'm not entirely sure whether you can do that in Go, so you might have to do some raw pixel manipulation rather than just use a member function. I assumed that the position of the regions you want to detect text from remains the same. If you're open to it, you could create a simple Python script that generates an ROI and pipes the cropped image to Go.
import cv2
img = cv2.imread('inputImg.png')
# NumPy slicing is img[rows, cols], i.e. img[y1:y2, x1:x2].
# (r1, c1) is the top-left corner and (h, w) the size of the region;
# these are placeholder values you would adjust to your image.
r1, c1, h, w = 100, 50, 25, 25
output = img[r1:r1 + h, c1:c1 + w]
cv2.imwrite("path/to/output/outputimg.png", output)
I'm a newbie with machine learning, and I have only basic knowledge of neural networks.
I have a pretty clear task:
1. The video stream shows a static picture (a white area with yellow squares); in different videos the squares are located in different places.
2. At some moment the content of the video changes and starts to show the white area without some of the yellow squares.
3. I need to create a mechanism which can determine and somehow indicate those changes.
I'm going to use the TensorFlow framework for this task. Could anybody push me in the right direction? I'd also be very happy to see a list of steps to overcome the problem.
Thanks in advance.
If you know how the static picture looks beforehand, maybe some background subtraction would work? Basically you just subtract the static picture from every frame and check the content of the result. If the resulting picture is empty (zeros, or close to it up to some threshold), there is no change to detect. If the resulting picture contains a region that is non-zero (above or below a certain manually tuned threshold), you have detected a change in that region.
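A minimal sketch of that idea with plain OpenCV in Python (no TensorFlow involved; the file names and both thresholds are placeholders you would tune):
import cv2
import numpy as np

background = cv2.imread("static_picture.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise absolute difference between the known static picture and the frame.
diff = cv2.absdiff(background, frame)

# Threshold away small noise; 25 is an arbitrary value to tune.
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# If enough pixels changed, report a change and its rough location.
if cv2.countNonZero(mask) > 50:   # 50 is also a tunable value
    ys, xs = np.nonzero(mask)
    print("change detected in x=[%d, %d], y=[%d, %d]"
          % (xs.min(), xs.max(), ys.min(), ys.max()))
else:
    print("no change detected")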
I am in my final year of a BS in Computer Science. I have chosen a project in the image processing domain, but I really don't know where to start! Here is a rough draft of my project idea:
Project Description:
Often people are faced with the problem of deciding which colors to choose to paint their walls, doors and ceilings. They want to know how their rooms will look after applying a certain color. We want to design a mobile application that gives people the opportunity to preview their rooms/walls/ceilings, etc., in a certain color before applying it. Through our application the user can take photos of their rooms/walls/ceilings, etc., change their colors virtually, and preview them. This will give them a good estimate of the final look of their house.
Development will be in Java using the OpenCV library.
Can anyone provide some help?
For getting started with OpenCV on Android you can follow the tutorial here.
And going by your description above, I think you need to do the following:
Filter out the color of the room's wall or ceiling.
Replace it with your preview color.
But as your room's color is not unique, you may need to mark the color manually and segment it; the watershed algorithm might be helpful here.
One more thing: there is a chance of lighting variation, so you should use the HSV color space instead of RGB.
Finally, this is not the full solution, but it should give you some idea of how to start with your project.
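As a rough illustration of the HSV idea, here is a small OpenCV sketch (in Python only for brevity; the same calls exist in the Java bindings, and the file name and colour bounds are placeholders you would derive from the user's image):
import cv2
import numpy as np

img = cv2.imread("room.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Select pixels whose hue/saturation roughly match the wall colour
# (in practice you would derive these bounds from a pixel the user tapped).
lower = np.array([10, 30, 80])
upper = np.array([30, 120, 255])
mask = cv2.inRange(hsv, lower, upper)

# Swap in the preview colour's hue and saturation, but keep the original
# value channel so lighting and shadows are preserved.
preview_hue, preview_sat = 110, 160   # e.g. a blue-ish paint
hsv[..., 0] = np.where(mask > 0, preview_hue, hsv[..., 0])
hsv[..., 1] = np.where(mask > 0, preview_sat, hsv[..., 1])

cv2.imwrite("room_preview.jpg", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))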
ImageMagick is a famous image processing library; you may look at that one too. It can perform numerous operations on images.
Thanks
I am trying to subtract 2 images using the function cvAbsDiff(img1, img2, dest);
It is working, but sometimes when I bring my hand in front of my head or body, the hand is not clear and the background comes into the picture... the background image (head) overlays my foreground (hand).
It works correctly on plain surfaces, i.e. when the background is even, like a wall.
Please check out my image so that you can better understand my problem:
http://www.2shared.com/photo/hJghiq4b/bg_overlays_foreground.html
If you have any solution or hint, please help me.
There's nothing wrong with your code. Background subtraction is not a preferred way of doing motion detection or silhouette detection because it is not very robust. The problem arises because the background and the foreground are similar in colour in many regions, which on subtraction pushes the foreground to the back. You might try using:
- optical flow for motion detection
- if your task is just detecting a silhouette or a hand, try training a HOG classifier on it
In case you do not want to try a new approach, you may try playing with the threshold value (in your case, 30). When you subtract images of similar colour, the difference is less than 30, and since you later threshold at 30, it just blacks out. You may also try HSV or some other colour space.
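For example, a quick sketch of those two tweaks (using the Python bindings just to illustrate; the file names are placeholders):
import cv2

background = cv2.imread("background.png")
frame = cv2.imread("frame_with_hand.png")

# Compare in HSV instead of BGR, so skin-vs-background regions that differ
# mainly in hue/saturation still show up in the difference image.
bg_hsv = cv2.cvtColor(background, cv2.COLOR_BGR2HSV)
fr_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

diff = cv2.absdiff(bg_hsv, fr_hsv)
change = diff.max(axis=2)   # strongest per-pixel change across H, S, V

# Try several thresholds instead of hard-coding 30 and inspect the masks.
for t in (15, 30, 45, 60):
    _, mask = cv2.threshold(change, t, 255, cv2.THRESH_BINARY)
    cv2.imwrite("mask_thresh_%d.png" % t, mask)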
Putting in the relevant code would help, as would knowing what you're actually trying to achieve.
Which two images are you subtracting? I've tried subtracting subsequent images (so, images taken with a delay of a fraction of a second), and the background subtraction generally results in the edges of moving objects, for example the edges of a hand, and not the entire silhouette of a hand. I'm guessing you're taking the difference of the current frame and a static startup frame. It's possible that parts aren't different enough (skin+skin).
I've got some computer problems tonight; I'll test it out tomorrow (please at least put up the steps you actually carry out, though) and let you know.
I'm still not sure what your ultimate goal is, although I'm guessing you want to do some gesture-recognition (since you have a vector called "fingers").
As Manpreet said, your biggest problem is robustness, and that is from the subjects having similar color.
I reproduced your image by having my face in the static comparison image, then moving it. If I started with only background, it was already much more robust and in any case didn't display any "overlaying".
A quick fix is to make sure you have a clean, subject-free static image.
Otherwise, you'll want a dynamic comparison image; the simplest would be comparing frame_n with frame_n-1. This will generally give you just the moving edges though, so if you want the entire silhouette you can either:
1) Use a different segmenting algorithm (what I recommend. Background subtraction is fast and you can use it to determine a much smaller ROI in which to search, and then use a different algorithm for more robust segmentation.)
2) Try to make a compromise between the static and dynamic comparison image, for example an average of the past 10 frames or something like that. I don't know how well this works, but it would be quite simple to implement and worth a try :) (there's a rough sketch below).
Also, try CV_THRESH_OTSU instead of 30 for your threshold value and see if you like that better.
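A rough sketch of option 2 combined with the Otsu suggestion, in Python for brevity (the webcam index and the 0.1 averaging factor are assumptions you would tune):
import cv2

cap = cv2.VideoCapture(0)                 # webcam index is an assumption
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
avg = gray.astype("float32")              # accumulator for the running average

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Roughly "an average of the past ~10 frames": alpha = 0.1
    cv2.accumulateWeighted(gray, avg, 0.1)
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))

    # Let Otsu pick the threshold instead of hard-coding 30.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    cv2.imshow("moving regions", mask)
    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()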
Also, I noticed the output often flares (regions which haven't changed switch from black to white). Checking with the live stream, I'm quite certain it's because of the webcam autofocusing/adjusting white balance etc. If you're getting that too, turning off the autofocus etc. should help (which, by the way, isn't done through OpenCV but depends on the camera; possibly check this: How to programmatically disable the auto-focus of a webcam?).
I am looking for a library that would help scrape the information from the image below.
I need the current value, so it would have to recognise the values on the left and then estimate the value of the bottom line.
Any ideas if there is a library out there that could do something like this? Language isn't really important but I guess Python would be preferable.
Thanks
I don't know of any "out of the box" solution for this and I doubt one exists. If all you have is the image, then you'll need to do some image processing. A simple binarization method (like Otsu binarization) would make it easier to process.
The binarization makes it easier because now the pixels are either "on" or "off."
The locations for the lines can be found by searching for some number of pixels that are all on horizontally (5 on in a row while iterating on the x axis?).
Then a possible solution would be to pass the image to an OCR engine to get the numbers (Tesseract is an open-source OCR engine, written in C++ and hosted at Google). You'd still have to find out where the numbers are in the image by iterating through it.
Then, you'd have to find where the lines are relative to the keys on the left and do a little math and you can get your answer.
OpenCV is a beefy computer vision library that has things like the binarization. It is also a C++ library.
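Here is a rough sketch of those steps using OpenCV's Python bindings and the pytesseract wrapper (the file name and the 200-pixel run length are made-up values to tune, and a stricter version would check for contiguous runs as described above):
import cv2
import pytesseract   # requires the Tesseract engine to be installed

img = cv2.imread("chart.png", cv2.IMREAD_GRAYSCALE)

# Otsu binarization: every pixel becomes either "on" or "off".
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# Rows with many "on" pixels are candidate positions for the horizontal lines.
row_counts = (binary > 0).sum(axis=1)
line_rows = [y for y, count in enumerate(row_counts) if count > 200]
print("candidate line rows:", line_rows)

# Read the axis labels with Tesseract, then map line positions to values.
print(pytesseract.image_to_string(img))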
Hope that helps.
Greetings Overflowers,
Given that:
I have images of documents with text of mixed languages
I need this text to be highlightable (word by word) by end users
I have this text in plain digital format already
I will help my program to figure out where words are
I do not want my help to be tedious to me
I will also manually fix small inaccuracies after my program
What is the best easy help I can provide so that my program is able to draw rectangles around selected words? What algorithm would you use for this program? I tried OCR tools like OmniPage Pro, but they do not provide this functionality.
Regards
I implemented word bounding boxes and highlighted words in my application some years ago. You said "I have this text in plain digital format". One key component is to have the coordinates of characters or words in order to map them to the proper image areas. Like in a searchable PDF: when you select text, it is internally mapped to the image layer, and conversely, a selection on the image selects the matching text. But even from a PDF those coordinates cannot be exported, I believe. If no such coordinate information currently exists for your text, the easiest approach is probably to re-OCR the images with a high-quality engine that can produce coordinates as part of its output. If you were to use WiseTREND OCR Cloud 2.0, its XML output will produce all of that detailed metadata. If coordinate information exists, then all the major components are there and what remains is efficient UI design.
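If an open-source engine is acceptable, here is a minimal sketch of the re-OCR idea with Tesseract via the pytesseract wrapper (a different engine than the one mentioned above; the file name is a placeholder):
import cv2
import pytesseract
from pytesseract import Output

img = cv2.imread("document_page.png")

# image_to_data returns, for every recognised word, its text and bounding box.
data = pytesseract.image_to_data(img, output_type=Output.DICT)

for i, word in enumerate(data["text"]):
    if not word.strip():
        continue
    x, y, w, h = data["left"][i], data["top"][i], data["width"][i], data["height"][i]
    # Draw the highlight rectangle around the word.
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("document_page_boxes.png", img)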