Received msg_Image getting distorted while displaying in OpenCV - opencv

I have published an image from one node and I want to subscribe to that image in a second node. But after subscribing in the second node, when I try to store it in a cv::Mat image, it gets distorted.
The patchImage in the following code is distorted: it shows horizontal lines and four copies of the same image merged together.
An overview of my code follows.
first_node_publisher
{
    im.header.stamp = time;
    im.width = width;
    im.height = height;
    im.step = 3*width;    // bytes per row: width * 3 channels for rgb8
    im.encoding = "rgb8";
    // im.data must contain exactly height*step bytes
    image_pub.publish(im);
}
second_node_imageCallBack(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat patchImage;
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return; // cv_ptr is invalid after a conversion failure
    }
    patchImage = cv_ptr->image;
    imshow("Received Image", patchImage); // This patchImage is distorted
    cv::waitKey(1);                       // let HighGUI actually redraw the window
}

I believe the problem is with your encoding setting: are you sure the encoding is actually rgb8? That is unlikely, because OpenCV stores images in BGR channel order by default (a type such as CV_8UC3 only says 8-bit unsigned, 3 channels). It is also possible that your pixels are not stored as unsigned chars at all, but as shorts, floats, doubles, etc.
I always include assert(image.type() == CV_8UC3) in my publishers to make sure the encoding is correct.
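For reference, here is a minimal sketch of a publisher that builds the message through cv_bridge instead of filling in width, height, and step by hand, so the header can never disagree with the pixel buffer (publishFrame, image_pub, and frame are placeholder names, not taken from the question):
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>
#include <cassert>

// Publish a CV_8UC3 cv::Mat; cv_bridge derives width, height, and step
// from the Mat itself when building the sensor_msgs::Image.
void publishFrame(ros::Publisher& image_pub, const cv::Mat& frame, const ros::Time& stamp)
{
    assert(frame.type() == CV_8UC3); // catch a wrong depth or channel count early
    cv_bridge::CvImage out;
    out.header.stamp = stamp;
    out.encoding = sensor_msgs::image_encodings::BGR8; // OpenCV Mats are BGR by default
    out.image = frame;
    image_pub.publish(out.toImageMsg());
}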

Related

EmguCV 3.1 Capture.QueryFrame returning error intermittently

I am using EmguCV to create a capture from a video file stored on disk. I set the capture property for frame position and then perform a QueryFrame. On certain frames from the video, when I go to process the Mat further I get the error '{"OpenCV: Unrecognized or unsupported array type"}'. This doesn't happen on all frames of the video, but when I run it again for the same video it happens on the same frames. If I save the Mat to disk, the image looks perfectly fine and saves without error. Here is the code for loading and processing the image:
Capture cap = new Capture(movieLocation);
int framePos = 0;
while (reading)
{
    cap.SetCaptureProperty(CapProp.PosFrames, framePos);
    using (var frame = cap.QueryFrame())
    {
        if (frame != null)
        {
            try
            {
                var fm = Rotate(frame); // Works fine
                // Other processing including classifier.DetectMultiScale -- error occurs here
                frameMap.Add(framePos, r);
            }
            catch (Exception ex)
            {
                var s = ""; // Done just to see the error
            }
            framePos = framePos + 2;
        }
        else
        {
            reading = false;
        }
    }
}
Here is the line of code which throws the exception in the further processing:
var r = _classifier.DetectMultiScale(matIn, 1.1, 2, new Size(200, 200), new Size(375, 375));
As I said, this does not fail for every frame of the video.
I'm trying to solve this because sometimes it skips one frame, but at other times it skips whole blocks of frames, which causes me to miss important events in the video.
After a bit more work on it, I figured out that the Mat had an ROI set on it before going to the cascade classifier. In the instances where the Mat was failing, the ROI was set to 0 height and 0 width. This caused the issue.
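In plain OpenCV C++ terms the guard looks like the sketch below (the EmguCV fix is analogous: check the ROI's size before calling DetectMultiScale); detectSafely is a hypothetical helper, and the size arguments mirror the call above:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/core/core.hpp>
#include <vector>

// Skip detection when the region of interest has collapsed to zero size;
// handing the classifier an empty image is what triggers the
// "unrecognized or unsupported array type" error.
std::vector<cv::Rect> detectSafely(cv::CascadeClassifier& classifier, const cv::Mat& roi)
{
    std::vector<cv::Rect> hits;
    if (roi.empty()) // true when rows == 0 or cols == 0
        return hits;
    classifier.detectMultiScale(roi, hits, 1.1, 2, 0,
                                cv::Size(200, 200), cv::Size(375, 375));
    return hits;
}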

Subtracting background from image on iOS using OpenCV and BackgroundSubtractorMOG2 Objective-C++

I would like to subtract the background of an image using OpenCV on iOS with Objective-C++, and I am currently facing a termination due to a memory issue whenever I apply the BackgroundSubtractorMOG2 method to the input pictures.
I have researched the topic but haven't found any useful answers. I am applying the BackgroundSubtractorMOG2 method properly, passing the right arguments. I have also checked the Diagnostics in my Edit Scheme, and Zombie Objects are disabled, so that shouldn't cause the problem.
I am wondering whether the processing power of the phone could be the problem.
I am passing my function a background image and the same background image with a person in it. I would like to subtract the person from the image somehow.
My code is:
// Subtracts two images: the image taken from the background image
-(cv::Mat) subtractBackground:(cv::Mat)backPic fromImage:(cv::Mat)inImage {
    if (!inImage.data) {
        NSLog(@"Unable to open the picture image");
    } else if (!backPic.data) {
        NSLog(@"Unable to open background");
    }
    cv::Mat fgMaskMOG2;
    int history = 2;
    int distance_threshold = 16;
    bool shadow_detection = true;
    cv::Mat frame[2] = { backPic, inImage };
    cv::Ptr<cv::BackgroundSubtractorMOG2> createSubtractor = new cv::BackgroundSubtractorMOG2(history, distance_threshold, shadow_detection);
    for (int i = 0; i < 2; i++) {
        createSubtractor->operator()(frame[i], fgMaskMOG2);
    }
    return fgMaskMOG2;
}
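For what it's worth, if a newer OpenCV (3.x or later) is available, the factory function replaces the raw new and the operator() call; here is a minimal C++ sketch of the same routine under that assumption:
#include <opencv2/video/background_segm.hpp>
#include <opencv2/core/core.hpp>

// OpenCV 3+ equivalent: createBackgroundSubtractorMOG2 returns a managed
// cv::Ptr, and apply() replaces the deprecated operator() overload.
cv::Mat subtractBackground(const cv::Mat& backPic, const cv::Mat& inImage)
{
    cv::Mat fgMask;
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog2 =
        cv::createBackgroundSubtractorMOG2(2 /*history*/, 16 /*varThreshold*/, true /*detectShadows*/);
    mog2->apply(backPic, fgMask); // the first frame trains the background model
    mog2->apply(inImage, fgMask); // foreground = pixels that differ from it
    return fgMask;
}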

Strange File Save Behavior with ImageJ

I wrote an ImageJ script to color and merge a series of black-and-white images. The script saves both the unmerged colored images and the merged colored images. Everything works beautifully when I run it in debug mode and step through the script. When I run it for real, however, it occasionally saves a couple of the original black-and-white images instead of the resulting colored image. All the merged images appear to be fine.
Why would everything work fine in debug mode but fail during regular usage?
Below is my code:
// Choose the directory with the images
dir = getDirectory("Choose a Directory ");
// Get a list of everything in the directory
list = getFileList(dir);
// Determine if a composite directory exists. If not, create one.
if (File.exists(dir+"/composite") == 0) {
    File.makeDirectory(dir+"/composite");
}
// Determine if a colored directory exists. If not, create one.
if (File.exists(dir+"/colored") == 0) {
    File.makeDirectory(dir+"/colored");
}
// Close all open images to be safe
run("Close All");
// Set up options
setOption("display labels", true);
setBatchMode(false);
// count keeps track of whether you're on the first or second image of the tumor/vessel pair
count = 1;
// count2 keeps track of the number of pairs in the folder
count2 = 1;
// Default radio button state
RadioButtonDefault = "Vessel";
// Default SatLevel for contrast adjustment.
// The contrast adjustment does a histogram equalization. The sat level is the percentage of pixels that are allowed to saturate; a larger number means more pixels can saturate, making the image appear brighter.
satLevelDefault = 2.0;
// For each image in the list
for (i = 0; i < list.length; i++) {
    // Only process .tif files
    if (endsWith(list[i], ".tif")) {
        // Define the full path to the filename
        fn = list[i];
        path = dir + list[i];
        // Open the file
        open(path);
        // Create a dialog box but don't show it yet
        Dialog.create("Image Type");
        Dialog.addRadioButtonGroup("Type:", newArray("Vessel", "Tumor"), 1, 2, RadioButtonDefault);
        Dialog.addNumber("Image Brightness Adjustment", satLevelDefault, 2, 4, "(applied only to vessel images)");
        // If it's the first image of the pair ...
        if (count == 1) {
            // Show the dialog box
            Dialog.show();
            // Record the choice and flip the default radio button for the next time through
            if (Dialog.getRadioButton() == "Vessel") {
                imgType = "Vessel";
                RadioButtonDefault = "Tumor";
            } else {
                imgType = "Tumor";
                RadioButtonDefault = "Vessel";
            }
        // If it's the second image of the pair ...
        } else {
            // ... and the first image was a vessel, assume this image is a tumor
            if (imgType == "Vessel") {
                imgType = "Tumor";
            // otherwise assume this image is a vessel
            } else {
                imgType = "Vessel";
            }
        }
        // Check the result of the dialog box input.
        // If vessel, do this:
        if (imgType == "Vessel") {
            // Make the image red
            run("Red");
            // Adjust brightness
            run("Enhance Contrast...", "saturated=" + Dialog.getNumber() + " normalize");
            // Strip the .tif off the existing filename to use for the new filename
            fnNewVessel = replace(fn, "\\.tif", "");
            // Save as jpg
            saveAs("Jpeg", dir + "/colored/" + fnNewVessel + "_colored");
            // Get the title of the image for the merge
            vesselTitle = getTitle();
        // Otherwise do this:
        } else {
            // Make the image green
            run("Green");
            // Strip the .tif off the existing filename to use for the new filename
            fnNewTumor = replace(fn, "\\.tif", "");
            // Save as jpg
            saveAs("Jpeg", dir + "/colored/" + fnNewTumor + "_colored");
            // Get the title of the image for the merge
            tumorTitle = getTitle();
        }
        // If it's the second in the pair ...
        if (count == 2) {
            // Merge the two images
            run("Merge Channels...", "c1=" + vesselTitle + " c2=" + tumorTitle + " create");
            // Save as jpg
            saveAs("Jpeg", dir + "/composite/composite_" + count2);
            // Reset the within-pair counter
            count = count - 1;
            // Increment the pair counter
            count2 = count2 + 1;
        // Otherwise ...
        } else {
            // Increment the within-pair counter
            count += 1;
        }
    }
}
Not sure why I'd need to do this, but adding wait(100) immediately before saveAs() seems to do the trick.
The best practice in this scenario would be to poll IJ.macroRunning(). This method will return true if a macro is running. I would suggest using helper methods that can eventually time out, like:
/** Run with a default timeout of 30 seconds. */
public boolean waitForMacro() {
    return waitForMacro(30000);
}

/**
 * @return True if no macro was running. False if a macro runs for longer than
 * the specified timeOut value.
 */
public boolean waitForMacro(final long timeOut) {
    final long time = System.currentTimeMillis();
    while (IJ.macroRunning()) {
        // Give up once the timeout is exceeded.
        if (System.currentTimeMillis() - time > timeOut) return false;
    }
    return true;
}
Then call one of these helper methods whenever you use run(), open(), or newImage().
Another direction that may require more work but provides a more robust solution is using ImageJ2. Then you can run things with a ThreadService, which gives you back a Java Future, which can in turn guarantee execution completion.

Android MediaCodec: how to request a key frame when encoding

In Android 4.1, a key frame is often requested in a real-time encoding application. But how do you request one using the MediaCodec object? The current Android 4.2 SDK does not seem to support it.
You can produce a keyframe at an arbitrary point by specifying MediaCodec.BUFFER_FLAG_SYNC_FRAME when queuing input buffers:
MediaCodec codec = MediaCodec.createEncoderByType(type);
codec.configure(format, ...);
codec.start();
ByteBuffer[] inputBuffers = codec.getInputBuffers();
for (;;) {
    int inputBufferIndex = codec.dequeueInputBuffer(timeoutUs);
    if (inputBufferIndex >= 0) {
        // fill inputBuffers[inputBufferIndex] with valid data
        ...
        codec.queueInputBuffer(inputBufferIndex, 0, inputBuffers[inputBufferIndex].limit(), presentationTime,
                isKeyFrame ? MediaCodec.BUFFER_FLAG_SYNC_FRAME : 0);
    }
}
I stumbled upon the need to insert a keyframe at an arbitrary point when encoding video on a Galaxy Nexus.
On it, MediaCodec didn't automatically produce a keyframe at the start of the video.
MediaCodec has a method called setParameters which comes to the rescue.
In Kotlin you can do it like:
fun yieldKeyFrame(): Boolean {
    val param = Bundle()
    param.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0)
    try {
        videoEncoder.setParameters(param)
        return true
    } catch (e: IllegalStateException) {
        return false
    }
}
In the above snippet, videoEncoder is an instance of MediaCodec configured for encoding.
You can request a periodic key frame by setting the KEY_I_FRAME_INTERVAL key when configuring the encoder. In the example below I am requesting one every two seconds. I've omitted the other keys like frame rate or color format for the sake of clarity, but you will still want to include them.
encoder = MediaCodec.createByCodecName(codecInfo.getName());
MediaFormat inputFormat = MediaFormat.createVideoFormat(mimeType, width, height);
/* ..... set various format options here ..... */
inputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);
encoder.configure(inputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();
I suspect, however, that what you are really asking is how to request a random key frame while encoding, like at the start of a cut scene. Unfortunately I haven't seen an interface for that. It is possible that stopping and restarting the encoder would have the effect of creating a new key frame at the restart. When I have the opportunity to try that, I'll post the result here.
I hope this was helpful.
Thad Phetteplace - GLACI, Inc.

Using cvGet2D OpenCV function

I'm trying to get information from an image using the function cvGet2D in OpenCV.
I created an array of 10 IplImage pointers:
IplImage *imageArray[10];
and I'm saving 10 images from my webcam:
imageArray[numPicture] = cvQueryFrame(capture);
when I call the function:
info = cvGet2D(imageArray[0], 250, 100);
where info:
CvScalar info;
I got the error:
OpenCV Error: Bad argument (unrecognized or unsupported array type) in cvPtr2D, file /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp, line 1824
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp:1824: error: (-5) unrecognized or unsupported array type in function cvPtr2D
If I use the function cvLoadImage to initialize an IplImage pointer and then pass it to the cvGet2D function, the code works properly:
IplImage* imagen = cvLoadImage("test0.jpg");
info = cvGet2D(imagen, 250, 100);
However, I want to use the information already stored in my array.
Do you know how I can solve it?
Even though it's a very late response, someone might still be searching for a solution with cvGet2D, so here it is.
For cvGet2D, we need to pass the arguments in the order Y first and then X.
Example:
CvScalar s = cvGet2D(img, Y, X);
It's not mentioned anywhere in the documentation; you only find it inside core.h/core_c.h. Go to the declaration of cvGet2D(), and above the function prototype there are a few comments that explain this.
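The same row-first convention carries over to the C++ API; here is a minimal sketch (pixelAt is a hypothetical helper, assuming an 8-bit, 3-channel image):
#include <opencv2/core/core.hpp>

// Both cvGet2D and cv::Mat::at index row-first: (Y, X), not (X, Y).
cv::Vec3b pixelAt(const cv::Mat& img, int y, int x)
{
    return img.at<cv::Vec3b>(y, x); // row (Y) first, then column (X)
}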
Yes, the message is correct.
If you want to store a pixel value, you need to do something like this:
int value = 0;
value = ((uchar *)(img->imageData + i*img->widthStep))[j*img->nChannels +0];
cout << "pixel value for Blue Channel and (i,j) coordinates: " << value << endl;
Summarizing: to plot or store data you must create an integer value (a pixel value varies between 0 and 255). But if you only want to test a pixel value (in an if clause or something similar), you can access it directly without an intermediate integer.
I think that's a little bit weird when you start, but after you work with it two or three times you will manage without difficulty.
Sorry, but cvGet2D is not the best way to obtain a pixel value, even though it's the shortest and clearest way: in only one line of code, knowing the coordinates, you obtain the pixel value.
I suggest this option instead. When you first see this code you will think it is complicated, but it is more efficient.
int main()
{
    // Acquire the image (I'm reading it from a file)
    IplImage* img = cvLoadImage("image.bmp", 1);
    int i, j, k;
    // Variables to store image properties
    int height, width, step, channels;
    uchar *data;
    // Variables to store the number of white pixels and a flag
    int WhiteCount, bWhite;
    // Acquire image info
    height = img->height;
    width = img->width;
    step = img->widthStep;
    channels = img->nChannels;
    data = (uchar *)img->imageData;
    // Begin
    WhiteCount = 0;
    for (i = 0; i < height; i++)
    {
        for (j = 0; j < width; j++)
        {   // Go through each channel of the image (R, G, and B) to see if it's equal to 255
            bWhite = 0;
            for (k = 0; k < channels; k++)
            {   // This checks if the pixel's kth channel is 255 - it can be made faster
                if (data[i*step + j*channels + k] == 255) bWhite = 1;
                else
                {
                    bWhite = 0;
                    break;
                }
            }
            if (bWhite == 1) WhiteCount++;
        }
    }
    printf("Percentage: %f%%", 100.0*WhiteCount/(height*width));
    return 0;
}
This code counts the white pixels and gives you the percentage of white pixels in the image.
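For comparison, here is the same count written against the C++ cv::Mat API rather than the legacy IplImage one (a sketch assuming the same 8-bit, 3-channel input file):
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <cstdio>

int main()
{
    cv::Mat img = cv::imread("image.bmp", 1); // force a 3-channel load
    if (img.empty()) return 1;

    int whiteCount = 0;
    for (int i = 0; i < img.rows; i++) {
        const cv::Vec3b* row = img.ptr<cv::Vec3b>(i);
        for (int j = 0; j < img.cols; j++) {
            // A pixel is white only when all three channels are 255.
            if (row[j] == cv::Vec3b(255, 255, 255)) whiteCount++;
        }
    }
    printf("Percentage: %f%%\n", 100.0 * whiteCount / (img.rows * img.cols));
    return 0;
}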
