My platform: OpenCV 4.6.0 and Visual Studio 2022 Community.
The OpenCV docs describe WINDOW_FREERATIO as adjusting the image with no respect to its ratio, whereas WINDOW_KEEPRATIO keeps the image ratio.
But when I used images (6000×4000 or 3000×2000) that were larger than my screen resolution, neither window maintained the picture's aspect ratio.
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;

int main() {
    Mat srcImage = imread("C://Users//2806//Downloads//650357.jpg");
    namedWindow("原图", WINDOW_FREERATIO);   // "original image": resizable, ratio not preserved
    namedWindow("新图", WINDOW_KEEPRATIO);   // "new image": should keep the aspect ratio
    imshow("原图", srcImage);
    imshow("新图", srcImage);
    waitKey(0);
}
The resulting windows are too large for me to upload a screenshot, but you can clearly see that the picture has been stretched horizontally. I would like to know what is causing this.
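If the goal is just to view a large image without distortion, one workaround is to scale it down manually before calling imshow, so the aspect ratio is preserved regardless of how the window behaves. A minimal sketch, assuming a hypothetical maximum display size of 1600×900:
#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;

int main() {
    Mat srcImage = imread("C://Users//2806//Downloads//650357.jpg");

    // Hypothetical upper bound on the on-screen size; adjust to your monitor.
    const double maxW = 1600.0, maxH = 900.0;

    // One common scale factor for both axes keeps the aspect ratio intact.
    double scale = std::min({1.0, maxW / srcImage.cols, maxH / srcImage.rows});

    Mat view;
    resize(srcImage, view, Size(), scale, scale, INTER_AREA);

    imshow("scaled", view);
    waitKey(0);
}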
Related
I am a newbie to OpenCV, trying to analyze some code.
I know this line works fine and reduces the destination to half size, but I want to change it to some other size. How can I do that? Specifically, what about "CV_GAUSSIAN_5x5"?
cvPyrDown(frame, half_frame, CV_GAUSSIAN_5x5);
You cannot resize the image to any desired size by using pyrDown() because it will always resize your image by a factor of 2. Similar is the case with pyrUp().
If you want to resize your image to an arbitrary size, then you must use the resize() function (cvResize() in the old C API):
cvResize(const CvArr* src, CvArr* dst, int interpolation=CV_INTER_LINEAR )
The detailed documentation about it is given here.
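For illustration, a minimal sketch of resizing to an arbitrary target size with the C++ API (file names are placeholders):
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat frame = imread("input.jpg");   // placeholder input image
    Mat resized;

    // Resize to an explicit target size instead of the fixed factor of 2 of pyrDown().
    resize(frame, resized, Size(320, 240), 0, 0, INTER_LINEAR);

    imwrite("resized.jpg", resized);
    return 0;
}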
I capture some image data from a HD cam using OpenCV and this code (relevant snippets only):
data->capture = cvCaptureFromCAM(data->config.device); // open the device
...
IplImage *grabFrame = cvQueryFrame(data->capture);
Using this code I always get a grabFrame with a size of 640x480, while the camera supports 1920x1080. When I do something like this after initialisation:
cvSetCaptureProperty(data->capture,CV_CAP_PROP_FRAME_WIDTH,1920);
cvSetCaptureProperty(data->capture,CV_CAP_PROP_FRAME_HEIGHT,1080);
I get results in real HD resolution, but blurred images, meaning the frames are only upscaled from 640x480 to HD resolution. So: how can I force OpenCV to really use the full native resolution?
It does not seem to be a driver/HW problem since it happens on Windows and Linux - and when I try any other application on these systems I get the full, native resolution as expected.
Thanks!
1. From the (old) OpenCV C documentation:
The function cvSetCaptureProperty() sets the specified property of video capturing.
Currently the function supports only video files: CV_CAP_PROP_POS_MSEC, CV_CAP_PROP_POS_FRAMES, CV_CAP_PROP_POS_AVI_RATIO.
2. You can try to compile OpenCV with OpenNI, since the VideoCapture class makes use of it for setting capture device options.
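For reference, the same properties are exposed through the C++ VideoCapture interface; a minimal sketch (the device index 0 is just an example, and whether the request is honoured still depends on the backend/driver):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main() {
    VideoCapture cap(0);                     // example device index
    cap.set(CAP_PROP_FRAME_WIDTH, 1920);     // request full HD from the driver
    cap.set(CAP_PROP_FRAME_HEIGHT, 1080);

    Mat frame;
    cap >> frame;                            // grab one frame

    // Check what resolution was actually delivered.
    std::cout << frame.cols << " x " << frame.rows << std::endl;
    return 0;
}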
This helped me:
static const int MAX_WIDTH_RESOLUTION = 7680;
static const int MAX_HEIGHT_RESOLUTION = 4800;
CvCapture *pCam = cvCreateCameraCapture(...);
cvSetCaptureProperty(pCam, CV_CAP_PROP_FRAME_WIDTH, MAX_WIDTH_RESOLUTION);
cvSetCaptureProperty(pCam, CV_CAP_PROP_FRAME_HEIGHT, MAX_HEIGHT_RESOLUTION);
cvQueryFrame(pCam);
With two different webcams I've got 1900x1080 and 1280x960 frames.
I am trying to overlay an image over a video frame in Android (native part, .cpp). I get my video frames in yuv420sp format; I pass this char * buffer to OpenCV and create a Mat with it:
Mat frame(Size(width, height), CV_8UC1, buff);
In the same way, I have loaded my image into a char * buffer, converted it to yuv420sp, and created an image Mat similarly:
Mat imageMat(Size(width, height), CV_8UC1, imgBuff);
After this I copy the overlay image onto the video frame I received:
Rect roi(X, Y, U, V);
imageMat.copyTo(frame(roi));
I need this merged frame back in the same yuv420sp format, so after this I return:
return frame.data; // I am returning a char * from this API to my .cpp file
The image is overlaid properly, but the problem is that only the overlaid part is in grayscale; the rest of the frame is properly coloured!
I just fail to understand what's wrong!
Note:
I have already tried a few things:
1. I converted both Mats (frame and image) via:
cvtColor(frame, frame, CV_YUV420sp2BGR);
but I want the result back in yuv420sp format only, and there is no such flag to turn it back to yuv420sp.
2. I tried CV_8UC3; it didn't really make any difference. Anyway, my video frame comes back coloured, so why is the image in grayscale?
I'm really stuck here! Any suggestion would be really helpful!
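For context on why only the overlaid region loses its colour: a yuv420sp (NV21) buffer stores the full-resolution Y plane first and a half-height interleaved VU plane after it, so an ROI copy into a Mat that wraps only `height` rows touches nothing but the luma of the frame. Below is a minimal sketch of copying both planes; the buffer and size names are hypothetical, and x, y and the overlay dimensions are assumed to be even:
#include <opencv2/opencv.hpp>
using namespace cv;

// Copy an NV21 overlay (ovW x ovH) into an NV21 frame (frameW x frameH) at (x, y).
// Assumes x, y, ovW and ovH are even and the overlay fits inside the frame.
void overlayNV21(unsigned char* frameBuf, int frameW, int frameH,
                 unsigned char* ovBuf, int ovW, int ovH, int x, int y) {
    Mat frameFull(frameH * 3 / 2, frameW, CV_8UC1, frameBuf);
    Mat ovFull(ovH * 3 / 2, ovW, CV_8UC1, ovBuf);

    // Luma (Y) planes: the first frameH / ovH rows of each buffer.
    Mat frameY = frameFull.rowRange(0, frameH);
    Mat ovY = ovFull.rowRange(0, ovH);
    ovY.copyTo(frameY(Rect(x, y, ovW, ovH)));

    // Interleaved VU planes: the remaining rows, half the height,
    // same width in bytes (one V,U byte pair per 2x2 pixel block).
    Mat frameVU = frameFull.rowRange(frameH, frameH * 3 / 2);
    Mat ovVU = ovFull.rowRange(ovH, ovH * 3 / 2);
    ovVU.copyTo(frameVU(Rect(x, y / 2, ovW, ovH / 2)));
}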
I use OpenCV to read the image. Then I use Matlab to load the same image.
Then I display both images. The OpenCV-loaded image shows no picture, just a gray plane, while the Matlab-loaded image shows the picture I want.
The image pixel values are very small floating-point data like 0.0021. The code I used to load the image is shown below.
Mat image(IMAGE_ROW, IMAGE_COL, CV_64FC3);
Mat gray(IMAGE_ROW, IMAGE_COL, CV_64FC1);
image = imread(filespath, CV_LOAD_IMAGE_COLOR); // Read the file
cv::imshow("Image", image);
cvtColor(image, gray, CV_BGR2GRAY, 1);
cv::imshow("gray", gray);
Why can't I get the same image as the one loaded by Matlab?
Well, you can't do it with imwrite()/imread(), as stated before.
But you can save/load floating-point Mats using FileStorage, like this:
Mat fm = Mat::ones(3,3,CV_32FC3); // dummy data
FileStorage fs("my.yml", FileStorage::WRITE );
fs << "mat1" << fm; // choose any key here, just be consistent with the one below
and read back in:
Mat fm;
FileStorage fs("my.yml", FileStorage::READ );
fs["mat1"] >> fm;
You don't need to explicitly initialize a cv::Mat image before calling cv::imread; it will initialize the image properly according to the size and format of the image being read. So it doesn't matter that you've initialized your image with (IMAGE_ROW, IMAGE_COL, CV_64FC3).
OpenCV has no capability for writing/reading floating-point images. From the cv::imwrite manual:
Only 8-bit (or 16-bit in the case of PNG, JPEG 2000 and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function.
You can load float images with OpenCV: Mat img = imread(filename, CV_LOAD_IMAGE_ANYDEPTH);
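Note also that imshow interprets floating-point pixels as lying in [0, 1], so data whose values are all around 0.0021 will look like a nearly uniform dark plane even if it loads correctly. A small sketch for inspecting such data (the file name is a placeholder):
#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;

int main() {
    // Load the file keeping its original bit depth and channel count.
    Mat img = imread("float_image.tif", IMREAD_UNCHANGED);

    // Stretch the actual value range to [0, 1] just for viewing.
    double minV, maxV;
    minMaxLoc(img.reshape(1), &minV, &maxV);
    double range = std::max(maxV - minV, 1e-12);   // avoid division by zero

    Mat view;
    img.convertTo(view, CV_32F, 1.0 / range, -minV / range);

    imshow("normalized view", view);
    waitKey(0);
    return 0;
}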
I tried @berak's solution, but got a "Missing , between elements" exception. As stated in this bug report, you must release the FileStorage object after the writing operation; otherwise it will not properly finalise the file and will raise that exception. The corrected version of the snippet should be:
Mat fm = Mat::ones(3,3,CV_32FC3); // dummy data
FileStorage fs("my.yml", FileStorage::WRITE );
fs << "mat1" << fm; // choose any key here, just be consistent with the one below
fs.release(); //Release the file and finish the writing.
I would like to add a smaller image on top of a larger image (eventually for PiP on a video feed). I can do it by iterating through the relevant data property in the large image and adding the pixels from the small image, but is there a simpler and neater way? I'm using EMGU.
My idea was to define an ROI in the large image of the same size as the small image, set the large image equal to the small image, and then simply remove the ROI. In pseudo-code:
Large.ROI = rectangle defined by small image;
Large = Small;
Large.ROI = Rectangle.Empty;
However this doesn't work and the large image doesn't change. Any suggestions would be much appreciated.
Large image:
Small image:
Desired result:
If you are using the C++ API, then the following code snippet should work:
cv::Mat big;
cv::Mat small;
// Define roi area (it has small image dimensions).
cv::Rect roi = cv::Rect(50,50, small.cols, small.rows);
// Take a sub-view of the large image
cv::Mat subView = big(roi);
// Copy contents of the small image to large
small.copyTo(subView);
Take care not to go outside the dimensions of the big image.
I don't know if this will help; I haven't used EMGU. However, this is how I was able to do image-in-image with OpenCV.
void drawIntoArea(Mat &src, Mat &dst, int x, int y, int width, int height)
{
    Mat scaledSrc;
    // Destination image for the converted src image.
    Mat convertedSrc(src.rows, src.cols, CV_8UC3, Scalar(0, 0, 255));

    // Convert the src image into the correct destination image type.
    // Could also use MixChannels here.
    // Expand to support a range of image source types.
    if (src.type() != dst.type())
    {
        cvtColor(src, convertedSrc, CV_GRAY2RGB);
    } else {
        src.copyTo(convertedSrc);
    }

    // Resize the converted source image to the desired target size.
    resize(convertedSrc, scaledSrc, Size(width, height), 1, 1, INTER_AREA);

    // Create a region of interest in the destination image to copy the
    // newly sized and converted source image into.
    Mat ROI = dst(Rect(x, y, scaledSrc.cols, scaledSrc.rows));
    scaledSrc.copyTo(ROI);
}
I have a lot of experience with EMGU. As far as I am aware, the method you're employing is the only direct way of displaying the sub-image data within your large image. You would likely have to refresh your larger image, which would have the inherent effect of wiping your transferred data, and then copy the smaller image back over.
While a solution is possible, I think the method is flawed. The required processing time will affect the display rate of any image in the larger viewing frame.
An improved method would be to add another control. Effectively you have your video feed window showing your larger image in the background, and a smaller control on top of it displaying your smaller image. You could have as many of these smaller controls as you like. You will in effect be displaying two images or video feeds in two different controls (e.g. image boxes). As you already have the code to do so, all you have to do is ensure the order in which your controls are displayed.
I have assumed you are not sending the output to a console window. If you need any more help, please feel free to ask.
As for the comments: EMGU is written in C#, and while I appreciate your view on not calling EMGU OpenCV, why should it not be tagged as an OpenCV-oriented question? After all, EMGU is simply the OpenCV library with a C# wrapper. I have found many OpenCV resources useful for EMGU and vice versa.
Cheers
Chris
Based on @BloodAxe's answer, using EMGU 3.4 the following works:
// Define roi area (it has small image dimensions).
var ROI = new System.Drawing.Rectangle(100, 500, 200, 200);
// Take a sub-view of the large image
Mat subView = new Mat(bigImage, ROI);
// Copy contents of the small image to large
small.CopyTo(subView);