OpenCV Fullscreen Windows on Multiple Monitors

I have an OpenCV application that displays a fullscreen window, via:
cv::namedWindow("myWindow", CV_WINDOW_NORMAL);
cv::setWindowProperty("myWindow", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
It works fine, but when I have multiple monitors it always displays the fullscreen window on the first monitor. Is there any way to display it on the second monitor? I've tried setting X/Y and width/height, but they seem to be ignored once fullscreen is enabled.

Edit:
Sometimes pure OpenCV code cannot produce a fullscreen window on a dual-display setup. Here is a Qt way of doing it:
#include <QApplication>
#include <QDesktopWidget>
#include <QLabel>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QLabel myLabel;
    // define dimensions of the second display
    int width_second = 2560;
    int height_second = 1440;
    // define the OpenCV Mat (single-channel demo image)
    Mat img = Mat(Size(width_second, height_second), CV_8UC1);
    // move the widget to the second display
    QRect screenres = QApplication::desktop()->screenGeometry(1);
    myLabel.move(QPoint(screenres.x(), screenres.y()));
    // set full screen
    myLabel.showFullScreen();
    // wrap the Mat data in a QImage (no copy)
    QImage Qimg((unsigned char*)img.data, img.cols, img.rows, QImage::Format_Indexed8);
    // set the QLabel's pixmap
    myLabel.setPixmap(QPixmap::fromImage(Qimg));
    // show the image via Qt
    myLabel.show();
    return app.exec();
}
Don't forget to configure the .pro file as:
TEMPLATE = app
QT += widgets
TARGET = main
LIBS += -L/usr/local/lib -lopencv_core -lopencv_highgui
# Input
SOURCES += main.cpp
And in the terminal, compile your code with:
qmake
make
Original:
It is possible.
Here is working demo code that shows a full-screen image on a second display, hinted from How to display different windows in different monitors with OpenCV:
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main(int argc, char **argv)
{
    // define dimensions of the main display
    int width_first = 1920;
    int height_first = 1200;
    // define dimensions of the second display
    int width_second = 2560;
    int height_second = 1440;
    // move the window to the second display
    // (assuming the two displays are top aligned)
    namedWindow("My Window", CV_WINDOW_NORMAL);
    moveWindow("My Window", width_first, height_first);
    setWindowProperty("My Window", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
    // create the target image
    Mat img = Mat(Size(width_second, height_second), CV_8UC1);
    // show the image
    imshow("My Window", img);
    waitKey(0);
    return 0;
}

I've tried different ways to make it work, but unfortunately it seems that this is not possible using OpenCV. The only thing you can probably do is display one window on the main (primary) screen using your current code and handle the second window manually: set the window position, resize the image, and just use the imshow function to display it. Here is an example:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <string>

void showWindowAlmostFullscreen(cv::Mat img, std::string windowTitle, cv::Size screenSize, cv::Point screenZeroPoint)
{
    screenSize -= cv::Size(100, 100); // leave some room for the window title bar etc.
    double xScalingFactor = (double)screenSize.width / (double)img.size().width;
    double yScalingFactor = (double)screenSize.height / (double)img.size().height;
    double minFactor = std::min(xScalingFactor, yScalingFactor);
    cv::Mat temp;
    cv::resize(img, temp, cv::Size(), minFactor, minFactor);
    cv::moveWindow(windowTitle, screenZeroPoint.x, screenZeroPoint.y);
    cv::imshow(windowTitle, temp);
}

int main(int argc, char* argv[])
{
    cv::Mat img1 = cv::imread("D:\\temp\\test.png");
    cv::Mat img2;
    cv::bitwise_not(img1, img2);
    cv::namedWindow("img1", CV_WINDOW_AUTOSIZE);
    cv::setWindowProperty("img1", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
    cv::namedWindow("img2");
    while (cv::waitKey(1) != 'q')
    {
        cv::imshow("img1", img1);
        cv::setWindowProperty("img1", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
        showWindowAlmostFullscreen(img2, "img2", cv::Size(1366, 768), cv::Point(260, 1080));
    }
    return 0;
}
and the result:
You can get the screen size and the screen zero point (I don't know whether that's the correct name for it; it's just the point where the screen's (0,0) coordinate lies in the virtual desktop) from some other library or from the Windows control panel. The screen zero point is displayed when you start moving a screen:
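If Qt is already in your stack (as in the edit above), here is a minimal sketch of querying each screen's geometry, including its top-left corner (the zero point), using QGuiApplication::screens():
#include <QGuiApplication>
#include <QScreen>
#include <QDebug>
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    // print every attached screen with its top-left corner
    // (the "zero point" in the virtual desktop) and its resolution
    for (QScreen *screen : QGuiApplication::screens())
    {
        QRect g = screen->geometry();
        qDebug() << screen->name() << "origin:" << g.topLeft() << "size:" << g.size();
    }
    return 0;
}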

If you use Qt for writing your code, you can possibly utilize a Qt5 widget.
Here is a tutorial that shows how to display an OpenCV image in a Qt widget.
Once you have that working, you can then use something like this:
QScreen *screen = QGuiApplication::screens()[1]; // specify which screen to use
SecondDisplay *secondDisplay = new SecondDisplay(); // your widget
// ** Add your code to display the OpenCV image in the widget here **
secondDisplay->move(screen->geometry().x(), screen->geometry().y());
secondDisplay->resize(screen->geometry().width(), screen->geometry().height());
secondDisplay->showFullScreen();
(Code found in another SO answer)
I have not tried this myself, so I can't guarantee it will work; however, it seems likely to (if a little overkill).
Hope this helps.

Related

removing watermark using opencv

I have used OpenCV and C++ to remove a watermark from an image using the code below.
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <string>
#include <filesystem>
namespace fs = std::filesystem;
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    bool debugFlag = true;
    std::string path = "C:/test/";
    for (const auto& entry : fs::directory_iterator(path))
    {
        std::string fileName = entry.path().string();
        Mat original = imread(fileName, cv::IMREAD_COLOR);
        if (debugFlag) { imshow("original", original); }
        // invert so the watermark and text become bright on dark
        Mat inverted;
        bitwise_not(original, inverted);
        std::vector<Mat> channels;
        split(inverted, channels);
        for (int i = 0; i < 3; i++)
        {
            if (debugFlag) { imshow("chan" + std::to_string(i), channels[i]); }
        }
        // threshold the red channel to build a keep-mask for the text
        Mat bwImg;
        cv::threshold(channels[2], bwImg, 50, 255, cv::THRESH_BINARY);
        if (debugFlag) { imshow("thresh", bwImg); }
        // copy only the masked pixels, then invert back
        Mat outputImg;
        inverted.copyTo(outputImg, bwImg);
        bitwise_not(outputImg, outputImg);
        if (debugFlag) { imshow("output", outputImg); }
        if (debugFlag) { waitKey(0); }
        else { imwrite(fileName, outputImg); }
    }
    return 0;
}
Here is the result, from the original to the removed watermark.
In the previous image, as you can see, the original has an orange/red watermark. I created a mask that kills the watermark and then applied it to the original image (this pulls in the grey text boundary as well). Another trick that helped was to use the red channel, since the watermark is most saturated in red (~245). Note that this requires OpenCV and C++17.
But now I want to remove the watermark in a new image whose watermark color is similar to the text. The image is given below; as you can see, there is a watermark running sideways in Chinese, overlapping the text. How can I achieve this with my current code? Any help is appreciated.
Two ideas to try:
1: The watermark looks "lighter" than the primary text, so if you create a grayscale version of the image, you may be able to apply a threshold that keeps the primary text and drops the watermark. You may want to add one pass of dilation to that mask before applying it to the original image, as the grey threshold will likely clip your non-watermark characters a bit. (This may pull in too much noise from the watermark, though, so test it.)
2: Try OpenCV's morphological opening function. Your primary text seems thicker than the watermark, so you should be able to isolate it. Similarly, after you create the mask of the text to keep, dilate once and mask the original image; a sketch follows below.
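A minimal sketch of idea 2, assuming a grayscale input at a hypothetical path; the 3x3 kernel and the threshold value are guesses you would tune against your images:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // hypothetical input path; adjust to your files
    Mat original = imread("C:/test/watermarked.png", IMREAD_GRAYSCALE);
    if (original.empty()) { return 1; }
    // invert so that ink is bright on a dark background
    Mat inverted;
    bitwise_not(original, inverted);
    // opening = erosion then dilation: thin watermark strokes are
    // erased by the erosion and never grow back; thicker text survives
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    Mat opened;
    morphologyEx(inverted, opened, MORPH_OPEN, kernel);
    // build a keep-mask from the surviving strokes, then dilate once
    // so character edges are not clipped
    Mat mask;
    threshold(opened, mask, 50, 255, THRESH_BINARY);
    dilate(mask, mask, kernel);
    // keep only the masked pixels and invert back
    Mat output(inverted.size(), inverted.type(), Scalar(0));
    inverted.copyTo(output, mask);
    bitwise_not(output, output);
    imwrite("C:/test/result.png", output);
    return 0;
}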

how can i draw lines using mouseclick in opencv in a webcam frame?

I want to draw a line using mouse events in OpenCV in a webcam frame. I also want to erase it, just like an eraser in MS Paint. How can I do it? I don't have much of an idea about it, but I have this scrambled pseudocode in my head, which may be completely wrong, but I will write it down anyway. I would like to know how to implement it in C++.
So, I will have three or four mouse events:
event 1: mouse left-button-up -- this will be used to start the drawing
event 2: mouse move -- this will be used to move the mouse while drawing
event 3: mouse left-button-down -- this will be used to stop the drawing
event 4: mouse double-click -- this event I can use to erase the drawing
I will also have a draw function for a line, such as line(Mat image, Point(startx, starty), Point(endx, endy), Scalar(0,0,255), 1);
Now, I don't know how to implement this in code. I tried a lot but I got wrong results. I have a sincere request: please suggest code in the Mat format, not the IplImage format. Thanks.
Please find working code below, with inline explanatory comments, using Mat ;)
Let me know in case of any problem.
PS: In the main function, I have changed the default cam id to 1 for my code; you should keep whatever suits your PC, probably 0. Good luck.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

class WebCamPaint
{
public:
    int cam_id;
    std::string win_name;
    cv::VideoCapture webCam;
    cv::Size frame_size;
    cv::Mat cam_frame, drawing_canvas;
    cv::Point current_pointer, last_pointer;
    cv::Scalar erase_color, paint_color;
    int pointer_size;

    //! Constructor to initialize basic members to defaults
    WebCamPaint()
    {
        cam_id = 0;
        pointer_size = 5;
        win_name = std::string("CamView");
        current_pointer = last_pointer = cv::Point(0, 0);
        erase_color = cv::Scalar(0, 0, 0);
        paint_color = cv::Scalar(250, 10, 10);
    }

    //! init function is required to set some members in case the defaults need to change.
    bool init()
    {
        //! Open cam with the specified cam id
        webCam.open(cam_id);
        //! Check for a problem opening the video
        if (!webCam.isOpened())
        {
            return false;
        }
        //! Read a single frame and extract properties
        webCam >> cam_frame;
        //! Check for a problem reading the video
        if (cam_frame.empty())
        {
            return false;
        }
        frame_size = cam_frame.size();
        drawing_canvas = cv::Mat(frame_size, CV_8UC3);
        //! Create the Activity / Interface window
        cv::namedWindow(win_name);
        cv::imshow(win_name, cam_frame);
        //! Reset the drawing canvas
        drawing_canvas = erase_color;
        //! Initialization went successfully ;)
        return true;
    }

    //! This function deals with all processing, drawing and displaying, i.e. the main UI presented to the user
    void startActivity()
    {
        //! Keep going until the user presses "Esc" on the keyboard; wait 20ms for user input
        for (char user_input = cv::waitKey(20); user_input != 27; user_input = cv::waitKey(20))
        {
            webCam >> cam_frame; // Read a frame from the webcam
            cam_frame |= drawing_canvas; // Merge with the drawing canvas; try a different operation in case you want a different or solid effect
            cv::imshow(win_name, cam_frame); // Display the image to the user
            //! Change the pointer size using keyboard + / -, don't they sound fun ;)
            if (user_input == '+' && pointer_size < 25)
            {
                pointer_size++;
            }
            else if (user_input == '-' && pointer_size > 1)
            {
                pointer_size--;
            }
        }
    }

    //! Our function that should be registered in main as the OpenCV mouse event callback
    static void onMouseCallback(int event, int x, int y, int flags, void* userdata)
    {
        /* NOTE: As it is registered as the mouse callback, this function is called whenever anything happens with the mouse
         * event   : mouse button event
         * x, y    : position of the mouse pointer relative to the window
         * flags   : current status of the mouse buttons, i.e. whether the left / right / middle button is down
         * userdata: pointer to any data supplied at the time of setting the callback;
         *           we use it here to tell this static function which this / object pointer it should operate on
         */
        WebCamPaint *object = (WebCamPaint*)userdata;
        object->last_pointer = object->current_pointer;
        object->current_pointer = cv::Point(x, y);
        //! Draw a line on the drawing canvas if the left button is down
        if (event == cv::EVENT_LBUTTONDOWN || flags == cv::EVENT_FLAG_LBUTTON)
        {
            cv::line(object->drawing_canvas, object->last_pointer, object->current_pointer, object->paint_color, object->pointer_size);
        }
        //! Draw a line on the drawing canvas if the right button is down
        if (event == cv::EVENT_RBUTTONDOWN || flags == cv::EVENT_FLAG_RBUTTON)
        {
            cv::line(object->drawing_canvas, object->last_pointer, object->current_pointer, object->erase_color, object->pointer_size);
        }
    }
};

int main(int argc, char *argv[])
{
    WebCamPaint myCam;
    myCam.cam_id = 1;
    myCam.init();
    cv::setMouseCallback(myCam.win_name, WebCamPaint::onMouseCallback, &myCam);
    myCam.startActivity();
    return 0;
}

Is it possible to have a square resolution with a webcam video stream using OpenCV?

I wrote a simple OpenCV program that grabs my webcam's video stream and displays it in a window. I wanted to resize this window to a resolution of 256x256, but it changed to 320x240.
Here's my source code:
#include <iostream>
#include <opencv/cv.h>
#include <opencv/highgui.h>
using namespace std;
int main(int argc, char** argv)
{
    char key;
    cvNamedWindow("Camera_Output", CV_WINDOW_NORMAL);
    CvCapture *capture = cvCaptureFromCAM(CV_CAP_ANY);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 256);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 256);
    while (1) {
        IplImage *frame = cvQueryFrame(capture);
        cvShowImage("Camera_Output", frame);
        key = cvWaitKey(10);
        if (key == 27) {
            break;
        }
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("Camera_Output");
    return 0;
}
The output resolution is 320x240 and I want 256x256. I think it's not possible because the camera manages its own output video stream buffer and has to keep the same aspect ratio (width/height). What do you think about this idea?
Is there a function that can force a square resolution using OpenCV?
Thanks a lot in advance for your help.
Seems like your video source does not support a 256x256 resolution. If you want to display it at that size, you will have to crop (or resize) the image yourself before displaying it.
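A minimal sketch of that approach, assuming the C++ VideoCapture API and camera index 0: grab whatever resolution the camera delivers, center-crop to a square, and resize to the 256x256 target.
#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;
int main()
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened())
        return -1;
    Mat frame, square;
    while (true)
    {
        cap >> frame;
        if (frame.empty())
            break;
        // largest centered square the frame allows, scaled to 256x256
        int side = std::min(frame.cols, frame.rows);
        Rect roi((frame.cols - side) / 2, (frame.rows - side) / 2, side, side);
        resize(frame(roi), square, Size(256, 256));
        imshow("Camera_Output", square);
        if (waitKey(10) == 27) // Esc quits
            break;
    }
    return 0;
}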
Simple, you can do this by:
VideoCapture cap;
cap.open(0); // open your web-camera
cap.set(CV_CAP_PROP_FRAME_WIDTH, 256);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 256);
If this doesn't work, you need to resize it manually by calling cv::resize().

OpenCV VideoCapture reading issue

This will probably be a dumb question, but I really can't figure it out.
First of all: sorry for the vague title; I'm not really sure how to describe my problem in a couple of words.
I'm using OpenCV 2.4.3 in MS Visual Studio, C++. I'm using the VideoCapture interface for capturing frames from my laptop webcam.
What my program should do is:
Loop on different poses of the user, for each pose:
wait until the user is in position (a getchar() waits for an input that says "I'm in position" by simply hitting enter)
read the current frame
extract a region of interest from that frame
save the image in the ROI and then label it
Here is the code:
int main() {
    Mat img, face_img, img_start;
    Rect *face;
    VideoCapture cam(0);
    ofstream fout("dataset/dataset.txt");
    if (!fout) {
        cout << "Cannot open dataset file! Aborting" << endl;
        return 1;
    }
    int count = 0; // Number of the (last + 1) image in the dataset
    // Orientations are: 0°, +/- 30°, +/- 60°, +/- 90°
    // Distances are just two, for now
    // So it is 7x2 images
    cam.read(img_start);
    IplImage image = img_start;
    face = face_detector(image);
    if (!face) {
        cout << "No face detected..? Aborting." << endl;
        return 2;
    }
    // Double the ROI dimensions
    face->x = face->x - face->width / 2;
    face->y = face->y - face->height / 2;
    face->width *= 2;
    face->height *= 2;
    for (unsigned i = 0; i < 14; ++i) {
        // Wait for the user to get in position
        getchar();
        // Get the face ROI
        cam.read(img);
        face_img = Mat(img, *face);
        // Save it
        stringstream sstm;
        string fname;
        sstm << "dataset/image" << (count + i) << ".jpeg";
        fname = sstm.str();
        imwrite(fname, face_img);
        //do some other things..
What I expect from it:
I stand in front of the camera when the program starts and it gets the ROI rectangle using the face_detector() function
when I'm ready, say in pose 0, I hit enter and a picture is taken
from that picture a subimage is extracted and saved as image0.jpeg
loop this 7 times
What it does:
I stand in front of the camera when the program starts, nothing special here
I hit enter
the ROI is extracted not from the picture taken at that moment, but from the first one
At first, I used img in every cam.read(), then I changed the first one to cam.read(img_start), but that didn't help.
The second iteration of my code saves the image that should have been saved in the 1st, the 3rd iteration saves the one that should have been saved in the 2nd, and so on.
I'm probably missing something important about VideoCapture, but I really can't figure it out, so here I am.
Thanks for any help, I really appreciate it.
The problem with your implementation is that the camera is not running freely, capturing images in real time. When you start the camera, the VideoCapture buffer fills up while it waits for you to read frames. Once the buffer is full, it doesn't drop old frames for new ones until you read some and free up space.
The solution would be to have a separate capture thread, in addition to your "process" thread. The capture thread keeps reading in frames from the buffer whenever a new frame comes in and stores it in a "recent frame" image object. When the process thread needs the most recent frame (i.e. when you hit Enter), it locks a mutex for thread safety, copies the most recent frame into another object and frees the mutex so that the capture thread continues reading in new frames.
#include <iostream>
#include <stdio.h>
#include <thread>
#include <mutex>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;

mutex mtx; // protects the shared "recent frame"

void camCapture(VideoCapture cap, Mat* frame, bool* Capture){
    while (*Capture == true) {
        Mat tmp;
        cap >> tmp; // keep draining the buffer so it never fills up
        mtx.lock();
        tmp.copyTo(*frame); // store the most recent frame
        mtx.unlock();
    }
    cout << "camCapture finished\n";
    return;
}

int main() {
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat *frame = new Mat;
    bool *Capture = new bool;
    *Capture = true;
    // your capture thread has started
    thread captureThread(camCapture, cap, frame, Capture);
    // wait for the user, then take a thread-safe copy of the latest frame
    getchar();
    Mat current_frame;
    mtx.lock();
    frame->copyTo(current_frame);
    mtx.unlock();
    if (!current_frame.empty()) {
        imshow("current frame", current_frame);
        waitKey(0);
    }
    // Terminate the thread
    mtx.lock();
    *Capture = false;
    mtx.unlock();
    captureThread.join();
    return 0;
}
This is the code that I wrote based on the above advice. I hope someone can get help from this.
When you capture images continuously, frames do not pile up in the OpenCV buffer, so there is no lag in the stream.
If you take screenshots/capture images with some time gap in between, captured frames are first stored in the OpenCV buffer, and later reads retrieve them from the buffer.
When the buffer is full and you call captureObject >> matObject, an old frame from the buffer is returned, not the current frame from the capture card/webcam.
That is why you see a lag in your code. The issue can be resolved by timing the grab against the frames-per-second (fps) value of the webcam.
Reading a frame from the buffer takes very little time, so measure how long the grab takes: if it is less than the threshold derived from the fps, you can assume the frame was read from the buffer; otherwise it was captured fresh from the webcam.
Sample Code:
For capturing a recent screenshot from the webcam:
#include <opencv2/opencv.hpp>
#include <time.h>
#include <thread>
#include <chrono>
using namespace std;
using namespace cv;
int main()
{
    struct timespec start, end;
    VideoCapture cap(-1); // first available webcam
    Mat screenshot;
    double diff = 1000;
    double fps = ((double)cap.get(CV_CAP_PROP_FPS)) / 1000;
    while (true)
    {
        clock_gettime(CLOCK_MONOTONIC, &start);
        cap.grab(); // can also use cap >> screenshot;
        clock_gettime(CLOCK_MONOTONIC, &end);
        diff = (end.tv_sec - start.tv_sec) * 1e9;
        diff = (diff + (end.tv_nsec - start.tv_nsec)) * 1e-9;
        std::cout << "\n diff time " << diff << '\n';
        if (diff > fps)
        {
            break;
        }
    }
    cap >> screenshot; // gets the recent frame; can also use cap.retrieve(screenshot);
    // process(screenshot)
    cap.release();
    screenshot.release();
    return 0;
}

OpenCV GrabCut Algorithm example not working

I am trying to implement the GrabCut algorithm in OpenCV using C++.
I stumbled upon this site and found a very simple way to do it. Unfortunately, it seems like the code is not working for me:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
// Open another image
Mat image;
image= cv::imread("images/mango11a.jpg");
// define bounding rectangle
cv::Rect rectangle(50,70,image.cols-150,image.rows-180);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
// GrabCut segmentation
cv::grabCut(image, // input image
result, // segmentation result
rectangle,// rectangle containing foreground
bgModel,fgModel, // models
1, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
cout << "oks pa dito" <<endl;
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
// draw rectangle on original image
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
cv::namedWindow("Image");
cv::imshow("Image",image);
// display result
cv::namedWindow("Segmented Image");
cv::imshow("Segmented Image",foreground);
waitKey();
return 0;
}
Can anyone help me with this, please? What is supposed to be the problem?
PS: No errors were printed while compiling.
Check your settings again. I just executed the same tutorial and it worked fine for me.
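One thing worth ruling out, as a hedged guess: imread fails silently when the relative path doesn't match your working directory, and the resulting empty Mat then breaks grabCut at runtime without any compile error. A minimal check, reusing the question's path:
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    // imread returns an empty Mat instead of raising an error when
    // the path is wrong relative to the working directory
    cv::Mat image = cv::imread("images/mango11a.jpg");
    if (image.empty())
    {
        std::cerr << "Could not load images/mango11a.jpg - check the working directory" << std::endl;
        return 1;
    }
    std::cout << "Loaded " << image.cols << "x" << image.rows << std::endl;
    return 0;
}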
