Printing an alpha image

I'm new to printing.
How do I print an image with alpha?
I'm using GDI+'s DrawImage to draw the image.
(The light-gray part is black with 20% alpha.)
But the result is not what I expected.
It seems like the alpha is discarded.
I'm using the following code to print:
#include <windows.h>
#include <gdiplus.h>
#include <stdio.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;

INT main()
{
    // Initialize GDI+.
    GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;
    GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

    // Get a device context for the printer.
    HDC hdcPrint = CreateDC(NULL, TEXT("\\\\printserver\\HP CP3505_2F"), NULL, NULL);

    DOCINFO docInfo;
    ZeroMemory(&docInfo, sizeof(docInfo));
    docInfo.cbSize = sizeof(docInfo);
    docInfo.lpszDocName = L"GdiplusPrint";

    Bitmap image(L"e:\\__.png");

    StartDoc(hdcPrint, &docInfo);
    StartPage(hdcPrint);

    Graphics* graphics = new Graphics(hdcPrint);
    Pen* pen = new Pen(Color(255, 0, 0, 0));
    graphics->DrawImage(&image, 50, 50);
    delete pen;
    delete graphics;

    EndPage(hdcPrint);
    EndDoc(hdcPrint);
    DeleteDC(hdcPrint);

    GdiplusShutdown(gdiplusToken);
    return 0;
}
But when I print the same image from Word (MS Office), it prints as expected.
Please help.
Edit:
GDI's AlphaBlend function draws the image's alpha part as a gray color, which looks fine on its own. But overlapping alpha regions are not blended correctly: the gray alpha part is simply drawn twice rather than alpha-blended.
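For reference, here is a minimal sketch of one commonly suggested workaround, assuming (as the symptoms suggest) that the printer driver simply discards alpha: pre-compose the translucent image onto a white background in a memory bitmap, then hand the printer a fully opaque image. This is an editor's sketch, not something the original poster confirmed.

// Sketch: flatten a translucent PNG onto white before printing.
// Assumes GDI+ is already initialized and hdcPrint is a valid
// printer DC, as in the question's code above.
void PrintFlattened(HDC hdcPrint, const WCHAR* path)
{
    Gdiplus::Bitmap image(path);

    // Opaque 24-bit scratch bitmap the size of the source image.
    Gdiplus::Bitmap flattened(image.GetWidth(), image.GetHeight(),
                              PixelFormat24bppRGB);

    // Composite onto white in memory; the alpha blend happens here.
    Gdiplus::Graphics memGfx(&flattened);
    memGfx.Clear(Gdiplus::Color(255, 255, 255, 255)); // white "paper"
    memGfx.DrawImage(&image, 0, 0);

    // The printer now receives only opaque pixels.
    Gdiplus::Graphics printerGfx(hdcPrint);
    printerGfx.DrawImage(&flattened, 50, 50);
}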

Related

Removing a watermark using OpenCV

I have used OpenCV and C++ to remove a watermark from an image using the code below.
#include <stdio.h>
#include <opencv2/opencv.hpp>
#include <Windows.h>
#include <string>
#include <filesystem>

namespace fs = std::filesystem;
using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    bool debugFlag = true;
    std::string path = "C:/test/";
    for (const auto& entry : fs::directory_iterator(path))
    {
        std::string fileName = entry.path().string();
        Mat original = imread(fileName, cv::IMREAD_COLOR);
        if (debugFlag) { imshow("original", original); }

        Mat inverted;
        bitwise_not(original, inverted);

        std::vector<Mat> channels;
        split(inverted, channels);
        for (int i = 0; i < 3; i++)
        {
            if (debugFlag) { imshow("chan" + std::to_string(i), channels[i]); }
        }

        Mat bwImg;
        cv::threshold(channels[2], bwImg, 50, 255, cv::THRESH_BINARY);
        if (debugFlag) { imshow("thresh", bwImg); }

        Mat outputImg;
        inverted.copyTo(outputImg, bwImg);
        bitwise_not(outputImg, outputImg);
        if (debugFlag) { imshow("output", outputImg); }

        if (debugFlag) { waitKey(0); }
        else { imwrite(fileName, outputImg); }
    }
}
Here is the result, from the original to the watermark-removed image.
As you can see in the previous image, the original has an orange/red watermark. I created a mask that kills the watermark and then applied it to the original image (this pulls in the grey text boundary as well). Another trick that helped was to use the red channel, since the watermark is most saturated in red (~245). Note that this requires OpenCV and C++17.
But now I want to remove the watermark in a new image whose watermark color is similar to the text. The image is given below; as you can see, there is a Chinese watermark running sideways through the image, overlapping the text. How can I achieve this with my current code? Any help is appreciated.
Two ideas to try (a sketch combining both follows this list):
1: The watermark looks "lighter" than the primary text. So if you create a grayscale version of the image, you may be able to apply a threshold that keeps the primary text and drops the watermark. You may want to add one pass of dilation on that mask before applying it to the original image, as the grey threshold will likely clip your non-watermark characters a bit. (This may pull in too much noise from the watermark, though, so test it.)
2: Try using the OpenCV opening function (morphological opening). Your primary text seems thicker than the watermark, so you should be able to isolate it. Similarly, after you create the mask of the text to keep, dilate once and mask the original image.
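Here is a minimal sketch of both ideas together. The input path is hypothetical, and the threshold and kernel sizes are guesses you would need to tune per image:

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical path; substitute one of your images.
    cv::Mat original = cv::imread("C:/test/sample.png", cv::IMREAD_COLOR);
    if (original.empty()) return -1;

    cv::Mat gray;
    cv::cvtColor(original, gray, cv::COLOR_BGR2GRAY);

    // Idea 1: keep only pixels darker than the watermark.
    // 100 is a guessed cutoff; tune it per image.
    cv::Mat textMask;
    cv::threshold(gray, textMask, 100, 255, cv::THRESH_BINARY_INV);

    // Idea 2: opening removes strokes thinner than the kernel,
    // which should drop the thin watermark strokes but keep the
    // thicker primary text. The kernel size is also a guess.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(textMask, textMask, cv::MORPH_OPEN, kernel);

    // Dilate once so the mask does not clip character edges.
    cv::dilate(textMask, textMask, kernel);

    // Copy the kept text onto a white background.
    cv::Mat output(original.size(), original.type(), cv::Scalar(255, 255, 255));
    original.copyTo(output, textMask);

    cv::imshow("output", output);
    cv::waitKey(0);
    return 0;
}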

Detect location(s) of objects in an image

I have an input image that looks like this:
Notice that there are 6 boxes with black borders. I need to detect the location (upper-left hand corner) of each box. Normally I would use something like template matching, but the contents (the colored area inside the black border) of each box are distinct.
Is there a version of template matching that can be configured to ignore the inner area of each box? Is there an algorithm better suited to this situation?
Also note that I have to deal with several different resolutions, so the actual size of the boxes will differ from image to image. That said, the ratio (length to width) will always be the same.
Real-world example/input image per request:
You can do this by finding the bounding boxes of connected components.
To find the connected components, you can convert to grayscale and keep all pixels with value 0, i.e. the black borders of the rectangles.
Then you can find the contour of each connected component and compute its bounding box. Here are the red bounding boxes found:
Code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    // Load the image, as BGR
    Mat3b img = imread("path_to_image");

    // Convert to gray scale
    Mat1b gray;
    cvtColor(img, gray, COLOR_BGR2GRAY);

    // Get binary mask
    Mat1b binary = (gray == 0);

    // Find contours of connected components
    vector<vector<Point>> contours;
    findContours(binary.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    // For each contour
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // Get the bounding box
        Rect box = boundingRect(contours[i]);

        // Draw the box on the original image in red
        rectangle(img, box, Scalar(0, 0, 255), 5);
    }

    // Show result
    imshow("Result", img);
    waitKey();
    return 0;
}
From the image posted in chat, this code produces:
In general, this code will correctly detect the cards, as well as some noise. You just need to remove the noise according to some criteria, among others: size or aspect ratio of the boxes, colors inside the boxes, or some texture information. A filtering sketch follows below.
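Since the question guarantees a fixed length-to-width ratio, here is a minimal filtering sketch that could replace the drawing loop above. The ratio, tolerance, and minimum-area values are hypothetical and need tuning; add #include <cmath> for std::abs:

// Hypothetical thresholds: the boxes' known aspect ratio, a tolerance,
// and a minimum area to drop small specks of noise.
const double expectedRatio  = 1.5;
const double ratioTolerance = 0.15;
const int    minArea        = 1000;

for (size_t i = 0; i < contours.size(); ++i)
{
    Rect box = boundingRect(contours[i]);
    double ratio = static_cast<double>(box.width) / box.height;

    // Keep only boxes that match the expected shape and size.
    if (box.area() < minArea || std::abs(ratio - expectedRatio) > ratioTolerance)
        continue;

    rectangle(img, box, Scalar(0, 0, 255), 5);
}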

A simple frame-differencing background model

The background scene often evolves over time because, for instance, the lighting conditions might change (for example, from sunrise to sunset), or because objects could be added to or removed from the background.
Therefore, it is necessary to dynamically build a model of the background scene.
Based on the above, I wrote a simple frame-differencing code. It works well, but it's very slow.
How can I make it faster? Any suggestions?
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/background_segm.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/video/tracking.hpp>

using namespace cv;
using namespace std;

int main()
{
    cv::Mat gray;        // current gray-level image
    cv::Mat background;  // accumulated background
    cv::Mat backImage;   // background image
    cv::Mat foreground;  // foreground image
    double learningRate; // learning rate in background accumulation
    int threshold;       // threshold for foreground extraction

    cv::VideoCapture capture("video.mp4");
    // check if video successfully opened
    if (!capture.isOpened())
        return 0;

    // current video frame
    cv::Mat frame;
    double rate = capture.get(CV_CAP_PROP_FPS);
    int delay = 1000 / rate;

    // foreground binary image
    cv::Mat output;
    bool stop(false);
    while (!stop) {
        if (!capture.read(frame))
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::namedWindow("back");
        cv::imshow("back", gray);

        // initialize background to 1st frame
        if (background.empty())
            gray.convertTo(background, CV_32F);

        // convert background to 8U
        background.convertTo(backImage, CV_8U);

        // compute difference between image and background
        cv::absdiff(backImage, gray, foreground);

        // apply threshold to foreground image
        cv::threshold(foreground, output, 10, 255, cv::THRESH_BINARY_INV);

        // accumulate background
        cv::accumulateWeighted(gray, background, 0.01, output);

        cv::namedWindow("out");
        cv::imshow("out", output);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
}
I modified and corrected some parts of your code:
In the while loop you call cv::namedWindow("back") and cv::namedWindow("out") on every iteration; this only needs to be done once, before the loop.
You use if (background.empty()) to check whether the array is empty, but this is only true on the first iteration; on every later iteration the matrix is already filled. To avoid the per-frame check, initialize the background to zeros before the loop with background = cv::Mat::zeros(rows, cols, CV_32F), using the type and size the loop requires. This does not affect the accumulation operation.
Here the updated code:
int main()
{
    cv::Mat gray;        // current gray-level image
    cv::Mat background;  // accumulated background
    cv::Mat backImage;   // background image
    cv::Mat foreground;  // foreground image
    double learningRate; // learning rate in background accumulation
    int threshold;       // threshold for foreground extraction

    cv::VideoCapture capture("C:/Users/Pedram91/Pictures/Camera Roll/videoplayback.mp4");
    // check if video successfully opened
    if (!capture.isOpened())
        return 0;

    // current video frame
    cv::Mat frame;
    double rate = capture.get(CV_CAP_PROP_FPS);
    int delay = 1000 / rate;

    // foreground binary image
    cv::Mat output;
    bool stop(false);

    cv::namedWindow("back"); // this goes here: you only need to call it once
    cv::namedWindow("out");  // this goes here: you only need to call it once

    int rows = capture.get(CV_CAP_PROP_FRAME_HEIGHT); // frame height gives the row count
    int cols = capture.get(CV_CAP_PROP_FRAME_WIDTH);  // frame width gives the column count

    // this saves the "if (background.empty())" check in the while loop
    background = cv::Mat::zeros(rows, cols, CV_32F);

    while (!stop) {
        if (!capture.read(frame))
            break;
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::imshow("back", gray);

        // no longer needed: background is pre-initialized above
        // if (background.empty())
        //     gray.convertTo(background, CV_32F);

        // convert background to 8U
        background.convertTo(backImage, CV_8U);

        // compute difference between image and background
        cv::absdiff(backImage, gray, foreground);

        // apply threshold to foreground image
        cv::threshold(foreground, output, 10, 255, cv::THRESH_BINARY_INV);

        // accumulate background
        cv::accumulateWeighted(gray, background, 0.01, output);

        cv::imshow("out", output);
        if (cv::waitKey(delay) >= 0)
            stop = true;
    }
}
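If you need it faster still, one option (an editor's suggestion, not part of the answer above) is OpenCV's built-in background subtractor from opencv2/video/background_segm.hpp, which the question already includes. A minimal sketch, assuming OpenCV 3 or later; the 0.01 learning rate mirrors the accumulateWeighted call above:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture("video.mp4");
    if (!capture.isOpened())
        return 0;

    // MOG2 maintains the background model internally, with an
    // optimized implementation.
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor =
        cv::createBackgroundSubtractorMOG2();

    cv::namedWindow("out");
    cv::Mat frame, foreground;
    while (capture.read(frame))
    {
        // apply() updates the model and writes the foreground mask.
        subtractor->apply(frame, foreground, 0.01);
        cv::imshow("out", foreground);
        if (cv::waitKey(1) >= 0)
            break;
    }
}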

OpenCV Fullscreen Windows on Multiple Monitors

I have an OpenCV application that displays a fullscreen window, via:
cv::namedWindow("myWindow", CV_WINDOW_NORMAL)
cv::setWindowProperties("myWindow", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN)
It works fine, but when I have multiple monitors it always displays the fullscreen window on the First monitor. Is there any way to display on the 2nd monitor? I've tried setting X/Y and Width/Height, but they seem to be ignored once fullscreen is enabled.
Edit:
Sometimes pure OpenCV code cannot create a fullscreen window on a dual display. Here is a Qt way of doing it:
#include <QApplication>
#include <QDesktopWidget>
#include <QLabel>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QDesktopWidget dw;
    QLabel myLabel;

    // define dimensions of the second display
    int width_second = 2560;
    int height_second = 1440;

    // define OpenCV Mat
    Mat img = Mat(Size(width_second, height_second), CV_8UC1);

    // move the widget to the second display
    QRect screenres = QApplication::desktop()->screenGeometry(1);
    myLabel.move(QPoint(screenres.x(), screenres.y()));

    // set full screen
    myLabel.showFullScreen();

    // wrap the Mat data in a QImage
    QImage Qimg((unsigned char*)img.data, img.cols, img.rows, QImage::Format_Indexed8);

    // set the QLabel's pixmap
    myLabel.setPixmap(QPixmap::fromImage(Qimg));

    // show the image via Qt
    myLabel.show();

    return app.exec();
}
Don't forget to configure the .pro file as:
TEMPLATE = app
QT += widgets
TARGET = main
LIBS += -L/usr/local/lib -lopencv_core -lopencv_highgui
# Input
SOURCES += main.cpp
And compile your code in a terminal with:
qmake
make
Original:
It is possible.
Here is working demo code to show a full-screen image on a second display, hinted from How to display different windows in different monitors with OpenCV:
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main(int argc, char **argv)
{
    // define dimensions of the main display
    int width_first = 1920;
    int height_first = 1200;

    // define dimensions of the second display
    int width_second = 2560;
    int height_second = 1440;

    // move the window to the second display
    // (assuming the two displays are top aligned)
    namedWindow("My Window", CV_WINDOW_NORMAL);
    moveWindow("My Window", width_first, height_first);
    setWindowProperty("My Window", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);

    // create target image
    Mat img = Mat(Size(width_second, height_second), CV_8UC1);

    // show the image
    imshow("My Window", img);
    waitKey(0);
    return 0;
}
I've tried different ways to make it work, but unfortunately it seems that this is not possible using OpenCV alone. The only thing you can do is display one window on the main (primary) screen using your current code, and handle the second window manually: set the window position, resize the image, and just use the imshow function to display it. Here is an example:
void showWindowAlmostFullscreen(cv::Mat img, std::string windowTitle, cv::Size screenSize, cv::Point screenZeroPoint)
{
    screenSize -= cv::Size(100, 100); // leave some space for the window title bar etc.
    double xScalingFactor = (float)screenSize.width / (float)img.size().width;
    double yScalingFactor = (float)screenSize.height / (float)img.size().height;
    double minFactor = std::min(xScalingFactor, yScalingFactor);

    cv::Mat temp;
    cv::resize(img, temp, cv::Size(), minFactor, minFactor);
    cv::moveWindow(windowTitle, screenZeroPoint.x, screenZeroPoint.y);
    cv::imshow(windowTitle, temp);
}

int _tmain(int argc, _TCHAR* argv[])
{
    cv::Mat img1 = cv::imread("D:\\temp\\test.png");
    cv::Mat img2;
    cv::bitwise_not(img1, img2);

    cv::namedWindow("img1", CV_WINDOW_AUTOSIZE);
    cv::setWindowProperty("img1", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
    cv::namedWindow("img2");

    while (cv::waitKey(1) != 'q')
    {
        cv::imshow("img1", img1);
        cv::setWindowProperty("img1", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
        showWindowAlmostFullscreen(img2, "img2", cv::Size(1366, 768), cv::Point(260, 1080));
    }
}
and the result:
The screen size and screen zero point (I don't know whether that is the correct name for it; generally it's just the point where the screen's (0,0) lies) can be obtained using some other library, or from the Windows control panel. The screen zero point shows up when you start moving a window:
If you use Qt for writing your code, you can possibly utilize Qt5 widgets.
Here is a tutorial that will show you how to display an OpenCV image in a QT Widget.
Once you have that working you can then use something like this:
QScreen *screen = QGuiApplication::screens()[1]; // specify which screen to use
SecondDisplay *secondDisplay = new SecondDisplay(); // your widget
** Add your code to display opencv image in widget here **
secondDisplay->move(screen->geometry().x(), screen->geometry().y());
secondDisplay->resize(screen->geometry().width(), screen->geometry().height());
secondDisplay->showFullScreen();
(Code found here on another SO answer)
I have not tried this myself, so I can't guarantee it will work; however, it seems likely (if not a little overkill).
Hope this helps.

OpenCV GrabCut Algorithm example not working

I am trying to implement the GrabCut algorithm in OpenCV using C++.
I stumbled upon this site and found a very simple way to do it. Unfortunately, it seems like the code is not working for me:
#include "opencv2/opencv.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
// Open another image
Mat image;
image= cv::imread("images/mango11a.jpg");
// define bounding rectangle
cv::Rect rectangle(50,70,image.cols-150,image.rows-180);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel; // the models (internally used)
// GrabCut segmentation
cv::grabCut(image, // input image
result, // segmentation result
rectangle,// rectangle containing foreground
bgModel,fgModel, // models
1, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
cout << "oks pa dito" <<endl;
// Get the pixels marked as likely foreground
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
// Generate output image
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
// draw rectangle on original image
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
cv::namedWindow("Image");
cv::imshow("Image",image);
// display result
cv::namedWindow("Segmented Image");
cv::imshow("Segmented Image",foreground);
waitKey();
return 0;
}
Can anyone help me with this, please? What is supposed to be the problem?
PS: no errors were printed while compiling.
Check your settings again. I just executed the same tutorial and it worked fine for me.
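One thing worth checking (an editor's suggestion, not confirmed by the original answer): cv::imread does not throw on failure, it silently returns an empty Mat when the relative path "images/mango11a.jpg" does not resolve from the working directory, and grabCut then fails downstream without any compile-time error. A quick guard right after the imread call makes this visible:

// Guard against a silent load failure: imread returns an
// empty Mat when the file is not found.
if (image.empty())
{
    cerr << "Could not load images/mango11a.jpg - check the working directory" << endl;
    return -1;
}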