mouse handler in opencv for large images, wrong x,y coordinates? - opencv

I am using images that are 2048 x 500, and when I use cvShowImage, I only see half the image. This is not a big deal because the interesting part is on the top half of the image. However, when I use the mouse handler to get the x,y coordinates of my clicks, I noticed that the y coordinate (the dimension that doesn't fit on the screen) is wrong.
It seems OpenCV thinks it is showing the whole image and rescales the coordinate system, although only half the image is effectively shown.
I would need to know how to do 2 things:
- display a resized image that would fit in the screen
- get the proper coordinates.
Did anybody encounter similar problems?
Thanks!
Update: it seems the y coordinate is half of what it is supposed to be.
code:
EXPORT void click_rect(uchar *the_img, int size_x, int size_y, int *points)
{
    CvSize size;
    size.height = size_y;
    size.width = size_x;
    // wrap the raw buffer in an IplImage header (no data copy)
    IplImage *img = cvCreateImageHeader(size, IPL_DEPTH_8U, 1);
    img->imageData = (char *)the_img;
    img->imageDataOrigin = img->imageData;
    // copy the input into the image that will be displayed
    IplImage *img1 = cvCreateImage(cvSize(size.width, size.height), IPL_DEPTH_8U, 1);
    cvCopy(img, img1, NULL);
    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("mainWin", 100, 100);
    cvSetMouseCallback("mainWin", mouseHandler_rect, NULL);
    cvShowImage("mainWin", img1);
    //// wait for a key
    cvWaitKey(0);
    // x_1..y_2 are assumed to be globals set by mouseHandler_rect
    points[0] = x_1;
    points[1] = x_2;
    points[2] = y_1;
    points[3] = y_2;
    //// release the window and images (the header must not free the caller's buffer)
    cvDestroyWindow("mainWin");
    cvReleaseImage(&img1);
    cvReleaseImageHeader(&img);
}

You should create the window with the CV_WINDOW_KEEPRATIO flag instead of the CV_WINDOW_AUTOSIZE flag. This works around the problem of your y values being wrong.
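Applied to the code above, that is a one-line change:
cvNamedWindow("mainWin", CV_WINDOW_KEEPRATIO);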

I use OpenCV 2.1 and the Visual Studio C++ compiler. I fixed this problem with another flag, CV_WINDOW_NORMAL, which works properly and returns the correct coordinates; this flag also enables you to resize the image window.
cvNamedWindow("Box Example", CV_WINDOW_NORMAL);

I am having the same problem with OpenCV 2.1 on Windows with the MinGW compiler. It took me forever to find out what was wrong. As you describe it, the mouse callback set with cvSetMouseCallback receives y coordinates that are too large. This is apparently due to the image, and the cvNamedWindow it is shown in, being bigger than my screen resolution; thus I cannot see the bottom of the image.
As a solution, I resize the images to a fixed size so that they fit on the screen (in this case 800x600, but any other values work):
// g_input_image, g_output_image and g_resized_image are global IplImage* pointers.
int img_w = cvGetSize(g_input_image).width;
int img_h = cvGetSize(g_input_image).height;
// If the height/width ratio is greater than 6/8, resize the height to 600...
if (img_h > (img_w * 6) / 8) {
    g_resized_image = cvCreateImage(cvSize((img_w * 600) / img_h, 600), 8, 3);
}
// ...else adjust the width to 800.
else {
    g_resized_image = cvCreateImage(cvSize(800, (img_h * 800) / img_w), 8, 3);
}
cvResize(g_output_image, g_resized_image);
Not a perfect solution, but works for me...
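To also recover the coordinates in the original image, you can undo the scaling inside the mouse callback. A minimal sketch (the scale globals and the callback name are illustrative, not part of the code above):
// Scale factors between the original and the displayed (resized) image.
// Set these right after computing the resized dimensions, e.g.:
//   g_scale_x = (double)img_w / g_resized_image->width;
//   g_scale_y = (double)img_h / g_resized_image->height;
double g_scale_x = 1.0;
double g_scale_y = 1.0;
void mouseHandler_scaled(int event, int x, int y, int flags, void *param)
{
    if (event == CV_EVENT_LBUTTONDOWN) {
        // map window coordinates back to original image coordinates
        int orig_x = (int)(x * g_scale_x);
        int orig_y = (int)(y * g_scale_y);
        printf("clicked original pixel (%d, %d)\n", orig_x, orig_y);
    }
}
Register it with cvSetMouseCallback as in the question, and the reported coordinates will refer to the full-resolution image.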
Cheers,
Linus

How are you building the window? You are not passing CV_WINDOW_AUTOSIZE to cvNamedWindow(), are you?
Share some source, @Denis.

Related

Direct3D9 fullscreen app - deformed rendering

I have been working hard on a Direct3D9-based game. Everything went excellently until I hit a big problem. I created a class that wraps the process of loading a mesh from a .x file. I successfully loaded a cube with only one face visible. In theory, that face should look like a square, but it is actually rendered as a rectangle. I am quite sure that there is something wrong with the D3DPRESENT_PARAMETERS structure. Below are only the most important lines of my application's initialization.
First part to be created is the focus window:
HWND hWnd = CreateWindowEx(0UL, L"NewFrontiers3DWindowClass", Title.c_str(), WS_POPUP | WS_EX_TOPMOST, 0, 0, 1280, 1024, nullptr, (HMENU)false, hInstance, nullptr);
Then I fill out the D3DPRESENT_PARAMETERS structure.
D3DDISPLAYMODE D3DMM;
SecureZeroMemory(&D3DMM, sizeof(D3DDISPLAYMODE));
if(FAILED(hr = Direct3D9->GetAdapterDisplayMode(Adapter, &D3DMM)))
{
// Error is processed here
}
PresP.BackBufferWidth = D3DMM.Width;
PresP.BackBufferHeight = D3DMM.Height;
PresP.BackBufferFormat = BackBufferFormat;
PresP.BackBufferCount = 1U;
PresP.MultiSampleType = D3DMULTISAMPLE_NONE;
PresP.MultiSampleQuality = 0UL;
PresP.SwapEffect = D3DSWAPEFFECT_DISCARD;
PresP.hDeviceWindow = hWnd;
PresP.Windowed = false;
PresP.EnableAutoDepthStencil = EnableAutoDepthStencil;
PresP.AutoDepthStencilFormat = AutoDepthStencilFormat;
PresP.Flags = D3DPRESENTFLAG_DISCARD_DEPTHSTENCIL;
PresP.FullScreen_RefreshRateInHz = D3DMM.RefreshRate;
PresP.PresentationInterval = PresentationInterval;
Then the Direct3D9 device is created, followed by the SetRenderState functions.
Next, the viewport is assigned.
D3DVIEWPORT9 D3D9Viewport;
SecureZeroMemory(&D3D9Viewport, sizeof(D3DVIEWPORT9));
D3D9Viewport.X = 0UL;
D3D9Viewport.Y = 0UL;
D3D9Viewport.Width = (DWORD)D3DMM.Width;
D3D9Viewport.Height = (DWORD)D3DMM.Height;
D3D9Viewport.MinZ = 0.0f;
D3D9Viewport.MaxZ = 1.0f;
if(FAILED(Direct3D9Device->SetViewport(&D3D9Viewport)))
{
// Error is processed here
}
After this initialization, I globally declare some parameters that will be used later.
D3DXVECTOR3 EyePt(0.0f, 0.0f, -5.0f), Up(0.0f, 1.0f, 0.0f), LookAt(0.0f, 0.0f, 0.0f);
D3DXMATRIX View, Proj, World;
The update function looks like this:
Mesh.Render(Direct3D9Device);
D3DXMatrixLookAtLH(&View, &EyePt, &LookAt, &Up);
Direct3D9Device->SetTransform(D3DTS_VIEW, &View);
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, 1.0f, 1.0f, 1000.f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Proj);
D3DXMatrixTranslation(&World, 0.0f, 0.0f, 0.0f);
Direct3D9Device->SetTransform(D3DTS_WORLD, &World);
The device is not a null pointer.
I recently realized that there is no difference between declaring and setting up a viewport and not doing so.
If there is anybody who can point me to the right answer, please help me solve this annoying problem.
If you don't set any transformation matrices, then the identity transformation is applied to your mesh, and the face of the cube will be stretched to the shape of the viewport. If your viewport isn't square (e.g. it's the same size as the screen), then your cube's face also won't be square.
You can use a square viewport to work around this problem, but that will limit your rendering to just that square on the screen. If you want to render to the entire screen, you'll need to set a suitable projection matrix. You can calculate a normal perspective projection matrix using D3DXMatrixPerspectiveFovLH. If you want an orthographic projection, where everything is the same size regardless of the distance from the camera, then use D3DXMatrixOrthoLH to calculate the projection matrix. Note that if you use your viewport's width and height with the latter function it will shrink your cube: a unit-size cube will be rendered as a single pixel on the screen. You can either use a world or view transform to scale it up again, or use something like width/height and 1 as your width and height parameters to D3DXMatrixOrthoLH.
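For the orthographic case, that suggestion would look roughly like this (a sketch reusing D3DMM and Direct3D9Device from the question; Ortho is an illustrative local name, and the near/far planes match the values used elsewhere):
// Use the aspect ratio (w/h) and 1 as the view volume dimensions so a
// unit-size cube stays roughly unit-sized on screen instead of one pixel.
float aspect = (float)D3DMM.Width / (float)D3DMM.Height;
D3DXMATRIX Ortho;
D3DXMatrixOrthoLH(&Ortho, aspect, 1.0f, 1.0f, 1000.0f);
Direct3D9Device->SetTransform(D3DTS_PROJECTION, &Ortho);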
If you go with D3DXMatrixPerspectiveFovLH then you want something like this:
D3DXMatrixPerspectiveFovLH(&Proj, D3DX_PI/4, (float)D3DMM.Width / D3DMM.Height,
                           1.0f, 1000.f);
I think your problem is not in the D3DPP parameters but in your projection matrix. If you use D3DXMatrixPerspectiveFovLH, check that the aspect ratio is 1280 / 1024 = 1.25f.

JavaFX: Disable image smoothing on Canvas object

I'm making a sprite editor using JavaFX for use on desktops.
I'm attempting to implement zooming functionality, but I've run into a problem: I can't figure out how to disable image smoothing on a Canvas object.
I'm calling Canvas.setScaleX() and Canvas.setScaleY() as per every tutorial implementing Canvas zooming. But my image appears blurred when zoomed in.
I have some test code here to demonstrate.
As this is a sprite editor, it's important for me to have crisp edges to work with. The alternative to fixing image smoothing on the Canvas is to have a non-smoothing ImageView, and have a hidden Canvas to draw on, which I would rather avoid.
Help is appreciated.
(here's a link to a related question, but it doesn't address my particular problem)
I was having the same issue with the blurring.
In my case, my computer has a Retina display, which renders each pixel with sub-pixels. When drawing images to the canvas, the image gets antialiased across those sub-pixels. I have not found a way to prevent this antialiasing from occurring (although it is possible with other canvas technologies, such as HTML5's Canvas).
In the meantime, I have a work-around (albeit I'm concerned about performance):
public class ImageRenderer {
    // Copies the (sx, sy, sw, sh) region of the source image to (tx, ty)
    // on the canvas pixel by pixel, bypassing drawImage()'s smoothing.
    public void render(GraphicsContext context, Image image, int sx, int sy, int sw, int sh, int tx, int ty) {
        PixelReader reader = image.getPixelReader();
        PixelWriter writer = context.getPixelWriter();
        for (int x = 0; x < sw; x++) {
            for (int y = 0; y < sh; y++) {
                Color color = reader.getColor(sx + x, sy + y);
                // skip transparent pixels so existing canvas content shows through
                if (color.isOpaque()) {
                    writer.setColor(tx + x, ty + y, color);
                }
            }
        }
    }
}
The PixelWriter bypasses the anti-aliasing that occurs when drawing the image.

cvResizeWindow() flicker reaction

I have an OpenCV window that I would like to resize to fill my screen, but when I use the resize function the window flickers. The output is my webcam feed, and I guess the flicker occurs because my camera does not have those dimensions. Is there any other way to make the output from the camera larger?
cvNamedWindow("video", CV_WINDOW_AUTOSIZE);
IplImage *frame=0;
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
cvResizeWindow("video", 1920,1080);
Here is an example of using cvResize() to resize the image or frame.
IplImage *frame;
IplImage *frame_resize = NULL;
CvCapture *capture = cvCaptureFromCAM(0);
cvNamedWindow("capture", CV_WINDOW_AUTOSIZE);
while (1) {
    frame = cvQueryFrame(capture);
    if (!frame)
        break;
    // create the destination image once, at the target size
    if (!frame_resize)
        frame_resize = cvCreateImage(cvSize(1366, 768), frame->depth, frame->nChannels);
    cvResize(frame, frame_resize, CV_INTER_LINEAR);
    // show the resized frame, not the original one
    cvShowImage("capture", frame_resize);
    cvWaitKey(25);
}
cvReleaseImage(&frame_resize);
cvReleaseCapture(&capture);
One possibility is to use the cvResize() function to change the size of the frame.
However, an easier way is to get rid of the CV_WINDOW_AUTOSIZE flag -- without that the video will be displayed at the size of the window.
Something like this:
cvNamedWindow("video", 0);
cvResizeWindow("video", 1920,1080);
IplImage *frame=0;
while(true)
{
frame=cvQueryFrame(capture);
cvShowImage("video", frame);
int c = waitKey(10);
...
}
I am not sure of the cause of the flickering, as I could not replicate that issue on my system.
Therefore I cannot guarantee that the flickering will disappear for you (but at least the video should be the correct size).

How to obtain the floodfilled area?

Let me start by saying that I'm still a beginner with OpenCV. Some things might seem obvious to you; hopefully, once I learn them, they will become obvious to me as well.
My goal is to use the floodFill feature to generate a separate image containing only the filled area. I have looked into this post but I'm a bit lost on how to convert the filled mask into an actual BGRA image with the filled color. Besides that I also need to crop the newly filled image to contain only the filled area. I'm guessing OpenCV has some magical function that could do the trick.
Here is what I'm trying to achieve:
Original image:
Filled image:
Filled area only:
UPDATE 07/07/13
I was able to do a fill on a separate image using the following code. However, I still need to figure out the best approach to get only the filled area. Also, my floodfill solution has an issue with filling an image that contains alpha values...
static int floodFillImage(cv::Mat &image, int premultiplied, int x, int y, int color)
{
    cv::Mat out;
    // un-multiply color
    unmultiplyRGBA2BGRA(image);
    // convert to no alpha
    cv::cvtColor(image, out, CV_BGRA2BGR);
    // create our mask (floodFill needs 2 extra rows and columns)
    cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
    // floodfill the mask: write 255 into the mask, mask-only mode
    cv::floodFill(out, mask, cv::Point(x, y), 255, 0,
                  cv::Scalar(), cv::Scalar(),
                  (255 << 8) | cv::FLOODFILL_MASK_ONLY);
    cv::Mat newImage(image.size(), image.type());
    cv::Mat maskedImage(image.size(), image.type());
    // set the solid color we will mask out of
    newImage = cv::Scalar(ARGB_BLUE(color), ARGB_GREEN(color), ARGB_RED(color), ARGB_ALPHA(color));
    // crop the 2 extra pixels in width and height that were added before
    cv::Mat maskROI = mask(cv::Rect(1, 1, image.cols, image.rows));
    // mask the solid color we want into the new image
    newImage.copyTo(maskedImage, maskROI);
    // pre-multiply the colors
    premultiplyBGRA2RGBA(maskedImage, image);
    return 0;
}
You can take the difference of the two images to get the pixels that changed: pixels with no difference will be zero, and the others will have positive values.
cv::Mat A, B, C;
A = getImageA();
B = getImageB();
C = A - B;
Handle negative values if they can occur (I presume they cannot in your case).
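Note that cv::floodFill can also report the bounding rectangle of the repainted region through its optional rect output parameter, which gives you the crop directly. A sketch of how that could plug into the code from the update above (fillBounds and filledAreaOnly are illustrative names):
// ask floodFill for the bounding rectangle of the filled region
cv::Rect fillBounds;
cv::floodFill(out, mask, cv::Point(x, y), 255, &fillBounds,
              cv::Scalar(), cv::Scalar(),
              (255 << 8) | cv::FLOODFILL_MASK_ONLY);
// crop the recolored image to just the filled area
cv::Mat filledAreaOnly = maskedImage(fillBounds).clone();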

View GPU Memory / View Texture2D memory space for debugging

I've got a question about a PixelShader I am trying to implement, and what I currently do (this is just for debugging, and trying to figure stuff out):
int3 loc;
loc.x = (int)(In.TextureUV.x * resolution_XY.x);
loc.y = (int)(In.TextureUV.x * resolution_XY.x);
loc.z = 0;
float4 r = g_txDiffuse.Load(loc);
return float4(r.x, r.y, r.z, 1);
The point is, the result is always (0, 0, 0, 1).
The texture buffer is created:
D3D11_TEXTURE2D_DESC tDesc;
tDesc.Height = 480;
tDesc.Width = 640;
tDesc.Usage = D3D11_USAGE_DYNAMIC;
tDesc.MipLevels = 1;
tDesc.ArraySize = 1;
tDesc.SampleDesc.Count = 1;
tDesc.SampleDesc.Quality = 0;
tDesc.Format = DXGI_FORMAT_R8_UINT;
tDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
tDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
tDesc.MiscFlags = 0;
V_RETURN(pd3dDevice->CreateTexture2D(&tDesc, NULL, &g_pCurrentImage));
I upload the texture (which should be a live display at the end) via:
D3D11_MAPPED_SUBRESOURCE resource;
pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
memcpy( resource.pData, g_Images.GetData(), g_Images.GetDataSize() );
pd3dImmediateContext->Unmap( g_pCurrentImage, 0 );
I've checked the resource.pData, the data in there is a valid 8bit monochrome image. I made sure the data coming from the camera is 8bit monochrome 640x480.
There are a few things I don't fully understand:
- If I run the Map / memcpy / Unmap routine every frame, the driver will ultimately crash and the system becomes unresponsive. Is there a different way to update a complete texture every frame?
- The texture I uploaded is 8-bit, so why does Texture2D.Load() return a float4? Do I have to use a different method to access the texture data? I tried to Sample it, but that didn't work either. Would I have to use an int buffer or something instead?
- Is there a way to debug the GPU memory, to check whether the memcpy worked in the first place?
The Map, memcpy, Unmap sequence really ought not to crash unless you are trying to copy too much data into the texture. It would be interesting to know what GetDataSize() returns. Does it equal 307,200? If it's more than that, then there lies your problem.
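One thing worth checking in the upload path: the rows of a mapped texture are padded to resource.RowPitch bytes, so a single memcpy of width*height bytes is only safe when RowPitch happens to equal the row size. A row-by-row copy is the robust pattern; here is a sketch, assuming g_Images.GetData() returns tightly packed 640x480 8-bit pixels as described in the question:
D3D11_MAPPED_SUBRESOURCE resource;
pd3dImmediateContext->Map(g_pCurrentImage, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
const BYTE *src = (const BYTE *)g_Images.GetData();
BYTE *dst = (BYTE *)resource.pData;
const UINT rowSize = 640; // width * 1 byte per pixel (DXGI_FORMAT_R8_UINT)
for (UINT row = 0; row < 480; ++row)
{
    // respect the destination's row padding
    memcpy(dst + row * resource.RowPitch, src + row * rowSize, rowSize);
}
pd3dImmediateContext->Unmap(g_pCurrentImage, 0);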
Texture2D.Load() returns a float4 because that's what you've asked for; if you write float r = g_txDiffuse.Load( ... ) you get a single component instead. The 8 bits get extended to a normalised float as part of the load process. Are you sure, by the way, that your calculation of "loc" is correct? As you have it now, loc.x and loc.y will always be the same, since both are computed from In.TextureUV.x and resolution_XY.x.
You can debug what's going on with DirectX using PIX. It's a great tool and I highly recommend you familiarise yourself with it.
