Drawing on an image using Emgu CV (C++)

I am having difficulties figuring out how to draw a simple rectangle on an image using Emgu CV. I'm using VS 2010 Express. I have a user interface, built with the .NET Framework, that displays a live video feed in a picture box on a panel. Now I would like to draw a clear rectangle in the middle of this feed, since that is where my processing is focused, and I need the user to see how to line up the camera and the object of interest so that it sits inside the rectangle. This is what I have so far for drawing the rectangle on the frames from the camera:
cv::Scalar red(0,0,255);
System::Drawing::Rectangle Rect = System::Drawing::Rectangle(120, 160, 150, 150);
frameColorDisplay->Draw(Rect, red, 2);
and this is the error that I'm receiving
BAOTFISInterface.cpp(1067): error C2664: 'void Emgu::CV::Image<TColor,TDepth>::Draw(Emgu::CV::Seq<T> ^,Emgu::CV::Structure::Bgr,int)' : cannot convert parameter 1 from 'System::Drawing::Rectangle' to 'Emgu::CV::Seq<T> ^'
5> with
5> [
5> TColor=Emgu::CV::Structure::Bgr,
5> TDepth=unsigned char,
5> T=System::Drawing::Point
5> ]
5> and
5> [
5> T=System::Drawing::Point
5> ]
5> No user-defined-conversion operator available, or
5> No user-defined-conversion operator available that can perform this conversion, or
the operator cannot be called
I'm not sure why it's trying to convert from a Rectangle to a Seq. As far as I know, I'm calling the function properly according to the Emgu CV documentation. Does anyone have any insight into this issue?

You need a TColor, not a cv::Scalar:
From the docs:
C#
public virtual void Draw(
Rectangle rect,
TColor color,
int thickness
)

To find options for your TColor (as in GPPK's answer), look in Emgu.CV.Structure. For instance, for a grayscale value you can pass
new Gray(0)
for black or
new Gray(255)
for white. The expressions for HLS, HSV, etc. colors are only a little more complicated.
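Applied to the code in the question, the fix is simply to pass a Bgr value instead of a cv::Scalar. A sketch in C++/CLI, assuming frameColorDisplay is an Emgu::CV::Image<Bgr, unsigned char>^ as the error message suggests:

// Sketch of the corrected call (C++/CLI), untested
Emgu::CV::Structure::Bgr red(0, 0, 255);              // a TColor, not cv::Scalar
System::Drawing::Rectangle rect(120, 160, 150, 150);  // x, y, width, height
frameColorDisplay->Draw(rect, red, 2);                // thickness = 2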

Related

What format should I use for an OpenCV image if I need to access the underlying data?

I've made a program that creates images using OpenCL. In the OpenCL code I have to access the underlying data of the OpenCV image and modify it directly, but I don't know how the data is arranged internally.
I'm currently using CV_8U because the representation is really simple: 0 is black, 255 is white, and everything in between is a shade of grey. But I want to add color, and I don't know what format to use.
This is how I currently modify the image: A[y*width + x] = 255;
Since your A[y*width + x] = 255; works fine, the underlying image data A must be a 1D pixel array of size width * height, where each element is a CV_8U (8-bit unsigned integer).
In OpenCV, the color values of a pixel are arranged B, G, R in memory. RGB order is more common elsewhere, but OpenCV stores them as BGR.
Your data ought to be CV_8UC3, which is the case if you use imread or VideoCapture. If it isn't, the following information needs to be interpreted accordingly.
Your array index math needs to expand to account for the data's layout:
[(y*width + x)*3 + channel]
The 3 is because there are 3 channels; channel runs 0..2, and x and y are as you expect.
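As an illustration, a small sketch (names are mine; it assumes a continuous CV_8UC3 cv::Mat with no row padding, otherwise use img.step instead of width * 3):

// Sketch: set one pixel of a continuous CV_8UC3 image through its raw buffer.
#include <opencv2/core.hpp>

void setPixel(cv::Mat& img, int x, int y,
              unsigned char b, unsigned char g, unsigned char r)
{
    unsigned char* A = img.data;        // underlying byte buffer
    int width = img.cols;
    A[(y * width + x) * 3 + 0] = b;     // channel 0 = blue
    A[(y * width + x) * 3 + 1] = g;     // channel 1 = green
    A[(y * width + x) * 3 + 2] = r;     // channel 2 = red
}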
As mentioned in other answers, you'd need to convert this single-channel image to a 3-channel image to have color. The 3 channels are Blue, Green, Red (BGR).
OpenCV has a function that does just this, cv2.cvtColor(); it takes an input image (in this case the single-channel image that you have) and a conversion code (see the cvtColor documentation for the full list of codes).
So the code would be like the following:
color_image = cv2.cvtColor(source_image, cv2.COLOR_GRAY2BGR)
Then you can modify the color by accessing each of the color channels, e.g.
color_image[y, x, 0] = 255 # this changes the first channel (Blue)
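Since the program in the question is C++, the equivalent with the C++ API would look roughly like this (a sketch; the function name is mine):

// Sketch: C++ equivalent of the Python snippet above.
#include <opencv2/imgproc.hpp>

cv::Mat toColor(const cv::Mat& gray)
{
    cv::Mat color;
    cv::cvtColor(gray, color, cv::COLOR_GRAY2BGR);
    // e.g. make the pixel at (x = 10, y = 20) pure blue
    color.at<cv::Vec3b>(20, 10) = cv::Vec3b(255, 0, 0);
    return color;
}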

Draw a drop shadow using Direct2D

I need to render a drop shadow of bitmap A onto bitmap B using Direct2D.
More specifically, A is 32-bit with alpha-transparency, i.e. some pixels can be transparent. B (also 32-bit) contains another image, i.e. it can't be assumed empty.
Can someone provide a working example of how to do that?
Preferably as a method that takes the blur amount, distance, color and opacity of the drop shadow as parameters.
The C++ code I'm trying to convert is the snippet below from Microsoft. But I'm not sure about converting the C++ syntax, and the Direct2D units in Delphi XE2 seem to be incomplete.
ComPtr<ID2D1Effect> shadowEffect;
m_d2dContext->CreateEffect(CLSID_D2D1Shadow, &shadowEffect);
shadowEffect->SetInput(0, bitmap);
// Shadow is composited on top of a white surface to show opacity.
ComPtr<ID2D1Effect> floodEffect;
m_d2dContext->CreateEffect(CLSID_D2D1Flood, &floodEffect);
floodEffect->SetValue(D2D1_FLOOD_PROP_COLOR, D2D1::Vector4F(1.0f, 1.0f, 1.0f, 1.0f));
ComPtr<ID2D1Effect> affineTransformEffect;
m_d2dContext->CreateEffect(CLSID_D2D12DAffineTransform, &affineTransformEffect);
affineTransformEffect->SetInputEffect(0, shadowEffect.Get());
D2D1_MATRIX_3X2_F matrix = D2D1::Matrix3x2F::Translation(20, 20);
affineTransformEffect->SetValue(D2D1_2DAFFINETRANSFORM_PROP_TRANSFORM_MATRIX, matrix);
ComPtr<ID2D1Effect> compositeEffect;
m_d2dContext->CreateEffect(CLSID_D2D1Composite, &compositeEffect);
compositeEffect->SetInputEffect(0, floodEffect.Get());
compositeEffect->SetInputEffect(1, affineTransformEffect.Get());
compositeEffect->SetInput(2, bitmap);
m_d2dContext->BeginDraw();
m_d2dContext->DrawImage(compositeEffect.Get());
m_d2dContext->EndDraw();
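For reference, the blur, distance, color and opacity parameters map directly onto the Shadow and 2D affine transform effect properties. A C++ sketch (untested, names are mine; it draws onto whatever target the device context currently has selected, e.g. a target wrapping bitmap B, so no Composite effect is needed):

// Sketch only: draws bitmapA's drop shadow, then bitmapA itself, onto the
// device context's current target. Opacity is carried in shadowColor.a.
#include <d2d1_1.h>
#include <d2d1_1helper.h>
#include <d2d1effects.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT DrawWithDropShadow(ID2D1DeviceContext* dc, ID2D1Bitmap* bitmapA,
                           float blurStdDev, D2D1_POINT_2F distance,
                           D2D1_COLOR_F shadowColor)
{
    ComPtr<ID2D1Effect> shadow;
    HRESULT hr = dc->CreateEffect(CLSID_D2D1Shadow, &shadow);
    if (FAILED(hr)) return hr;
    shadow->SetInput(0, bitmapA);
    shadow->SetValue(D2D1_SHADOW_PROP_BLUR_STANDARD_DEVIATION, blurStdDev);
    shadow->SetValue(D2D1_SHADOW_PROP_COLOR,
                     D2D1::Vector4F(shadowColor.r, shadowColor.g,
                                    shadowColor.b, shadowColor.a));

    ComPtr<ID2D1Effect> offset;
    hr = dc->CreateEffect(CLSID_D2D12DAffineTransform, &offset);
    if (FAILED(hr)) return hr;
    offset->SetInputEffect(0, shadow.Get());
    offset->SetValue(D2D1_2DAFFINETRANSFORM_PROP_TRANSFORM_MATRIX,
                     D2D1::Matrix3x2F::Translation(distance.x, distance.y));

    dc->BeginDraw();
    dc->DrawImage(offset.Get());   // the offset shadow, over B's contents
    dc->DrawImage(bitmapA);        // then the bitmap itself on top
    return dc->EndDraw();
}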

Wavy text inside of UIBezierPath in iOS

I saw a trekking app on Android which built routes based on selected criteria. It could also build routes from provided GPX files. All those routes were very wavy, and above each highlighted route I could see its name, also as very wavy text moving along with the highlighted path, repeating all the curls and waves. I wonder how it is possible to create the same effect in iOS.
What I have is a GPX file. In short, it's just a very long array of tuples:
typealias Coordinates = (Double, Double) // x and y
let points: [Coordinates] = [ (120, 120), (130, 135), (135, 125), (138, 122) ]
Coordinates are represented in pixels, and I use the Catmull-Rom interpolation algorithm to build a UIBezierPath with smooth rounded corners.
I can draw wavy text by changing the angle of each letter in a playground, but calculating all those transformations for an array of pixels looks too complicated.
Is there a better solution?

getRectSubPix and borderInterpolate in OpenCV

The documentation for OpenCV's getRectSubPix() function:
C++: void getRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType=-1)
contains the statement:
While the center of the rectangle must be inside the image, parts of the rectangle may be outside. In this case, the replication border mode (see borderInterpolate()) is used to extrapolate the pixel values outside of the image.
But I can't see a way to set the border interpolation mode in getRectSubPix. Many other OpenCV functions (boxFilter, copyMakeBorder, ...) allow you to pass in a border-type enum, but not getRectSubPix.
Is this just a documentation error?
The statement "replication border mode (see borderInterpolate() ) is used to extrapolate the pixel values", clearly states that it uses a predefined mode known as BORDER_REPLICATE to estimate the pixels outside the image boundary, You cannot use other Border methods like BORDER_REFLECT, BORDER_WRAP, BORDER_CONSTANT, etc.

Plot an array into bitmap in C/C++ for thermal printer

I am trying to accomplish something a bit backwards from everyone else. Given an array of sensor data, I wish to print a graph plot of it. My test bench uses a stepper motor to move the input shaft of a sensor, stop, get the ADC value of the sensor's voltage, and repeat.
My current version 0.9 bench does not have graphical output; the proper end solution will. Currently I have 35 data points, and I'm looking to get 90 to 100. The results are simply stored in an int array. The index is linear, so it's not a complicated plot, but I'm having problems conceptualizing how to draw the plot from bottom-left to top-right for the operator. I figure that on the TFT screen I can literally translate an origin and then draw lines from point to point...
Worse, I also want to print this out to a thermal printer, so I'll need to translate it into a graph less than 384 pixels wide. I'm not too worried about the mechanics of communicating the image to the printer, but about how to convert the array to an image.
It gets better: I'm doing this on an Arduino Mega, so the libraries aren't very robust. At least it has a lot of RAM for the code. :/
Here's an example of my data from the Arduino test fed into Excel. I'm not looking for color, but I'd like the graph to appear without this setup being connected to a computer or the network. This is the ESC/POS printer, by the way.
The algorithm for this has three main stages:
1) Translate the Y from top-left to bottom-left.
2) Break up the X into word:bit values.
3) Use Bresenham's algorithm to draw lines between the points, and then figure out how to make the line thicker.
For my exact case, the target bitmap is 384x384 at 1 bit per pixel, so it requires about 18 KB of SRAM (384 * 384 / 8 = 18,432 bytes) to store in memory. I had to ditch the "lame" Arduino Mega and upgrade to the chipKIT uC32 to pull this off: 32 KB of RAM, an 80 MHz CPU, and twice the I/O!
The way I figured this out was to base my logic on Adafruit's Thermal printer library for Arduino. In their examples, they show how to convert a 1-bit bitmap into a static array for printing. I used their GFX library to implement the setXY function, as well as their GFX Bresenham's line algorithm to draw lines between (X,Y) points using my setXY().
It all boiled down to the code in this function I wrote:
// *bitmap is a global or class-member pointer to a byte array of size (384/8)*384
// bytesPerRow is 384/8
void setXY(int x, int y) {
    // integer divide by 8 (/8) because the array elements are bytes (chars)
    int xByte = x / 8;
    // modulus 8 (%8) to get the bit to set
    uint8_t shifty = x % 8;
    // right shift because we start from the LEFT
    int xVal = 0x80 >> shifty;
    // invert Y, since row 0 of the array is the top of the bitmap
    int yRow = yMax - y;
    // get the actual byte in the array to manipulate
    int offset = yRow * bytesPerRow + xByte;
    // use logical OR in case there is other data in the bitmap,
    // such as a frame or a grid
    *(bitmap + offset) |= xVal;
}
The big point to remember is that with this array we are starting at the top-left of the bitmap, going right across the row, then down one Y row and repeating. The gotchas are in translating the X into the word:bit combo; you've got to shift from the left (sort of like translating the Y backwards). Another gotcha is an off-by-one error in the bookkeeping for the Y.
I put all of this in a class, which kept me from writing one big function that does everything, and the better design made the implementation easier than I thought it would be.
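For completeness, the line-drawing stage boils down to something like the following (a sketch, not the author's actual class): drawLine() is the classic integer Bresenham algorithm, essentially what the Adafruit GFX library provides, and plotData() is a hypothetical helper that scales the readings onto the 384-pixel bitmap; yMax (383), dataMax, and the scaling are assumptions.

// Sketch: Bresenham line drawing on top of the setXY() above.
void drawLine(int x0, int y0, int x1, int y1) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true) {
        setXY(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }   // step in y
    }
}

// Sketch: scale the sample indices across the 384-pixel width and the
// readings to the bitmap height, then connect consecutive points.
void plotData(const int* data, int count, int dataMax) {
    for (int i = 1; i < count; i++) {
        int x0 = (long)(i - 1) * 383 / (count - 1);
        int x1 = (long)i * 383 / (count - 1);
        int y0 = (long)data[i - 1] * yMax / dataMax;
        int y1 = (long)data[i] * yMax / dataMax;
        drawLine(x0, y0, x1, y1);
    }
}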
Pic of the printout:
Write-up of the project is here.
