When I create the standard detector...
static vector<float> detector = HOGDescriptor::getDefaultPeopleDetector();
if (detector.empty()) {
    fprintf(stderr, "ERROR: getDefaultPeopleDetector returned an empty vector\n");
    return -1;
}
hog.setSVMDetector(detector);
hog.detectMultiScale(img, rects);
...all works fine.
But!
When I create my own classifier using the "Classifier Tool For OpenCV" (classifieropencv.codeplex.com), I can't find the object. I use all the default parameters: winSize, blockSize, blockStride, cellSize, and the others. Why? Has anyone used this tool to create classifiers for HOG detection? Has anyone used HOGDescriptor to detect their own objects (without getDefaultPeopleDetector)?
Thanks!
This tool is useful: "Classifier Tool For OpenCV" (classifieropencv.codeplex.com)
The parameters in this tool (when you create the classifier) must match the parameters in your OpenCV code (when you use the classifier).
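For example, a minimal sketch (the geometry values below and the loadDescriptorVector() helper are hypothetical; the point is that the HOGDescriptor must be constructed with exactly the same geometry the tool used for training):

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    // These must match the training parameters set in the tool:
    // winSize, blockSize, blockStride, cellSize, nbins
    cv::HOGDescriptor hog(cv::Size(64, 128), cv::Size(16, 16),
                          cv::Size(8, 8), cv::Size(8, 8), 9);

    // loadDescriptorVector() is a hypothetical helper that reads the
    // coefficient vector exported by the classifier tool
    std::vector<float> detector = loadDescriptorVector("my_classifier.txt");
    hog.setSVMDetector(detector);

    std::vector<cv::Rect> rects;
    cv::Mat img = cv::imread("test.png");
    hog.detectMultiScale(img, rects);
    return 0;
}

A mismatch in any of these parameters typically makes detectMultiScale find nothing, without reporting any error.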
There is a manual in Russian, but it has many pictures and videos, and it is clear.
I am learning about fluid dynamics (and Haxe) and have come across this awesome project, and I thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks on one of the shapes and then clicks on the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update( dt:Float ){
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);
    //Smaller number creates a bigger ripple, was 0.016
    dt = 0.090;//#!
    //Physics
    //interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    //step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;
    if(renderParticlesEnabled){
        particles.step(dt);
    }
    //Below handles the cycling of colours once the mouse is moved and then the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and INVALID_ENUM in particular means that one of the gl.XXX parameters is just flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared as: public var writeToTexture (default, null):GLTexture;. writeToTexture is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
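In this project that would look something like the following sketch (it reuses gl.current_context, fluid.dyeRenderTarget.writeToTexture, mouse, and image from the question; I have not run it against the project):

var context = gl.current_context;

// Make the dye texture the currently bound TEXTURE_2D...
context.bindTexture(RenderingContext.TEXTURE_2D, fluid.dyeRenderTarget.writeToTexture);

// ...then pass the TEXTURE_2D target constant (not the texture object)
// as the first argument of texSubImage2D.
context.texSubImage2D(RenderingContext.TEXTURE_2D, 0,
    Math.round(mouse.x), Math.round(mouse.y),
    RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);

// Unbind so later texture operations are not affected.
context.bindTexture(RenderingContext.TEXTURE_2D, null);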
When I try regexner, it works as expected with the following settings and data:
props.setProperty("annotators", "tokenize, cleanxml, ssplit, pos, lemma, regexner");
Bachelor of Laws DEGREE
Bachelor of (Arts|Laws|Science|Engineering|Divinity) DEGREE
What I would like to do is the same thing using TokensRegex. For example:
Bachelor of Laws DEGREE
Bachelor of ([{tag:NNS}] [{tag:NNP}]) DEGREE
I read that to do this, I should use TokensRegexNERAnnotator.
I tried to use it as follows, but it did not work.
Pipeline.addAnnotator(new TokensRegexNERAnnotator("expressions.txt", true));
Or I tried setting annotator in another way,
props.setProperty("annotators", "tokenize, cleanxml, ssplit, pos, lemma, tokenregexner");
props.setProperty("customAnnotatorClass.tokenregexner", "edu.stanford.nlp.pipeline.TokensRegexNERAnnotator");
I tried different TokensRegex formats, but either the annotator could not find the expression or I got a SyntaxException.
What is the proper way to use TokensRegex (querying tokens by tag) in an NER data file?
BTW, I just saw a comment in the TokensRegexNERAnnotator.java file. Not sure if it is related; perhaps POS tag patterns do not work with TokensRegexNERAnnotator yet:
if (entry.tokensRegex != null) {
// TODO: posTagPatterns...
pattern = TokenSequencePattern.compile(env, entry.tokensRegex);
}
First you need to make a TokensRegex rule file (sample_degree.rules). Here is an example:
ner = { type: "CLASS", value: "edu.stanford.nlp.ling.CoreAnnotations$NamedEntityTagAnnotation" }
{ pattern: (/Bachelor/ /of/ [{tag:NNP}]), action: Annotate($0, ner, "DEGREE") }
To explain the rule a bit: the pattern field specifies the token pattern to match. The action field says to annotate every token in the overall match (that is what $0 represents), setting the ner field (note that we bound ner = ... at the top of the rule file); the third parameter says to set that field to the String "DEGREE".
Then make this .props file (degree_example.props) for the command:
customAnnotatorClass.tokensregex = edu.stanford.nlp.pipeline.TokensRegexAnnotator
tokensregex.rules = sample_degree.rules
annotators = tokenize,ssplit,pos,lemma,ner,tokensregex
Then run this command:
java -Xmx8g edu.stanford.nlp.pipeline.StanfordCoreNLP -props degree_example.props -file sample-degree-sentence.txt -outputFormat text
You should see that the three tokens you wanted tagged as "DEGREE" will be tagged.
I think I will push a change to the code to make tokensregex link to the TokensRegexAnnotator so you won't have to specify it as a custom annotator.
But for now you need to add that line in the .props file.
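Equivalently, if you build the pipeline from Java code rather than running it from the command line, the same properties can be set programmatically. A minimal sketch (untested, using the rule file name from above):

import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.Properties;

public class DegreeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,tokensregex");
        // For now, register TokensRegexAnnotator as a custom annotator by hand
        props.setProperty("customAnnotatorClass.tokensregex",
                          "edu.stanford.nlp.pipeline.TokensRegexAnnotator");
        props.setProperty("tokensregex.rules", "sample_degree.rules");

        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation document = new Annotation("He holds a Bachelor of Laws.");
        pipeline.annotate(document);
    }
}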
This example should help in implementing this. Here are some more resources if you want to learn more:
http://nlp.stanford.edu/software/tokensregex.shtml#TokensRegexRules
http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ling/tokensregex/SequenceMatchRules.html
http://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ling/tokensregex/types/Expressions.html
I'm porting some old MDX code to SharpDX using Direct3D9 assemblies.
I was able to 'convert' most of the code to SharpDX but I'm stuck at the following:
Mesh result = Mesh.Cylinder(_device, _arrowRadius1, _arrowRadius2, _arrowLength, _arrowNumberOfSlices, _arrowNumberOfStacks);
Mesh result = Mesh.Box(_device, _axisLength, _axisThick, _axisThick);
Mesh.TextFromFont(_device, new System.Drawing.Font("Berlin Sans FB", 12), text, 5f, 0.2f);
The Mesh class exists but does not contain the Cylinder or Box methods. I've gone through tons of documentation and could not find a solution.
Apart from the problem with the Mesh class I could not find matching classes and methods for the following in SharpDX:
using (Surface backbuffer = _device.GetBackBuffer(0, 0))
{
    GraphicsStream stream = SurfaceLoader.SaveToStream(ImageFileFormat.Bmp, backbuffer);
    return new Bitmap(stream);
}
GraphicsStream and SurfaceLoader do not exist.
I had a similar problem porting from old Managed Microsoft.DirectX to SharpDX 9.
For meshes, we had to implement our own Mesh classes, since there are no primitives like cylinder, sphere, or box in SharpDX's Mesh (it's just a stub class, I guess).
But for SurfaceLoader, check the Surface class itself; it has static methods that will probably match your needs. For example:
Surface.ToStream()
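With that, the back-buffer snippet above would translate to something like this sketch (untested; in SharpDX.Direct3D9, Surface.ToStream returns a DataStream, which derives from System.IO.Stream and can therefore be passed straight to the Bitmap constructor):

using SharpDX.Direct3D9;

// Sketch: capture the back buffer into a System.Drawing.Bitmap.
System.Drawing.Bitmap CaptureBackBuffer(Device device)
{
    using (Surface backbuffer = device.GetBackBuffer(0, 0))
    using (var stream = Surface.ToStream(backbuffer, ImageFileFormat.Bmp))
    {
        return new System.Drawing.Bitmap(stream);
    }
}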
I chose to ask a question here well aware that I may be breaking some StackExchange rules, because maybe this isn't the right place to ask, but I have seen a lot of questions related to CERN ROOT. I know that the people who answer questions here prefer to show the way instead of giving a cooked solution, but I need some help and have no time to learn from the answers; I only want a solution to my problem. I apologize in advance!
Here is my problem: I have two .root files:
one of a spectrum ("sezione_misura_90.root"),
one from background ("sezione_fondo_90.root").
I have to subtract the second from the first and get a final histogram. Usually I open the files with the TBrowser, and I have no idea how to write a macro or a script to open a .root file or do anything else, first of all because I hate ROOT and everything related to programming, and I only have a course where I am supposed to use it, without anyone telling me how! Even the professor doesn't know how to use it...
If someone reading this has a macro or a script ready to use, I will be forever indebted to them for sharing it with me. Thanks in advance!
EDIT
I wrote a file named run.cxx with the following lines:
int run()
{
    // Open both files side-by-side
    TFile* sezione_misura_90 = new TFile("sezione_misura_90.root");
    TFile* sezione_fondo_90 = new TFile("sezione_fondo_90.root");

    // Get the histograms from the files
    // Since you didn't say in your post, I'm going to assume that
    // the histograms are called "hist" and that they hold floating
    // point values (meaning, they're TH1F histograms. The "F" means float)
    TH1F* h_misura = (TH1F*) sezione_misura_90->Get("hist");
    TH1F* h_fondo = (TH1F*) sezione_fondo_90->Get("hist");

    // Now we add them together with a factor of -1 (a subtraction);
    // TH1::Add accumulates the result in h_misura itself
    h_misura->Add(h_fondo, -1);
    return 0;
}
There were some typos, like a stray ( and ;. I corrected them, but I get back the following:
Error: illegal pointer to class object h_misura 0x0 139 run.cxx:21:
** Interpreter error recovered **
A simple way to accomplish this is to write a script that opens the two files, reads the histograms from the files, and subtracts them (which is the same as adding them using a factor of -1). This can be done using a block of code similar to the following:
{
    // Open both files side-by-side
    TFile* sezione_misura_90 = new TFile("sezione_misura_90.root");
    TFile* sezione_fondo_90 = new TFile("sezione_fondo_90.root");

    // Get the histograms from the files
    // Since you didn't say in your post, I'm going to assume that
    // the histograms are called "hist" and that they hold floating
    // point values (meaning, they're TH1F histograms. The "F" means float)
    TH1F* h_misura = (TH1F*) sezione_misura_90->Get("hist");
    TH1F* h_fondo = (TH1F*) sezione_fondo_90->Get("hist");

    // Now we add them together with a factor of -1 (a subtraction);
    // TH1::Add accumulates the result in h_misura itself
    h_misura->Add(h_fondo, -1);
}
At this point, h_misura should be the histogram you want. You can save it to a file for later reading, or you can draw it to the screen if you're running an interactive ROOT session.
The above code can be run in one of the following ways:
In an interactive ROOT session (just by typing root and then typing the above lines)
As a ROOT script (by pasting the lines into a file which, for example, could be named "file.C" and typing "root file.C")
From a larger program (by putting the above lines in a function and calling that function)
You can read more about the methods available for a Histogram in ROOT's documentation:
http://root.cern.ch/root/html/TH1.html#TH1:Add#1
Hope that helps.
I see at least two problems. One problem has to do with the way ROOT manages memory, more specifically ROOT objects in memory:
// Histograms derive from the TNamed class,
// hence have a <name>, which ROOT uses internally
// to keep track of the objects
TH1F* h_misura = (TH1F*) sezione_misura_90->Get("hist");
// now you have a histogram named "hist" in memory;
// btw, better to name it something more unique, e.g. hist1, at least
TH1F* h_fondo = (TH1F*) sezione_fondo_90->Get("hist");
// And now you are trying to get another histogram named "hist",
// which creates a problem: two different histograms with the same
// name - you can't do that.
// At the very least ROOT is going to overwrite the first hist
// and replace it with the second, or bug out
Solution to problem one:
// Rename the "hist"s to something like "hist1" and "hist2"
TH1F* h_misura = (TH1F*) sezione_misura_90->Get("hist");
h_misura->SetName("hist1");
TH1F* h_fondo = (TH1F*) sezione_fondo_90->Get("hist");
h_fondo->SetName("hist2");
// now you have two histograms in memory with unique names
Problem two: when you open a TFile with
TFile * f = new TFile("file.root");
it is opened in read-only mode, so you can't write to it if you want to save your histogram sum. Instead, open it in "UPDATE" mode (or "RECREATE" for a fresh output file):
TFile * f = TFile::Open("file.root", "UPDATE");
// and do a null pointer check
if (!f) { std::cout << "file not found" << std::endl; exit(1); }
// if you want to save the results to file f
// ...
f->cd();
hist->Write();
f->Close();
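For completeness, here is what a full macro could look like with both fixes applied (a sketch; it still assumes both histograms are TH1F objects named "hist", and the output file name is made up):

// subtract.C -- run with: root subtract.C
void subtract()
{
    // Open the input files (read-only is fine for inputs)
    TFile* f_misura = TFile::Open("sezione_misura_90.root");
    TFile* f_fondo  = TFile::Open("sezione_fondo_90.root");
    if (!f_misura || !f_fondo) { std::cout << "input file not found" << std::endl; return; }

    // Fetch the histograms and give them unique names
    TH1F* h_misura = (TH1F*) f_misura->Get("hist");
    TH1F* h_fondo  = (TH1F*) f_fondo->Get("hist");
    if (!h_misura || !h_fondo) { std::cout << "histogram 'hist' not found" << std::endl; return; }
    h_misura->SetName("hist_misura");
    h_fondo->SetName("hist_fondo");

    // Subtract the background in place: h_misura = spectrum - background
    h_misura->Add(h_fondo, -1);

    // Write the result to a new file
    TFile* f_out = TFile::Open("sezione_sottratta_90.root", "RECREATE");
    f_out->cd();
    h_misura->Write();
    f_out->Close();
}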
Are there OpenCV equivalents of the GLUT glutGetWindow()/glutSetWindow() functions, which allow the currently active window to be identified and switched from your own code?
Basically, I'd like to be able to identify the currently active window from within a mouse callback function registered with all windows, and have it call another processing function with different parameters for each window.
Any help would be appreciated.
There's no function to do that in OpenCV, however, the signature of cvSetMouseCallback() allows you to register one callback per window.
You will have to register individual callbacks to achieve what you need to do.
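For instance, with the C++ API (setMouseCallback uses the same per-window userdata mechanism as cvSetMouseCallback; the window names and the tag parameter here are made up):

#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>

// One shared handler; the per-window parameter arrives through userdata,
// so the handler knows which window it was registered for.
static void onMouse(int event, int x, int y, int flags, void* userdata)
{
    if (event == cv::EVENT_LBUTTONDOWN) {
        const std::string* tag = static_cast<const std::string*>(userdata);
        std::cout << "click in " << *tag << " at " << x << "," << y << std::endl;
    }
}

int main()
{
    cv::namedWindow("window 1");
    cv::namedWindow("window 2");
    std::string tag1 = "window 1", tag2 = "window 2";
    cv::setMouseCallback("window 1", onMouse, &tag1);
    cv::setMouseCallback("window 2", onMouse, &tag2);
    cv::waitKey();
    return 0;
}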
Here is the complete list of features supported by the HIGHGUI module.
Another (hardcore) alternative is to dive into the native API of the OS you are working with and search for methods that accomplish this. The problem is that this solution is not cross-platform.
Actually, cvGetWindowHandle(const char* windowname) is available in opencv2/highgui/highgui_c.h. It is still available as of OpenCV 4, at the time this answer was written.
I suggest that you add
#include <opencv2/highgui/highgui_c.h>
and use
cvGetWindowHandle(window_name_.c_str())
Including <opencv2/highgui/highgui_c.h> can be a solution, but it really won't let you move on to OpenCV 4+.
For those of you who are still using OpenCV in an MFC dialog box, there is a different solution.
FindWindow returns the parent window handle, and MFC works with the child window, so you'll need both FindWindow and FindWindowEx.
New source code for MFC and OpenCV 4+:
namedWindow(windowname, WINDOW_AUTOSIZE);
////// This will work on opencv 4.X //////
HWND hParent = (HWND)FindWindow(NULL, windowname.c_str());
HWND hWnd = (HWND)FindWindowEx(hParent, NULL, L"HighGUI class", NULL);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
Maybe you're still in trouble because the string-to-LPCWSTR conversion fails and hParent comes back NULL. There are many ways to convert a string to an LPCWSTR, but since you are using MFC, try:
namedWindow(windowname, WINDOW_AUTOSIZE);
////// This will work on opencv 4.X //////
CString CstrWindowname = windowname.data();
HWND hParent = (HWND)FindWindow(NULL, CstrWindowname);
HWND hWnd = (HWND)FindWindowEx(hParent, NULL, L"HighGUI class", NULL);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
The new code should replace this old code:
namedWindow(windowname, WINDOW_AUTOSIZE);
///// OLD version. Used on opencv 3.X on MFC Dialog Box /////
HWND hWnd = (HWND) cvGetWindowHandle(windowname.c_str());
HWND hParent = ::GetParent(hWnd);
::SetParent(hWnd, GetDlgItem(IDC_PICTURE)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
CWnd* pWnd = new CWnd();
pWnd->CWnd::Attach(hParent);
Well, there is no OpenCV API for retrieving the focused window, but the OS GUI shell usually provides one. Using this approach is better because mouse callbacks can't detect ALT-TAB or programmatic focusing.
Here's some example Python code for Windows that gets the job done:
import ctypes
import cv2
user32 = ctypes.windll.user32
def exists_cv_window(title):
    # seems to work on python-opencv version 4.6.0
    return cv2.getWindowProperty(title, cv2.WND_PROP_VISIBLE) != 0.0
def get_active_cv_window():
    focused_window_handle = user32.GetForegroundWindow()
    length = user32.GetWindowTextLengthW(focused_window_handle)
    # reserve room for the terminating null character
    buffer = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(focused_window_handle, buffer, length + 1)
    active_window_title = buffer.value
    if exists_cv_window(active_window_title):
        return active_window_title
# example use case for the function
def main():
    im1 = cv2.imread('cookie.png')
    im2 = cv2.imread('cat.png')
    cv2.imshow('figure 1', im1)
    cv2.imshow('figure 2', im2)
    while True:
        key = cv2.waitKey(10)
        if key == 23:  # CTRL + W
            title = get_active_cv_window()
            if title is not None:
                cv2.destroyWindow(title)
# in the example above the ability to target the active window allows applying
# the CTRL + W shortcut to a specific figure
It's a shame this is not part of OpenCV.