I'm trying to convert a Java class to C# using EmguCV, for a course on Unsupervised Learning. The teacher wrote a program using OpenCV and Java, and I have to convert it to C#.
The goal is to implement a simple face recognition algorithm.
This is the method I'm stuck on:
Mat sample = train.get(0).getData();
mean = Mat.zeros(/*6400*/sample.rows(), /*1*/sample.cols(), /*CvType.CV_64FC1*/sample.type());
// Calculating it by hand
train.forEach(person -> {
    Mat data = person.getData();
    for (int i = 0; i < mean.rows(); i++) {
        double mv = mean.get(i, 0)[0]; // Gets the value of the cell in the first channel
        double pv = data.get(i, 0)[0]; // Gets the value of the cell in the first channel
        mv += pv;
        mean.put(i, 0, mv); // *********** I'm stuck here ***********
    }
});
So far, my C# equivalent is:
var sample = trainSet[0].Data;
mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    var data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        var meanValue = (double)mean.GetData().GetValue(i, 0);
        var personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
    }
}
And I can't find an equivalent of put in C#. To be honest, I'm not even sure the two GetValue lines above are correct either.
Can someone help me figure this one out?
You can convert it like this:
using System.Runtime.InteropServices; // for Marshal.Copy

Mat sample = trainSet[0].Data;
Mat mean = Mat.Zeros(sample.Rows, sample.Cols, sample.Depth, sample.NumberOfChannels);
foreach (var person in trainSet)
{
    Mat data = person.Data;
    for (int i = 0; i < mean.Rows; i++)
    {
        double meanValue = (double)mean.GetData().GetValue(i, 0);
        double personValue = (double)data.GetData().GetValue(i, 0);
        meanValue += personValue;
        // Write the updated value back into the Mat's unmanaged buffer:
        // DataPointer + i * Cols * ElementSize is the address of row i.
        double[] mva = new double[] { meanValue };
        Marshal.Copy(mva, 0, mean.DataPointer + i * mean.Cols * mean.ElementSize, 1);
    }
}
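For what it's worth, both the Java bindings and EmguCV wrap the same native library, and OpenCV's matrix operators can express the whole accumulation without a per-element loop. A minimal C++ sketch of the same computation (computeMean and the final divide-by-count are my additions; the snippets above only accumulate):

#include <opencv2/core.hpp>
#include <vector>

cv::Mat computeMean(const std::vector<cv::Mat>& train)
{
    // Same shape and type as the samples, e.g. 6400x1 CV_64FC1
    cv::Mat mean = cv::Mat::zeros(train[0].rows, train[0].cols, train[0].type());
    for (const cv::Mat& data : train)
        mean += data;                            // element-wise addition, no row loop
    mean /= static_cast<double>(train.size());   // turn the sum into a mean
    return mean;
}

In EmguCV the element-wise addition is available as CvInvoke.Add(mean, data, mean), which should let you drop the GetData/Marshal.Copy round-trip entirely.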
I am trying to implement Variable Rate Shading in an app based on DirectX 11.
I am doing it this way:
UINT dwRtWidth = 2560;
UINT dwRtHeight = 1440;
D3D11_TEXTURE2D_DESC srcDesc;
ZeroMemory(&srcDesc, sizeof(srcDesc));
int sri_w = dwRtWidth / NV_VARIABLE_PIXEL_SHADING_TILE_WIDTH;
int sri_h = dwRtHeight / NV_VARIABLE_PIXEL_SHADING_TILE_HEIGHT;
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.SampleDesc.Quality = 0;
srcDesc.Usage = D3D11_USAGE_DEFAULT; //Optional
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; //Optional
srcDesc.CPUAccessFlags = 0;
srcDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initialData;
UINT* data = (UINT*)malloc(sri_w * sri_h * sizeof(UINT));
for (int i = 0; i < sri_w * sri_h; i++)
    data[i] = (UINT)0;
initialData.pSysMem = data;
initialData.SysMemPitch = sri_w;
//initialData.SysMemSlicePitch = 0;
HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);
if (FAILED(hr))
{
    LOG("Texture not created");
    LOG(std::system_category().message(hr));
}
else
    LOG("Texture created");
When I try to create the texture with initial data, it is not created and the HRESULT carries the message 'The parameter is incorrect', without saying which one.
When I create the texture without initial data, it is created successfully.
What's wrong with the initial data? I also tried using unsigned char instead of UINT, since the format is 8 bits per texel, but the result was the same: the texture was not created.
Please help.
After some time I found the solution to the problem. I needed to add one line:
srcDesc.MipLevels = 1;
With MipLevels left at 0, D3D11 allocates the full mip chain and then expects one D3D11_SUBRESOURCE_DATA entry per mip level; passing a single entry is what made CreateTexture2D fail with 'The parameter is incorrect'. With this change the texture was finally created with initial data.
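For completeness, here is a sketch of the corrected setup (it reuses sri_w, sri_h, s_device, and pShadingRateSurface from the question; swapping the UINT buffer for a byte buffer is my change, since DXGI_FORMAT_R8_UINT stores exactly one byte per texel):

#include <cstdint>
#include <vector>

D3D11_TEXTURE2D_DESC srcDesc = {};              // zero everything first
srcDesc.Width = sri_w;
srcDesc.Height = sri_h;
srcDesc.MipLevels = 1;                          // the missing line: one mip level, one initial-data entry
srcDesc.ArraySize = 1;
srcDesc.Format = DXGI_FORMAT_R8_UINT;
srcDesc.SampleDesc.Count = 1;
srcDesc.Usage = D3D11_USAGE_DEFAULT;
srcDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

std::vector<uint8_t> zeros(sri_w * sri_h, 0);   // one byte per R8_UINT texel, no manual free needed
D3D11_SUBRESOURCE_DATA initialData = {};
initialData.pSysMem = zeros.data();
initialData.SysMemPitch = sri_w;                // row pitch in bytes

HRESULT hr = s_device->CreateTexture2D(&srcDesc, &initialData, &pShadingRateSurface);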
I am rewriting an iOS particle filter library, available on Bitbucket, from Objective-C into Swift, and I have a question about a piece of Objective-C syntax that I cannot understand.
The code goes as follows:
- (void)setRssi:(NSInteger)rssi {
    _rssi = rssi;
    // Ignore zeros in average, StdDev -- we clear the value before setting it to
    // prevent old values from hanging around if there's no reading
    if (rssi == 0) {
        self.meters = 0;
        return;
    }
    self.meters = [self metersFromRssi:rssi];
    NSInteger* pidx = self.rssiBuffer;
    *(pidx+self.bufferIndex++) = rssi;
    if (self.bufferIndex >= RSSIBUFFERSIZE) {
        self.bufferIndex %= RSSIBUFFERSIZE;
        self.bufferFull = YES;
    }
    if (self.bufferFull) {
        // Only calculate trailing mean and Std Dev when we have enough data
        double accumulator = 0;
        for (NSInteger i = 0; i < RSSIBUFFERSIZE; i++) {
            accumulator += *(pidx+i);
        }
        self.meanRssi = accumulator / RSSIBUFFERSIZE;
        self.meanMeters = [self metersFromRssi:self.meanRssi];
        accumulator = 0;
        for (NSInteger i = 0; i < RSSIBUFFERSIZE; i++) {
            NSInteger difference = *(pidx+i) - self.meanRssi;
            accumulator += difference*difference;
        }
        self.stdDeviationRssi = sqrt(accumulator / RSSIBUFFERSIZE);
        self.meanMetersVariance = ABS(
            [self metersFromRssi:self.meanRssi]
            - [self metersFromRssi:self.meanRssi+self.stdDeviationRssi]
        );
    }
}
The class continues with more code and functions that are not important here. What I do not understand are these two lines:
NSInteger* pidx = self.rssiBuffer;
*(pidx+self.bufferIndex++) = rssi;
As far as I can tell, pidx is initialized from the previously defined buffer, and the next line then sets something in that buffer to the rssi value passed into the method.
I assume the * has something to do with dereferencing, but I just can't figure out the purpose of this line. pidx is only used in this method, for calculating the trailing mean and standard deviation.
Let me explain that code:
NSInteger* pidx = self.rssiBuffer; means you are taking a pointer to the first element of the buffer.
*(pidx+self.bufferIndex++) = rssi; means you are setting the element of the buffer at index self.bufferIndex to rssi, and then increasing bufferIndex by 1. Thanks to @Jakub Vano for pointing it out.
In C, it would look like this:
int rssiBuffer[1000]; // assume we have a buffer like that
rssiBuffer[bufferIndex++] = rssi;
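If the pointer syntax itself is the sticking point, here is a standalone C snippet (variable names are mine) showing that array indexing and pointer arithmetic are the same operation:

#include <stdio.h>

int main(void)
{
    int buffer[4] = {0, 0, 0, 0};
    int bufferIndex = 0;
    int rssi = -70;

    int* pidx = buffer;                // pointer to buffer[0]
    *(pidx + bufferIndex++) = rssi;    // identical to: buffer[bufferIndex] = rssi; bufferIndex++;

    printf("%d %d\n", buffer[0], bufferIndex);   // prints: -70 1
    return 0;
}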
I am having trouble with this operation and can't work it out. What am I missing?
vector<Mat> blobC;
for (unsigned int i = 0; i < blobCFinal.size(); i++)
{
    blobC.at(i) = blobCFinal.at(i);
}
where
vector<IplImage*> blobCFinal;
If I'm not mistaken, the usual way of converting a single image is like this:
IplImage* blobCFinal;
Mat blobC(blobCFinal);
Answer: thanks to @rotating_image, this works:
vector<Mat> blobC;
for (unsigned int i = 0; i < blobCFinal.size(); i++)
{
    Mat dummy = Mat(blobCFinal[i]);
    blobC.push_back(dummy);
}
Try this...
vector<Mat> blobC;
vector<IplImage*> blobCFinal;
//some processing
for (unsigned int i = 0; i < blobCFinal.size(); i++)
{
    Mat dummy = Mat(blobCFinal[i]);
    blobC.push_back(dummy.clone()); // blobC starts empty, so append; clone() gives an independent copy
}
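A version-dependent side note: the Mat(IplImage*) constructor used above was removed in OpenCV 3, so on newer releases the supported conversion is cv::cvarrToMat (declared, if I remember correctly, in the C-API compatibility header); passing copyData = true makes each Mat own its pixels even after the IplImages are released:

#include <opencv2/core/core_c.h>

vector<Mat> blobC;
for (unsigned int i = 0; i < blobCFinal.size(); i++)
{
    blobC.push_back(cv::cvarrToMat(blobCFinal[i], /*copyData=*/true));
}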
In Weka, the class StringToWordVector defines a method called setNormalizeDocLength, which normalizes the word frequencies of a document. My questions are:
What is meant by "normalizing the word frequencies of a document"?
How does Weka do this?
A practical example would help me best. Thanks in advance.
Looking in the Weka source, this is the method that does the normalising:
private void normalizeInstance(Instance inst, int firstCopy) throws Exception
{
    double docLength = 0;
    if (m_AvgDocLength < 0)
    {
        throw new Exception("Average document length not set.");
    }
    // Compute length of document vector
    for (int j = 0; j < inst.numValues(); j++)
    {
        if (inst.index(j) >= firstCopy)
        {
            docLength += inst.valueSparse(j) * inst.valueSparse(j);
        }
    }
    docLength = Math.sqrt(docLength);
    // Normalize document vector
    for (int j = 0; j < inst.numValues(); j++)
    {
        if (inst.index(j) >= firstCopy)
        {
            double val = inst.valueSparse(j) * m_AvgDocLength / docLength;
            inst.setValueSparse(j, val);
            if (val == 0)
            {
                System.err.println("setting value " + inst.index(j) + " to zero.");
                j--;
            }
        }
    }
}
The most relevant part is
double val = inst.valueSparse(j) * m_AvgDocLength / docLength;
inst.setValueSparse(j, val);
So the normalisation is value = currentValue * averageDocumentLength / actualDocumentLength, where actualDocumentLength is the Euclidean length of the document vector (the square root of the sum of squared frequencies computed in the first loop).
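A worked example with invented numbers: suppose a document's word-frequency vector is (3, 4). Its Euclidean length is docLength = sqrt(3*3 + 4*4) = 5. If the average document length m_AvgDocLength is 10, every value is multiplied by 10 / 5 = 2, giving (6, 8). After normalisation, every document vector has the same Euclidean length, m_AvgDocLength, so long documents no longer dominate simply because they contain more words.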
ArrayList a = new ArrayList();
for (int i = 0; i < a.size(); i++)
{
    float f = float(a.get(i)); // ERROR : can't convert Object to float
}
Processing is a simplified Java, so you can use the Java syntax:
ArrayList<Float> a = new ArrayList<Float>();
a.add(1.0f); // "Autobox" a float into a Float object, adding it to the list
for (int i = 0; i < a.size(); i++)
{
    float f = a.get(i); // "Unbox" the Float object back to a primitive float
}
Look up autoboxing/unboxing.
This must not be .NET; otherwise it would be i < a.Count.
What about float f = (float)(a.get(i));?