What's the reason for this time difference? - opencv

There is some problem with my code. In the following code:
GainDetailMatI is a Mat holding a 9792x2448 matrix;
ContrastGainBound4096x and ContrastGainLayerI are int.
Platform: Android 4.4, NDK gcc 4.9
A:
Mat plus = ContrastGainLayerI * min(ContrastGainBound4096x, max(0, GainDetailMatI - 4096.0));
B:
Mat t=max(0, GainDetailMatI - 4096.0);
Mat plus = ContrastGainLayerI * min(ContrastGainBound4096x, t);
A takes 13 milliseconds more than B.
I disabled gcc optimization by setting APP_OPTIM := debug in Application.mk.
Does anyone know the reason?
I think maybe max(0, GainDetailMatI - 4096.0) returns a MatExpr,
and t = max(0, GainDetailMatI - 4096.0); converts that MatExpr to a Mat.
Maybe this is the reason?
Thanks a lot!
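A minimal timing sketch of the two variants, assuming hypothetical matrix contents and using cv::getTickCount rather than the original harness, to check whether evaluating the intermediate MatExpr into a Mat is where the extra time goes:

#include <opencv2/core.hpp>
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical stand-ins for the matrices and gains in the question.
    cv::Mat GainDetailMatI(2448, 9792, CV_32F, cv::Scalar(5000));
    int ContrastGainBound4096x = 1024, ContrastGainLayerI = 2;

    int64_t t0 = cv::getTickCount();
    cv::Mat plusA = ContrastGainLayerI *
        cv::min(ContrastGainBound4096x, cv::max(0, GainDetailMatI - 4096.0));  // variant A
    int64_t t1 = cv::getTickCount();
    cv::Mat t = cv::max(0, GainDetailMatI - 4096.0);  // MatExpr evaluated into a Mat here
    cv::Mat plusB = ContrastGainLayerI * cv::min(ContrastGainBound4096x, t);   // variant B
    int64_t t2 = cv::getTickCount();

    double ms = 1000.0 / cv::getTickFrequency();
    std::cout << "A: " << (t1 - t0) * ms << " ms, B: " << (t2 - t1) * ms << " ms\n";
    return 0;
}

Both variants have to evaluate the same expression once; timing them side by side on the target device should show whether the explicit temporary itself costs anything.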

In example B you first store the object in t, then retrieve it for use in the second part of your code. In example A you skip the storing and retrieving, which can make the code more efficient. While this shows that putting everything on one line is often more efficient, keep in mind that readability has a lot of value. More info on Java performance can be found on the wiki: https://en.wikipedia.org/wiki/Java_performance#Compressed_Oops

Related

C++ vector with iterator can't be destroyed?

Recently I have faced a problem: while the program is running, its memory keeps increasing, and when the program is closed the memory returns to a normal level. Obviously, it's a memory leak. After some work, I located the code responsible, but I don't know why it leaks. The program's workflow is simple:
first, use the lidar API to get point cloud and image data;
then transport the data to the next tbb flow graph node to process it;
finally, use the open3d API to visualize it.
In the first step, the lidar's own API uses asio to asynchronously invoke callback functions that deliver the data, so I create some tbb concurrent_queue instances to store the data, and an align function to match cloud and image by timestamp. The problem is in the align function. In that function, I create a vector<shared_ptr<open3d::..::PointCloud>> and use an iterator to store the point cloud elements. However, I found that when the function completes, the shared_ptr use count doesn't decrease. A similar but simpler example looks like this:
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

std::pair<std::shared_ptr<int>, int> helper() {
    auto a = std::make_shared<int>(90);
    auto c = 100;
    std::vector<std::pair<std::shared_ptr<int>, int>> container;
    container.reserve(5);             // capacity changes, size stays 0
    auto iter = container.begin();    // begin() == end() here
    for (int i = 0; i < 3; i++) {
        *iter = std::make_pair(a, c); // writes past the vector's size: undefined behavior
        iter++;
    }
    return *(iter - 1);
}

int main() {
    auto b = helper();
    std::cout << "shared_ptr use count: " << std::get<0>(b).use_count() << std::endl;
    return 0;
}
On Ubuntu 20.04 + gcc 9.4, the printed result is shared_ptr use count: 4.
Why isn't the vector automatically destroyed when the function completes? I hope someone can kindly explain this problem.
Thanks #Retired Ninja! The root of the problem is that vector::reserve only reserves capacity; it does not change the vector's size, so after reserve the size is still 0 and begin() == end(). The following iterator operations therefore write into memory the vector does not consider occupied, which is undefined behavior. While the result happens to reach main with the expected value, the elements written this way are never destroyed, so the shared_ptr use count cannot drop back to 1 after the call.
To solve the problem, one can simply change reserve to resize, which actually creates the elements so the iterator points to valid objects. Or avoid the iterator entirely and just use push_back and return back().
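A minimal corrected sketch of the helper above, using the push_back variant (helper_fixed is just an illustrative name):

#include <iostream>
#include <memory>
#include <utility>
#include <vector>

std::pair<std::shared_ptr<int>, int> helper_fixed() {
    auto a = std::make_shared<int>(90);
    auto c = 100;
    std::vector<std::pair<std::shared_ptr<int>, int>> container;
    container.reserve(5);                           // optional: avoids reallocations
    for (int i = 0; i < 3; i++) {
        container.push_back(std::make_pair(a, c));  // size grows, elements are real objects
    }
    return container.back();                        // copied out before the vector is destroyed
}

int main() {
    auto b = helper_fixed();
    // The vector and `a` were destroyed when helper_fixed returned,
    // so only b's copy remains: prints "shared_ptr use count: 1".
    std::cout << "shared_ptr use count: " << b.first.use_count() << std::endl;
    return 0;
}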

no operator [] match these operands

I am adapting old code which uses CvMat. I use the constructor from CvMat:
Mat A(B); // B is a CvMat
When I write A[i][j], I get the error no operator [] match these operands.
Why? For information: B is a single-channel float matrix (from an MLData object read from a CSV file).
The documentation lists the at() method as the way to access an element.
A.at<int>(i,j); //Or whatever type you are storing.
First, you should have a look at the most basic OpenCV tutorials.
So, if you have a 3-channel BGR image (the most common case), you will have to access it like:
Vec3b & pixel = A.at<Vec3b>(y,x); // we're in row,col world here!
pixel = Vec3b(17,18,19); // at() returns a reference, so you can *set* it, too.
The 1-channel (grayscale) version would look like this:
uchar & pixel = A.at<uchar>(y,x);
Since you mention float images:
float & pixel = A.at<float>(y,x);
You can't choose the type at will; you have to use whatever is actually inside the Mat, so query A.type() first.
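A minimal sketch of that check, assuming a single-channel float Mat as in the question:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat A = cv::Mat::zeros(4, 4, CV_32FC1);   // hypothetical stand-in for the converted CvMat
    CV_Assert(A.type() == CV_32FC1);              // make sure at<float> matches the element type
    A.at<float>(2, 3) = 1.5f;                     // note the (row, col) order
    std::cout << A.at<float>(2, 3) << std::endl;  // prints 1.5
    return 0;
}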

mlpack sparse coding solution not found

I am trying to learn how to use the sparse coding algorithm with the mlpack library. When I call Encode() on my instance of mlpack::sparse_coding::SparseCoding, I get the error
[WARN] There are 63 inactive atoms. They will be reinitialized randomly.
error: solve(): solution not found
Is it simply that the algorithm cannot learn a latent representation of the data, or is it perhaps my usage? The relevant section follows.
EDIT: One line was modified to fix an unrelated error, but the original error remains.
double* Application::GetSparseCodes(arma::mat* trainingExample, int atomCount)
{
    double* latentRep = new double[atomCount];
    mlpack::sparse_coding::SparseCoding<mlpack::sparse_coding::DataDependentRandomInitializer> sc(*trainingExample, Utils::ATOM_COUNT, 1.0);
    sc.Encode(Utils::MAX_ITERATIONS);
    arma::mat& latentRepMat = sc.Codes();
    for (int i = 0; i < atomCount; i++)
        latentRep[i] = latentRepMat.at(i, 0);
    return latentRep;
}
Some relevant parameters:
const static int IMAGE_WIDTH = 20;
const static int IMAGE_HEIGHT = 20;
const static int PIXEL_COUNT = IMAGE_WIDTH * IMAGE_HEIGHT;
const static int ATOM_COUNT = 64;
const static int MAX_ITERATIONS = 100000;
This could be one of a handful of issues but given the description it's a little difficult to tell which of these it is (or if it is something else entirely). However, these three ideas should provide a good place to start:
Matrices in mlpack are column-major. That means each observation should represent a column. If you use mlpack::data::Load() to load, e.g., a CSV file (which are generally one row per observation), it will automatically transpose the dataset. SparseCoding will act oddly if you pass it transposed data. See also http://www.mlpack.org/doxygen.php?doc=matrices.html.
If there are 63 inactive atoms, then only one atom is actually active (given that ATOM_COUNT is 64). This means that the algorithm has found that the best way to represent the dictionary (at a given step) uses only one atom. This could happen if the matrix you are passing consists of all zeros.
mlpack will provide verbose output, which may also be helpful for debugging. Usually this is enabled through mlpack's CLI class when parsing command-line input, but you can turn it on directly with mlpack::Log::Info.ignoreInput = false. You may get a lot of output that way, but it will give a better look at what is going on (a short sketch follows).
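A minimal sketch of the first and third points above (hypothetical file name, assuming the mlpack 2.x-era API used in the question):

#include <mlpack/core.hpp>

int main() {
    // Enable verbose output so mlpack methods report what they are doing.
    mlpack::Log::Info.ignoreInput = false;

    // Load a CSV with one observation per row; mlpack transposes it so that
    // each observation becomes a column.
    arma::mat data;
    mlpack::data::Load("training.csv", data, true /* fatal on failure */);
    mlpack::Log::Info << data.n_rows << " dimensions, "
                      << data.n_cols << " observations." << std::endl;
    return 0;
}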
The mlpack project has its own mailing list where you may be likely to get a quicker or more comprehensive response, by the way.

Loading/Storing to XMFLOAT4 faster than using XMVECTOR?

I'm going through the DirectX Math/XNA Math library, and I got curious when I read about the alignment requirements for XMVECTOR (Now DirectX::XMVECTOR), and how it is expected of you to use XMFLOAT* for members instead, using XMLoad* and XMStore* when performing mathematical operations. I was specifically curious about the tradeoffs, so I did an experiment, as I'm sure many others have, and tested to see exactly how much you lose having to load and store the vectors for each operation. This is the resulting code:
#include <Windows.h>
#include <chrono>
#include <cstdint>
#include <DirectXMath.h>
#include <iostream>

using std::chrono::high_resolution_clock;

#define TEST_COUNT 1000000000l

int main(void)
{
    DirectX::XMVECTOR v1 = DirectX::XMVectorSet(1, 2, 3, 4);
    DirectX::XMVECTOR v2 = DirectX::XMVectorSet(2, 3, 4, 5);
    DirectX::XMFLOAT4 x{ 1, 2, 3, 4 };
    DirectX::XMFLOAT4 y{ 2, 3, 4, 5 };
    high_resolution_clock::time_point start, end;
    std::chrono::milliseconds duration;

    // Test with just the XMVECTOR
    start = high_resolution_clock::now();
    for (uint64_t i = 0; i < TEST_COUNT; i++)
    {
        v1 = DirectX::XMVectorAdd(v1, v2);
    }
    end = high_resolution_clock::now();
    duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    DirectX::XMFLOAT4 z;
    DirectX::XMStoreFloat4(&z, v1);
    std::cout << std::endl << "z = " << z.x << ", " << z.y << ", " << z.z << std::endl;
    std::cout << duration.count() << " milliseconds" << std::endl;

    // Now try with load/store
    start = high_resolution_clock::now();
    for (uint64_t i = 0; i < TEST_COUNT; i++)
    {
        v1 = DirectX::XMLoadFloat4(&x);
        v2 = DirectX::XMLoadFloat4(&y);
        v1 = DirectX::XMVectorAdd(v1, v2);
        DirectX::XMStoreFloat4(&x, v1);
    }
    end = high_resolution_clock::now();
    duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << std::endl << "x = " << x.x << ", " << x.y << ", " << x.z << std::endl;
    std::cout << duration.count() << " milliseconds" << std::endl;
}
Running a debug build yields the output:
z = 3.35544e+007, 6.71089e+007, 6.71089e+007
25817 milliseconds
x = 3.35544e+007, 6.71089e+007, 6.71089e+007
84344 milliseconds
Okay, so about thrice as slow, but does anyone really take perf tests on debug builds seriously? Here are the results when I do a release build:
z = 3.35544e+007, 6.71089e+007, 6.71089e+007
1980 milliseconds
x = 3.35544e+007, 6.71089e+007, 6.71089e+007
670 milliseconds
Like magic, XMFLOAT4 runs almost three times faster! Somehow the tables have turned. Looking at the code, this makes no sense to me; the second part runs a superset of the commands that the first part runs! There must be something going wrong, or something I am not taking into account. It is difficult to believe that the compiler was able to optimize the second part nine-fold over the much simpler, and theoretically more efficient, first part.

The only reasonable explanations I have involve either (1) cache behavior, (2) some crazy out-of-order execution that XMVECTOR can't take advantage of, (3) the compiler is making some insane optimizations, or (4) using XMVECTOR directly has some implicit inefficiency that was optimized away when using XMFLOAT4. That is, the default way the compiler loads and stores XMVECTORs from memory is less efficient than XMLoad* and XMStore*.

I attempted to inspect the disassembly, but I'm not all that familiar with x86 and/or SSE2, and Visual Studio does some crazy optimizations making it difficult to follow along with the source code. I also tried the Visual Studio performance analysis tool, but that didn't help as I can't figure out how to make it show the disassembly instead of the code. The only useful information I get out of it is that the first call to XMVectorAdd accounts for ~48.6% of all cycles while the second call to XMVectorAdd accounts for ~4.4% of all cycles.
EDIT:
After some more debugging, here is the assembly for the code that gets run inside of the loop. For the first part:
002912E0 movups xmm1,xmmword ptr [esp+18h] <-- HERE
002912E5 add ecx,1
002912E8 movaps xmm0,xmm2 <-- HERE
002912EB adc esi,0
002912EE addps xmm0,xmm1
002912F1 movups xmmword ptr [esp+18h],xmm0 <-- HERE
002912F6 jne main+60h (0291300h)
002912F8 cmp ecx,3B9ACA00h
002912FE jb main+40h (02912E0h)
And for the second part:
00291400 add ecx,1
00291403 addps xmm0,xmm1
00291406 adc esi,0
00291409 jne main+173h (0291413h)
0029140B cmp ecx,3B9ACA00h
00291411 jb main+160h (0291400h)
Note that these two loops are indeed nearly identical. The only difference is that the first for loop appears to be the one doing the loading and storing! It would appear that Visual Studio made a ton of optimizations since x and y were on the stack. After changing them both to be on the heap (so the writes must actually happen), the machine code is identical. Is this generally the case? Is there really no negative side effect to using the storage classes? Other than the fully optimized versions, I suppose.
If you define
DirectX::XMVECTOR v3 = DirectX::XMVectorSet(2, 3, 4, 5);
and use v3 instead of v1 as the result:
...
for (uint64_t i = 0; i < TEST_COUNT; i++)
{
    v3 = DirectX::XMVectorAdd(v1, v2);
}
you get code that is faster than the second part's code using XMLoadFloat4 and XMStoreFloat4.
Firstly, don't use Visual Studio's "high-resolution clock" for perf timing. You should use QueryPerformanceCounter instead. See Connect.
SIMD performance is difficult to measure in these micro tests because the overhead of loading up vector data can often dominate with such trivial ALU usage. You really need to do something substantial with the data to see the benefits. Also keep in mind that depending on your compiler settings, the compiler itself may be using the 'scalar' SIMD functionality or even auto-vectorizing such trivial code loops.
You are also seeing some issues with the way you are generating your test data. You should create something larger than a single vector on the heap and use that as your source/dest.
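A minimal sketch of that suggestion (illustrative only, with hypothetical array arguments): load from and store to heap-allocated arrays so the loads and stores cannot be optimized away the way stack locals were.

#include <DirectXMath.h>
#include <vector>

void AddArrays(const std::vector<DirectX::XMFLOAT4>& a,
               const std::vector<DirectX::XMFLOAT4>& b,
               std::vector<DirectX::XMFLOAT4>& out)
{
    // Each iteration loads two vectors from the heap, adds them, and stores the result.
    for (size_t i = 0; i < out.size(); ++i)
    {
        DirectX::XMVECTOR va = DirectX::XMLoadFloat4(&a[i]);
        DirectX::XMVECTOR vb = DirectX::XMLoadFloat4(&b[i]);
        DirectX::XMStoreFloat4(&out[i], DirectX::XMVectorAdd(va, vb));
    }
}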
PS: The best way to create 'static' XMVECTOR data is to use the XMVECTORF32 type.
static const DirectX::XMVECTORF32 v1 = { 1, 2, 3, 4 };
Note that if you want to have the load/save conversions between XMVECTOR and XMFLOATx to be "automatic", take a look at SimpleMath in the DirectX Tool Kit. You just use types like SimpleMath::Vector4 in your data structures, and the implicit conversion operators take care of calling XMLoadFloat4 / XMStoreFloat4 for you.
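A small sketch of that style, assuming the DirectX Tool Kit's SimpleMath.h header is available:

#include <SimpleMath.h>

using DirectX::SimpleMath::Vector4;

Vector4 AddVectors(const Vector4& a, const Vector4& b)
{
    // operator+ performs the XMLoadFloat4 / XMVectorAdd / XMStoreFloat4
    // sequence internally, so no explicit load/store code is needed here.
    return a + b;
}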

How to read vertices from vertex buffer in Direct3d11

I have a question regarding vertex buffers. How does one read the vertices from the vertex buffer in D3D11? I want to get a particular vertex's position for calculations, if this approach is wrong, how would one do it? The following code does not (obviously) work.
VERTEX* vert;
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBufferSphere, NULL, D3D11_MAP_READ, NULL, &ms);
vert = (VERTEX*) ms.pData;
devcon->Unmap(pVBufferSphere, NULL);
Thanks.
Where your code is wrong:
You ask the GPU to give you an address into its memory (Map()),
store this address (operator=()),
then say: "Thanks, I don't need it anymore" (Unmap()).
After Unmap(), you can't really say where your pointer points. It can point to a memory location where other data has already been allocated, or to memory on your girlfriend's laptop (just kidding =) ).
You must copy the data (all of it or part of it), not the pointer, between Map() and Unmap(): use memcpy, a for loop, anything. Put it in an array, a std::vector, a BST, whatever.
Typical mistakes that newcomers make here:
Not checking the HRESULT return value of the ID3D11DeviceContext::Map method. If Map fails, it can return whatever pointer it likes. Dereferencing such a pointer leads to undefined behavior. So, better to check the return value of any DirectX function.
Not checking the D3D11 debug output. It can clearly say what's wrong and what to do, in plain good English (clearly better than my English =) ), so you can fix the bug almost instantly.
You can only read from an ID3D11Buffer if it was created with the D3D11_CPU_ACCESS_READ CPU access flag, which means that you must also set the D3D11_USAGE_STAGING usage flag.
How we usually read from a buffer:
We don't use staging buffers for rendering/calculations: it's slow.
Instead, we copy from the main buffer (non-staging and non-CPU-readable) to a staging one (ID3D11DeviceContext::CopyResource() or ID3D11DeviceContext::CopySubresourceRegion()), and then copy the data to system memory (memcpy()).
We don't do this too much in release builds; it will harm performance.
There are two main real-life usages of staging buffers: debugging (see if the buffer contains wrong data and fix some bug in the algorithm) and reading back final non-pixel data (for example, if you are calculating scientific data in a compute shader).
In most cases you can avoid staging buffers entirely by designing your code well. Think as if CPU<->GPU were connected only one way: CPU->GPU. A short sketch of the staging readback pattern follows.
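A minimal sketch of that readback pattern (error handling trimmed; assumes <d3d11.h>, <vector>, and <cstring> are included and that dev, devcon, pVBufferSphere, and VERTEX from the question are in scope, with dev being the ID3D11Device):

// Create a staging copy of the vertex buffer that the CPU is allowed to read.
D3D11_BUFFER_DESC desc = {};
pVBufferSphere->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Buffer* staging = nullptr;
HRESULT hr = dev->CreateBuffer(&desc, nullptr, &staging);
if (SUCCEEDED(hr))
{
    // GPU-side copy from the real vertex buffer into the staging buffer.
    devcon->CopyResource(staging, pVBufferSphere);

    // Map the staging buffer and copy the vertices into system memory.
    D3D11_MAPPED_SUBRESOURCE ms;
    hr = devcon->Map(staging, 0, D3D11_MAP_READ, 0, &ms);
    if (SUCCEEDED(hr))
    {
        std::vector<VERTEX> vertices(desc.ByteWidth / sizeof(VERTEX));
        memcpy(vertices.data(), ms.pData, desc.ByteWidth);
        devcon->Unmap(staging, 0);
        // vertices is still valid after Unmap because the data was copied.
    }
    staging->Release();
}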
The following code only gets the address of the mapped resource; you didn't read anything before Unmap:
vert = (VERTEX*) ms.pData;
If you want to read data from the mapped resource, first allocate enough memory, then use memcpy to copy the data. I don't know your VERTEX structure, so I'll suppose vert is a void*; you can convert it yourself:
vert = new BYTE[ms.DepthPitch];
memcpy(vert, ms.pData, ms.DepthPitch);
Drop's answer was helpful. I figured out that the reason I wasn't able to read the buffer was that I didn't have the CPU access flag set to D3D11_CPU_ACCESS_READ before. Here is the buffer description:
D3D11_BUFFER_DESC bufferDesc;
ZeroMemory(&bufferDesc, sizeof(bufferDesc));
bufferDesc.ByteWidth = iNumElements * sizeof(T);
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
bufferDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE ;
bufferDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
bufferDesc.StructureByteStride = sizeof(T);
And then to read data I did
const ID3D11Device& device = *DXUTGetD3D11Device();
ID3D11DeviceContext& deviceContext = *DXUTGetD3D11DeviceContext();
D3D11_MAPPED_SUBRESOURCE ms;
HRESULT hr = deviceContext.Map(g_pParticles, 0, D3D11_MAP_READ, 0, &ms);
Particle* p = new Particle[g_iNumParticles];             // element size is sizeof(Particle), not sizeof(Particle*)
ZeroMemory(p, sizeof(Particle) * g_iNumParticles);
memcpy(p, ms.pData, sizeof(Particle) * g_iNumParticles); // copy the full payload, not sizeof(ms.pData)
deviceContext.Unmap(g_pParticles, 0);
delete[] p;                                              // matches new[]
I agree it's a performance hit; I only wanted to do this to be able to debug the values!
Thanks anyway! =)
