Why are the weights in darknet initialized this way? - machine-learning

Hi there!
I am studying Mr. Redmon's darknet code from https://github.com/pjreddie/darknet
I found that the weights of a connected layer are initialized like this:
// file: src/connected_layer.c
// function: make_connected_layer
float scale = sqrt(2./inputs);
for(i = 0; i < outputs*inputs; ++i){
l.weights[i] = scale*rand_uniform(-1, 1);
}
and the weights of a convolutional layer are initialized like this:
// file: src/convolutional_layer.c
// function: make_convolutional_layer
float scale = sqrt(2./(size*size*c/l.groups));
for(i = 0; i < l.nweights; ++i) {
l.weights[i] = scale*rand_normal();
}
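If it helps, this is my own working for the variance of each draw (please correct me if it is off): the connected layer uses $\mathrm{Var}[\text{scale} \cdot \mathrm{rand\_uniform}(-1,1)] = \text{scale}^2/3 = \frac{2}{3 \cdot \text{inputs}}$, and the convolutional layer uses $\mathrm{Var}[\text{scale} \cdot \mathrm{rand\_normal}()] = \text{scale}^2 = \frac{2}{size \cdot size \cdot c / groups}$, so in both cases the scale is chosen as a function of the number of inputs feeding each output (the fan-in).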
Could you tell me the principle behind this code, please? Links to resources such as related papers are also welcome.
Thanks a lot!

Related

Autodiff for Jacobian derivative with respect to individual joint angles

I am trying to compute $\frac{\partial J}{\partial q_i}$ in Drake C++ for a manipulator, and as per my search, the best approach seems to be using the autodiff functionality. I was not able to fully understand the autodiff approach from the resources that I found, so I apologize if my approach is not clear enough. I have based my understanding on some already-asked questions on the forum regarding autodiff, as well as https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html as a reference.
As I want to calculate $\frac{\partial J}{\partial q_i}$, the return type will be a tensor, i.e. 3 * 7 * 7 (or 6 * 7 * 7 depending on the spatial Jacobian). I can think of using std::vector<Eigen::MatrixXd> to allocate the output, or alternatively just doing one $q_i$ at a time and computing the respective Jacobian for the autodiff. In either case, I was struggling to pass it in when initializing the Jacobian function.
I did the following to initialize autodiff
std::unique_ptr<multibody::MultibodyPlant<AutoDiffXd>> mplant_autodiff = systems::System<double>::ToAutoDiffXd(mplant);
std::unique_ptr<systems::Context<AutoDiffXd>> mContext_autodiff = mplant_autodiff->CreateDefaultContext();
mContext_autodiff->SetTimeStateAndParametersFrom(*mContext);
const multibody::Frame<AutoDiffXd>* mFrame_EE_autodiff = &mplant_autodiff->GetBodyByName(mEE_link).body_frame();
const multibody::Frame<AutoDiffXd>* mWorld_Frame_autodiff = &(mplant_autodiff->world_frame());
//Initialize the q as autodiff vector
drake::AutoDiffVecXd q_autodiff = drake::math::InitializeAutoDiff(mq_robot);
MatrixX<AutoDiffXd> mJacobian_autodiff; // Linear Jacobian matrix.
mplant_autodiff->SetPositions(mContext_autodiff.get(), q_autodiff);
mplant_autodiff->CalcJacobianTranslationalVelocity(*mContext_autodiff,
    multibody::JacobianWrtVariable::kQDot,
    *mFrame_EE_autodiff,
    Eigen::Vector3d::Zero(),
    *mWorld_Frame_autodiff,
    *mWorld_Frame_autodiff,
    &mJacobian_autodiff);
However, as far as I understand, InitializeAutoDiff initializes the derivatives to the identity matrix, whereas I want $\frac{\partial J}{\partial q_i}$, so is there a better way to do it? In addition, I get error messages when I try to call the Jacobian function. Is there a way to address this problem, both for computing $\frac{\partial J}{\partial q_i}$ for each $q_i$ by changing $q_i$ in a for loop, and for directly getting the result as a tensor? My apologies if I am doing something totally tangent to the correct approach. Thank you in anticipation.
However, as far as I understand, InitializeAutoDiff initializes the derivatives to the identity matrix, whereas I want $\frac{\partial J}{\partial q_i}$, so is there a better way to do it?
That is correct. When you call InitializeAutoDiff and compute mJacobian_autodiff, you get a matrix of AutoDiffXd. Each AutoDiffXd has a value() function that stores the double value, and a derivatives() function that stores the gradient as an Eigen::VectorXd. We have
mJacobian_autodiff(i, j).value() = J(i, j)
mJacobian_autodiff(i, j).derivatives()(k) = ∂J(i, j)/∂q(k)
So if you want to create a std::vector<Eigen::MatrixXd> such that the k'th entry of this vector stores the matrix ∂J/∂q(k), then here is some code:
std::vector<Eigen::MatrixXd> dJdq(q_autodiff.rows());
for (int i = 0; i < q_autodiff.rows(); ++i) {
    dJdq[i].resize(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
}
for (int i = 0; i < q_autodiff.rows(); ++i) {
    // dJidq stores the gradient ∂J.col(i)/∂q, namely dJidq(j, k) = ∂J(j, i)/∂q(k)
    auto dJidq = ExtractGradient(mJacobian_autodiff.col(i));
    for (int j = 0; j < static_cast<int>(dJdq.size()); ++j) {
        dJdq[j].col(i) = dJidq.col(j);
    }
}
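For example, after this loop dJdq[2](0, 1) holds ∂J(0, 1)/∂q(2).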
Compute ∂J/∂q(i) for a single i
If you do not want to compute ∂J/∂q(i) for all i, but only for one specific i, you can change the initialization of q_autodiff from InitializeAutoDiff to this:
AutoDiffVecXd q_autodiff(q.rows());
for (int k = 0; k < q_autodiff.rows(); ++k) {
    q_autodiff(k).value() = q(k);
    q_autodiff(k).derivatives() = Vector1d::Zero();
    if (k == i) {
        q_autodiff(k).derivatives()(0) = 1;
    }
}
namely q_autodiff stores the gradient ∂q(k)/∂q(i), which is 0 for all k != i and 1 when k == i. You can then compute mJacobian_autodiff using your current code. Now mJacobian_autodiff(m, n).derivatives() stores the gradient ∂J(m, n)/∂q(i) for that specific i. You can extract this gradient as
Eigen::MatrixXd dJdqi(mJacobian_autodiff.rows(), mJacobian_autodiff.cols());
for (int m = 0; m < dJdqi.rows(); ++m) {
    for (int n = 0; n < dJdqi.cols(); ++n) {
        dJdqi(m, n) = mJacobian_autodiff(m, n).derivatives()(0);
    }
}

Transform a point from map A to map B

I'm trying to transform a point from one map to another. I've tried some OpenCV sample code for getAffineTransform(), getPerspectiveTransform(), warpAffine() and findHomography(), but there are always gaps of some kind in my transformation mesh. The feature points are usually detected at very different positions, so I think I need a good interpolation method.
About the maps:
Both maps are images containing human body parts and human skin. I'm using the OpenCV feature detection/matching algorithms to get a set of corresponding points in both maps. The tricky thing is that they contain arms and feet, too. Feature points on arms/feet can have much bigger offsets than points on the torso.
The goal:
I want to transform any point on map A as accurately as possible to the equivalent position on map B.
My current approach is to find the three closest feature points to my original point on map A and construct a triangle. Afterwards I transform this triangle to the same three feature points on map B. That works nicely if I have a lot of feature points close to my original point, but in larger areas without feature points I get problems with the interpolation.
Is this a good way to do so? Or is there a much better solution?
My favorite would be the construction of a complete transformation map for both images, but I'm not sure how to do this. Is it possible at all?
Thanks a lot for any advice!
Simple sketch of the transformation (I'm trying to find the points X1 to X3 from the left image in the right image):
Sketch of a sample transformation
Sample for homography (OpenCVSharp):
Mat imgA = new Mat(@"d:\Mesh\Left2.jpg", ImreadModes.Color);
Mat imgB = new Mat(@"d:\Mesh\Right2.jpg", ImreadModes.Color);
Cv2.Resize(imgA, imgA, new Size(512, 341));
Cv2.Resize(imgB, imgB, new Size(512, 341));
SURF detector = SURF.Create(500.0);
KeyPoint[] keypointsA = detector.Detect(imgA);
KeyPoint[] keypointsB = detector.Detect(imgB);
SIFT extractor = SIFT.Create();
Mat descriptorsA = new Mat();
Mat descriptorsB = new Mat();
extractor.Compute(imgA, ref keypointsA, descriptorsA);
extractor.Compute(imgB, ref keypointsB, descriptorsB);
BFMatcher matcher = new BFMatcher(NormTypes.L2, true);
DMatch[] matches = matcher.Match(descriptorsA, descriptorsB);
double minDistance = 10000.0;
double maxDistance = 0.0;
for (int i = 0; i < matches.Length; ++i)
{
    double distance = matches[i].Distance;
    if (distance < minDistance)
    {
        minDistance = distance;
    }
    if (distance > maxDistance)
    {
        maxDistance = distance;
    }
}
List<DMatch> goodMatches = new List<DMatch>();
for (int i = 0; i < matches.Length; ++i)
{
    if (matches[i].Distance <= 3.0 * minDistance &&
        Math.Abs(keypointsA[matches[i].QueryIdx].Pt.Y - keypointsB[matches[i].TrainIdx].Pt.Y) < 30)
    {
        goodMatches.Add(matches[i]);
    }
}
Mat output = new Mat();
Cv2.DrawMatches(imgA, keypointsA, imgB, keypointsB, goodMatches.ToArray(), output);
List<Point2f> goodA = new List<Point2f>();
List<Point2f> goodB = new List<Point2f>();
for (int i = 0; i < goodMatches.Count; i++)
{
    goodA.Add(keypointsA[goodMatches[i].QueryIdx].Pt);
    goodB.Add(keypointsB[goodMatches[i].TrainIdx].Pt);
}
InputArray goodInputA = InputArray.Create<Point2f>(goodA);
InputArray goodInputB = InputArray.Create<Point2f>(goodB);
Mat h = Cv2.FindHomography(goodInputA, goodInputB);
Point2f centerA = new Point2f(imgA.Cols / 2.0f, imgA.Rows / 2.0f);
output.DrawMarker((int)centerA.X, (int)centerA.Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Point2f[] transformedPoints = Cv2.PerspectiveTransform(new Point2f[] { centerA }, h);
output.DrawMarker((int)transformedPoints[0].X + imgA.Cols, (int)transformedPoints[0].Y, Scalar.Red, MarkerStyle.Cross, 50, LineTypes.Link8, 5);
Code snippet for perspective transform (different approach, OpenCVSharp):
pointsA[0] = new Point(trisA[i].Item0, trisA[i].Item1);
pointsA[1] = new Point(trisA[i].Item2, trisA[i].Item3);
pointsA[2] = new Point(trisA[i].Item4, trisA[i].Item5);
pointsB[0] = new Point(trisB[i].Item0, trisB[i].Item1);
pointsB[1] = new Point(trisB[i].Item2, trisB[i].Item3);
pointsB[2] = new Point(trisB[i].Item4, trisB[i].Item5);
Mat transformation = Cv2.GetAffineTransform(pointsA, pointsB);
InputArray inputSource = InputArray.Create<Point2f>(new Point2f[] { new Point2f(10f, 50f) });
Mat outputMat = new Mat();
Cv2.PerspectiveTransform(inputSource, outputMat, transformation);
Mat.Indexer<Point2f> indexer = outputMat.GetGenericIndexer<Point2f>();
var target = indexer[0, 0];

ID3D11DeviceContext::DrawIndexed() Failed

My program is a DirectX program that draws a container cube with smaller cubes inside it... these smaller cubes fall over time, I hope you understand what I mean...
The program isn't complete yet... it should draw only the container for now... but it draws nothing... only the background color is visible... I only included what I think is needed...
These are the routines that initialize the program:
bool Game::init(HINSTANCE hinst, HWND _hw){
    Directx11::init(hinst, _hw);
    return LoadContent();
}
Directx11::init()
bool Directx11::init(HINSTANCE hinst,HWND hw){
_hinst=hinst;_hwnd=hw;
RECT rc;
GetClientRect(_hwnd,&rc);
height= rc.bottom - rc.top;
width = rc.right - rc.left;
UINT flags=0;
#ifdef _DEBUG
flags |=D3D11_CREATE_DEVICE_DEBUG;
#endif
HR(D3D11CreateDevice(0,_driverType,0,flags,0,0,D3D11_SDK_VERSION,&d3dDevice,&_featureLevel,&d3dDeviceContext));
if (d3dDevice == 0 || d3dDeviceContext == 0)
return 0;
DXGI_SWAP_CHAIN_DESC sdesc;
ZeroMemory(&sdesc,sizeof(DXGI_SWAP_CHAIN_DESC));
sdesc.Windowed=true;
sdesc.BufferCount=1;
sdesc.BufferDesc.Format=DXGI_FORMAT_R8G8B8A8_UNORM;
sdesc.BufferDesc.Height=height;
sdesc.BufferDesc.Width=width;
sdesc.BufferDesc.Scaling=DXGI_MODE_SCALING_UNSPECIFIED;
sdesc.BufferDesc.ScanlineOrdering=DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
sdesc.OutputWindow=_hwnd;
sdesc.BufferDesc.RefreshRate.Denominator=1;
sdesc.BufferDesc.RefreshRate.Numerator=60;
sdesc.Flags=0;
sdesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
if (m4xMsaaEnable)
{
    sdesc.SampleDesc.Count=4;
    sdesc.SampleDesc.Quality=m4xMsaaQuality-1;
}
else
{
    sdesc.SampleDesc.Count=1;
    sdesc.SampleDesc.Quality=0;
}
IDXGIDevice *Device=0;
HR(d3dDevice->QueryInterface(__uuidof(IDXGIDevice),reinterpret_cast <void**> (&Device)));
IDXGIAdapter*Ad=0;
HR(Device->GetParent(__uuidof(IDXGIAdapter),reinterpret_cast <void**> (&Ad)));
IDXGIFactory* fac=0;
HR(Ad->GetParent(__uuidof(IDXGIFactory),reinterpret_cast <void**> (&fac)));
fac->CreateSwapChain(d3dDevice,&sdesc,&swapchain);
ReleaseCOM(Device);
ReleaseCOM(Ad);
ReleaseCOM(fac);
ID3D11Texture2D *back = 0;
HR(swapchain->GetBuffer(0,__uuidof(ID3D11Texture2D),reinterpret_cast <void**> (&back)));
HR(d3dDevice->CreateRenderTargetView(back,0,&RenderTarget));
D3D11_TEXTURE2D_DESC Tdesc;
ZeroMemory(&Tdesc,sizeof(D3D11_TEXTURE2D_DESC));
Tdesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
Tdesc.ArraySize = 1;
Tdesc.Format= DXGI_FORMAT_D24_UNORM_S8_UINT;
Tdesc.Height= height;
Tdesc.Width = width;
Tdesc.Usage = D3D11_USAGE_DEFAULT;
Tdesc.MipLevels=1;
if (m4xMsaaEnable)
{
    Tdesc.SampleDesc.Count=4;
    Tdesc.SampleDesc.Quality=m4xMsaaQuality-1;
}
else
{
    Tdesc.SampleDesc.Count=1;
    Tdesc.SampleDesc.Quality=0;
}
HR(d3dDevice->CreateTexture2D(&Tdesc,0,&depthview));
HR(d3dDevice->CreateDepthStencilView(depthview,0,&depth));
d3dDeviceContext->OMSetRenderTargets(1,&RenderTarget,depth);
D3D11_VIEWPORT vp;
vp.TopLeftX=0.0f;
vp.TopLeftY=0.0f;
vp.Width = static_cast <float> (width);
vp.Height= static_cast <float> (height);
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
d3dDeviceContext->RSSetViewports(1,&vp);
return true;
}
SetBuild() prepares the matrices inside the container for the smaller cubes... I didn't program it to draw the smaller cubes yet.
And this is the function that draws the scene:
void Game::Render(){
d3dDeviceContext->ClearRenderTargetView(RenderTarget,reinterpret_cast <const float*> (&Colors::LightSteelBlue));
d3dDeviceContext->ClearDepthStencilView(depth,D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL,1.0f,0);
d3dDeviceContext-> IASetInputLayout(_layout);
d3dDeviceContext-> IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dDeviceContext->IASetIndexBuffer(indices,DXGI_FORMAT_R32_UINT,0);
UINT strides=sizeof(Vertex),off=0;
d3dDeviceContext->IASetVertexBuffers(0,1,&vertices,&strides,&off);
D3DX11_TECHNIQUE_DESC des;
Tech->GetDesc(&des);
Floor * Lookup; /* a variable used to look up inside the matrices structure (Floor contains XMMATRIX Pieces[9]) */
std::vector<XMFLOAT4X4> filled; // saves the matrices of the smaller cubes
XMMATRIX V=XMLoadFloat4x4(&View),P = XMLoadFloat4x4(&Proj);
XMMATRIX vp = V * P;
XMMATRIX wvp;
for (UINT i = 0; i < des.Passes; i++)
{
    d3dDeviceContext->RSSetState(BuildRast);
    wvp = XMLoadFloat4x4(&(B.Memory[0].Pieces[0])) * vp; // Loading The Matrix at translation(0,0,0)
    HR(ShadeMat->SetMatrix(reinterpret_cast<float*> (&wvp)));
    HR(Tech->GetPassByIndex(i)->Apply(0,d3dDeviceContext));
    d3dDeviceContext->DrawIndexed(build_ind_count,build_ind_index,build_vers_index);
    d3dDeviceContext->RSSetState(PieseRast);
    UINT r1=B.GetSize(),r2=filled.size();
    for (UINT j = 0; j < r1; j++)
    {
        Lookup = &B.Memory[j];
        for (UINT r = 0; r < Lookup->filledindeces.size(); r++)
        {
            filled.push_back(Lookup->Pieces[Lookup->filledindeces[r]]);
        }
    }
    for (UINT j = 0; j < r2; j++)
    {
        ShadeMat->SetMatrix(reinterpret_cast<const float*> (&filled[i]));
        Tech->GetPassByIndex(i)->Apply(0,d3dDeviceContext);
        d3dDeviceContext->DrawIndexed(piese_ind_count,piese_ind_index,piese_vers_index);
    }
}
HR(swapchain->Present(0,0));
}
Thanks in advance.
One bug in your program appears to be that you're using i, the index of the current pass, as an index into the filled vector, when you should apparently be using j.
Another apparent bug is that in the loop where you are supposed to be iterating over the elements of filled, you're not iterating over all of them. The value r2 is set to the size of filled before you append anything to it during that pass. During the first pass this means that nothing will be drawn by this loop. If your technique only has one pass then this means that the second DrawIndexed call in your code will never be executed.
It also appears that you should only be adding matrices to filled once, regardless of the number of passes the technique has. You should consider whether your code is actually meant to work with techniques that have multiple passes.
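As a rough sketch of how that restructuring could look (reusing the names from your snippet and only showing the part around the two loops; this is just an illustration of the points above, not tested code):
// Build the list of per-cube matrices once, before the pass loop.
std::vector<XMFLOAT4X4> filled;
for (UINT j = 0; j < B.GetSize(); j++)
{
    Floor* lookup = &B.Memory[j];
    for (UINT r = 0; r < lookup->filledindeces.size(); r++)
    {
        filled.push_back(lookup->Pieces[lookup->filledindeces[r]]);
    }
}
for (UINT i = 0; i < des.Passes; i++)
{
    // ... draw the container exactly as in your current code ...
    d3dDeviceContext->RSSetState(PieseRast);
    for (UINT j = 0; j < filled.size(); j++) // index with j, not the pass index i
    {
        ShadeMat->SetMatrix(reinterpret_cast<const float*>(&filled[j]));
        Tech->GetPassByIndex(i)->Apply(0, d3dDeviceContext);
        d3dDeviceContext->DrawIndexed(piese_ind_count, piese_ind_index, piese_vers_index);
    }
}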

OpenLayers: Zoom multiple layers to best common extent?

Is there an easy way (other than getting layer extents separately and doing the calculation) to group the layers and zoom to the extent that is best for displaying shapes on all of the grouped layers?
Solution:
bounds = @get('siblingsLayer').getDataExtent()
bounds.extend(@get('vectorLayer').getDataExtent())
bounds.extend(@get('parentLayer').getDataExtent())
@get('map').zoomToExtent(bounds)
This one worked for me.
Code:
var allLayers = map.getLayers();
var length = allLayers.getLength();
var layerArray = ['Layer1', 'Layer2', ..., 'LayerN'];
var extent = ol.extent.createEmpty();
for (var i = 0; i < length; i++) {
    var existingLayer = allLayers.item(i);
    for (var j = 0; j < layerArray.length; j++) {
        if (existingLayer.get('title') == layerArray[j]) {
            ol.extent.extend(extent, existingLayer.getSource().getExtent());
        }
    }
}
map.getView().fit(extent, map.getSize());
Example:
I have added two layers, with the view pointed at only one of them.
After calling the API, my view updated and focused on both layers.

Histogram Smoothing

I have a probably pretty simple question, but I am still not sure!
Actually I only want to smooth a histogram, and I am not sure which of the following two methods is correct. Would I do it like this:
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;
vector<double> tmpVect(histogram->size());
for (unsigned int i = 0; i < histogram->size(); i++)
    tmpVect[i] = (*histogram)[i];
for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += tmpVect[bin-1+i]*mask[i];
    }
    (*histogram)[bin] = smoothedValue;
}
Or would you usually do it like this?
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;
for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += (*histogram)[bin-1+i]*mask[i];
    }
    (*histogram)[bin] = smoothedValue;
}
My question is: is it reasonable to copy the histogram into an extra vector first, so that when I smooth at bin i I can use the original i-1 value, or would I simply do smoothedValue += (*histogram)[bin-1+i]*mask[i];, so that I use the already-smoothed i-1 value instead of the original one?
Regards & Thanks for a reply.
Your intuition is right: you need a temporary vector. Otherwise, you will end up using partly old values, and partly new values, and the result will not be correct. Try it yourself on paper with a simple example.
There are two ways you can write this algorithm:
Copy the data to a temporary vector first; then read from that one, and write to histogram. This is what you did in your first code fragment.
Read from histogram and write to a temporary vector; then copy from the temporary vector back to histogram.
To prevent needless copying of data, you can use vector::swap. This is an extremely fast operation that swaps the contents of two vectors. Using strategy 2 above, this would result in:
vector<double> mask(3);
mask[0] = 0.25; mask[1] = 0.5; mask[2] = 0.25;
vector<double> newHistogram(*histogram); // start from a copy so the first and last bins keep their original values
for (int bin = 1; bin < histogram->size()-1; bin++) {
    double smoothedValue = 0;
    for (int i = 0; i < mask.size(); i++) {
        smoothedValue += (*histogram)[bin-1+i]*mask[i];
    }
    newHistogram[bin] = smoothedValue;
}
histogram->swap(newHistogram);
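For completeness, here is a small self-contained example of the same idea wrapped in a function (the function name and the sample data are my own, just for illustration):
#include <cstdio>
#include <vector>

// Smooth in place with the mask [0.25, 0.5, 0.25]; the first and last bins are left unchanged.
void SmoothHistogram(std::vector<double>& histogram)
{
    const double mask[3] = {0.25, 0.5, 0.25};
    std::vector<double> newHistogram(histogram); // copy, so the edge bins keep their values
    for (std::size_t bin = 1; bin + 1 < histogram.size(); ++bin) {
        double smoothedValue = 0;
        for (std::size_t i = 0; i < 3; ++i) {
            smoothedValue += histogram[bin - 1 + i] * mask[i];
        }
        newHistogram[bin] = smoothedValue;
    }
    histogram.swap(newHistogram);
}

int main()
{
    std::vector<double> histogram = {0, 4, 0, 8, 0, 4, 0};
    SmoothHistogram(histogram);
    for (double v : histogram) std::printf("%g ", v); // prints: 0 2 3 4 3 2 0
    std::printf("\n");
    return 0;
}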
