How to set up a Histogram counter in Prometheus to watch 256 values? - histogram

I am making a monitoring system using Prometheus, the prometheus-cpp library, and Grafana. I need to watch a function that returns one of 256 values:
int result = function();
// result is in the range [0, 255]
The Histogram metric seems to work best for this. But for this metric I need to set a vector that defines the bucket boundaries, that is:
std::vector<double> bucket_sec = {0.0, 1.0, 1.1, 2.0, 2.1, 3.0, 3.1, 4.0, ..., 254.1, 255.0};
Is this the right way?
Do I really need to build such a long boundary vector?
Or is there another way that is more correct?
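For reference, a per-value boundary vector like the one above does not have to be typed out by hand; it can be generated in a loop. Below is a rough sketch using prometheus-cpp's histogram builder; the metric name, the wrapper function, and the choice of i + 0.5 as upper bounds are illustrative only, not taken from the question:

#include <prometheus/histogram.h>
#include <prometheus/registry.h>

#include <memory>

// Sketch only: register a histogram with one bucket per possible result value
// and record one observation. In a real program the registry would be created
// once and exposed (for example via prometheus::Exposer), not rebuilt per call.
void observe_result(int result) {
    auto registry = std::make_shared<prometheus::Registry>();

    auto& family = prometheus::BuildHistogram()
                       .Name("function_result")
                       .Help("Distribution of function() results (0..255)")
                       .Register(*registry);

    // One bucket per possible value: the bucket with upper bound i + 0.5
    // catches exactly the observations equal to i.
    prometheus::Histogram::BucketBoundaries buckets;
    for (int i = 0; i < 256; ++i)
        buckets.push_back(i + 0.5);

    auto& histogram = family.Add({}, buckets);
    histogram.Observe(result);  // record one result
}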

Related

How do you put a simd_float4x4 matrix into a Metal buffer in Swift?

Hi, I am trying to program an app that displays simple 3D models on iOS using Xcode, and I have run into a small problem that I cannot find a solution to in Apple's documentation or in any of the forums I have looked at. I have a big array of triangle vertices in three dimensions that I want to transform into world space during the rendering process in Metal. I read in an article online that, in order to tell Metal (and thus the graphics processor) to transform the vertices during rendering, you need to put this matrix into a Metal buffer and then tell the render pass to use the buffer containing the matrix with this line of code:
renderEncoder.setVertexBuffer(ROTMATRIX, offset: 0, index: 1)
if "ROTMATRIX" is the name of the metal buffer that contains the models rotation matrix. The problem is that I do not know how to put the matrix inside this buffer. I constructed a matrix for the model called MODMAT like this:
var A = simd_float4(1, 0, 0, 0)
var B = simd_float4(0, 0, 0, 0)
var C = simd_float4(0, 0, 1, 0)
var D = simd_float4(0, 0, 0, 1)
var MODMAT = float4x4([A, B, C, D])
I tried to put the matrix MODMAT into ROTMATRIX with this line of code:
ROTMATRIX.contents().copyMemory(from: MODMAT, byteCount: 64)
But the compiler in Xcode says that it "Cannot convert value of type 'float4x4' (aka 'simd_float4x4') to expected argument type 'UnsafeRawPointer'". So I need to provide it with an unsafe raw pointer to the matrix MODMAT. Is it possible to create this kind of pointer to a matrix in Swift, and if not, how should I modify ROTMATRIX in the correct way?
Best Regards Simon
contents() returns an UnsafeMutableRawPointer. You can use either storeBytes(of:toByteOffset:as:) or storeBytes(of:as:) to store a simd_float4x4 through this pointer. In fact, you can use this to store any value of a trivial type (basically, a value that can be copied bit for bit without any reference counting and so on).
Refer to the documentation pages for UnsafeMutableRawPointer and contents.
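For example, here is a minimal sketch of that suggestion; it assumes a valid default Metal device and creates ROTMATRIX just large enough to hold one 4x4 matrix (the identity matrix stands in for the real model matrix):

import Metal
import simd

let device = MTLCreateSystemDefaultDevice()!
let ROTMATRIX = device.makeBuffer(length: MemoryLayout<simd_float4x4>.stride,
                                  options: .storageModeShared)!

let MODMAT = matrix_identity_float4x4  // stand-in for the model's matrix

// contents() returns an UnsafeMutableRawPointer; storeBytes(of:as:) copies the
// matrix bit for bit into the buffer, which is allowed because simd_float4x4 is trivial.
ROTMATRIX.contents().storeBytes(of: MODMAT, as: simd_float4x4.self)

// The buffer can then be bound as in the question:
// renderEncoder.setVertexBuffer(ROTMATRIX, offset: 0, index: 1)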

How to create 3d mesh vertices in Gideros

I'm using Lua for the first time, and of course need to check around to learn how to implement certain code.
To create a vertex in Gideros, there's this code:
mesh:setVertex(index, x, y)
However, I would also like to use the z coordinate.
I've been checking around, but haven't found any help. Does anyone know if Gideros has a method for this, or are there any tips and tricks on setting the z coordinates?
First of all, these functions are not provided by Lua, but by the Gideros Lua API.
There are no meshes or things like that in native Lua.
Referring to the Gideros Lua API reference manual gives you some valuable hints:
http://docs.giderosmobile.com/reference/gideros/Mesh#Mesh
Mesh can be 2D or 3D, the latter expects an additional Z coordinate
in its vertices.
http://docs.giderosmobile.com/reference/gideros/Mesh/new
Mesh.new([is3d])
Parameters:
is3d: (boolean) Specifies that this mesh expects a Z coordinate in its vertex array and is thus a 3D mesh
So in order to create a 3d mesh you have to do something like:
local myMesh = Mesh.new(true)
Although the manual does not say that you can pass a z coordinate to setVertex:
http://docs.giderosmobile.com/reference/gideros/Mesh/setVertex
it is very likely that you can.
So let's have a look at Gideros source code:
https://github.com/gideros/gideros/blob/1d4894fb5d39ef6c2375e7e3819cfc836da7672b/luabinding/meshbinder.cpp#L96-L109
int MeshBinder::setVertex(lua_State *L)
{
    Binder binder(L);
    GMesh *mesh = static_cast<GMesh*>(binder.getInstance("Mesh", 1));
    int i = luaL_checkinteger(L, 2) - 1;
    float x = luaL_checknumber(L, 3);
    float y = luaL_checknumber(L, 4);
    float z = luaL_optnumber(L, 5, 0.0);  // the 5th argument (z) is optional and defaults to 0
    mesh->setVertex(i, x, y, z);
    return 0;
}
Here you can see that you can indeed provide a z coordinate and that it will be used.
So
local myMesh = Mesh.new(true)
myMesh:setVertex(1, 100, 20, 40)
should work just fine.
You could have simply tried that btw. It's for free, it doesn't hurt and it's the best way to learn!

Transforming MPSNNImageNode using Metal Performance Shader

I am currently working on replicating YOLOv2 (not tiny) on iOS (Swift 4) using MPS.
A problem is that it is hard for me to implement the space_to_depth function (https://www.tensorflow.org/api_docs/python/tf/space_to_depth) and the concatenation of two convolution results (13x13x256 + 13x13x1024 -> 13x13x1280). Could you give me some advice on implementing these parts? My code is below.
...
let conv19 = MPSCNNConvolutionNode(source: conv18.resultImage,
                                   weights: DataSource("conv19", 3, 3, 1024, 1024))
let conv20 = MPSCNNConvolutionNode(source: conv19.resultImage,
                                   weights: DataSource("conv20", 3, 3, 1024, 1024))
let conv21 = MPSCNNConvolutionNode(source: conv13.resultImage,
                                   weights: DataSource("conv21", 1, 1, 512, 64))
/*****
1. space_to_depth with conv21
2. concatenate the result of conv20(13x13x1024) to the result of 1 (13x13x256)
I need your help to implement this part!
******/
I believe space_to_depth can be expressed in the form of a convolution:
For instance, for an input with dimensions [1,2,2,1], use 4 convolution kernels that each output one number to one channel, i.e. [[1,0],[0,0]], [[0,1],[0,0]], [[0,0],[1,0]], [[0,0],[0,1]]. This moves all input numbers from the spatial dimensions to the depth dimension.
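To make the target of that convolution trick concrete, here is a plain Swift reference sketch (not MPS code, and not from the original answer) of what space_to_depth with block size 2 does to an HWC-ordered array; a stride-2 convolution with one-hot kernels like those above, repeated per input channel, must reproduce this mapping:

// Reference implementation for checking results only: moves each 2x2 spatial
// block of an [h][w][c] array into the channel dimension, giving [h/2][w/2][4*c].
func spaceToDepth(_ input: [Float], h: Int, w: Int, c: Int) -> [Float] {
    precondition(h % 2 == 0 && w % 2 == 0 && input.count == h * w * c)
    let oh = h / 2, ow = w / 2, oc = 4 * c
    var out = [Float](repeating: 0, count: input.count)
    for y in 0..<oh {
        for x in 0..<ow {
            for by in 0..<2 {
                for bx in 0..<2 {
                    for ch in 0..<c {
                        let src = ((2 * y + by) * w + (2 * x + bx)) * c + ch
                        let dst = (y * ow + x) * oc + (by * 2 + bx) * c + ch
                        out[dst] = input[src]
                    }
                }
            }
        }
    }
    return out
}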
MPS actually has a concat node. See here: https://developer.apple.com/documentation/metalperformanceshaders/mpsnnconcatenationnode
You can use it like this:
concatNode = [[MPSNNConcatenationNode alloc] initWithSources:@[layerA.resultImage, layerB.resultImage]];
If you are working with the high-level interface and the MPSNNGraph, you should just use an MPSNNConcatenationNode, as described by Tianyu Liu above.
If you are working with the low-level interface, manhandling the MPSKernels around yourself, then this is done by:
1. Create a 1280-channel destination image to hold the result.
2. Run the first filter as normal to produce the first 256 channels of the result.
3. Run the second filter to produce the remaining channels, with destinationFeatureChannelOffset set to 256.
That should be enough in all cases, except when the data is not the product of an MPSKernel. In that case, you'll need to copy it in yourself or use something like a linear neuron (a=1, b=0) to do it.
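A rough sketch of those low-level steps in Swift; the kernel and image names are placeholders for already-configured MPSCNNConvolution kernels and their inputs, and the 13x13 sizes simply mirror the question:

import MetalPerformanceShaders

// Sketch: concatenate a 256-channel result and a 1024-channel result into one
// 1280-channel MPSImage by writing them at different feature-channel offsets.
func encodeConcat(device: MTLDevice,
                  commandBuffer: MTLCommandBuffer,
                  filterA: MPSCNNConvolution, inputA: MPSImage,
                  filterB: MPSCNNConvolution, inputB: MPSImage) -> MPSImage {
    let destDesc = MPSImageDescriptor(channelFormat: .float16,
                                      width: 13, height: 13, featureChannels: 1280)
    let dest = MPSImage(device: device, imageDescriptor: destDesc)

    // First filter writes channels 0..<256 of the destination.
    filterA.destinationFeatureChannelOffset = 0
    filterA.encode(commandBuffer: commandBuffer, sourceImage: inputA, destinationImage: dest)

    // Second filter writes its 1024 channels starting at channel 256.
    filterB.destinationFeatureChannelOffset = 256
    filterB.encode(commandBuffer: commandBuffer, sourceImage: inputB, destinationImage: dest)

    return dest
}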

Weighted Moving Average in Gretl

I have a question about gretl and how I can compute a moving average filter.
I have a time series and I want to calculate the weighted moving average centered on 5 observations with these weights: 0.15, 0.2, 0.3, 0.2, 0.15.
In gretl's main window there is the Variable menu where I can select Filter, but there is no option for what I want to do, only, for example, a simple moving average.
In R I would do something like this:
c <- c()
for (i in 3:(T-2)) {
  c <- rbind(c, 0.15*x[i-2] + 0.2*x[i-1] + 0.3*x[i] + 0.2*x[i+1] + 0.15*x[i+2])
}
where x is my time series and T is the number of observations.
But my questions are:
Is there a user-friendly way to do it in gretl?
If not, what is the best way to do it in the console? Is there a specific function?
Well, I don't know what exactly you call user-friendly, but since you want to have those specific weights, I guess there's no way around typing in some numbers, right?
So if I understand you correctly, and given your series x (in a dataset which is declared and recognized as a time series), then you simply would need to type the formula:
series weighma = 0.15 * x(+2) + 0.2 * x(+1) + 0.3 * x + 0.2 * x(-1) + 0.15 * x(-2)
(Instead of 'series' you could also type in 'genr' or just omit it, but I recommend this explicit variant. The same goes for the + signs inside the parentheses to indicate leads instead of lags.)
The name 'weighma' is of course arbitrary.
There are at least two places where you could type in that formula: either choose Add / Define new variable from the menus, which gives you a dialog window with a formula field, or open the gretl console (or a script editor window).
A solution which would perhaps be more flexible in a script could use a gretl list of variables and the 'lincomb' function, something like this:
maxlead = 2
matrix weights = {0.15, 0.2, 0.3, 0.2, 0.15}
list xx = lags( nelem(weights), x(maxlead + 1) )
series weighma = lincomb(xx, weights)
The correct maxlead value could also be inferred from the length of the weights vector under the assumption of a centered MA, but I leave it at that.

OpenCV Threshold Type

I have a question about OpenCV's example on Basic Thresholding as provided in the link below:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/threshold/threshold.html#goal
I am slowly beginning to understand the code and have tried out an example too. However, I am confused about the part of the code regarding thresholding operations. How does the thresholding function know which threshold operation to use?
This is where it is called:
threshold(src_gray, dst, threshold_value, max_BINARY_value, threshold_type);
I get that the last parameter, threshold_type, is how it knows which threshold operation to use (e.g. binary, binary inverted, truncated, etc.). However, in the code this is all that is assigned to threshold_type:
int threshold_type = 3;
As it is only assigned an int value of 3, how does the threshold function know which operation to apply? Could someone explain it to me?
You should avoid using numeric literals when calling OpenCV methods; use the constants defined in the cv namespace instead. It makes no difference to the output, but it makes the code more readable. The possible values for the threshold type parameter of cv::threshold() are:
THRESH_BINARY = 0,
THRESH_BINARY_INV = 1,
THRESH_TRUNC = 2,
THRESH_TOZERO = 3,
THRESH_TOZERO_INV = 4,
THRESH_MASK = 7,
THRESH_OTSU = 8,
THRESH_TRIANGLE = 16
According to this table, threshold_type = 3 means you are using THRESH_TOZERO.
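For example, a small sketch (reusing the tutorial's variable names in a hypothetical wrapper function) that passes the named constant instead of the bare integer:

#include <opencv2/imgproc/imgproc.hpp>

// cv::THRESH_TOZERO has the integer value 3, so this call is equivalent to the
// tutorial's threshold_type = 3, but the chosen operation is now explicit.
void thresholdToZero(const cv::Mat& src_gray, cv::Mat& dst,
                     double threshold_value, double max_BINARY_value)
{
    cv::threshold(src_gray, dst, threshold_value, max_BINARY_value, cv::THRESH_TOZERO);
}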
