I'm trying to learn the basics of elliptic curve cryptography
and would like to know the difference between scalar and
non-scalar.
The test example that comes with the library creates key pairs
and performs signing and verification for both the scalar and
non-scalar variants. What is the difference?
The function in question is:
void ed25519_add_scalar(unsigned char *public_key, unsigned char *private_key, const unsigned char *scalar)
My code works when I draw on an MTLTexture with the rgba32Float pixel format; I can then get a CVPixelBuffer out of it.
But FlutterTexture requires the bgra8Unorm format, and I do not want to convert the CVPixelBuffer because of the performance overhead.
So I'm trying to render to an MTLTexture with the bgra8Unorm pixel format, but the following fragment shader code won't compile:
fragment vector_uchar4 fragmentShader2(Vertex interpolated [[stage_in]]) {
    return 0xFFFFFFFF;
}
With error: Invalid return type 'vector_uchar4' for fragment function
I've tried replacing it with the uint type, but then it crashes with the error:
Fatal error: 'try!' expression unexpectedly raised an error:
Error Domain=AGXMetalA11 Code=3
"output of type uint is not compatible with a MTLPixelFormatBGRA8Unorm color attachement."
UserInfo={NSLocalizedDescription=output of type uint is not compatible with a MTLPixelFormatBGRA8Unorm color attachement.}
If I use a vector_float4 or vector_half4 return type, my texture and buffers are empty.
Which return type do I have to use with the bgra8Unorm pixel format to get a non-empty image? Is this possible with Metal at all?
I've found the answer on page 30 of the Metal Shading Language specification.
And finally, this code draws the image as expected:
fragment float4 fragmentShader2(Vertex interpolated [[stage_in]]) {
    // ...
    rgba8unorm<float4> rgba;
    rgba = float4(color.r, color.g, color.b, 1.0);
    return rgba;
}
If someone can explain what is happening under the hood, I would really like not to waste the bounty.
It depends on many different factors. In most cases you should use float4 or half4.
All modern Apple GPUs that support Metal are designed to perform calculations on 16-bit or 32-bit floating-point data. This is how these GPUs work: any read the shader performs on a Float, Snorm, or Unorm format is converted to 16-bit or 32-bit floating point, regardless of the original storage format.
On every write, the shader converts back from 16-bit or 32-bit floating point to the target format. For the conversion rules, see page 217 of the Metal Shading Language specification.
Any Metal format with the Float, Snorm, or Unorm suffix is a floating-point format, while Uint and Sint are unsigned and signed integer formats.
Float - A floating-point value in any of the representations defined by Metal.
Unorm - A floating-point value normalized to the range [0.0, 1.0].
Snorm - A floating-point value normalized to the range [-1.0, 1.0].
Uint - An unsigned integer.
Sint - A signed integer.
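That write-time conversion for an 8-bit Unorm target can be modeled in plain C++ (an illustrative sketch of the rule, not Apple's exact implementation; the handling of exact rounding ties may differ on real hardware):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Model of the float -> 8-bit unorm conversion applied when a shader
// writes a float4 to a BGRA8Unorm attachment: clamp to [0, 1], scale
// by 255, and round to the nearest integer.
std::uint8_t float_to_unorm8(float x) {
    float clamped = std::min(std::max(x, 0.0f), 1.0f);
    return static_cast<std::uint8_t>(std::lround(clamped * 255.0f));
}
```

So returning float4(1.0, 1.0, 1.0, 1.0) from the shader stores 0xFF in every channel, and out-of-range values are clamped rather than wrapped.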
I am looking for the floating point version of uint32_t or int64_t or something similar to solve the following problem:
I am trying to work with OpenCV and accessing an element of a matrix of type CV_32F requires a call of the form
matrix.at<float>(cv::Point(2, 3)) = 17.0
which implicitly requires that float is a 32-bit data type.
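One way to make that requirement explicit is a compile-time check (a plain C++ sketch; since C++23 there is also std::float32_t in <stdfloat>, but pre-C++23 code usually just verifies float itself):

```cpp
#include <climits>
#include <limits>

// The C++ standard does not guarantee that float is 32 bits wide, so
// verify at compile time that it matches what CV_32F expects: a 32-bit
// IEEE 754 floating-point type.
static_assert(sizeof(float) * CHAR_BIT == 32,
              "float is not 32 bits wide on this platform");
static_assert(std::numeric_limits<float>::is_iec559,
              "float is not IEEE 754 on this platform");
```

On every mainstream platform OpenCV targets, both assertions hold and matrix.at<float> is safe to use with CV_32F.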
When I develop image-processing programs with OpenCV, I often see 'IPL_DEPTH_8U' or 'IPL_DEPTH_16U',
but I don't know what they mean.
What is the meaning of 'depth' in the context of image processing?
Depth is the "precision" of each pixel. Typically it is 8, 24, or 32 bits for display, but it can be any precision for computations.
Instead of precision, you can also call it the data type of the pixel: the more bits per element, the more distinct colors or intensities can be represented.
Your examples mean:
8U : 8 bits per element (per channel, if there are multiple channels) of unsigned integer type. So you can access elements as unsigned char values, because that is the 8-bit unsigned type.
16U : 16 bits per element => unsigned short, which is typically the 16-bit unsigned integer type on your system.
In OpenCV you typically have those types:
8UC3 : 8-bit unsigned, 3 channels => 24 bits per pixel in total
8UC1 : 8-bit unsigned, single channel
32S : 32-bit signed integer => int
32F : 32-bit floating point => float
64F : 64-bit floating point => double
Hope this helps.
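The mapping can be illustrated in plain C++ (a sketch, not OpenCV code; the types are the fixed-width standard types these depth constants correspond to):

```cpp
#include <cstdint>
#include <limits>

// Illustrative mapping: each depth constant names the C++ type that
// stores one element, which in turn fixes its value range.
// 8U  -> std::uint8_t  (0..255), 16U -> std::uint16_t (0..65535),
// 32S -> std::int32_t, 32F -> float, 64F -> double.
unsigned max_value_8u()  { return std::numeric_limits<std::uint8_t>::max(); }
unsigned max_value_16u() { return std::numeric_limits<std::uint16_t>::max(); }

// A 3-channel 8-bit pixel ("8UC3") occupies 3 * 8 = 24 bits in total.
unsigned bits_per_pixel_8uc3() { return 3u * 8u; }
```

This is why an 8U grayscale image can only distinguish 256 intensity levels, while a 16U image can distinguish 65536.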
I have to use kmeans in my future work. I know it is available in OpenCV, as they have a documentation page on it.
But I cannot make sense of the signature format displayed there, and it is not explained in the details given below it (which appear to relate to OpenCV 1.1). I mean, with the C++ line:
double kmeans(InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers=noArray() )
what data type is data, a vector or a matrix? Which is the input matrix, and which will be the output?
I am used to reading documentation like the following, where it is clearly stated which parameters are inputs, outputs, or flags, and what data types they are:
C++: void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const
I would really appreciate if someone could give a short example of kmeans being used.
P.S. The input matrix I have ready to be used for kmeans is the one produced by DescriptorExtractor::compute.
Thank you
You can find examples of using most of OpenCV's functions in the samples folder. In your situation, take a look at these two:
kmeans.cpp
matcher_simple.cpp
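To show what the parameters correspond to, here is a minimal plain-C++ sketch of the algorithm itself (this is not OpenCV's kmeans, and it uses 1-D samples for brevity): data plays the role of InputArray data, the centers vector's size is K, the returned labels are bestLabels, and centers is updated in place like OutputArray centers. A fixed iteration count stands in for TermCriteria.

```cpp
#include <cstddef>
#include <vector>

// Minimal 1-D Lloyd's iteration: alternate between assigning each
// sample to its nearest center and moving each center to the mean of
// its assigned samples.
std::vector<int> kmeans_1d(const std::vector<double>& data,
                           std::vector<double>& centers,
                           int iterations) {
    std::vector<int> labels(data.size(), 0);
    const std::size_t k = centers.size();
    for (int it = 0; it < iterations; ++it) {
        // Assignment step: label each sample with its nearest center.
        for (std::size_t i = 0; i < data.size(); ++i) {
            double best = 1e300;
            for (std::size_t c = 0; c < k; ++c) {
                double d = (data[i] - centers[c]) * (data[i] - centers[c]);
                if (d < best) { best = d; labels[i] = static_cast<int>(c); }
            }
        }
        // Update step: move each center to the mean of its samples.
        for (std::size_t c = 0; c < k; ++c) {
            double sum = 0.0;
            std::size_t count = 0;
            for (std::size_t i = 0; i < data.size(); ++i)
                if (labels[i] == static_cast<int>(c)) { sum += data[i]; ++count; }
            if (count > 0) centers[c] = sum / count;
        }
    }
    return labels;
}
```

For example, with data {1, 2, 10, 11} and initial centers {0, 9}, the labels converge to {0, 0, 1, 1} and the centers to {1.5, 10.5}. OpenCV's version does the same thing on rows of a CV_32F matrix, with attempts restarts and the initialization strategy chosen by flags.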
I want to compare two images (number-plate images). I have already separated each character from the number plate using the ROI command. Now I want to compare each character with the stored templates in order to recognize it, and I want to know how to measure their similarity. I am new to OpenCV. I am using standard number plates.
OpenCV implements a template-matching function. Here is the prototype:
void matchTemplate(const Mat& image, const Mat& templ, Mat& result, int method);
The comparison methods are mostly based on the sum of squared differences, with different normalization terms.
For color images, each sum in the denominator is taken over all of the channels (and separate mean values are used for each channel).
Use the OpenCV function minMaxLoc to find the maximum and minimum values.
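The sum-of-squared-differences idea can be sketched in plain C++ (illustrative, not the OpenCV API): slide the template over a grayscale image stored row-major and record the score at each offset. The smallest score marks the best match, which is what a minMaxLoc-style scan would report for a SQDIFF method.

```cpp
#include <cstddef>
#include <vector>

// For every placement of the template inside the image, compute the sum
// of squared pixel differences. Scores are returned in row-major order
// of the placement offsets.
std::vector<double> sqdiff_scores(const std::vector<int>& image, std::size_t iw,
                                  const std::vector<int>& templ, std::size_t tw) {
    const std::size_t ih = image.size() / iw;
    const std::size_t th = templ.size() / tw;
    std::vector<double> scores;
    for (std::size_t y = 0; y + th <= ih; ++y)
        for (std::size_t x = 0; x + tw <= iw; ++x) {
            double ssd = 0.0;
            for (std::size_t ty = 0; ty < th; ++ty)
                for (std::size_t tx = 0; tx < tw; ++tx) {
                    double d = image[(y + ty) * iw + (x + tx)] - templ[ty * tw + tx];
                    ssd += d * d;
                }
            scores.push_back(ssd);
        }
    return scores;
}
```

Where the template lines up exactly with a region of the image, the score is 0; elsewhere it grows with the mismatch. The normalized variants divide each score by a term computed from the image and template intensities.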
Try cvMatchTemplate:
void cvMatchTemplate(const CvArr* image, const CvArr* templ, CvArr* result, int method);
http://opencv.willowgarage.com/documentation/c/object_detection.html