How to calculate a 16-bit checksum value using JavaScript (big-endian format) - checksum

I want to calculate a 16-bit checksum value in big-endian format for Bluetooth data transmission. Per the documentation, I passed the data to send, 123456, and got the checksum 100219, but the documentation says the checksum should be 0219. How can I calculate it so that it matches?
I am following the link Calculating a 16 bit checksum?, but my JavaScript implementation does not produce the expected output:
// ASCII only
function stringToBytes(string) {
    var array = new Uint8Array(string.length);
    for (var i = 0, l = string.length; i < l; i++) {
        array[i] = string.charCodeAt(i);
    }
    return array.buffer;
}

datalength = dec2hex(msg.length);
megPassedToChecksum = 'ABAB' + datalength + msg;
var bytes = stringToBytes(megPassedToChecksum);
var byteArray = new Uint8Array(bytes);
var checksum = 0;
var length = byteArray.length;
var even_length = length - (length % 2); // Round down to multiple of 2
for (var i = 0; i < even_length; i += 2) {
    var val = byteArray[i] + 256 * byteArray[i + 1];
    checksum += val;
}
if (i < length) {
    checksum += byteArray[i];
}

Related

How to loop through each row and column value in a Google Sheet using Apps Script?

I am trying to loop through all the amount values in a Google Sheet using Apps Script, but when I use a for loop I am only able to get the "Amount 1" column values:
var sheetSource1 = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
for (var i = 1; i <= 3; i++) {
    var activecell = sheetSource1.getRange(i + 2, 2).getValue();
    Logger.log(activecell);
}
When I run Logger.log(), I get:
860
650
420
but I want the result to be:
860
650
420
760
550
525
How can I achieve this result?
So, when you iterate using getValue() you need to loop twice:
function myFunction() {
    var sheetSource1 = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
    for (var i = 1; i <= 2; i++) {
        for (var j = 1; j <= 3; j++) {
            var activecell = sheetSource1.getRange(j + 3, i + 1).getValue();
            Logger.log(activecell);
        }
    }
}
But a nested for loop is not the best choice for performance or practice.
Instead, we can use getValues() to return an array of arrays (a two-dimensional array) and iterate over the entire range as a single source:
function myFunctionT() {
    var data = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
    let numCols = 2;
    let numRows = 3;
    // Using getValues(); getRange() accepts several parameter forms for ranges
    var range = data.getRange(4, 2, numRows, numCols).getValues();
    let col1 = [];
    let col2 = [];
    for (var i = 0; i < numRows; i++) {
        var row = range[i];
        col1.push(row[0]);
        col2.push(row[1]);
    }
    let list = [...col1, ...col2];
    Logger.log(list);
}
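Since the desired output is all of the first column followed by all of the second (column by column), the same list can be built for any number of columns without tracking each column in a separate array. A small sketch of that generalisation, written as a pure function on the array-of-rows shape that Range.getValues() returns (the sample values below are taken from the question):

```javascript
// Flatten a 2D range column by column: all of column 1, then column 2, etc.
// `values` has the array-of-rows shape returned by Range.getValues().
function flattenByColumn(values) {
    var result = [];
    var numCols = values[0].length;
    for (var col = 0; col < numCols; col++) {
        for (var row = 0; row < values.length; row++) {
            result.push(values[row][col]);
        }
    }
    return result;
}

// In Apps Script this would be fed by something like:
// var values = sheet.getRange(4, 2, 3, 2).getValues();
var values = [[860, 760], [650, 550], [420, 525]];
var list = flattenByColumn(values); // [860, 650, 420, 760, 550, 525]
```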

ML.NET machine learning model not producing the same results using the same data

We are running the code below for a number of different days. There is a set of data for each day, but for a given day the data does not change.
If we run the algorithm a number of times for one day then the cluster results created are consistent.
If we run the algorithm a number of times for a number of days then sometimes the cluster results created are NOT consistent for one (or more) days.
for (int i = 1; i <= 4; i++)
{
    MLContext mlContext = new MLContext(seed: 100);
    IDataView trainingData = mlContext.Data.LoadFromEnumerable<Data>(filteredData);
    var options = new KMeansTrainer.Options
    {
        NumberOfClusters = i,
        NumberOfThreads = (i == 1) ? 1 : 50,
        InitializationAlgorithm = InitializationAlgorithm.Random
    };
    // Set up a learning pipeline
    // Step 1: concatenate input features into a single column
    var pipeline = mlContext.Transforms.Concatenate("Features", "Value")
        // Step 2: use the k-means clustering algorithm
        .Append(mlContext.Clustering.Trainers.KMeans(options));
    try
    {
        using (TransformerChain<ClusteringPredictionTransformer<KMeansModelParameters>> model = pipeline.Fit(trainingData))
        {
            VBuffer<float>[] centroids = null;
            var last = model.LastTransformer;
            last.Model.GetClusterCentroids(ref centroids, out int k);
#if DUMP_METRICS
            // Evaluate the overall metrics
            IDataView transformedData = model.Transform(trainingData);
            ClusteringMetrics metrics = mlContext.Clustering.Evaluate(transformedData, null, "Score", "Features");
            Debug.WriteLine($"Average Distance: {metrics.AverageDistance:F2}");
#endif
            // Get all centroids
            ClusterInfo clusterInfo = new ClusterInfo();
            clusterInfo.NumberOfClusters = k;
            List<CenterInfo> centerInfos = new List<CenterInfo>(k);
            for (int j = 0; j < k; j++)
            {
                CenterInfo centerInfo = new CenterInfo();
                centerInfo.Value = centroids[j].GetValues().ToArray().FirstOrDefault();
                centerInfo.WithinSS = 0;
                centerInfos.Add(centerInfo);
            }
            // For each reading in the data find out which centroid is closest
            for (int y = 0; y < filteredData.Count; y++)
            {
                float value = filteredData[y].Value;
                List<double> distances = new List<double>();
                for (int j = 0; j < k; j++) distances.Add(Math.Pow(value - centerInfos[j].Value, 2));
                double minDistance = distances.Min();
                int index = distances.FindIndex(a => a == minDistance);
                Debug.Assert(value == data[y + indexRange.StartIndex]);
                centerInfos[index].AddSample(new Sample<float>(data.FlagEncodedTimeAtIndex(y + indexRange.StartIndex), value));
                centerInfos[index].WithinSS += minDistance;
            }
            Debug.Assert(centerInfos.Sum(a => a.NumberOfSamples) == filteredData.Count());
            clusterInfo.TotalWithinSS = centerInfos.Sum(a => a.WithinSS);
            clusterInfo.CenterInfos = centerInfos.OrderBy(a => a.WithinSS).ToList();
            clusterInfos.Add(clusterInfo);
#if EXTRA_LOGGING
            foreach (CenterInfo centerInfo in centerInfos)
            {
                Debug.WriteLine("Centre = " + $"{centerInfo.Value:F2}" + ", No. Samples = " + $"{centerInfo.NumberOfSamples}" + ", WithinSS = " + $"{centerInfo.WithinSS:F2}");
            }
#endif
        }
    }
    catch (InvalidOperationException e)
    {
        if (s_log.IsErrorEnabled) s_log.ErrorFormat("Error calculating cluster values ('{0}').", e.Message);
    }
}
There is clearly some sort of reset that needs to be done in the code, but I cannot see what I am missing.

How to generate very sharp color scale for below zero and above zero?

I'm encountering a big problem when using 0 (zero) as the pivot for the color scale: numbers close to 0 (zero) end up almost white, so it is impossible to see a difference.
The idea is that above 0 (zero) the colors start green and get stronger, and below 0 (zero) they start red and get stronger.
I need any number, even 0.000001, to already have a visible green, and -0.000001 a visible red.
Link to SpreadSheet:
https://docs.google.com/spreadsheets/d/1uN5rDEeR10m3EFw29vM_nVXGMqhLcNilYrFOQfcC97s/edit?usp=sharing
Note to help with image translation and visualization:
Número = Number
Nenhum = None
Valor Máx. = Max Value
Valor Min. = Min Value
Current Result / Expected Result
After reading your new comments I understand that these are the requirements:
The values above zero should be green (with increased intensity the further above zero).
The values below zero should be red (with increased intensity the further below zero).
Values near zero should be coloured (not almost white).
Given those requisites, I developed an Apps Script project that would be useful in your scenario. This is the full project:
function onOpen() {
    var ui = SpreadsheetApp.getUi();
    ui.createMenu("Extra").addItem("Generate gradient", "parseData").addToUi();
}

function parseData() {
    var darkestGreen = "#009000";
    var lighestGreen = "#B8F4B8";
    var darkestRed = "#893F45";
    var lighestRed = "#FEBFC4";
    var range = SpreadsheetApp.getActiveRange();
    var data = range.getValues();
    var biggestPositive = Math.max.apply(null, data);
    var biggestNegative = Math.min.apply(null, data);
    var greenPalette = colourPalette(darkestGreen, lighestGreen, biggestPositive);
    var redPalette = colourPalette(darkestRed, lighestRed, Math.abs(biggestNegative) + 1);
    var fullPalette = [];
    for (var i = 0; i < data.length; i++) {
        if (data[i] > 0) {
            fullPalette.push([greenPalette[data[i] - 1]]);
        } else if (data[i] < 0) {
            fullPalette.push([redPalette[Math.abs(data[i]) - 1]]);
        } else if (data[i] == 0) {
            fullPalette.push([null]);
        }
    }
    range.setBackgrounds(fullPalette);
}

function colourPalette(darkestColour, lightestColour, colourSteps) {
    var firstColour = hexToRGB(darkestColour);
    var lastColour = hexToRGB(lightestColour);
    var blending = 0.0;
    var gradientColours = [];
    for (var i = 0; i < colourSteps; i++) {
        var colour = [];
        blending += (1.0 / colourSteps);
        colour[0] = firstColour[0] * blending + (1 - blending) * lastColour[0];
        colour[1] = firstColour[1] * blending + (1 - blending) * lastColour[1];
        colour[2] = firstColour[2] * blending + (1 - blending) * lastColour[2];
        gradientColours.push(rgbToHex(colour));
    }
    return gradientColours;
}

function hexToRGB(hex) {
    var colour = [];
    colour[0] = parseInt(removeNumeralSymbol(hex).substring(0, 2), 16);
    colour[1] = parseInt(removeNumeralSymbol(hex).substring(2, 4), 16);
    colour[2] = parseInt(removeNumeralSymbol(hex).substring(4, 6), 16);
    return colour;
}

function removeNumeralSymbol(hex) {
    return (hex.charAt(0) == '#') ? hex.substring(1, 7) : hex;
}

function rgbToHex(rgb) {
    return "#" + hex(rgb[0]) + hex(rgb[1]) + hex(rgb[2]);
}

function hex(c) {
    var pool = "0123456789abcdef";
    var integer = parseInt(c);
    if (integer == 0 || isNaN(c)) {
        return "00";
    }
    integer = Math.round(Math.min(Math.max(0, integer), 255));
    return pool.charAt((integer - integer % 16) / 16) + pool.charAt(integer % 16);
}
First of all, the script uses the Ui class to show a customised menu called Extra. That menu calls the main function parseData, which reads the whole selected range with getValues. That function holds the darkest/lightest green/red colours; I used some colours for my example, but I advise you to edit them as you wish.

Based on those colours, the function colourPalette performs linear interpolation between the two colours (lightest and darkest). The interpolation returns an array of colours from darkest to lightest, with as many in-betweens as the maximum integer in the column. Notice how the script uses many small helper functions for repetitive tasks (converting from hexadecimal to RGB, formatting, etc.).

When the palette is ready, the main function builds an array containing only the used colours (skipping unused colours gives sharp contrast between big and small numbers). Finally, it applies the palette using the setBackgrounds method. Here you can see some sample results:
In that picture you can see one set of colours per column, varying between random small and big numbers, numerical series, and mixed small/big numbers. Please feel free to ask any questions about this approach.
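The linear interpolation at the heart of colourPalette can be illustrated on its own. A minimal sketch in plain JavaScript, using the same dark/light green hex values as the script above (converted to RGB triples); the 0.5 midpoint is just an example:

```javascript
// Blend two RGB colours: t = 0 gives colorA, t = 1 gives colorB.
function lerpColor(colorA, colorB, t) {
    return colorA.map(function (channel, i) {
        return Math.round(channel * (1 - t) + colorB[i] * t);
    });
}

var darkGreen = [0, 144, 0];      // #009000
var lightGreen = [184, 244, 184]; // #B8F4B8
var midpoint = lerpColor(darkGreen, lightGreen, 0.5); // [92, 194, 92]
```

Generating a palette is then just a matter of evaluating lerpColor at evenly spaced values of t between 0 and 1, one step per integer in the data range.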
A very small improvement to acques-Guzel Heron's answer:
I made it skip all non-numeric values; beforehand it just errored out.
I also added an option in the menu to use a custom range.
Thank you very much, acques-Guzel Heron.
function onOpen() {
    const ui = SpreadsheetApp.getUi();
    ui.createMenu('Extra')
        .addItem('Generate gradient', 'parseData')
        .addItem('Custom Range', 'customRange')
        .addToUi();
}

function parseData(customRange = null) {
    const darkestGreen = '#009000';
    const lighestGreen = '#B8F4B8';
    const darkestRed = '#893F45';
    const lighestRed = '#FEBFC4';
    let range = SpreadsheetApp.getActiveRange();
    if (customRange) {
        range = SpreadsheetApp.getActiveSpreadsheet().getRange(customRange);
    }
    const data = range.getValues();
    const biggestPositive = Math.max.apply(null, data.filter(a => !isNaN([a])));
    const biggestNegative = Math.min.apply(null, data.filter(a => !isNaN([a])));
    const greenPalette = colorPalette(darkestGreen, lighestGreen, biggestPositive);
    const redPalette = colorPalette(darkestRed, lighestRed, Math.abs(biggestNegative) + 1);
    const fullPalette = [];
    for (const datum of data) {
        if (datum > 0) {
            fullPalette.push([greenPalette[datum - 1]]);
        } else if (datum < 0) {
            fullPalette.push([redPalette[Math.abs(datum) - 1]]);
        } else if (datum == 0 || isNaN(datum)) {
            fullPalette.push(['#ffffff']);
        }
    }
    range.setBackgrounds(fullPalette);
}

function customRange() {
    const ui = SpreadsheetApp.getUi();
    const result = ui.prompt('Please enter a range');
    parseData(result.getResponseText());
}

function colorPalette(darkestColor, lightestColor, colorSteps) {
    const firstColor = hexToRGB(darkestColor);
    const lastColor = hexToRGB(lightestColor);
    let blending = 0;
    const gradientColors = [];
    for (let i = 0; i < colorSteps; i++) {
        const color = [];
        blending += (1 / colorSteps);
        color[0] = firstColor[0] * blending + (1 - blending) * lastColor[0];
        color[1] = firstColor[1] * blending + (1 - blending) * lastColor[1];
        color[2] = firstColor[2] * blending + (1 - blending) * lastColor[2];
        gradientColors.push(rgbToHex(color));
    }
    return gradientColors;
}

function hexToRGB(hex) {
    const color = [];
    color[0] = Number.parseInt(removeNumeralSymbol(hex).slice(0, 2), 16);
    color[1] = Number.parseInt(removeNumeralSymbol(hex).slice(2, 4), 16);
    color[2] = Number.parseInt(removeNumeralSymbol(hex).slice(4, 6), 16);
    return color;
}

function removeNumeralSymbol(hex) {
    return (hex.charAt(0) == '#') ? hex.slice(1, 7) : hex;
}

function rgbToHex(rgb) {
    return '#' + hex(rgb[0]) + hex(rgb[1]) + hex(rgb[2]);
}

function hex(c) {
    const pool = '0123456789abcdef';
    let integer = Number.parseInt(c, 10);
    if (integer === 0 || isNaN(c)) {
        return '00';
    }
    integer = Math.round(Math.min(Math.max(0, integer), 255));
    return pool.charAt((integer - integer % 16) / 16) + pool.charAt(integer % 16);
}

How to swap the U bits with the V bits in YUV format

I want to swap the U and V bits in YUV format, from NV12
YYYYYYYY UVUV // each letter represents a bit
to NV21
YYYYYYYY VUVU
I leave the Y plane alone and handle the U and V planes with the function below:
uchar swap(uchar in) {
    uchar out = ((in >> 1) & 0x55) | ((in << 1) & 0xaa);
    return out;
}
But I cannot get the desired result; the colour of the output image is still not correct.
How can I swap the U and V planes correctly?
Found the problem: U and V should be swapped at the byte level, not the bit level.
byte[] yuv = // ...
final int length = yuv.length;
for (int i1 = 0; i1 < length; i1 += 2) {
    if (i1 >= width * height) {
        byte tmp = yuv[i1];
        yuv[i1] = yuv[i1 + 1];
        yuv[i1 + 1] = tmp;
    }
}
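The same byte-level swap can be sketched in JavaScript on a Uint8Array holding the NV12 frame (the tiny 2x2 frame below is a made-up example, not data from the question):

```javascript
// Convert NV12 (Y plane + interleaved UVUV...) to NV21 (Y plane + VUVU...)
// by swapping each UV byte pair in place after the Y plane.
function nv12ToNv21(frame, width, height) {
    var frameSize = width * height;
    for (var i = frameSize; i + 1 < frame.length; i += 2) {
        var tmp = frame[i];
        frame[i] = frame[i + 1]; // V now comes first
        frame[i + 1] = tmp;      // U now comes second
    }
    return frame;
}

// Example 2x2 frame: Y = [1, 2, 3, 4], then one UV pair [10, 20]
var frame = new Uint8Array([1, 2, 3, 4, 10, 20]);
nv12ToNv21(frame, 2, 2); // UV pair becomes [20, 10]
```

The Y plane is untouched; only the chroma pairs after the first width * height bytes are swapped, which is exactly the NV12 to NV21 difference.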
Try this method (-_-):
IFrameCallback iFrameCallback = new IFrameCallback() {
    @Override
    public void onFrame(ByteBuffer frame) {
        // Get the NV12 data
        byte[] b = new byte[frame.remaining()];
        frame.get(b);
        // Convert the NV12 data to NV21
        NV12ToNV21(b, 1280, 720);
        // Send the NV21 data
        BVPU.InputVideoData(nv21, nv21.length,
            System.currentTimeMillis() * 1000, 1280, 720);
    }
};
byte[] nv21;

private void NV12ToNV21(byte[] data, int width, int height) {
    nv21 = new byte[data.length];
    int framesize = width * height;
    // Copy the Y plane unchanged
    System.arraycopy(data, 0, nv21, 0, framesize);
    // Swap each interleaved UV pair into VU order
    for (int j = 0; j < framesize / 2; j += 2) {
        nv21[framesize + j] = data[framesize + j + 1]; // V byte
        nv21[framesize + j + 1] = data[framesize + j]; // U byte
    }
}

How do I convert bitmap format of a UIImage?

I need to convert my bitmap from the normal camera format of kCVPixelFormatType_32BGRA to kCVPixelFormatType_24RGB so it can be consumed by a third-party library.
How can this be done?
My C# code looks like this, in an attempt to do it directly with the byte data:
byte[] sourceBytes = UIImageTransformations.BytesFromImage(sourceImage);
// Final output is to be RGB
byte[] finalBytes = new byte[(int)(sourceBytes.Length * .75)];
int length = sourceBytes.Length;
int finalByte = 0;
for (int i = 0; i < length; i += 4)
{
    byte blue = sourceBytes[i];
    byte green = sourceBytes[i + 1];
    byte red = sourceBytes[i + 2];
    finalBytes[finalByte] = red;
    finalBytes[finalByte + 1] = green;
    finalBytes[finalByte + 2] = blue;
    finalByte += 3;
}
UIImage finalImage = UIImageTransformations.ImageFromBytes(finalBytes);
However, I'm finding that my sourceBytes length is not always divisible by 4, which doesn't make any sense to me.
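One likely cause of the "not divisible by 4" length is row padding: camera pixel buffers commonly pad each row to an alignment boundary, so each row occupies bytesPerRow bytes with bytesPerRow >= width * 4, and a flat pixel loop reads into the padding. A sketch of a stride-aware conversion, in JavaScript for illustration (the C# in the question would follow the same per-row structure; width, height, and bytesPerRow here are assumed to come from the pixel buffer's metadata):

```javascript
// Convert a BGRA buffer with per-row padding to tightly packed RGB.
// bytesPerRow may exceed width * 4 when rows are padded for alignment.
function bgraToRgb(source, width, height, bytesPerRow) {
    var out = new Uint8Array(width * height * 3);
    var o = 0;
    for (var y = 0; y < height; y++) {
        var rowStart = y * bytesPerRow; // skip this row's padding entirely
        for (var x = 0; x < width; x++) {
            var i = rowStart + x * 4;
            out[o++] = source[i + 2]; // R
            out[o++] = source[i + 1]; // G
            out[o++] = source[i];     // B
        }
    }
    return out;
}
```

Iterating row by row and computing each row's start from bytesPerRow (rather than assuming rows are packed back to back) is what keeps the channels aligned when the buffer carries padding.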
