Octave script failing with an error when processing data with a sampling frequency above 50kS/s - histogram

I'm working with an Octave script to process data files with high sample rates (up to 200kS/s, collected over 3 minutes). The code runs into issues when processing any file with a sample rate above 50kS/s, regardless of size or number of samples, but functions correctly otherwise.
The error I receive when attempting to run the code on files above 50kS/s is raised from within the hist function:
error: x(0): subscripts must be either integers 1 to (2^63)-1 or logicals
error:
I have narrowed the cause to the following section of code (note that FS is the detected sampling frequency):
FILTER_ORDER = 1;
FILTER_CUTOFF = 1 / (2 * pi * 300e-3);
[b_lp, a_lp] = butter(FILTER_ORDER, FILTER_CUTOFF / (FS / 2), 'low');
%
s = SCALING_FACTOR * filter(b_lp, a_lp, u_q) ;
P = s ;
%
tau = 20;
transient = tau * FS; % index after the transient
Pmax = max(P(transient:end));
%
s = s(transient:end);
%
NUMOF_CLASSES = 10000; % number of bins used for the histogram
[bin_cnt, cpf.magnitude] = hist(s, NUMOF_CLASSES); % sorts data into the number of bins specified
I can provide more information if required; I'm not very familiar with Octave.
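For reference, here is a minimal diagnostic sketch I could try (an assumption on my part, not a confirmed fix): it forces the transient index to be a valid integer subscript and checks the data handed to hist().
transient = max(1, round(tau * FS));   % force an integer index >= 1
if transient > numel(P)
  error('transient index (%d) exceeds the signal length (%d)', transient, numel(P));
end
s = s(transient:end);
if isempty(s) || any(~isfinite(s))
  error('hist() input is empty or contains NaN/Inf');
end
[bin_cnt, cpf.magnitude] = hist(s, NUMOF_CLASSES);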

Related

Uniswap Liquidity Calculation issue on Arbitrum Chain

According to "Liquidity Math in Uniswap v3", the liquidity of a position should be:
L = amt0 * (sqrt(upper) * sqrt(cprice)) / (sqrt(upper) - sqrt(cprice))
or
L = amt1 / (sqrt(cprice) - sqrt(lower))
I tried to calculate the liquidity of the below position on Arbitrum:
The NFT token ID of the position is 69171, so I can get the liquidity by calling the contract (0xC36442b4a4522E871399CD717aBDD847Ab11FE88) on https://arbiscan.io
You can see it shows the liquidity is 50242219347523, and we can do more unit conversion:
Now I try to calculate this number with the Uniswap V3 math:
This is the output:
As we can see, the code output is very similar to the contract output, but if we look carefully, we will find that the units seem to be different. I know the unit of the contract output should be 'wei', but I don't know what the unit of the code result is. Can anybody help? Thanks.
I checked that position and pool. It is best to query the current price from the pool's contract; for a quick look, the UI can be found at https://info.uniswap.org/#/arbitrum/pools/0x2f5e87c9312fa29aed5c179e456625d79015299c
The current price is shown as 11.9011 ETH per BTC, and there are 0.3122 BTC and 1.466 ETH in the pool. This gives:
price = 11.9011 * (1e18 / 1e8)
x = 0.3122 * 1e8
y = 1.466 * 1e18
The tick range of the position No. 69171 is 253300 to 259900. Use these values to calculate sp = sqrt(price) and the square roots of the price range boundaries, and from them, the liquidity:
sp = price ** 0.5
sa = 1.0001 ** (253300 // 2)
sb = 1.0001 ** (259900 // 2)
Lx = x * sp * sb / (sb - sp)
Ly = y / (sp - sa)
L = min(Lx, Ly)
The result Lx is 49905251975363.266 and Ly is 51071435112054.96. The Etherscan info shows liquidity L = 50242219347523, which lies between these two values, which differ by a few percent. A few percent is an acceptable error given the imprecise input values used in this calculation; the UI shows the price and amount values in a rounded format.
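For completeness, the fragments above can be run end to end; a consolidated Octave sketch using the same rounded UI values, so the result is only approximate:
% Consolidated sketch of the calculation above, using the rounded UI values.
price = 11.9011 * (1e18 / 1e8);   % ETH per BTC, adjusted for the token decimals
x = 0.3122 * 1e8;                 % BTC amount in its smallest unit
y = 1.466 * 1e18;                 % ETH amount in wei
sp = sqrt(price);                 % sqrt of the current price
sa = 1.0001 ^ (253300 / 2);       % sqrt price at the lower tick
sb = 1.0001 ^ (259900 / 2);       % sqrt price at the upper tick
Lx = x * sp * sb / (sb - sp);     % ~4.99e13
Ly = y / (sp - sa);               % ~5.11e13
L = min(Lx, Ly)                   % contract reports 50242219347523, in between Lx and Ly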

Re-implementing Mueller and Müller clock recovery with control_loop

I'm currently implementing symbol time recovery blocks. The idea is to be able to choose different TEDs (Gardner, Zero-crossing, Early-Late, Maximum-likelihood, etc.). In blocks like M&M recovery, the gain parameters of the loop are expressed explicitly (gain_omega and gain_mu), which can be difficult to get right. The control_loop class is, however, more convenient: loop characteristics can be specified by "loop bandwidth" and "damping factor" (zeta). So my first test started with the re-implementation of the M&M clock recovery with a control loop. The work function of this block is shown below (comments are mine):
int
clock_recovery_mm_ff_impl::general_work(int noutput_items,
                                        gr_vector_int &ninput_items,
                                        gr_vector_const_void_star &input_items,
                                        gr_vector_void_star &output_items)
{
  const float *in = (const float *)input_items[0];
  float *out = (float *)output_items[0];

  int ii = 0;                                   // input index
  int oo = 0;                                   // output index
  int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
  float mm_val;

  while (oo < noutput_items && ii < ni) {
    // produce output sample
    out[oo] = d_interp->interpolate(&in[ii], d_mu); // interpolation
    mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample; // error calculation
    d_last_sample = out[oo];

    // loop filtering
    d_omega = d_omega + d_gain_omega * mm_val; // frequency
    d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim); // bound the frequency
    d_mu = d_mu + d_omega + d_gain_mu * mm_val; // phase

    ii += (int)floor(d_mu);     // basepoint index
    d_mu = d_mu - floor(d_mu);  // fractional interval
    oo++;
  }

  consume_each(ii);
  return oo;
}
Here is my code. First, the control loop is initialized in the constructor:
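// control loop constructor arguments: the loop bandwidth (0.02), followed by the
// maximum and minimum frequency the loop is allowed to track,
// i.e. (1 +/- d_omega_relative_limit) * omega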
loop(new gr::blocks::control_loop(0.02,(1 + d_omega_relative_limit)*omega,
(1 - d_omega_relative_limit)*omega))
First of all, I would like to clear up a couple of doubts I have regarding the PLL (the control_loop above) in symbol timing recovery, particularly the phase and frequency ranges (which are in turn used for wrapping). Taking an analogy from the Costas loop: carrier phase is wrapped between -2pi and +2pi, and the frequency offset is tracked between -1 and +1. It is quite straightforward to see why. Unfortunately, I can't get my head around phase and frequency tracking in symbol recovery. In the M&M block, the frequency is tracked between (1 + omega_relative_limit)*omega and (1 - omega_relative_limit)*omega, where omega is simply the number of samples per symbol. The phase is tracked between 0 and omega. I don't understand why this is so, and why the M&M block doesn't wrap it. Any ideas here will be appreciated.
And here is my work function:
int
debug_time_recovery_pam_test_1_impl::general_work(int noutput_items,
                                                  gr_vector_int &ninput_items,
                                                  gr_vector_const_void_star &input_items,
                                                  gr_vector_void_star &output_items)
{
  const float *in = (const float *)input_items[0];
  float *out = (float *)output_items[0];

  int ii = 0;                                   // input index
  int oo = 0;                                   // output index
  int ni = ninput_items[0] - d_interp->ntaps(); // don't use more input than this
  float mm_val;

  while (oo < noutput_items && ii < ni) {
    // produce output sample
    out[oo] = d_interp->interpolate(&in[ii], d_mu);

    // calculate the error
    mm_val = slice(d_last_sample) * out[oo] - slice(out[oo]) * d_last_sample;
    d_last_sample = out[oo];

    // loop filtering
    loop->advance_loop(mm_val); // filter the error
    loop->frequency_limit();    // stop the frequency from wandering too far

    // loop phase and frequency
    d_omega = loop->get_frequency();
    d_mu = loop->get_phase();
    //d_omega = d_omega + d_gain_omega * mm_val;
    //d_omega = d_omega_mid + gr::branchless_clip(d_omega - d_omega_mid, d_omega_lim);
    //d_mu = d_mu + d_omega + d_gain_mu * mm_val;

    ii += (int)floor(d_mu);     // basepoint index
    d_mu = d_mu - floor(d_mu);  // fractional interval
    oo++;
  }

  // Tell the runtime system how many input items we consumed and how many output items we produced.
  consume_each(ii);
  return oo;
}
I have tried to use the block in a GFSK demodulator, and I got this error:
python: /build/gnuradio-bJXzXK/gnuradio-3.7.9.1/gnuradio-runtime/include/gnuradio/buffer.h:177: unsigned int gr::buffer::index_add(unsigned int, unsigned int): Assertion `s < d_bufsize' failed.
The first Google search regarding this error suggests that I'm somehow "abusing" the scheduler, since this error comes from somewhere below the API. I think my calculation of d_omega and d_mu from the control loop is a bit naive, but unfortunately I don't know any other way of doing it. Another alternative would be to use a modulo-1 counter (incrementing or decrementing), but I want to explore this option first.

Neural Network for Linear Regression: prediction different every time

I have 200 training examples. I have run linear regression with 6 features on this dataset and it works fine, so I want to run neural networks on it too.
Problem: each time I run the program, the prediction (pred) is different, vastly different!
input_layer_size = 6;
hidden_layer_size = 3;
num_labels = 1;
% Load Training Data
load('capitaldata.mat');
% example size
m = size(X, 1);
% initialize theta
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
% find optimal theta
options = optimset('MaxIter', 50);
% set regularization parameter
lambda = 1;
% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunctionLinear(p, input_layer_size, hidden_layer_size, num_labels, X, y, lambda);
% Now, costFunction is a function that takes in only one argument (the neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), num_labels, (hidden_layer_size + 1));
% test case
test = [18 279 86 59 23 16];
pred = predict(Theta1, Theta2, test);
display(pred);
Functions that are called by the above program:
1) randInitializeWeights.m
function W = randInitializeWeights(L_in, L_out)
W = zeros(L_out, 1 + L_in);
epsilon_init = 0.12;
W = rand(L_out , 1 + L_in) * 2 * epsilon_init - epsilon_init;
end;
2) nnCostFunctionLinear.m should be right since the test result is correct. Let me know if you would like to see it too.
I suspect that the problem is the dataset size, the number of features, or the weight initialization.
Thank you in advance for your help!
As a test, you can seed the random number generator with the same value before each run so that it produces the same sequence of random numbers every time. Search for "random seed" and the name of the software you are using to find out how to set the seed for the random number generator.
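For example, in Octave or older MATLAB you can seed rand() before the weights are initialized (a minimal sketch; newer MATLAB versions use rng(42) instead):
% Seed the uniform generator used by rand() so every run starts
% from the same random initial weights.
rand('state', 42);
initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);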

Comparing Runtimes - Theoretical to Actual

Firstly, sorry for the long post, but I need to be detailed in my explanation here. So here's what I have: I have code that measures the runtimes of the mergesort and radix sort algorithms for four different sizes of data.
Mergesort runtimes:
N = 10; runtime = 3499 nanoseconds
N = 100; runtime = 39600 nanoseconds
N = 1000; runtime = 470199 nanoseconds
N = 10000; runtime = 6227399 nanoseconds
Radixsort runtimes:
N = 10; runtime = 19200 nanoseconds
N = 100; runtime = 135099 nanoseconds
N = 1000; runtime = 1317799 nanoseconds
N = 10000; runtime = 14208600 nanoseconds
I have also measured the runtime of a single operation to be roughly 1000 nanoseconds on this machine. This was recommended by the professor as a means to help convert theoretical runtimes into something we can compare to the actual runtimes. For mergesort, I have O(n log(n)) as the runtime, and for radix sort I have O(nk), although I'm not entirely sure what the k represents. He suggested we do the following conversion, so I've done it for each of the mergesort cases. I don't know how to do this for radix sort, as I don't know how to factor in the k. My understanding is that k basically refers to the number of digits, but you can essentially stick with whichever is larger (N or k), so since my N is always larger than k in the cases I'm working with, I'm just going to treat radix sort as O(N). k is limited to six digits at most, while N starts at 10 at the lowest value.
1000ns * theoreticalruntime
For example, 1000ns * 10 log2(10)
Mergesort:
N = 10; 33219.3 nanoseconds
N = 100; 664385.6 nanoseconds
N = 1000; 9.96578428 * 10^6 nanoseconds
N = 10000; 1.3287712379549449 * 10^8 nanoseconds
Radixsort: (1000ns per operation * N)
N = 10; 10000
N = 100; 100000
N = 1000; 1000000
N = 10000; 10000000
So here's where my issue comes in. One, I don't know how to do this calculation for the radixsort theoretical runtime. Two, I don't know exactly how to compare these values using a graph (the requirement).
In class, he was discussing using logs to "normalize" the data. The Y-axis would be N and the X-axis would be time, but he was talking about being able to use logs to change the N values from 10, 100, 1000, and 10000 so that they would show up as N = 1, 2, 3, 4. I have no idea how to do this, and I don't really know what I'd be plotting on the graph. If there's a better place I could be asking this, please point me in that direction. Time runs short.
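A minimal sketch of one way to make such a plot in Octave/MATLAB, using the measured numbers above and treating radix sort as O(N) as discussed: taking log10 of N maps 10, 100, 1000, 10000 to 1, 2, 3, 4 on the axis, and a log scale on the time axis keeps all four orders of magnitude visible.
% Compare measured runtimes to 1000 ns * theoretical operation counts.
N = [10 100 1000 10000];
merge_actual = [3499 39600 470199 6227399];       % measured, nanoseconds
radix_actual = [19200 135099 1317799 14208600];   % measured, nanoseconds
merge_theory = 1000 .* N .* log2(N);              % 1000 ns * n log2(n)
radix_theory = 1000 .* N;                         % 1000 ns * n (k treated as constant)
semilogy(log10(N), merge_actual, '-o', log10(N), merge_theory, '--o', ...
         log10(N), radix_actual, '-s', log10(N), radix_theory, '--s');
xlabel('log10(N)');
ylabel('runtime (ns)');
legend('mergesort measured', 'mergesort theory', 'radix measured', 'radix theory', ...
       'location', 'northwest');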

How to calculate MAPE for Training/Test set in application of Neural Network in MATLAB efficiently?

I've been using MATLAB for my time series dataset (an electricity dataset) as part of my course. It consists of 40,000+ samples. After forming the neural network, I wanted to test its accuracy. I'm mostly interested in the MAPE (mean absolute percentage error) and RMS (root mean square) errors. To calculate them, I've used the following lines of code:
mape_res = zeros(N_TRAIN);
mse_res = zeros(N_TRAIN);
for i_train = 1:N_TRAIN
Inp = inputs_consumption(i_train );
Actual_Output = targets_consumption( i_train + 1 );
Observed_Output = sim( ann, Inp );
mape_res(i_train) = abs(Observed_Output - Actual_Output)/Actual_Output;
mse_res(i_train) = Observed_Output - Actual_Output;
end
mape = sum(mape_res)/N_TRAIN;
mse = sum(power(mse_res,2))/N_TRAIN;
sprintf( 'The MSE on training is %g', mse )
sprintf( 'The MAPE on training is %g', mape )
The problem with the above code is that, for a large dataset (40K samples), it takes almost 15 minutes to iterate through the loop, which is quite a long wait just to get the error rates. Isn't there a more efficient way to calculate them?
You could always do a rolling average that gets updated each iteration, as follows:
mape_res = abs(Observed_Output - Actual_Output) / Actual_Output;
mse_res = Observed_Output - Actual_Output;
alpha = 1 / i_train;
mape = mape * (1 - alpha) + mape_res * alpha;
mse = mse * (1 - alpha) + power(mse_res,2) * alpha;
Then you could either display the resulting values each iteration, use them for stopping criteria if the desired error rate is reached, or both. This also has the added benefit of not requiring the initialization and population of the mape_res and mse_res vectors unless they happen to be needed elsewhere...
Edit: Do make sure to initialize the mape and mse values to zero prior to entering the for loop :)
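Alternatively, if sim() accepts a whole batch of inputs at once, the loop can be avoided entirely. A sketch, assuming inputs_consumption and targets_consumption are row vectors indexed as in the question and that sim() takes concurrent inputs, one sample per column:
% Vectorized sketch: call sim() once on the whole training set instead of
% once per sample.
idx      = 1:N_TRAIN;
Inp      = inputs_consumption(idx);       % 1 x N_TRAIN inputs
Actual   = targets_consumption(idx + 1);  % 1 x N_TRAIN targets
Observed = sim(ann, Inp);                 % single call instead of N_TRAIN calls
mape = mean(abs(Observed - Actual) ./ Actual);
mse  = mean((Observed - Actual) .^ 2);
sprintf('The MSE on training is %g', mse)
sprintf('The MAPE on training is %g', mape)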
