Given any binary, for example <<1, 0, 110, 64>>, how can we determine if a particular bit is set?
Say we wish to determine whether bit-1 and bit-2 are set. One would expect this to work, but it doesn't:
<<bit1::bits-size(1), bit2::bits-size(1), _rest::bits>> = <<1, 0, 110, 64>>
Gives:
iex(5)> {bit1, bit2}
{<<0::size(1)>>, <<0::size(1)>>}
Correct answer (from Igor and other comments):
<<_::size(6), bit2::size(1), bit1::size(1), num::bits>> = <<1, 0, 110, 64>>
Gives the expected answer:
{bit1, bit2} = {1, 0}
Background
I'm building a parser to handle this: https://msdn.microsoft.com/en-us/library/vs/alm/dd943386(v=office.12).aspx
Using this C# code as a template, I get the correct result: <<1, 0, 110, 64>> = 2.4
https://github.com/ChiangHanLung/PIC_VDS/blob/f96afdd3863f5ce1df237b2784040624bc88b16b/Reference_DLL_SourceCode/NPOI/HSSF/Util/RKUtil.cs#L33-L74
My equivalent Elixir implementation of the above works as expected, but I believe using bit-string parsing should be possible (and cleaner):
# requires `use Bitwise` for >>>, &&&, and <<<
def rk_number(data) do
  # IO.puts " ** rk-data: #{inspect data}"
  n0 = :binary.decode_unsigned(data, :little)
  n1 = n0 >>> 2

  n2 =
    if (n0 &&& 0x2) == 0x2 do
      # bit-2 set: the remaining 30 bits are a signed integer
      <<v::little-signed-32>> = <<n1::little-32>>
      v
    else
      # bit-2 clear: the remaining 30 bits are the high bits of a 64-bit float
      n3 = n1 <<< 34
      <<v::little-float-64>> = <<n3::little-64>>
      v
    end

  # bit-1 set: divide by 100
  if (n0 &&& 0x1) == 0x1 do
    n2 / 100
  else
    n2
  end
end
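For cross-checking, here is a rough Python translation of the same decoding logic, following the C# template linked above (a sketch only; the name rk_number simply mirrors the Elixir function):

import struct

def rk_number(data: bytes) -> float:
    n0 = int.from_bytes(data, "little")
    div100 = n0 & 0x1   # bit-1: divide the result by 100
    is_int = n0 & 0x2   # bit-2: the value is a 30-bit signed integer
    raw = n0 >> 2
    if is_int:
        # sign-extend the 30-bit integer
        value = raw - (1 << 30) if raw & (1 << 29) else raw
    else:
        # the 30 bits are the high bits of an IEEE-754 double
        value = struct.unpack("<d", struct.pack("<Q", raw << 34))[0]
    return value / 100 if div100 else value

print(rk_number(bytes([1, 0, 110, 64])))  # prints 2.4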
That's because every number in the <<1, 0, 110, 64>> representation has size 8 (one byte) by default.
That's why
<<bit1::bits-size(1), bit2::bits-size(1), _rest::bits>> = <<1, 0, 110, 64>>
{bit1, bit2} = {<<0::size(1)>>, <<0::size(1)>>}
because the first two bits of 1 stored in 8 bits (00000001) are both 0.
But
<<bit1::bits-size(8), bit2::bits-size(8), _rest::bits>> = <<1, 0, 110, 64>>
{bit1, bit2} = {<<1>>, <<0>>}
Or
<<bit1::bits-size(1), bit2::bits-size(1), _rest::bits>> = <<1::size(1), 0::size(1), 110, 64>>
{bit1, bit2} = {<<1::size(1)>>, <<0::size(1)>>}
If you have an integer and you're trying to get its first two bits, you may try something like this:
<<bit1::bits-size(1), bit2::bits-size(1), _rest::bits>> = :binary.encode_unsigned(your_integer)
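To see why the original match returned zeros, look at the first byte bit by bit; a quick illustration in Python (used here purely for display):

# the first byte of <<1, 0, 110, 64>> is 1 = 0b00000001,
# and a size-1 match starts at the most significant bit
print(format(1, "08b"))      # 00000001
print(format(1, "08b")[:2])  # 00 -> both one-bit matches yield 0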
I've got the answer, after considering one of the comments above:
<<_::size(6), bit2::size(1), bit1::size(1), num::bits>> = <<1, 0, 110, 64>>
{bit1, bit2} = {1, 0}
which gives the expected result.
Related
I'm currently trying to build my own FM synthesizer, but instead of modulating sine waves, I modulate wavetables, i.e. one-cycle waveforms.
I have implemented the phase accumulator, but the problem is that when I do a pitch change, like a vibrato for example, I get several clicks. The program is written in Nim, a compiled language, for performance and ease of use.
import sdl2
import sdl2/audio
import math

# My callback user data
type
  cbData = object
    audioData: array[32, uint8]
    offset: int32
    stride: int32

# the playback frequency
let FREQ = 44100
# the number of samples in the buffer
let SAMPLES = 4096

# The phase, from the phase accumulator
var phase = 0
# The playback pitch, here an A-3 note
var pitch = 440.0

# My phase accumulator function: it uses a sawtooth wave to determine
# which sample of the wavetable to output to the buffer
proc phaseAccumulator(x: int, pitch: float = 440.0): int =
  var length = 32   # length of the wavetable
  var pitch = pitch # which pitch to play
  var a = -((pitch * x.float * PI) / (FREQ.float * PI))
  var o = (-length.float * (2 * (a - floor(a)) - 1) / 2) + (length.float / 2)
  echo o.int
  # Some clipping
  if o > 31:
    o = 31
  return floor(o).int
proc myCallback(userdata: pointer, stream: ptr uint8, len: cint) {.cdecl.} =
  # Casting the raw pointer to a cbData pointer and dereferencing it
  var data = cast[ptr cbData](userdata)[]
  # Applying a vibrato!
  pitch = pitch + (sin(phase.float) * 4)
  # Filling the buffer
  for x in countup(0, (len div 4)):
    # getting the index of the wavetable sample from the phase accumulator
    var ind = phaseAccumulator(phase, pitch)
    inc phase # and advancing the phase accumulator
    # Getting the sample value at that index, converting to float and
    # dividing by 255 since it was an 8-bit number.
    var sampleValue = data.audioData[ind].float / 255.0
    # putting the sample into the buffer
    cast[ptr UncheckedArray[float32]](stream)[x] = sampleValue
proc main(): void =
  var wavetable2 = [31.uint8, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
  # We will use this wavetable
  var wavetable = [23.uint8, 31, 31, 28, 27, 26, 19, 15, 19, 23, 17, 10, 9, 11, 8, 10, 20, 29, 28, 22, 19, 17, 9, 5, 8, 13, 8, 3, 2, 6, 5, 9]
  # Amplifying it!
  for i in wavetable.mitems:
    # i = i - 16
    i = i * 3
  if init(INIT_AUDIO).int < 0:
    echo "FATAL ERROR"
  var myData: cbData
  myData.audioData = wavetable
  myData.offset = 0       # Unused here
  myData.stride = 0.int32 # Unused here
  # SDL2 audio specs
  var mySpecs: AudioSpec
  mySpecs.format = AUDIO_F32
  mySpecs.freq = FREQ.cint
  mySpecs.channels = 1
  mySpecs.samples = SAMPLES.uint16
  mySpecs.padding = 0
  mySpecs.callback = myCallback
  mySpecs.userdata = myData.addr
  # Turning audio on
  discard openAudio(mySpecs.addr, nil)
  pauseAudio(0)
  while true:
    # Infinite loop to keep the audio from shutting down. I do an operation
    # here because Nim forces me to put something inside a loop body...
    var u = 0

# Starting the program
main()
I used a saw function to build the phase accumulator:
o(x) = -l * (2 * (a - floor(a)) - 1) / 2 + l / 2, with a = -(f * x) / 44100
where 44100 is the playback frequency, l the length of the wavetable and f the pitch.
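That formula is exactly what phaseAccumulator computes; for reference, a plain Python transcription of it (note the PI factors cancel):

import math

FREQ = 44100  # playback frequency
LENGTH = 32   # wavetable length

def phase_accumulator(x, pitch=440.0):
    a = -(pitch * x) / FREQ  # sawtooth phase
    o = -LENGTH * (2 * (a - math.floor(a)) - 1) / 2 + LENGTH / 2
    return min(int(o), LENGTH - 1)  # clip to the table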
How can I get rid of the clicks? And is there a better way to implement a phase accumulator? Thanks for your answers.
Find the minimum sum of elements with one element from each row. I think the answer is -214, but z3py returns unsat. What is wrong?
from z3 import Solver, Int, ForAll, Or
ARR = [
[36, 12, 90, 88, 82],
[-92, 50, 40, 31, 43],
[81, 28, -26, 8, -59],
[18, -99, -70, -33, 58],
[44, -33, 24, -92, -68],
]
s = Solver()
xs = [Int(f"x_{i}") for i, row in enumerate(ARR)]
ys = [Int(f"y_{i}") for i, row in enumerate(ARR)]
for x, y, row in zip(xs, ys, ARR):
    s.add(Or(*[x == val for val in row]))
    s.add(Or(*[y == val for val in row]))
s.add(ForAll(ys, sum(xs) <= sum(ys)))
print(s.check()) # unsat
Your encoding isn't quite correct. If you stick the following line in your program:
print(s.sexpr())
You'll see that it prints, amongst other things:
(assert (forall ((y_0 Int) (y_1 Int) (y_2 Int) (y_3 Int) (y_4 Int))
(<= (+ 0 x_0 x_1 x_2 x_3 x_4) (+ 0 y_0 y_1 y_2 y_3 y_4))))
And this is the reason it is unsat: the formula is quantified, so the inequality must hold for all integer values of y_0 .. y_4, not just values drawn from the rows (the Or constraints you added on ys do not reach inside the quantifier). Since unconstrained ys can make the right-hand sum arbitrarily small, no choice of xs can satisfy it, hence the unsat result.
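(If you wanted to keep a quantified encoding, the restriction that the ys come from the rows has to sit inside the quantifier, for instance as an implication; a sketch, replacing the unguarded ForAll and the separate Or constraints on ys:)

from z3 import And, Implies

ys_valid = And(*[Or(*[y == val for val in row]) for y, row in zip(ys, ARR)])
s.add(ForAll(ys, Implies(ys_valid, sum(xs) <= sum(ys))))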
Rather than a quantified formulation, though, you should use z3's optimization engine. Pick one variable from each row, add them, and minimize that result. Something like this:
from z3 import *
ARR = [
[36, 12, 90, 88, 82],
[-92, 50, 40, 31, 43],
[81, 28, -26, 8, -59],
[18, -99, -70, -33, 58],
[44, -33, 24, -92, -68],
]
o = Optimize()
es = [Int(f"e_{i}") for i, row in enumerate(ARR)]
for e, row in zip(es, ARR):
    o.add(Or(*[e == val for val in row]))
minTotal = Int("minTotal")
o.add(minTotal == sum(es))
o.minimize(minTotal)
print(o.check())
print(o.model())
When I run this, I get:
sat
[e_0 = 12,
e_3 = -99,
e_2 = -59,
e_1 = -92,
e_4 = -92,
minTotal = -330]
That is, the solver picks 12 from the first row, -92 from the second, -59 from the third, -99 from the fourth, and -92 from the last row, for a minimum sum of -330.
It's easy to see that this is the correct solution, since the solver picks the minimum element from each row, and thus their sum is minimal as well. (I'm not sure why you were expecting -214 to be the answer.)
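As a sanity check, a brute-force enumeration in plain Python over all 5^5 = 3125 ways of picking one element per row agrees:

from itertools import product

# try every combination of one element from each row; keep the smallest sum
print(min(sum(pick) for pick in product(*ARR)))  # prints -330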
I need help. I wrote the code and checked it a hundred times, but I can't find the error. All the code before the while loop works without errors; the error is in the loop itself. When I run it, I get an infinite loop.
I would be grateful if you could tell me where I made the mistake and why it turns out to be an infinite loop.
class Semis {
  int i;
  double k;
  Semis(this.i, this.k);
}

void main() {
  var p = [
    0, 1, 5, 8, 9, 10, 17, 17, 20, 24, // 0X's
    30, 32, 35, 39, 43, 43, 45, 49, 50, 54, // 1X's
    57, 60, 65, 68, 70, 74, 80, 81, 84, 85, // 2X's
    87, 91, 95, 99, 101, 104, 107, 112, 115, 116, // 3X's
    119, 121, 125, 129, 131, 134, 135, 140, 143, 145, // 4X's
    151
  ];
  Function cutLog = (List p, int n) {
    // Some array to store calculated values
    num sum = 0;
    int iter = 0;
    int stock = n;
    List<Semis> pL = [];
    var map = Map.fromIterable(p,
        key: (index) => p.indexOf(index),
        value: (item) => item / (p.indexOf(item) > 0 ? p.indexOf(item) : 1));
    var sortedMap = Map.fromEntries(
        map.entries.toList()..sort((e1, e2) => e2.value.compareTo(e1.value)));
    sortedMap.forEach(
        (i, k) => pL.isEmpty || pL.last.i > i ? pL.add(Semis(i, k)) : null);
    while (stock > 0) {
      if ((stock - pL[iter].i) > 0) {
        sum = sum + p[pL[iter].i];
        stock = stock - pL[iter].i;
      } else
        iter++;
    }
    return sum; // Good luck intern!
  };
  print(cutLog(p, 5));
}
You get an infinite loop because the condition of the loop never fails.
The condition is stock > 0. However, what you do in the loop is:
If stock minus some value is > 0, you decrease stock by that value, so stock remains higher than 0.
Else, you increment the iterator.
You never actually allow stock to be decremented all the way to 0. I think your if comparison should use >= 0, if that is logical for your algorithm. If not, then you probably need to rework it further.
If you look at the list of Semis you create, its last element is Semis(0, 0.0).
That means that your loop will eventually reach it, and then
if ((stock - pL[iter].i) > 0) {
  sum = sum + p[pL[iter].i];
  stock = stock - pL[iter].i;
will do nothing, because pL[iter].i is zero.
You probably need to bail out of the loop at that point.
Your loop will, as Lyubomir Vasilev says, never have a false condition because stock never reaches zero. Your iter is incremented, and had it not been for the Semis(0, _) entry, you would eventually have run past the end of the pL array and gotten an index-out-of-range error. With that zero value in the list, the loop runs forever.
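The failure mode boils down to subtracting zero forever. A minimal sketch of the mechanism (in Python, with a made-up sizes list standing in for pL):

stock = 5
sizes = [3, 0]  # ends in a zero entry, like pL ending in Semis(0, 0.0)
i = 0
while stock > 0:
    if stock - sizes[i] > 0:
        stock -= sizes[i]  # once sizes[i] == 0, stock never changes again
    else:
        i += 1             # i stops advancing after the zero entry is reached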
First of all, I'm new to machine learning.
I am trying to predict the price of second-hand cars. These cars have makes and models, so I used a MultiLabelBinarizer to make a sparse matrix and handle the categorical attributes. Here's the code:
from sklearn.preprocessing import MultiLabelBinarizer
encoder = MultiLabelBinarizer()
make_cat_1hot = encoder.fit_transform(make_cat)
model_cat_1hot = encoder.fit_transform(model_cat)
type_cat_1hot = encoder.fit_transform(type_cat)
print(type(make_cat_1hot))
carInfoModHot = carsInfoMod.copy()
carInfoModHot["makeHot"] = make_cat_1hot.tolist()
carInfoModHot["modelHot"] = model_cat_1hot.tolist()
carInfoModHot["typeHot"] = type_cat_1hot.tolist()
doors km make year makeHot modelHot
5.0 78779 Mercedes 2012 [0, 0, 0, 0, 1, 0, 0, 0, ...[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, ...
5.0 25463 Bmw 2015 [0, 1, 0, 0, 0, 0, 0, ... [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, ...
Then I used it to make a prediction and get the mean squared error with a linear regression:
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
import numpy as np

lr = linear_model.LinearRegression()
carsInfoTrainHot = carInfoModHot.drop(["price"], axis=1) # drop labels for training set
df1 = carsInfoTrainHot.iloc[:30000, :]
carsLabels1 = carsInfoMod.iloc[:30000, 3]
print(carsInfoTrainHot.head())
df2 = carsInfoTrainHot.iloc[30001:60000, :]
carsLabels2 = carsInfoMod.iloc[30001:60000, 3]
df3 = carsInfoTrainHot.iloc[60001:, :]
carsLabels3 = carsInfoMod.iloc[60001:, 3]
lr.fit(df1, carsLabels1)
print(carsInfoTrainHot.shape)
carPrediction = lr.predict(df2)
lin_mse = mean_squared_error(carsLabels2, carPrediction)
lin_rmse = np.sqrt(lin_mse)
But I get this error:
ValueError Traceback (most recent call
last) in <module>()
12 carsLabels3 = carsInfoMod.iloc[60001:, 3]
13
---> 14 lr.fit(df1, carsLabels1)
15 print(carsInfoTrainHot.shape)
16 carPrediction = lr.predict(df2)
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/base.py
in fit(self, X, y, sample_weight)
510 n_jobs_ = self.n_jobs
511 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'],
--> 512 y_numeric=True, multi_output=True)
513
514 if sample_weight is not None and np.atleast_1d(sample_weight).ndim > 1:
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py
in check_X_y(X, y, accept_sparse, dtype, order, copy,
force_all_finite, ensure_2d, allow_nd, multi_output,
ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype,
estimator)
519 X = check_array(X, accept_sparse, dtype, order, copy, force_all_finite,
520 ensure_2d, allow_nd, ensure_min_samples,
--> 521 ensure_min_features, warn_on_dtype, estimator)
522 if multi_output:
523 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False,
/home/vagrant/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py
in check_array(array, accept_sparse, dtype, order, copy,
force_all_finite, ensure_2d, allow_nd, ensure_min_samples,
ensure_min_features, warn_on_dtype, estimator)
400 # make sure we actually converted to numeric:
401 if dtype_numeric and array.dtype.kind == "O":
--> 402 array = array.astype(np.float64)
403 if not allow_nd and array.ndim >= 3:
404 raise ValueError("Found array with dim %d. %s expected <= 2."
ValueError: setting an array element with a sequence.
From what I understand, I'm inserting an array into the categorical attributes, but how else can I convert the categorical values to a sparse matrix?
Thanks.
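For what it's worth, the failure and one common workaround can be reproduced on toy data (the column names below are illustrative, not the poster's dataset):

import numpy as np
import pandas as pd

# cells holding whole lists give the column dtype=object; sklearn's input
# validation then fails with "setting an array element with a sequence"
df = pd.DataFrame({"km": [78779, 25463], "makeHot": [[0, 1], [1, 0]]})

# expand the one-hot lists into real numeric columns instead of nested lists
X = np.hstack([df[["km"]].to_numpy(), np.vstack(df["makeHot"])])
print(X.shape)  # (2, 3): a plain 2-D numeric matrix LinearRegression accepts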
Here is the f(x) I want to implement with TensorFlow:
input x = (x1, x2, x3, x4, x5, x6, x7, x8, x9)
define f(x) = f1(x1, x2, x3, x4, x5) + f2(x5, x6, x7, x8, x9)
where
f1(x1, x2, x3, x4, x5) = 1 if (x1, x2, x3, x4, x5) = (0, 0, 0, 0, 0), g1(x1, x2, x3, x4, x5) otherwise
f2(x5, x6, x7, x8, x9) = 1 if (x5, x6, x7, x8, x9) = (0, 0, 0, 0, 0), g2(x5, x6, x7, x8, x9) otherwise
This is my tensorflow code
import tensorflow as tf
import numpy as np
ph = tf.placeholder(dtype=tf.float32, shape=[None, 9])
x1 = tf.slice(ph, [0, 0], [-1, 5])
x2 = tf.slice(ph, [0, 4], [-1, 5])
fixed1 = tf.placeholder(dtype=tf.float32, shape=[1, 5])
fixed2 = tf.placeholder(dtype=tf.float32, shape=[1, 5])
# MLP 1
w1 = tf.Variable(tf.ones([5, 1]))
g1 = tf.matmul(x1, w1)
# MLP 2
w2 = tf.Variable(-tf.ones([5, 1]))
g2 = tf.matmul(x2, w2)
check1 = tf.reduce_all(tf.equal(x1, fixed1), axis=1, keep_dims=True)
check2 = tf.reduce_all(tf.equal(x2, fixed2), axis=1, keep_dims=True)
#### with Problem
f1 = tf.cond(check1,
lambda: tf.constant([2], dtype=tf.float32), lambda: g1)
f2 = tf.cond(check2,
lambda: tf.constant([1], dtype=tf.float32), lambda: g2)
####
f = tf.add(f1, f2)
x = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0, 0, 0, 0],
[9, 0, 0, 0, 0, 0, 0, 0, 0]])
fixed = np.array([[0, 0, 0, 0, 0]])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('(1)\n', sess.run(check1, feed_dict={ph: x, fixed1: fixed, fixed2: fixed}))
    print('(2)\n', sess.run(check2, feed_dict={ph: x, fixed1: fixed, fixed2: fixed}))
    print('(3)\n', sess.run(f, feed_dict={ph: x, fixed1: fixed, fixed2: fixed}))
    print('(4)\n', sess.run(f1, feed_dict={ph: x, fixed1: fixed, fixed2: fixed}))
    print('(5)\n', sess.run(f2, feed_dict={ph: x, fixed1: fixed, fixed2: fixed}))
In this case,
check1 is [[ True], [ True], [False], [False], [False]] with shape (5, 1)
check2 is [[ True], [False], [ True], [ True], [ True]] with shape (5, 1)
I expect the result of f to be [[3], [1], [2], [3], [10]], but it seems tf.cond() cannot handle a boolean tensor input with shape (5, 1).
Could you advise how to implement f(x) with TensorFlow, please?
This is the error message I received:
Traceback (most recent call last): File
"C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py",
line 670, in _call_cpp_shape_fn_impl
status) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\contextlib.py",
line 66, in __exit__
next(self.gen) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py",
line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status)) tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape
must be rank 0 but is rank 2 for 'cond/Switch' (op: 'Switch') with
input shapes: [?,1], [?,1].
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"C:/Users/hong/Dropbox/MLILAB/Research/GM-MLP/code/tensorflow_cond.py",
line 23, in <module>
lambda: tf.constant([2], dtype=tf.float32), lambda: g1) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py",
line 1765, in cond
p_2, p_1 = switch(pred, pred) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py",
line 318, in switch
return gen_control_flow_ops._switch(data, pred, name=name) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_control_flow_ops.py",
line 368, in _switch
result = _op_def_lib.apply_op("Switch", data=data, pred=pred, name=name) File
"C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py",
line 759, in apply_op
op_def=op_def) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py",
line 2242, in create_op
set_shapes_for_outputs(ret) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py",
line 1617, in set_shapes_for_outputs
shapes = shape_func(op) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py",
line 1568, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py",
line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn) File "C:\Users\hong\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\common_shapes.py",
line 675, in _call_cpp_shape_fn_impl
raise ValueError(err.message) ValueError: Shape must be rank 0 but is rank 2 for 'cond/Switch' (op: 'Switch') with input shapes: [?,1],
[?,1].
Process finished with exit code 1
I think you need tf.where, not tf.cond.
See the answer to this question: How to use tf.cond for batch processing
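Applied to the code above, that would look something like this (a sketch against the TF 1.x API used in the question; tf.where with a (?, 1) condition of the same shape as its branches selects elementwise):

# take the constant where the check is True, the MLP output otherwise
f1 = tf.where(check1, tf.fill(tf.shape(g1), 2.0), g1)
f2 = tf.where(check2, tf.fill(tf.shape(g2), 1.0), g2)
f = tf.add(f1, f2)
# with the inputs above, f evaluates to [[3.], [1.], [2.], [3.], [10.]]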