I am experimenting with Flutter and need to build a plugin package for Android and iOS; I have started with Android. The Android Java code I need to communicate with uses a byte array (byte[]) both as input and as the return type for some of its methods. What does this map to in Dart?
Here is the standard type mapping for platform channels:
https://flutter.io/platform-channels/#codec
On Android, byte[] maps to Uint8List.
Dart has a dart:typed_data core library exactly for this purpose:
https://api.dartlang.org/stable/1.24.3/dart-typed_data/dart-typed_data-library.html
I'm not 100% sure how this maps to the Flutter plugin model, though I suspect a Flutter user or developer can fill us in :)
You can use List<int> like so:
List<int> data = [102, 111, 114, 116, 121, 45, 116, 119, 111, 0]; // 'forty-two' plus a trailing NUL
Or Uint8List like this:
import 'dart:typed_data';

Uint8List data = Uint8List.fromList([102, 111, 114, 116, 121, 45, 116, 119, 111, 0]);
Also check out BytesBuilder, ByteData, and ByteBuffer for more byte-manipulation options. Read Working with bytes in Dart for more info.
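To make this concrete on the Dart side of a plugin, here is a minimal sketch; the channel name 'my_plugin' and method name 'transform' are made up for illustration, not taken from the question:
import 'dart:typed_data';
import 'package:flutter/services.dart';

const MethodChannel _channel = MethodChannel('my_plugin');

// Hypothetical wrapper: the Uint8List argument is delivered to the
// Android side as byte[], and the byte[] the Java method returns
// comes back to Dart as a Uint8List.
Future<Uint8List> transform(Uint8List input) async {
  final result = await _channel.invokeMethod('transform', input);
  return result as Uint8List;
}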
I'm trying to launch dask.cluster.Kmeans with a huge amount of data.
Working on the CPU is OK, since I wrap NumPy arrays with dask.array.
Working on the GPU doesn't seem to be possible due to unimplemented functionality in CuPy.
I've tried to reproduce Matthew Rocklin's example (https://blog.dask.org/2019/01/03/dask-array-gpus-first-steps) of generating a random Dask array from the CuPy random generator, and that works, but it's not the case I want to use.
Wrapping CuPy with dask.array doesn't work:
>>> import dask.array as da
>>> import cupy as cp
>>> da.from_array(cp.arange(100000)).sum().compute()
I expect the sum of this array but get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/base.py", line 175, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/base.py", line 446, in compute
results = schedule(dsk, keys, **kwargs)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/threaded.py", line 82, in get
**kwargs
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/local.py", line 491, in get_async
raise_exception(exc, tb)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/compatibility.py", line 130, in reraise
raise exc
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/local.py", line 233, in execute_task
result = _execute_task(task, data)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/core.py", line 119, in _execute_task
return func(*args2)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/array/core.py", line 100, in getter
c = np.asarray(c)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/numpy/core/numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: object __array__ method not producing an array
So how can I get Dask arrays to work with CuPy?
When creating the Dask array from a CuPy array, you need to supply da.from_array the keyword argument asarray=False. This stops Dask from calling np.asarray on each chunk (the call at dask/array/core.py line 100 in your traceback), so the chunks stay CuPy arrays instead of being coerced to NumPy. Your code would look like the following.
>>> import dask.array as da
>>> import cupy as cp
>>> da.from_array(cp.arange(100000), asarray=False).sum().compute()
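Note that the computed result is then itself a CuPy object rather than a NumPy one; if you need it on the host, convert it with cp.asnumpy (or float for a scalar).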
I have a bitstring of 4 bytes in Elixir, <<91, 84, 107, 24>>, and I want the equivalent decimal number. This bitstring is basically a representation of the epoch timestamp 1532259096. I googled a lot and could not find anything useful related to this in Elixir.
Note: Ultimately I want a DateTime from this; if I can skip converting to a number, that's wonderful.
You can use the binary pattern <<n::32>> to extract a big-endian 32-bit unsigned integer from a 4-byte binary:
iex(1)> <<n::32>> = <<91, 84, 107, 24>>
<<91, 84, 107, 24>>
iex(2)> n
1532259096
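Since you ultimately want a DateTime, you can convert the extracted Unix timestamp directly with DateTime.from_unix!/1 (which defaults to seconds):
iex(3)> DateTime.from_unix!(n)
#DateTime<2018-07-22 11:31:36Z>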
I'm working through Head First Python, and there's an example:
from datetime import datetime
odds = [ 1, 3, 5, 7, 9, 11, 13, 15, 17, 19,
21, 23, 25, 27, 29, 31, 33, 35, 37, 39,
41, 43, 45, 47, 49, 51, 53, 55, 57, 59 ]
right_this_minute = datetime.today().minute
if right_this_minute in odds:
    print("This minute seems a little odd.")
else:
    print("Not an odd minute.")
Now if I substitute "import datetime" for "from datetime import datetime", the interpreter gives me an error:
right_this_minute = datetime.today().minute
AttributeError: module 'datetime' has no attribute 'today'
I don't understand why "from datetime import datetime" works, but "import datetime" does not. I've gone through a number of Stack Overflow Q&As about this, but I'm obviously missing something.
Any suggestions would be greatly appreciated.
First of all, there are two "things" called datetime: the module and a class defined by the module.
The two import options you use have different behaviours.
When you run:
from datetime import datetime
the first datetime is the module and the second is the class: Python imports only the class datetime from the module, and from then on the name datetime refers to that class.
When you run:
import datetime
you import the whole module, so Python will understand datetime to be the module. To access the class datetime, you need to use datetime.datetime.
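So the book's snippet works with the plain module import once the class is qualified:
# datetime here is bound to the module, so the class must be qualified
import datetime

right_this_minute = datetime.datetime.today().minute
print(right_this_minute)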
I am trying to build a little file and email search engine. I'd also like to use more advanced search queries for the full-text search, hence I am looking at Lucene indexes. From what I have seen, there are two approaches - node_auto_index and apoc.index.addNode.
Setting up the index works fine, and indexing nodes with small properties works. When trying to index nodes with properties larger than 32k, Neo4j fails (and gets into an unusable state).
The error message boils down to:
WARNING: Failed to invoke procedure apoc.index.addNode: Caused by:
java.lang.IllegalArgumentException: Document contains at least one
immense term in field="text_e" (whose UTF8 encoding is longer than the
max length 32766), all of which were skipped. Please correct the
analyzer to not produce such terms. The prefix of the first immense
term is: '[110, 101, 111, 32, 110, 101, 111, 32, 110, 101, 111, 32,
110, 101, 111, 32, 110, 101, 111, 32, 110, 101, 111, 32, 110, 101,
111, 32, 110, 101]...', original message: bytes can be at most 32766
in length; got 40000
I have checked this on Neo4j 3.1.2 and on 3.1.0 + APOC 3.1.0.3.
A much longer description of the problem can be found at https://baach.de/Members/jhb/neo4j-full-text-indexing.
Is there any way to fix this? E.g. have I done anything wrong, or is there something to configure?
Thx a lot!
Neo4j does not support index values longer than ~32k because of an underlying Lucene limitation.
For some details around that area you can look at:
https://github.com/neo4j/neo4j/pull/6213 and https://github.com/neo4j/neo4j/pull/8404.
You need to split such long values into multiple terms.
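There is no configuration that raises the limit, so the split has to happen before the value is indexed. A rough sketch in Python of one way to do it; the chunk_text helper and the 32,000-byte safety margin are my own illustration, not an APOC or Neo4j API:
def chunk_text(text, max_bytes=32000):
    """Split text into pieces whose UTF-8 encoding stays under max_bytes,
    keeping each piece safely below Lucene's 32766-byte term limit."""
    chunks, current, size = [], [], 0
    for word in text.split():
        word_size = len(word.encode("utf-8")) + 1  # +1 for the joining space
        if current and size + word_size > max_bytes:
            chunks.append(" ".join(current))
            current, size = [], 0
        current.append(word)
        size += word_size
    if current:
        chunks.append(" ".join(current))
    return chunks
Each chunk can then be stored as its own property (or node) and indexed separately, so no single indexed value exceeds the limit.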
I'm currently trying to port some JAGS models to Stan. I get some strange errors, like "stan::prob::exponential_log(N4stan5agrad3varE): Random variable is nan:0, but must not be nan!", and to debug them I would like to know the values of some local parameters.
In JAGS I can set up monitors for any variable. Stan only monitors parameters, but parameters cannot have assignments (if I understand it correctly).
So how do I monitor intermediate variables?
I also paste the model code, in case someone sees a stupid mistake I've made. Note, however, that I'm aware the same model can be formulated as the CDF of a double exponential (with two rates); this is a simplified form of what I plan.
Model:
data {
  int y[11];
  int reps[11];
  real soas[11];
}
parameters {
  real<lower=0.001,upper=0.200> v1;
  real<lower=0.001,upper=0.200> v2;
}
model {
  int dif[11,96];
  real cf[11];
  real p[11];
  real t1[11,96];
  real t2[11,96];
  for (i in 1:11){
    for (r in 1:reps[i]){
      t1[i,r] ~ exponential(v1);
      t2[i,r] ~ exponential(v2);
      dif[i,r] <- (t1[i,r]+soas[i]<=(t2[i,r]));
    }
    cf[i] <- sum(dif[i]);
    p[i] <- cf[i]/reps[i];
    y[i] ~ binomial(reps[i],p[i]);
  }
}
Here is some dummy data:
import numpy

psy_dat = {
    'soas': numpy.array(range(-100, 101, 20)),
    'y': [47, 46, 62, 50, 59, 47, 36, 13, 7, 2, 1],
    'reps': [48, 48, 64, 64, 92, 92, 92, 64, 64, 48, 48]
}
In this particular case, the problem is that t1 (and t2 for that matter) are initialized to NaN and not changed to anything before you utilize the exponential likelihood. My guess is that t1 and t2 need to be in a generated quantities block if you intend to draw them from their posterior predictive distribution.
To answer your question as stated, you can use a print() statement within the model block to debug a problematic Stan program. And if you really want to store intermediates, then you need to declare and define them within a transformed parameters block of the Stan program.
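A minimal sketch of both techniques, not tied to your model; log_v_sum is a made-up intermediate purely for illustration:
transformed parameters {
  real log_v_sum;              // stored with every saved draw, so it can be monitored
  log_v_sum <- log(v1 + v2);   // illustrative intermediate quantity
}
model {
  // printed on every evaluation of the log density, useful for debugging NaNs
  print("v1 = ", v1, " v2 = ", v2, " log_v_sum = ", log_v_sum);
  // ... rest of the model ...
}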