How to create a dask-array from a CuPy array? - dask

I'm trying to run dask.cluster.Kmeans on a huge amount of data.
Working on the CPU is fine, since I wrap NumPy arrays with dask.array.
Working on the GPU doesn't seem to be possible because of functionality that isn't implemented in CuPy.
I've tried to reproduce Matthew Rocklin's example (https://blog.dask.org/2019/01/03/dask-array-gpus-first-steps) of generating a random Dask array from the CuPy random generator, and it works, but that's not the use case I need.
Wrapping a CuPy array with dask.array doesn't work:
>>> import dask.array as da
>>> import cupy as cp
>>> da.from_array(cp.arange(100000)).sum().compute()
I expect the sum of this array but get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/base.py", line 175, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/base.py", line 446, in compute
results = schedule(dsk, keys, **kwargs)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/threaded.py", line 82, in get
**kwargs
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/local.py", line 491, in get_async
raise_exception(exc, tb)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/compatibility.py", line 130, in reraise
raise exc
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/local.py", line 233, in execute_task
result = _execute_task(task, data)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/core.py", line 119, in _execute_task
return func(*args2)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/dask/array/core.py", line 100, in getter
c = np.asarray(c)
File "/home/ubuntu/miniconda3/envs/cupy/lib/python3.6/site-packages/numpy/core/numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: object __array__ method not producing an array
So how can I work with CuPy through a Dask array?

When creating the Dask array from a CuPy array, you need to pass da.from_array the keyword argument asarray=False. Your code would then look like the following.
>>> import dask.array as da
>>> import cupy as cp
>>> da.from_array(cp.arange(100000), asarray=False).sum().compute()
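For a slightly fuller sketch (same libraries as above, with an arbitrarily chosen chunk size): chunking the CuPy array keeps each task working on a single block, and asarray=False stops Dask's getter from forcing each block through np.asarray.
import cupy as cp
import dask.array as da

x = cp.arange(100000)
# asarray=False keeps the blocks as CuPy arrays instead of coercing them to NumPy.
dx = da.from_array(x, chunks=10000, asarray=False)
print(dx.sum().compute())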

Related

Converting h5 to coreMl (IOS)

I'm currently working on a collaboration. My task is to convert an h5 file, which was generated by a neural network in TensorFlow, to a Core ML model, and then integrate it into my Xcode project.
The input is a two-dimensional array of 21 floats:
input = [[0.5, 0.4, ...]]
The output should be a Float between 0 and 1.
I've tried a lot, but as far as I know the main issue is that Core ML only supports classifying a picture. I didn't find any clue about how to convert an h5 file to Core ML with this specific kind of input and output. Can anybody help?
Thanks a lot!
Edit
This is my code. I'm confused because I read somewhere that I just have to name the input and output instead of defining the variable as an MLMultiArray. I guess that is my main issue, but I didn't figure out how to define the input as an MLMultiArray.
from keras.models import load_model
import coremltools

coreml_model = coremltools.converters.keras.convert('modelv.h5',
                                                    input_names=['data'],
                                                    output_names=['output'],
                                                    )
coreml_model.save('PredictionModel.mlmodel')
When I run the code, I get the following message from the compiler:
runfile('/Path/Neuronal Network')
Traceback (most recent call last):
File "/Path/ Neuronal Network/Converter.py", line 20, in <module>
output_names='output',
File "/path/", line 804, in convert
use_float_arraytype=use_float_arraytype)
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_keras_converter.py", line 585, in convertToSpec
use_float_arraytype=use_float_arraytype)
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_keras2_converter.py", line 328, in _convert
graph.build()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_topology2.py", line 740, in build
self.make_input_layers()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/coremltools/converters/keras/_topology2.py", line 169, in make_input_layers
if isinstance(kl, InputLayer) and kl.input == ts:
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 765, in __bool__
self._disallow_bool_casting()
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 534, in _disallow_bool_casting
self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
File "/Path/opt/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 523, in _disallow_in_graph_mode
" this function with @tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
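For what it's worth, a hedged sketch of how the shape of the 'data' input could be declared explicitly when calling the same converter: input_name_shape_dict is an optional coremltools argument, the [1, 21] shape is an assumption for a single sample of 21 floats, and this on its own does not resolve the graph-mode error in the traceback above.
import coremltools

# Hedged sketch: declare the 'data' input shape explicitly so the generated
# model exposes it as an MLMultiArray of 21 floats ([1, 21] is assumed).
coreml_model = coremltools.converters.keras.convert(
    'modelv.h5',
    input_names=['data'],
    output_names=['output'],
    input_name_shape_dict={'data': [1, 21]},
)
coreml_model.save('PredictionModel.mlmodel')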

IndexError when iterating my dataset using Dataloader in PyTorch

I iterated over my dataset using DataLoader in PyTorch 0.2 like this:
dataloader = torch.utils.data.DataLoader(...)
data_iter = iter(dataloader)
data = data_iter.next()
but an IndexError was raised.
Traceback (most recent call last):
File "main.py", line 193, in <module>
data_target = data_target_iter.next()
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 201, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
IndexError: Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 40, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/asr4/zhuminxian/adversarial/code/dataset/data_loader.py", line 33, in __getitem__
return self.X_train[idx], self.y_train[idx]
IndexError: index 4196 is out of bounds for axis 0 with size 4135
I am wondering why the index was out of bounds. Is it a bug in PyTorch?
I tried to run my code again; the same error was raised, but at a different iteration and with a different out-of-bounds index.
My guess is that your data.Dataset.__len__ was not overridden properly and, in fact, len(dataloader.dataset) returns a number larger than len(self.X_train).
Check your implementation of the underlying dataset in '/home/asr4/zhuminxian/adversarial/code/dataset/data_loader.py'.
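As an illustration, a minimal sketch of a Dataset whose __len__ matches what __getitem__ can actually serve (the class name and the random data here are hypothetical, not the question's code):
import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, X_train, y_train):
        # If these lengths disagree, the sampler may request indices
        # that __getitem__ cannot serve.
        assert len(X_train) == len(y_train)
        self.X_train = X_train
        self.y_train = y_train

    def __len__(self):
        # Must report exactly the number of available samples.
        return len(self.X_train)

    def __getitem__(self, idx):
        return self.X_train[idx], self.y_train[idx]

dataset = MyDataset(torch.randn(4135, 10), torch.zeros(4135))
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
data = next(iter(dataloader))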

Dask - Drop duplicate index MemoryError

I'm getting a MemoryError when I try to drop duplicate timestamps on a large dataframe with the following code:
import dask.dataframe as dd
path = f's3://{container_name}/*'
ddf = dd.read_parquet(path, storage_options=opts, engine='fastparquet')
ddf = ddf.reset_index().drop_duplicates(subset='timestamp_utc').set_index('timestamp_utc')
...
Profiling shows that it was using about 14 GB of RAM on a dataset of 265 MB of gzipped Parquet files containing about 40 million rows of data.
Is there an alternative way I can drop duplicate indexes on my data without Dask using so much memory?
The traceback is below:
Traceback (most recent call last):
File "/anaconda/envs/surb/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/anaconda/envs/surb/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/chengkai/surbana_lift/src/consolidate_data.py", line 62, in <module>
consolidate_data()
File "/home/chengkai/surbana_lift/src/consolidate_data.py", line 37, in consolidate_data
ddf = ddf.reset_index().drop_duplicates(subset='timestamp_utc').set_index('timestamp_utc')
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/core.py", line 2524, in set_index
divisions=divisions, **kwargs)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/shuffle.py", line 64, in set_index
divisions, sizes, mins, maxes = base.compute(divisions, sizes, mins, maxes)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/base.py", line 407, in compute
results = get(dsk, keys, **kwargs)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/threaded.py", line 75, in get
pack_exception=pack_exception, **kwargs)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 521, in get_async
raise_exception(exc, tb)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/compatibility.py", line 67, in reraise
raise exc
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 290, in execute_task
result = _execute_task(task, data)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 270, in _execute_task
args2 = [_execute_task(a, cache) for a in args]
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 270, in <listcomp>
args2 = [_execute_task(a, cache) for a in args]
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 267, in _execute_task
return [_execute_task(a, cache) for a in arg]
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 267, in <listcomp>
return [_execute_task(a, cache) for a in arg]
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 271, in _execute_task
return func(*args2)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/core.py", line 69, in _concat
return args[0] if not args2 else methods.concat(args2, uniform=True)
File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/methods.py", line 329, in concat
out = pd.concat(dfs3, join=join)
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 226, in concat
return op.get_result()
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 423, in get_result
copy=self.copy)
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/internals.py", line 5418, in concatenate_block_managers
[ju.block for ju in join_units], placement=placement)
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/internals.py", line 2984, in concat_same_type
axis=self.ndim - 1)
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/dtypes/concat.py", line 461, in _concat_datetime
return _concat_datetimetz(to_concat)
File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/dtypes/concat.py", line 506, in _concat_datetimetz
new_values = np.concatenate([x.asi8 for x in to_concat])
MemoryError
It is not too surprising that the data becomes very big in memory. Parquet is a pretty efficient format in terms of space, especially with gzip compression, and strings all become Python objects (which are expensive in memory).
In addition, you have a number of worker threads operating on parts of the overall dataframe. That involves data copying, intermediates, and concatenation of results; the latter is pretty inefficient in pandas.
One suggestion: instead of reset_index, you can remove one step by passing index=False to read_parquet.
Next suggestion: limit the number of threads you use to a smaller number than the default, which is probably your number of CPU cores. The easiest way to do that is to use the distributed client in-process:
from dask.distributed import Client
c = Client(processes=False, threads_per_worker=4)
It may be better to set the index first, and then do the drop_duplicates with map_partitions to minimise cross-partition communication:
df.map_partitions(lambda d: d.drop_duplicates(subset='timestamp_utc'))
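Putting those suggestions together, a rough sketch (the bucket path and storage options are placeholders, and dropping duplicates via index.duplicated is one way to express the per-partition idea once timestamp_utc has become the index):
import dask.dataframe as dd
from dask.distributed import Client

# Run with a small, fixed number of in-process threads.
client = Client(processes=False, threads_per_worker=4)

path = 's3://my-bucket/*'   # placeholder bucket
opts = {'anon': False}      # placeholder storage options

# index=False skips building an index that reset_index() would discard anyway.
ddf = dd.read_parquet(path, storage_options=opts, engine='fastparquet', index=False)

# Set the index once, then drop duplicate timestamps within each partition.
ddf = ddf.set_index('timestamp_utc')
ddf = ddf.map_partitions(lambda d: d[~d.index.duplicated(keep='first')])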

Multihot encoding in tensorflow (google cloud machine learning, tf estimator api)

I have a feature like a post tag. So for each observation the post_tag feature might be a selection of tags like "oscars,brad-pitt,awards". I'd like to be able to pass this as a feature to a TensorFlow model built using the Estimator API running on Google Cloud Machine Learning (as per this example but adapted for my own problem).
I'm just not sure how to transform this into a multi-hot encoded feature in TensorFlow. I'm trying to get something similar to sklearn's MultiLabelBinarizer, ideally.
I think this is sort of related, but not quite what I need.
So say i have data like:
id,post_tag
1,[oscars,brad-pitt,awards]
2,[oscars,film,reviews]
3,[matt-damon,bourne]
I want to featurize it, as part of preprocessing within tensorflow, as:
id,post_tag_oscars,post_tag_brad_pitt,post_tag_awards,post_tag_film,post_tag_reviews,post_tag_matt_damon,post_tag_bourne
1,1,1,1,0,0,0,0
2,1,0,0,1,1,0,0
3,0,0,0,0,0,1,1
Update
If I have post_tag_list be a string like "oscars,brad-pitt,awards" in the input CSV, and I then try to do:
INPUT_COLUMNS = [
    ...
    tf.contrib.lookup.HashTable(tf.contrib.lookup.KeyValueTensorInitializer('post_tag_list',
                                                                            tf.range(0, 10, dtype=tf.int64),
                                                                            tf.string, tf.int64),
                                default_value=10, name='post_tag_list'),
    ...]
I get this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/andrew_maguire/localDev/codeBase/pmc-analytical-data-mart/clickmodel/trainer/task.py", line 4, in <module>
import model
File "trainer/model.py", line 49, in <module>
default_value=10, name='post_tag_list'),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 276, in __init__
super(HashTable, self).__init__(table_ref, default_value, initializer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 162, in __init__
self._init = initializer.initialize(self)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 348, in initialize
table.table_ref, self._keys, self._values, name=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_lookup_ops.py", line 205, in _initialize_table_v2
values=values, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 0 for 'key_value_init' (op: 'InitializeTableV2') with input shapes: [], [], [10].
If I were to pad each post_tag_list to be like "oscars,brad-pitt,awards,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER" so it's always 10 long, would that be a potential solution here?
Or do I need to somehow know the size of the set of all post tags I might ever pass in here (which is somewhat ill-defined, as new ones are created all the time)?
Have you tried tf.contrib.lookup.HashTable?
Here is an example usage from my own use: https://github.com/TensorLab/tensorfx/blob/master/src/data/_transforms.py#L160 and a made-up example snippet based on it:
import tensorflow as tf

session = tf.InteractiveSession()

entries = ['red', 'blue', 'green']
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(entries,
                                                tf.range(0, len(entries), dtype=tf.int64),
                                                tf.string, tf.int64),
    default_value=len(entries), name='entries')
tf.tables_initializer().run()

value = tf.constant([['blue', 'red'], ['green', 'red']])
print(table.lookup(value).eval())
I believe lookup works for both regular tensors and SparseTensors (you might end up with the latter given your variable length list of values).
There are a couple of issues to tackle here. The first is the question of a tag set that keeps growing. You would also like to know how to parse variable-length data from CSV.
To handle a growing tag set, you'll need to use OOV buckets or feature hashing. Nikhil showed the latter, so I'll show the former.
How to parse variable-length data from CSV
Let's suppose the column with variable length data uses | as a separator, e.g.
csv = [
    "1,oscars|brad-pitt|awards",
    "2,oscars|film|reviews",
    "3,matt-damon|bourne",
]
You can use code like this to convert those to a SparseTensor.
import tensorflow as tf

# Purposefully omitting "bourne" to demonstrate OOV mappings.
TAG_SET = ["oscars", "brad-pitt", "awards", "film", "reviews", "matt-damon"]
NUM_OOV = 1

def sparse_from_csv(csv):
  ids, post_tags_str = tf.decode_csv(csv, [[-1], [""]])
  table = tf.contrib.lookup.index_table_from_tensor(
      mapping=TAG_SET, num_oov_buckets=NUM_OOV, default_value=-1)
  split_tags = tf.string_split(post_tags_str, "|")
  return ids, tf.SparseTensor(
      indices=split_tags.indices,
      values=table.lookup(split_tags.values),
      dense_shape=split_tags.dense_shape)

# Optionally create an embedding for this.
TAG_EMBEDDING_DIM = 3

ids, tags = sparse_from_csv(csv)

embedding_params = tf.Variable(tf.truncated_normal([len(TAG_SET) + NUM_OOV, TAG_EMBEDDING_DIM]))
embedded_tags = tf.nn.embedding_lookup_sparse(embedding_params, sp_ids=tags, sp_weights=None)

# Test it out
with tf.Session() as s:
  s.run([tf.global_variables_initializer(), tf.tables_initializer()])
  print(s.run([ids, embedded_tags]))
You'll see output like so (since the embedding is random, exact numbers will change):
[array([1, 2, 3], dtype=int32), array([[ 0.16852427, 0.26074541, -0.4237918 ],
[-0.38550434, 0.32314634, 0.858069 ],
[ 0.19339906, -0.24429649, -0.08393878]], dtype=float32)]
You can see that each column in the CSV is represented as an ndarray, where the tags are now 3-dimensional embeddings.
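If you want the literal multi-hot matrix from the question rather than an embedding, one hedged option (building on sparse_from_csv above; tf.sparse_to_indicator is the TF 1.x op assumed here) is to expand the sparse tag ids into an indicator tensor:
# Expand the sparse tag ids into a dense multi-hot matrix:
# one row per example, one column per tag (plus the OOV bucket).
ids, tags = sparse_from_csv(csv)
multi_hot = tf.cast(tf.sparse_to_indicator(tags, vocab_size=len(TAG_SET) + NUM_OOV), tf.float32)

with tf.Session() as s:
  s.run(tf.tables_initializer())
  print(s.run(multi_hot))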

TypeError: Value passed to parameter 'input' has DataType string not in list of allowed values: int32, int64, complex64, float32, float64, bool, int8

I was trying to use TensorFlow. The input attributes are similar to the census example, except that the LABEL column is a continuous value. I executed the command below:
test-server#:~/aaaml-samples/arbitrator$ gcloud ml-engine local train --module-name trainer.task --package-path trainer/ -- --train-files $TRAIN_DATA --eval-files $EVAL_DATA --train-steps 1000 --job-dir
$MODEL_DIR
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.data.csv']
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.test.csv']
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.test.csv']
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/madhukar_mhraju/aaaml-samples/arbitrator/trainer/task.py", line 193, in <module>
learn_runner.run(generate_experiment_fn(**arguments), job_dir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 106, in run
return task()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 465, in train_and_evaluate
export_results = self._maybe_export(eval_result)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 484, in _maybe_export
compat.as_bytes(strategy.name))))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/export_strategy.py", line 32, in export
return self.export_fn(estimator, export_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/utils/saved_model_export_utils.py", line 283, in export_fn
exports_to_keep=exports_to_keep)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/framework/python/framework/experimental.py", line 64, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1264, in export_savedmodel
model_fn_lib.ModeKeys.INFER)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1133, in _call_model_fn
model_fn_results = self._model_fn(features, labels, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 268, in _dnn_linear_combined_model_fn
scope=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 531, in weighted_sum_from_feature_columns
transformed_tensor = transformer.transform(column)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 879, in transform
feature_column.insert_transformed_feature(self._columns_to_tensors)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column.py", line 528, in insert_transformed_feature
sparse_values = string_ops.as_string(input_tensor.values)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_string_ops.py", line 51, in as_string
width=width, fill=fill, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 585, in apply_op
param_name=input_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 61, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'input' has DataType string not
in list of allowed values: int32, int64, complex64, float32, float64,
bool, int8
I am new to TensorFlow. I understand that this issue occurs while processing the evaluation file (aaa.test.csv). The evaluation file's data and format are correctly defined, and the column data types have been mapped correctly as well, but I am not sure why the error occurs.
1) The training data CSV had column headings in it. When I generated the data, I reordered the rows randomly, which resulted in the column headings being moved somewhere into the middle of the file, so a numeric column ended up containing header text; hence the type error. It was difficult to find because the training data was huge.
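As a quick way to catch this kind of problem, a small hedged sanity check (the path and the 'LABEL' column name are assumptions based on the description above): load the CSV with pandas and look for rows whose label fails to parse as a number, which is where a stray header row will show up.
import pandas as pd

# Placeholder path and column name; adjust to your data.
df = pd.read_csv('data/aaa.data.csv')
bad_rows = df[pd.to_numeric(df['LABEL'], errors='coerce').isna()]
print(bad_rows)  # any embedded header rows appear here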
