C:\Users\Viktor\miniconda3\lib\site-packages\torch\utils\data_utils\collate.py:172: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\utils\tensor_numpy.cpp:205.)
return collate([torch.as_tensor(b) for b in batch], collate_fn_map=collate_fn_map)
the indiex is : 0 rest is: torch.Size([64, 240, 320, 3]) torch.Size([64, 240, 320, 3])
Traceback (most recent call last):
File "c:\Users\Viktor\Desktop\Infrarens.py", line 174, in
outputs = model(inputs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "c:\Users\Viktor\Desktop\Infrarens.py", line 135, in forward
x = self.encoder(x)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Viktor\miniconda3\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 1, 3, 3], expected input[64, 240, 320, 3] to have 1 channels, but got 240 channels instead
I'm trying to train a U-Net on an image set, and I don't know how to interpret this output.
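For context: F.conv2d expects NCHW input, and the weight shape [64, 1, 3, 3] means the first encoder convolution was declared with in_channels=1, while the DataLoader batch is NHWC with 3 channels, [64, 240, 320, 3]. A minimal sketch of the usual reshuffle, assuming the images really are RGB and the 1-channel encoder is intentional; inputs here is just a stand-in for one batch:
import torch

inputs = torch.rand(64, 240, 320, 3)          # stand-in for one DataLoader batch (NHWC)
inputs = inputs.permute(0, 3, 1, 2).float()   # NHWC -> NCHW: [64, 3, 240, 320]
inputs = inputs.mean(dim=1, keepdim=True)     # collapse RGB to one channel: [64, 1, 240, 320]
# outputs = model(inputs)
# Alternatively, keep all 3 channels and build the first encoder layer as
# nn.Conv2d(3, 64, kernel_size=3) instead of nn.Conv2d(1, 64, kernel_size=3).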
1: problem:
I need to use a custom dataset in a TFF simulation. I have built on the tff/python/research/compression example "run_experiment.py".
The error:
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-47998fd56829>", line 1, in <module>
runfile('B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py', args=['--experiment_name=temp', '--client_batch_size=20', '--client_optimizer=sgd', '--client_learning_rate=0.2', '--server_optimizer=sgd', '--server_learning_rate=1.0', '--total_rounds=200', '--rounds_per_eval=1', '--rounds_per_checkpoint=50', '--rounds_per_profile=0', '--root_output_dir=B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/logs/fed_out/'], wdir='B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection')
File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 292, in <module>
app.run(main)
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 285, in main
train_main()
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 244, in train_main
input_spec=input_spec),
File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/train_v04.py", line 193, in model_builder
metrics=[tf.keras.metrics.Accuracy()]
File "B:\tools and software\Anaconda\envs\bookProjects\lib\site-packages\tensorflow_federated\python\learning\keras_utils.py", line 125, in from_keras_model
if len(input_spec) != 2:
TypeError: object of type 'TensorSpec' has no len()
The highlighted error: TypeError: object of type 'TensorSpec' has no len()
2: what I have tried:
I have looked at the answer to TensorFlow Federated: How can I write an Input Spec for a model with more than one input, which describes what is needed to produce a custom input spec.
I might be misunderstanding input_spec.
If I don't need to do this and there is a better way, please tell me.
3: source:
df = get_train_data(sysarg)
x_train, x_opt, x_test = np.split(df.sample(frac=1, random_state=17),
                                  [int(1 / 3 * len(df)), int(2 / 3 * len(df))])
x_train, x_opt, x_test = create_scalar(x_opt, x_test, x_train)
input_spec = tf.nest.map_structure(tf.TensorSpec.from_tensor, tf.convert_to_tensor(x_train))
TFF's models declare a slightly different input specification than you may be expecting; they generally expect both the x and the y values as parameters (i.e., data and labels). It is unfortunate that you're hitting that TypeError, as the ValueError TFF would otherwise raise is probably more helpful in this case. Inlining the operative parts of the message here:
The top-level structure in `input_spec` must contain exactly two elements,
as it must specify type information for both inputs to and predictions from the model.
The TLDR in your particular example is: if you have access to the labels as well (y_train below), simply change your input_spec definition to:
input_spec = tf.nest.map_structure(
tf.TensorSpec.from_tensor,
[tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train)])
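For reference, that two-element structure is exactly what the len(input_spec) check in tff.learning.from_keras_model (keras_utils.py in your traceback) is looking for. A minimal sketch of how it plugs in; model_builder mirrors the function in your traceback, create_keras_model is a hypothetical helper, the loss is only a placeholder, and the exact keyword arguments can differ between TFF versions:
def model_builder():
    keras_model = create_keras_model()   # hypothetical helper returning a tf.keras.Model
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=input_spec,           # two elements: (spec for x, spec for y)
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.Accuracy()])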
I have a feature like a post tag. So for each observation the post_tag feature might be a selection of tags like "oscars,brad-pitt,awards". I'd like to be able to pass this as a feature to a TensorFlow model built using the Estimator API running on Google Cloud Machine Learning (as per this example but adapted for my own problem).
I'm just not sure how to transform this into a multi-hot encoded feature in TensorFlow. Ideally I'd get something similar to MultiLabelBinarizer in sklearn.
I think this is sort of related but not quite what I need.
So say I have data like:
id,post_tag
1,[oscars,brad-pitt,awards]
2,[oscars,film,reviews]
3,[matt-damon,bourne]
I want to featurize it, as part of preprocessing within tensorflow, as:
id,post_tag_oscars,post_tag_brad_pitt,post_tag_awards,post_tag_film,post_tag_reviews,post_tag_matt_damon,post_tag_bourne
1,1,1,1,0,0,0,0
2,1,0,0,1,1,0,0
3,0,0,0,0,0,1,1
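As a point of reference outside TensorFlow, sklearn's MultiLabelBinarizer (mentioned above) produces exactly that kind of table; a minimal sketch just to make the desired multi-hot encoding concrete:
from sklearn.preprocessing import MultiLabelBinarizer

tags = [["oscars", "brad-pitt", "awards"],
        ["oscars", "film", "reviews"],
        ["matt-damon", "bourne"]]
mlb = MultiLabelBinarizer()
print(mlb.fit_transform(tags))   # one 0/1 row per observation
print(mlb.classes_)              # the column order of those indicators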
Update
If I have post_tag_list be a string like "oscars,brad-pitt,awards" in the input CSV, and I then try to do:
INPUT_COLUMNS = [
...
tf.contrib.lookup.HashTable(tf.contrib.lookup.KeyValueTensorInitializer('post_tag_list',
tf.range(0, 10, dtype=tf.int64),
tf.string, tf.int64),
default_value=10, name='post_tag_list'),
...]
I get this error:
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/andrew_maguire/localDev/codeBase/pmc-analytical-data-mart/clickmodel/trainer/task.py", line 4, in <module>
import model
File "trainer/model.py", line 49, in <module>
default_value=10, name='post_tag_list'),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 276, in __init__
super(HashTable, self).__init__(table_ref, default_value, initializer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 162, in __init__
self._init = initializer.initialize(self)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/lookup_ops.py", line 348, in initialize
table.table_ref, self._keys, self._values, name=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_lookup_ops.py", line 205, in _initialize_table_v2
values=values, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Shape must be rank 1 but is rank 0 for 'key_value_init' (op: 'InitializeTableV2') with input shapes: [], [], [10].
If I were to pad each post_tag_list to be like "oscars,brad-pitt,awards,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER,OTHER" so it's always 10 long, would that be a potential solution here?
Or do I need to somehow know the size of the set of all post tags I might ever pass in here (which is somewhat ill-defined, as new ones are created all the time)?
Have you tried tf.contrib.lookup.HashTable?
Here is an example usage from my own use: https://github.com/TensorLab/tensorfx/blob/master/src/data/_transforms.py#L160 and a made up example snippet based on that:
import tensorflow as tf

session = tf.InteractiveSession()
entries = ['red', 'blue', 'green']
table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.KeyValueTensorInitializer(entries,
                                                tf.range(0, len(entries), dtype=tf.int64),
                                                tf.string, tf.int64),
    default_value=len(entries), name='entries')
tf.tables_initializer().run()
value = tf.constant([['blue', 'red'], ['green', 'red']])
print(table.lookup(value).eval())
I believe lookup works for both regular tensors and SparseTensors (you might end up with the latter given your variable length list of values).
There are a couple of issues to tackle here. First is the question of a tag set that keeps growing. You would also like to know how to parse variable-length data from CSV.
To handle a growing tag set, you'll need to use OOV buckets or feature hashing. Nikhil showed the latter, so I'll show the former.
How to parse variable-length data from CSV
Let's suppose the column with variable length data uses | as a separator, e.g.
csv = [
"1,oscars|brad-pitt|awards",
"2,oscars|film|reviews",
"3,matt-damon|bourne",
]
You can use code like this to convert those to a SparseTensor.
import tensorflow as tf

# Purposefully omitting "bourne" to demonstrate OOV mappings.
TAG_SET = ["oscars", "brad-pitt", "awards", "film", "reviews", "matt-damon"]
NUM_OOV = 1

def sparse_from_csv(csv):
    ids, post_tags_str = tf.decode_csv(csv, [[-1], [""]])
    table = tf.contrib.lookup.index_table_from_tensor(
        mapping=TAG_SET, num_oov_buckets=NUM_OOV, default_value=-1)
    split_tags = tf.string_split(post_tags_str, "|")
    return ids, tf.SparseTensor(
        indices=split_tags.indices,
        values=table.lookup(split_tags.values),
        dense_shape=split_tags.dense_shape)

# Optionally create an embedding for this.
TAG_EMBEDDING_DIM = 3

ids, tags = sparse_from_csv(csv)
embedding_params = tf.Variable(tf.truncated_normal([len(TAG_SET) + NUM_OOV, TAG_EMBEDDING_DIM]))
embedded_tags = tf.nn.embedding_lookup_sparse(embedding_params, sp_ids=tags, sp_weights=None)

# Test it out
with tf.Session() as s:
    s.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(s.run([ids, embedded_tags]))
You'll see output like so (since the embedding is random, exact numbers will change):
[array([1, 2, 3], dtype=int32), array([[ 0.16852427, 0.26074541, -0.4237918 ],
[-0.38550434, 0.32314634, 0.858069 ],
[ 0.19339906, -0.24429649, -0.08393878]], dtype=float32)]
You can see that each column in the CSV is represented as an ndarray, where the tags are now 3-dimensional embeddings.
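If you want the multi-hot indicator itself rather than an embedding, one option (a sketch assuming TF 1.x and reusing tags, TAG_SET and NUM_OOV from the snippet above) is to densify the id SparseTensor and collapse a one-hot over the per-row tag axis:
dense_ids = tf.sparse_tensor_to_dense(tags, default_value=-1)   # pad ragged rows with -1
one_hot = tf.one_hot(dense_ids, depth=len(TAG_SET) + NUM_OOV)   # -1 maps to an all-zero row
multi_hot = tf.reduce_max(one_hot, axis=1)                      # [batch, num_tags] 0/1 matrix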
Is there any way in Keras to specify a loss function which does not need to be passed target data?
I attempted to specify a loss function which omitted the y_true parameter like so:
def custom_loss(y_pred):
But I got the following error:
Traceback (most recent call last):
File "siamese.py", line 234, in <module>
model.compile(loss=custom_loss,optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 911, in compile
sample_weight, mask)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 436, in weighted
score_array = fn(y_true, y_pred)
TypeError: custom_loss() takes exactly 1 argument (2 given)
I then tried to call fit() without specifying any target data:
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
But it looks like not passing any target data causes an error:
Traceback (most recent call last):
File "siamese.py", line 264, in <module>
model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1435, in fit
batch_size=batch_size)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1322, in _standardize_user_data
in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 577, in _standardize_weights
return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'
I could manually create dummy data in the same shape as my neural net's output, but this seems extremely messy. Is there a simple way to specify an unsupervised loss function in Keras that I am missing?
I think the best solution is customizing the training loop instead of using the model.fit method.
The complete walkthrough is published on the TensorFlow tutorials page.
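A minimal sketch of what that custom loop looks like with tf.GradientTape, assuming TF 2.x-style eager APIs; model is your existing Keras model and the loss body is only a placeholder:
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

def unsupervised_loss(y_pred):
    return tf.reduce_mean(tf.square(y_pred))       # placeholder: depends only on predictions

@tf.function
def train_step(x_batch):
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss = unsupervised_loss(preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss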
Write your loss function as if it had two arguments:
y_true
y_pred
If you don't have y_true, that's fine; you don't need to use it inside to compute the loss, but leave a placeholder in your function prototype so Keras won't complain.
def custom_loss(y_true, y_pred):
# do things with y_pred
return loss
Adding custom arguments
You may also need to use another parameter, like margin, inside your loss function; even then, your custom function should only take in those two arguments. But there is a workaround: use lambda functions.
def custom_loss(y_pred, margin):
# do things with y_pred
return loss
but use it like
model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)
I was trying to use TensorFlow. The input attributes are similar to the census example, except that the LABEL column is a continuous value. I executed the below command:
test-server#:~/aaaml-samples/arbitrator$ gcloud ml-engine local train --module-name trainer.task --package-path trainer/ -- --train-files $TRAIN_DATA --eval-files $EVAL_DATA --train-steps 1000 --job-dir
$MODEL_DIR
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.data.csv']
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.test.csv']
Filename: ['/home/madhukar_mhraju/aaaml-samples/arbitrator/data/aaa.test.csv']
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/madhukar_mhraju/aaaml-samples/arbitrator/trainer/task.py", line 193, in <module>
learn_runner.run(generate_experiment_fn(**arguments), job_dir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 106, in run
return task()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 465, in train_and_evaluate
export_results = self._maybe_export(eval_result)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 484, in _maybe_export
compat.as_bytes(strategy.name))))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/export_strategy.py", line 32, in export
return self.export_fn(estimator, export_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/utils/saved_model_export_utils.py", line 283, in export_fn
exports_to_keep=exports_to_keep)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/framework/python/framework/experimental.py", line 64, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1264, in export_savedmodel
model_fn_lib.ModeKeys.INFER)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1133, in _call_model_fn
model_fn_results = self._model_fn(features, labels, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 268, in _dnn_linear_combined_model_fn
scope=scope)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 531, in weighted_sum_from_feature_columns
transformed_tensor = transformer.transform(column)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 879, in transform
feature_column.insert_transformed_feature(self._columns_to_tensors)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/feature_column.py", line 528, in insert_transformed_feature
sparse_values = string_ops.as_string(input_tensor.values)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_string_ops.py", line 51, in as_string
width=width, fill=fill, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 585, in apply_op
param_name=input_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 61, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'input' has DataType string not
in list of allowed values: int32, int64, complex64, float32, float64,
bool, int8
I am new to TensorFlow. I understand that this issue occurs while processing the evaluation file (aaa.test.csv). The evaluation file's data and format are correctly defined, and the column data types have been mapped correctly as well, but I am not sure why the error is occurring.
1) The training data CSV had column headings in it. When I generated the data, I was reordering the rows randomly, which resulted in the column headings being moved somewhere into the middle. Hence the type error. It was difficult to find because the training data was huge.
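A quick way to confirm and clean that up, as a sketch assuming pandas and the file name from the log above, is to look for body rows whose values repeat the header:
import pandas as pd

df = pd.read_csv("aaa.data.csv", dtype=str)           # read everything as strings
stray = df[(df == list(df.columns)).all(axis=1)]      # rows identical to the header line
print(stray.index)                                    # where the shuffled headers ended up
df.drop(stray.index).to_csv("aaa.data.clean.csv", index=False)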
I'm trying to train an MLP classifier for the XOR problem using sknn.mlp
import numpy
from sknn.mlp import Classifier, Layer
X=numpy.array([[0,1],[0,0],[1,0]])
print X.shape
y=numpy.array([[1],[0],[1]])
print y.shape
nn=Classifier(layers=[Layer("Sigmoid",units=2),Layer("Sigmoid",units=1)],n_iter=100)
nn.fit(X,y)
This results in:
No handlers could be found for logger "sknn"
Traceback (most recent call last):
File "xorclassifier.py", line 10, in <module>
nn.fit(X,y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 343, in fit
return super(Classifier, self)._fit(X, yp)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 179, in _fit
X, y = self._initialize(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 37, in _initialize
self._create_specs(X, y)
File "/usr/local/lib/python2.7/site-packages/sknn/mlp.py", line 64, in _create_specs
"Mismatch between dataset size and units in output layer."
AssertionError: Mismatch between dataset size and units in output layer.
Scikit seems to turn your y vector into a binary vector of shape (n_samples, n_classes); n_classes is two in your case. So try:
nn=Classifier(layers=[Layer("Sigmoid",units=2),Layer("Sigmoid",units=2)],n_iter=100)