Unexpected exception happened during extracting attributes OpenVINO - machine-learning

I was trying to convert a Caffe model using the mo_caffe.py script. I always get errors like the one below, but on random nodes (all of them have the "BatchNorm" op in common). I trained the model using NVIDIA DIGITS (https://github.com/NVIDIA/DIGITS).
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/deploy.caffemodel
- Path for generated IR: /dldt/model-optimizer/.
- IR output name: deploy
- Log level: INFO
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
Caffe specific parameters:
- Path to Python Caffe* parser generated from caffe.proto: mo/front/caffe/proto
- Enable resnet optimization: True
- Path to the Input prototxt: /home/deploy.prototxt
- Path to CustomLayersMapping.xml: Default
- Path to a mean file: Not specified
- Offsets for a mean file: Not specified
Model Optimizer version: unknown version
[ INFO ] Importing extensions from: /dldt/model-optimizer/mo
[ INFO ] New subclass: <class 'mo.ops.crop.Crop'>
[ INFO ] Registered a new subclass with key: Crop
[ INFO ] New subclass: <class 'mo.ops.deformable_convolution.DeformableConvolution'>
[ INFO ] Registered a new subclass with key: DeformableConvolution
[ INFO ] New subclass: <class 'mo.ops.concat.Concat'>
[ INFO ] Registered a new subclass with key: Concat
[ INFO ] New subclass: <class 'mo.ops.split.Split'>
...
Some log info; I don't think there's anything interesting here.
...
[ WARNING ] Skipped <class 'extensions.front.override_batch.OverrideBatch'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.TopKNormalize.TopKNormalize'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.reshape_dim_normalizer.ReshapeDimNormalizer'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.ArgMaxSqueeze.ArgMaxSqueeze'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.standalone_const_eraser.StandaloneConstEraser'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.TransposeOrderNormalizer.TransposeOrderNormalizer'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'mo.front.common.replacement.FrontReplacementOp'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.restore_ports.RestorePorts'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.softmax.SoftmaxFromKeras'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.reduce_axis_normalizer.ReduceAxisNormalizer'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.freeze_placeholder_value.FreezePlaceholderValue'> registration because it was already registered or it was disabled.
[ WARNING ] Skipped <class 'extensions.front.no_op_eraser.NoOpEraser'> registration because it was already registered or it was disabled.
[ WARNING ] Node attributes: {'_in_ports': {}, 'model_pb': name: "conv2_3_sep_bn_left"
type: "BatchNorm"
bottom: "conv2_3_sep_left"
top: "conv2_3_sep_left"
param {
lr_mult: 0.0
decay_mult: 0.0
}
param {
lr_mult: 0.0
decay_mult: 0.0
}
param {
lr_mult: 0.0
decay_mult: 0.0
}
blobs {
shape {
dim: 32
}
}
blobs {
shape {
dim: 32
}
}
blobs {
shape {
dim: 1
}
}
phase: TRAIN
, 'kind': 'op', '_out_ports': {}, 'pb': name: "conv2_3_sep_bn_left"
type: "BatchNorm"
bottom: "conv2_3_sep_left"
top: "conv2_3_sep_left"
param {
lr_mult: 0.0
decay_mult: 0.0
}
param {
lr_mult: 0.0
decay_mult: 0.0
}
param {
lr_mult: 0.0
decay_mult: 0.0
}
, 'type': 'Parameter'}
[ ERROR ] Unexpected exception happened during extracting attributes for node relu1.
Original exception message: list index (0) out of range
This is the error message I get. I have also created a GitHub issue.
Here is the model that I want to convert from Caffe to OpenVINO.

The problem was that NVIDIA DIGITS uses a customized fork of Caffe, which is why the model weights were not read properly by OpenVINO.
I had to use this script to convert the model before converting it with OpenVINO.
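For reference, here is a minimal sketch of how one can inspect what is actually stored for the BatchNorm layers in the .caffemodel. It assumes a Python environment where a caffe_pb2 module generated from caffe.proto is importable (for example from a pycaffe installation, or from the parser the Model Optimizer generates under mo/front/caffe/proto). Since DIGITS uses NVIDIA's Caffe fork, the blob layout it writes may not match what the stock BVLC proto (and hence the Model Optimizer) expects:
# Sketch only: dump the blob shapes of every BatchNorm layer in the weights file.
# caffe_pb2 must be generated from a caffe.proto that matches the fork used for training.
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
with open('/home/deploy.caffemodel', 'rb') as f:
    net.ParseFromString(f.read())

for layer in net.layer:
    if layer.type == 'BatchNorm':
        # Stock BVLC Caffe stores three blobs per BatchNorm layer:
        # running mean, running variance, and a moving-average scale factor.
        print(layer.name, [list(b.shape.dim) for b in layer.blobs])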

Related

OPA Rego: How to find all items that do not match another dictionary?

Given input as follows:
{
    "source": "serverA",
    "destination": "serverB",
    "rules": {
        "tcp": {
            "ssh": [22],
            "https": [8443]
        },
        "udp": [53]
    }
}
and data source:
{
    "allowedProtocolTypeToPortMapping": {
        "tcp": {
            "ssh": [22],
            "https": [443]
        },
        "udp": {
            "dns": [53]
        },
        "icmp": {
            "type": [0]
        }
    }
}
I want to create a policy that checks all rules and reports the ones that are not compliant with the data source. In this example that would be port 8443 with protocol type https, which is not compliant (only 443 is allowed). What is the best way to achieve this using the Rego language?
Here's one way of doing it.
package play
violations[msg] {
    # Traverse the input object, stopping only at array values
    walk(input.rules, [path, value])
    is_array(value)

    # Get the corresponding list of allowed ports from the matching path
    allowedPorts := select(data.allowedProtocolTypeToPortMapping, path)

    # At this point, we have one array of ports from the input (value)
    # and one array of allowed ports. Converting both to sets allows
    # us to compare the two using simple set intersection - i.e. checking
    # that all ports in the input are also in the allowed ports.
    ip := {port | port := value[_]}
    ap := {port | port := allowedPorts[_]}

    # Unless all ports in the input array are also in the allowed ports,
    # it's a violation
    count(ip) != count(ip & ap)

    msg := concat(".", path)
}

select(o, path) = value {
    walk(o, [path, value])
}
Rego Playground example available here.

How to pass different values to Pipeline Parameters

Suppose I am doing hyperparameter tuning on one of my models. Let's say I am using AdaBoostClassifier() and want to try different base_estimator values, so I pass SVC and DecisionTreeClassifier as estimators:
_parameters=[
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0,1,20),
        'mdl__base_estimator': [SVC(), DecisionTreeClassifier()]
    }
]
Now I want to pass different values to ccp_alpha of the DecisionTreeClassifier, something like this:
'mdl__base_estimator':[LinearRegression(),DecisionTreeClassifier(ccp_alpha=[0.1,0.2,0.3,0.4])]
How can I do that? I tried passing it like this, but it is not working. Here is my entire code:
pipeline=Pipeline(
    [
        ('scal', StandardScaler()),
        ('mdl', 'passthrough')
    ]
)
_parameters=[
    {
        'mdl': [DecisionTreeClassifier(random_state=42)],
        'mdl__max_depth': np.linspace(2,30,2),
        'mdl__min_samples_split': np.linspace(1,10,1),
        'mdl__max_features': np.linspace(1,100,1),
        'mdl__ccp_alpha': np.linspace(0,1,10)
    },
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0,1,20),
        'mdl__base_estimator': [SVC(), DecisionTreeClassifier(ccp_alpha=[0.3,0.4,0.5,0.7])]
    }
]
grid_search=GridSearchCV(pipeline, _parameters, cv=3, n_jobs=-1, scoring='f1')
grid_search.fit(x, y)
This kind of splitting is why param_grid can be a list of dicts, as in your outer split; but it cannot easily handle the nested disjunction you have. Two approaches come to mind.
More disjoint grids:
_parameters=[
    {
        'mdl': [DecisionTreeClassifier(random_state=42)],
        'mdl__max_depth': np.linspace(2,30,2),
        'mdl__min_samples_split': np.linspace(1,10,1),
        'mdl__max_features': np.linspace(1,100,1),
        'mdl__ccp_alpha': np.linspace(0,1,10),
    },
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0,1,20),
        'mdl__base_estimator': [SVC()],
    },
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0,1,20),
        'mdl__base_estimator': [DecisionTreeClassifier()],
        'mdl__base_estimator__ccp_alpha': [0.3,0.4,0.5,0.7],
    },
]
Or a list comprehension:
_parameters=[
    {
        'mdl': [DecisionTreeClassifier(random_state=42)],
        'mdl__max_depth': np.linspace(2,30,2),
        'mdl__min_samples_split': np.linspace(1,10,1),
        'mdl__max_features': np.linspace(1,100,1),
        'mdl__ccp_alpha': np.linspace(0,1,10),
    },
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0,1,20),
        'mdl__base_estimator': [SVC()] + [DecisionTreeClassifier(ccp_alpha=a) for a in [0.3,0.4,0.5,0.7]],
    },
]
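For completeness, here is a minimal, self-contained sketch of the list-comprehension variant wired into GridSearchCV. The toy dataset and the reduced grids (learning rates, ccp_alpha values) are placeholders for illustration, not values taken from the question:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Toy data, only for illustration.
X, y = make_classification(n_samples=200, random_state=0)

pipeline = Pipeline([
    ('scal', StandardScaler()),
    ('mdl', 'passthrough'),
])

# Each candidate base estimator is a separate, fully configured object,
# so ccp_alpha is fixed per candidate instead of being passed as a list.
# (In scikit-learn >= 1.2 the AdaBoost parameter is called 'estimator' rather than 'base_estimator'.)
parameters = [
    {
        'mdl': [AdaBoostClassifier(random_state=23)],
        'mdl__learning_rate': np.linspace(0.1, 1, 5),
        'mdl__base_estimator': [DecisionTreeClassifier(ccp_alpha=a) for a in [0.0, 0.005, 0.01, 0.02]],
    },
]

grid_search = GridSearchCV(pipeline, parameters, cv=3, scoring='f1')
grid_search.fit(X, y)
print(grid_search.best_params_)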

Parsing a JSON file using the Telegraf input plugin: unexpected output

I'm new to Telegraf and InfluxDB and am currently exploring Telegraf, but unfortunately I'm having some difficulty getting started. I will try to explain my problem below.
Objective: parse a JSON file using the Telegraf input plugin.
Input: https://wetransfer.com/downloads/0abf7c609d000a7c9300dc20ee0f565120200624164841/ab22bf (the JSON file used)
The input JSON file is a repetition of the same structure, each element starting at params.
Below is the main part of the input file:
{
    "events":[
        {
            "params":[
                {
                    "name":"element_type",
                    "value":"Home_Menu"
                },
                {
                    "name":"element_id",
                    "value":""
                },
                {
                    "name":"uuid",
                    "value":"981CD435-E6BC-01E6-4FDC-B57B5CFA9824"
                },
                {
                    "name":"section_label",
                    "value":"HOME"
                },
                {
                    "name":"element_label",
                    "value":""
                }
            ],
            "libVersion":"4.2.5",
            "context":{
                "locale":"ro-RO",
                "country":"RO",
                "application_name":"spresso",
                "application_version":"2.1.8",
                "model":"iPhone11,8",
                "os_version":"13.5",
                "platform":"iOS",
                "application_lang_market":"ro_RO",
                "platform_width":"320",
                "device_type":"mobile",
                "platform_height":"480"
            },
            "date":"2020-05-31T09:38:55.087+03:00",
            "ssid":"spresso",
            "type":"MOBILEPAGELOAD",
            "user":{
                "anonymousid":"6BC6DC89-EEDA-4EB6-B6AD-A213A65941AF",
                "userid":"2398839"
            },
            "reception_date":"2020-06-01T03:02:49.613Z",
            "event_version":"v1"
        }
Issue: Following the documentation, I tried to define a simple telegraf.conf file as below:
[[outputs.influxdb_v2]]
…
[[inputs.file]]
files = ["/home/mouhcine/json/file.json"]
json_name_key = "My_json"
# ... listing all the string fields in the JSON (I put only these for simplicity).
json_string_fields = ["ssid","type","userid","name","value","country","model"]
data_format = "json"
json_query= "events"
Basically, declaring the string fields in the telegraf.conf file should do it, but I couldn't get the fields that are nested inside the JSON file, for example what's inside params or context.
So in the end I manage to parse fields at the same level of the hierarchy as ssid, type, and libVersion, but not the ones inside params, context, or user.
Output: Screen 2 (attachment).
Out of curiosity, I tried the documentation's example to check whether I would get the expected result, and the answer is no: I don't manage to parse the string field nested inside the file.
The doc's example is below.
Input:
{
    "a": 5,
    "b": {
        "c": 6,
        "my_field": "description"
    },
    "my_tag_1": "foo",
    "name": "my_json"
}
telegraf.conf
[[outputs.influxdb_v2]]
…
[[inputs.file]]
files = ["/home/mouhcine/json/influx.json"]
json_name_key = "name"
tag_keys = ["my_tag_1"]
json_string_fields = ["my_field"]
data_format = "json"
Expected output: my_json,my_tag_1=foo a=5,b_c=6,my_field="description"
The result I get: "my_field" is missing.
Output: Screen 1 (attachment).
By the way, I use InfluxDB Cloud 2. I apologize for the long description of this little problem; I would appreciate some help. Thank you so much in advance.

What are the glTF animation sampler input/output values?

I am reading the specification, but I cannot understand the properties of the sampler.
This is the animation that I have:
"animations" : [
{
"channels" : [
{
"sampler" : 0,
"target" : {
"node" : 0,
"path" : "translation"
}
}
],
"name" : "00001_2780.datAction",
"samplers" : [
{
"input" : 9,
"interpolation" : "CUBICSPLINE",
"output" : 10
}
]
},
{
"channels" : [
{
"sampler" : 0,
"target" : {
"node" : 1,
"path" : "translation"
}
}
],
"name" : "00002_2780.datAction",
"samplers" : [
{
"input" : 9,
"interpolation" : "CUBICSPLINE",
"output" : 11
}
]
}
],
What I cannot understand is what the values 9 and 10 mean for the first sampler, and 9 and 11 for the second.
All that we have in the specification is:
https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#animations
Each of the animation's samplers defines the input/output pair: a set of floating point scalar values representing linear time in seconds; and a set of vectors or scalars representing animated property.
And this makes it even less clear to me.
Is there a more detailed explanation of what the input/output values are and what they represent? What will happen, for example, if I change the input from 9 to 99, or to 9.9, or to 0.9, or to 0? How will this change the animation?
Thanks
The numbers 9 and 10 here are glTF accessor index ID values. If you decode accessor index 9, you'll find the list of times for each of the keyframes of the animation. If you decode accessor 10, normally you would expect to find the list of values for the keyframes. But since this is CUBICSPLINE, accessor 10 will contain the in-tangent, value, and out-tangent for each keyframe.
One way to investigate glTF files like this is to use the glTF Tools extension for VSCode. You can right-click the input or output value and choose Go To Definition to get to the accessor in question, and choose Go To Definition again to decode it. (Disclaimer, I'm a contributor to glTF Tools).
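If you prefer to poke at the file programmatically, here is a minimal sketch using only the Python standard library; "model.gltf" is a placeholder name for any text-based (JSON) .gltf file. It only prints accessor metadata; decoding the actual keyframe values additionally requires following the accessor's bufferView into the binary buffer, which tools such as glTF Tools do for you:
import json

with open("model.gltf") as f:
    gltf = json.load(f)

for animation in gltf.get("animations", []):
    for sampler in animation["samplers"]:
        for role in ("input", "output"):
            index = sampler[role]                # e.g. 9, 10 or 11 above
            accessor = gltf["accessors"][index]  # the accessor that index refers to
            # count = number of elements, type = SCALAR/VEC3/...,
            # componentType = numeric type code (5126 means float)
            print(animation.get("name"), role, index,
                  accessor["count"], accessor["type"], accessor["componentType"])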

How to configure Caffe deploy.prototxt?

I followed ypx's instructions on this question. Now I want to run prediction on some pictures, so I'm using:
MODEL_FILE = '/tmp/deploy.prototxt'
PRETRAINED = '/tmp/ck.caffemodel'
IMAGE_FILE = '/tmp/img.png'
net = caffe.Classifier(MODEL_FILE, PRETRAINED, image_dims=(200, 200))
But I get this message:
I1002 13:49:24.331648 28172 net.cpp:435] Input 0 -> data
I1002 13:49:24.331667 28172 layer_factory.hpp:76] Creating layer data
I1002 13:49:24.332259 28172 net.cpp:110] Creating Layer data
F1002 13:49:24.332272 28172 net.cpp:427] Top blob 'data' produced by multiple sources.
*** Check failure stack trace: ***
I think the problem is in my deploy.prototxt file. This is my deploy.prototxt and this is my train.prototxt.
Can someone help me configure my deploy file?
You should remove the training input layer (lines 8-21) from your deploy net.
That is, discard this:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    scale: 0.00392156862745
  }
  data_param {
    source: "/tmp/db"
    batch_size: 64
    backend: LMDB
  }
}
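As a quick sanity check (a sketch, reusing the paths from the question): once the Data layer above is removed and only the input declaration produces the data blob, constructing the classifier should no longer abort with the "produced by multiple sources" check failure:
import caffe

MODEL_FILE = '/tmp/deploy.prototxt'   # the edited deploy file, with the Data layer removed
PRETRAINED = '/tmp/ck.caffemodel'

net = caffe.Classifier(MODEL_FILE, PRETRAINED, image_dims=(200, 200))
print(net.blobs['data'].data.shape)   # shape of the declared input blob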
