I am trying to detect objects using the ssd_mobilenet_v1_coco model. My own trained .pb model file is used for detection. After a successful build, I click the Run button and get the error below.
"Not found: Op type not registered 'NonMaxSuppressionV2' in binary running on IPhone. Make sure the Op and Kernel are registered in the binary running in this process. "
I can build and launch the iOS app with the already-trained .pb model file from the link below:
https://github.com/JieHe96/iOS_Tensorflow_ObjectDetection_Example
Please give me a solution to fix the above issue so I can launch the iOS app.
The problem is exactly what the error says: the operation NonMaxSuppressionV2 is used by the model (.pb file) you're loading, but it is not registered with the TensorFlow library when it is compiled for the iOS platform.
This is because TensorFlow excludes a lot of operations (especially the ones usually required only for training) on the iOS/Android platforms so that the compiled libraries stay small.
To rectify the above problem you can do the following -
Update the ops_to_register.h file (tensorflow/core/framework/ops_to_register.h).
Add "NonMaxSuppressionV2Op<CPUDevice>" to the kNecessaryOpKernelClasses array (don't forget to add a comma if you're adding it in the middle of the array).
Like this -
constexpr const char* kNecessaryOpKernelClasses[] = {
    "BinaryOp< CPUDevice, functor::add<float>>",
    "BinaryOp< CPUDevice, functor::add<int32>>",
    "AddNOp< CPUDevice, float>",
    "NonMaxSuppressionOp<CPUDevice>",
    // Added NonMaxSuppressionV2Op
    "NonMaxSuppressionV2Op<CPUDevice>",
    ...
    // Other operations
    ...
};
Also add isequal(op, "NonMaxSuppressionV2") to the constexpr inline bool ShouldRegisterOp(const char op[]) function.
Like this -
constexpr inline bool ShouldRegisterOp(const char op[]) {
  return false
      || isequal(op, "Add")
      || isequal(op, "NoOp")
      || isequal(op, "NonMaxSuppression")
      // Added NonMaxSuppressionV2
      || isequal(op, "NonMaxSuppressionV2")
      || isequal(op, "Pack")
      // other ops
      ...
      ;
}
After you modify this file, re-run everything from scratch as described in the quick-start section of the repo's readme.
If you are still missing some other operations, repeating the same procedure for them will work too.
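Alternatively, rather than editing ops_to_register.h by hand, you can let the build regenerate it from your own graph: as quoted further down this page, tensorflow/contrib/makefile/build_all_ios.sh accepts a -g flag pointing at your .pb file and auto-generates ops_to_register.h with exactly the ops your model needs.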
Hope that helped.
What I did
const FileSystemCardStore = require('composer-common').FileSystemCardStore;
console.log('------>',FileSystemCardStore);
What I get is
------> undefined
I don't know if the API from the Hyperledger community is wrong, because I don't see any FileSystemCardStore class in the composer-common folder imported in node_modules.
My package.json says composer-common: "^0.19.0".
What is the problem, and what am I doing wrong?
My goal is to create a new Card for a new Identity.
You need to use different classes in your code; FileSystemCardStore is not available in the current Composer release the way it was in the 0.16.x releases.
A full example is shown here (it uses an in-memory card store, but the same principle applies to file-based cards):
https://github.com/hyperledger/composer-sample-networks/blob/master/packages/perishable-network/test/perishable.js
Note that you should ideally be using the latest Composer release to build your apps.
I have created an IoT Hub Edge device. In the beginning, the default $edgeAgent and $edgeHub modules went in. That's fine. Then I added a "barkModule" (note the lower-case "b" at the start) -- just a test module to play with D2C event messages and DirectMethod calls to the module.
Later on, I removed that module and added a new one, this time with BarkModule (capital B). Been rocking this way for about a week.
I wrote this bit of code to get a list of a device's module twins (_deviceTwins holds the twins of all the devices on the hub; this is basically just getting all the modules for each device):
foreach (var _device in _deviceTwins)
{
    var moduleList = await registryManager.GetModulesOnDeviceAsync(_device.DeviceId);
    DeviceList.Add(new DeviceAndModules { DeviceTwin = _device, Modules = moduleList.ToList() });
}
In its module twin list, I'm getting an entry for both BarkModule and barkModule, even though my device has only the $edgeAgent, $edgeHub and BarkModule modules.
I even went digging in $edgeAgent's module twin, and there's a ton of metadata event history stuff (seriously, it is absurdly large) -- but there's NO reference to the lowercase-b "barkModule" anywhere.
How is it maintaining this information? Why is this showing up still? Is there a way I can remove this?
(Screenshots omitted: one shows the module list returned for the device, the other shows that only three modules actually exist on it.)
I'm trying to use the TensorFlow audio recognition model (my_frozen_graph.pb, generated here: https://www.tensorflow.org/tutorials/audio_recognition) on iOS.
But the iOS code NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb"); in TensorFlow Mobile's tf_simple_example project outputs this error message: Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'.
Anyone knows how I can fix this? Thanks!
I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not include that op type, so you should build the library yourself from the latest source.
From the documentation:
While CocoaPods is the quickest and easiest way of getting started, you sometimes need more flexibility to determine which parts of TensorFlow your app should be shipped with. For such cases, you can build the iOS libraries from the sources. This guide contains detailed instructions on how to do that.
This might also be helpful: [iOS] Add optional Selective Registration of Ops #14421
Optimization
The build_all_ios.sh script can take optional command-line arguments to selectively register only the operators used in your graph.
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
Please note this is an aggressive optimization of the operators and the resulting library may not work with other graphs but will reduce the size of the final library.
After the build is done, you can check tensorflow/tensorflow/core/framework/ops_to_register.h for the operations that were registered (it is auto-generated during the build when the -g flag is given).
Some progress: having realized the unregistered DecodeWav error is similar to the old familiar DecodeJpeg issue (#2883), I ran strip_unused on the .pb as follows:
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=/tf_files/speech_commands_graph.pb \
--output_graph=/tf_files/stripped_speech_commands_graph.pb \
--input_node_names=wav_data,decoded_sample_data \
--output_node_names=labels_softmax \
--input_binary=true
It does get rid of the DecodeWav op in the resulting graph. But running the new stripped graph on iOS now gives me an Op type not registered 'AudioSpectrogram' error.
Also, there is no audio*.o object file generated after build_all_ios.sh is done, although AudioSpectrogramOp is specified in tensorflow/core/framework/ops_to_register.h:
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name decode*.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_wav_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name audio*_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$
Just verified that Pete's fix (https://github.com/tensorflow/tensorflow/issues/15921) is good:
add the line tensorflow/core/ops/audio_ops.cc to the file tensorflow/contrib/makefile/tf_op_files.txt and run tensorflow/contrib/makefile/build_all_ios.sh again (running compile_ios_tensorflow.sh "-O3" by itself used to work for me after adding a line to tf_op_files.txt, but not anymore with TF 1.4).
Also, use the original model file, not the stripped version. A note about this was added in the link above.
I am building a console application on Linux Ubuntu. Setting environment variables in Qt Creator's Run Environment panel is not working, whether I turn the "Run in terminal" flag on or not. It looks like they are just ignored. If I export those variables outside Qt Creator, in a plain terminal, and then run my console application, everything is fine.
I am using Qt Creator 3.5.1.
OK, so I think you are setting the variables in the correct place, but just in case, here is a screenshot of where I set mine. One thing to note, which we already discussed in the comments, is which "kit" you are running. In the screenshot below I only have one kit set up, but if you have more than one, you have to choose the appropriate kit by clicking on the little monitor icon in the bottom left of Qt Creator.
Then in code I use the following:
#include <QCoreApplication>
#include <QProcessEnvironment>
#include <QDebug>
#include <cstdlib>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    // Get the variable or a default value if the variable is not set.
    // Qt abstraction that should work cross-platform.
    QString s = QProcessEnvironment::systemEnvironment().value("VAR_ONE", "");

    // Get the variable in a platform-dependent way.
    // getenv() returns nullptr when the variable is not set, so guard the print.
    char *s2 = getenv("VAR_ONE");

    // Print out the results.
    qDebug("%s", s.toStdString().c_str());
    qDebug("%s", s2 ? s2 : "(not set)");

    return a.exec();
}
If you are doing all of this and still having issues, I would try making a new, empty console application and seeing whether the above works, to narrow down whether the problem is with your Qt Creator installation in some way or whether the project you are working on has some setting that is off.
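As a side note, Qt also ships its own portable wrappers, qputenv()/qgetenv() from <QtGlobal>, which sidestep the raw getenv() call entirely; here is a minimal sketch (the variable name VAR_ONE is just the example from above):
#include <QtGlobal>
#include <QDebug>

int main()
{
    // Set (or override) a variable for this process only.
    qputenv("VAR_ONE", "forced-value");

    // qgetenv returns an empty QByteArray when the variable is not set,
    // so there is no null pointer to worry about.
    QByteArray v = qgetenv("VAR_ONE");
    qDebug() << "VAR_ONE =" << v;
    return 0;
}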
In a first phase, I collect a list of constraints. Then I would like to store this "session", i.e. all the constraints as well as all the associated variables, in a file so that I can, in a second phase, read the constraints back and assert them, or even negate some of them before asserting.
What is the best (fast and reliable) way to store such a "session" in a file and read it back? Would the Z3_parse_smtlib2_file() API be the right way? I have tried the Z3_open_log() API, but I cannot find an API to read back the log file generated by Z3_open_log(). And what about z3_log_replay()? This API does not seem to be exposed yet.
Thanks in advance.
AG
The log file created by Z3_open_log() can be replayed with Z3.exe (the standalone interpreter, not the lib) through the command-line option /log myfile. As of today, I haven't seen any API in the Z3 library that allows such a replay. For the time being, my understanding is that the replay is intended for debug analysis.
However, you can hack the library (just expose the z3_replayer class in z3_replayer.h) and use it to replay any log file; it is quite easy. The source code of my little proof of concept is given below and works fine as far as I know. I think it is very nice to be able to do this, because sometimes I need to replay a session for debugging purposes. It is good to be able to replay it from a file rather than from my whole program, which is a bit heavy.
Any feedback would be very welcome. I would also be interested to know whether this functionality could be integrated into the lib or not.
AG.
#include <fstream>
#include <iostream>
#include "api/z3_replayer.h"

int main(int argc, char * argv[])
{
    const char * filename = argv[1];
    std::ifstream in(filename);
    if (in.bad() || in.fail()) {
        std::cerr << "Error: failed to open file: " << filename << "\n";
        exit(EXIT_FAILURE);
    }
    z3_replayer r(in);
    r.parse();
    Z3_context ctx = reinterpret_cast<Z3_context>(r.get_obj(0));
    check(ctx, Z3_L_TRUE); // this function is taken from the C examples
    return 0;
}
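As for the Z3_parse_smtlib2_file() route mentioned in the question: it is a good fit when you only need to store the constraints themselves rather than the full API-call session. Here is a rough sketch using the C++ API, assuming a z3++.h recent enough to provide solver::to_smt2() and context::parse_file() (the file name session.smt2 is just a placeholder):
#include <fstream>
#include "z3++.h"

int main()
{
    // Phase 1: collect constraints, then dump them as an SMT-LIB2 benchmark.
    z3::context c;
    z3::solver s(c);
    z3::expr x = c.int_const("x");
    s.add(x > 5);
    std::ofstream out("session.smt2");
    out << s.to_smt2();
    out.close();

    // Phase 2: read the constraints back. Each element of the vector can be
    // asserted as-is, or negated (e.g. s2.add(!fs[i])) before asserting.
    z3::context c2;
    z3::solver s2(c2);
    z3::expr_vector fs = c2.parse_file("session.smt2");
    for (unsigned i = 0; i < fs.size(); ++i)
        s2.add(fs[i]);
    return s2.check() == z3::sat ? 0 : 1;
}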