How to find a TensorFlow Lite model's input/output feature info? - machine-learning

I'm a mobile developer, and I want to use various TensorFlow Lite models (.tflite) with ML Kit.
But there is an issue: I have no idea how to find a .tflite model's input/output feature info (these will be parameters for setup).
Is there any way to find that out?
Thanks.
Update (2018-06-13):
I found this site: https://lutzroeder.github.io/Netron/.
It visualizes a graph from an uploaded model (.mlmodel, .tflite, etc.) so you can find the input/output form.
Here is an example: https://lutzroeder.github.io/Netron

If you already have a tflite model that you did not produce yourself, and you want to look inside the tflite file and understand your inputs and outputs, you can use the flatc tool to convert the model to a .json file and read that.
First clone the flatbuffers repo and build flatc:
git clone https://github.com/google/flatbuffers.git
Then you need the TensorFlow schema.fbs stored locally. Either check out the TensorFlow GitHub repo or download that one file.
Then you can run flatc to generate the JSON file from the input tflite model:
flatc -t schema.fbs -- input_model.tflite
This will create an input_model.json file that can be easily read.
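Once you have input_model.json, a short Python script can pull out the subgraph's input and output tensor names and shapes. This is only a sketch: the field names used below (subgraphs, tensors, inputs, outputs, name, shape) follow the TFLite schema.fbs, but the exact JSON layout depends on your flatbuffers and schema versions, so adjust as needed.

```python
import json

def tflite_io_from_json(model):
    """Given a flatc-generated model dict, return input/output (name, shape) pairs."""
    io = {"inputs": [], "outputs": []}
    for subgraph in model["subgraphs"]:
        tensors = subgraph["tensors"]
        for key in ("inputs", "outputs"):
            # subgraph["inputs"]/["outputs"] are indices into the tensors list
            for idx in subgraph[key]:
                t = tensors[idx]
                io[key].append((t.get("name", "?"), t.get("shape", [])))
    return io

if __name__ == "__main__":
    with open("input_model.json") as f:  # the file produced by flatc above
        print(tflite_io_from_json(json.load(f)))
```

This prints something like `{'inputs': [('input', [1, 224, 224, 3])], 'outputs': [('output', [1, 1001])]}` for a typical image classifier.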

Adding to the above answer:
See these instructions for building:
https://google.github.io/flatbuffers/md__building.html

Related

Converting an AVDL file into something Apache's avro python package can parse

What I would like to be able to do is take an .avdl file and parse it into python. I would like to make use of the information from within python.
According to the documentation, Apache's python package does not handle .avdl files. I need to use their avro-tools to convert the .avdl file into something it does know how to parse.
According to the documentation at https://avro.apache.org/docs/current/idl.html, I can convert a .avdl file into a .avpr file with the following command:
java -jar avro-tools.jar idl src/test/idl/input/namespaces.avdl /tmp/namespaces.avpr
I ran my .avdl file through avro-tools, and it produced an .avpr file.
What is unclear is how I can use the Python package to interpret this data. I tried something simple...
schema = avro.schema.parse(open("my.avpr", "rb").read())
but that generates the error:
SchemaParseException: No "type" property:
I believe that avro.schema.parse is designed to parse .avsc files (?). However, it is unclear how I can use avro-tools to convert my .avdl file into .avsc files. Is that possible?
I am guessing there are many pieces I am missing and do not quite understand (yet) what the purpose of all of these files are.
It does appear that an .avpr file is JSON (?), so I could just read and interpret it myself, but I was hoping there would be a Python package to assist me in navigating the data.
Can anyone provide some insights into this? Thank you.
The answer is to use the idl2schemata command with avro-tools.jar, providing it with an output directory to which it writes the .avsc files. The .avsc files can then be read by the Avro Python package.
For example:
java -jar avro-tools.jar idl2schemata src/test/idl/input/namespaces.avdl /tmp/
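Since the generated .avsc files are plain JSON, you can inspect one with nothing but the standard library before handing it to the Avro package. The record below is a made-up example of what idl2schemata might emit; the structure ("type", "name", "fields") follows the Avro schema specification.

```python
import json

# Hypothetical .avsc content, as idl2schemata might write it to the output directory.
avsc_text = """
{
  "type": "record",
  "name": "Bus",
  "namespace": "org.example",
  "fields": [
    {"name": "id",  "type": "string"},
    {"name": "lat", "type": "double"},
    {"name": "lon", "type": "double"}
  ]
}
"""

# An .avsc file is just JSON, so the standard json module can read it.
schema = json.loads(avsc_text)
print(schema["name"], [f["name"] for f in schema["fields"]])
```

The same text is what `avro.schema.parse` (spelled `Parse` in some releases) expects, which is why it succeeds on .avsc files but rejects the .avpr protocol wrapper.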

Is it possible to directly parse API protobuf files in iOS?

I am making an app that shows the real-time location of local buses. I have an API that returns a .pb (protobuf) file with vehicle positions. I am handling protocol buffers for the first time, and I have no idea why we can't parse them like a JSON file.
I saw a library named "Swift-Protobuf", but its documentation asks you to run a command to convert a protobuf file into a Swift object. Since I am making API calls every minute that return a protobuf file, how can I run that command every time?
$ protoc --swift_out=. my.proto
I just want to parse those .pb files into Swift objects so that I can use the data in my project.
They are asking to run a command to convert a protobuf file into a Swift object. But as I am making API calls every minute that return a protobuf file, how can I run that command every time?
I think you've misunderstood the documentation: you don't need to run protoc --swift_out=. my.proto for every .pb file you receive; you use that command to generate code that knows how to read and write data according to the schema that you define in a .proto file. You can then take that generated code and add it to your iOS project, and after that you can use the code to read and write protobuf data that matches your schema.
I am making an app that shows the real-time location of local buses.
So before you can get started, you're going to need a .proto file that describes the data format used by whoever provides the bus location data, or you'll need whoever provides that data to use SwiftProtobuf or similar to generate a Swift parser for their .proto file.
...I have no idea why we can't parse them like JSON file.
Well, the point of the protobuf format is to be language-agnostic and faster/easier to use than JSON or XML, and one of the design decisions that Google apparently made is to sacrifice human readability for size/speed. So you could write a parser to parse these files just as you would JSON data, but you'd have to learn how the format works. But it's a lot easier to describe the data you're sending and have a program generate the code. One nice aspect of this arrangement is that you can describe the schema once and then generate code that works with that schema for several languages, so you don't have to write code separately for your iOS app, your Android app, and your server.
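To see why a .pb file can't be read like JSON, here is a minimal sketch of the protobuf wire format (in Python for brevity, though the app above is Swift): each field is encoded as a varint tag (field number and wire type) followed by its payload. The three bytes below are the classic encoding of "field 1 = 150" from the protobuf encoding docs.

```python
def read_varint(data, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = data[pos]
        pos += 1
        result |= (b & 0x7F) << shift  # low 7 bits carry the payload
        if not (b & 0x80):             # high bit clear means last byte
            return result, pos
        shift += 7

raw = bytes([0x08, 0x96, 0x01])        # wire encoding of: field 1 (varint) = 150
tag, pos = read_varint(raw, 0)
field_number, wire_type = tag >> 3, tag & 0x07
value, pos = read_varint(raw, pos)
print(field_number, wire_type, value)  # prints: 1 0 150
```

Generated code (from protoc, SwiftProtobuf, etc.) is doing exactly this kind of decoding, plus mapping field numbers back to the names declared in the .proto file, which is why you need the schema rather than writing a generic parser.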

load tensorflow model in java

I have created, trained, and saved a TensorFlow model using Python (YAML file).
Now I want to load and use it in Java (Eclipse).
How can I do it?
I quote from here: the SavedModel format encodes all model information in a directory, not a file. So you want to provide the directory containing the saved_model.pb file to SavedModelBundle.load(), not the file itself. For more information, see the GitHub page.
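For reference, a SavedModel directory typically looks like the sketch below (the layout follows the TensorFlow SavedModel docs; the variables files are sharded, and the exact shard names vary):

```
my_model/                 <-- pass THIS path to SavedModelBundle.load()
├── saved_model.pb        <-- graph definition and signatures
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index
```

Passing `my_model/saved_model.pb` instead of `my_model/` is the usual cause of load failures here.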

Parse or convert .pb files under .sonar folder

I'm using SonarQube 5.6.5 and everything works well. Now I need to parse the issues.pb file generated under the .sonar/batch-report/ folder. I tried reading it as JSON, but that does not work.
They are in "Protobuf" format, which is Google's format for serializing data. You can get started here, or find, for example, a tutorial here on how to use it in Java.
What I don't understand is that your question has the tag "protobuf-net", whose GitHub page explains very well how to use it (in .NET).

Merging files via transformer

Say I'm creating a site.
Currently I need to run a task which will compile template files and merge them into a .js file.
So I'm reading this guide about transformers and trying to create one.
But it seems that I can only handle assets individually in a transformer, via transform.primaryInput.id.
So, I'm wondering, how can I merge some assets into one file via a transformer?
I'm answering my own question.
What I need is an aggregate transformer.
This article should solve the problem: https://www.dartlang.org/tools/pub/transformers/aggregate.html