I'm trying to use the Google Coral edgetpu_compiler to combine two existing tflite models into a single model, following Google's directions. I'm using two of Google's pre-compiled models. The error indicates that the models are already compiled for the Coral device. These models are indeed already compiled for the Edge TPU, but my goal is to combine the two of them. Am I doing something wrong, or is combining Edge TPU models not supported?
Here is the command I'm running and the output:
$ edgetpu_compiler \
mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
Edge TPU Compiler version 2.0.267685300
Invalid model: mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite
Model already compiled for Edge TPU
I'm running this on the Coral board, OS version: Mendel GNU/Linux 3 (Chef).
Any guidance appreciated.
Thanks,
John
The models that you are trying to combine are already compiled for the Edge TPU and cannot be compiled again.
To compile both models again, you can download the 'All model files' archives for "MobileNet SSD v2 (COCO)" and "MobileNet SSD v2 (Faces)" from https://coral.withgoogle.com/models/. After extracting these compressed files, you will find a tflite_graph.pb file for each model. You will have to convert these .pb files into .tflite (CPU version) files.
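For reference, here is a rough sketch of that conversion using the TensorFlow 1.x tflite_convert tool. The tensor names, input shape and quantization parameters below are assumptions based on how these SSD models are typically exported, so verify them against the extracted tflite_graph.pb before relying on them:
$ tflite_convert \
  --graph_def_file=tflite_graph.pb \
  --output_file=mobilenet_ssd_v2_coco_quant_postprocess.tflite \
  --input_arrays=normalized_input_image_tensor \
  --input_shapes=1,300,300,3 \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_dev_values=128 \
  --allow_custom_ops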
After getting the CPU versions of the .tflite files, you should be able to compile the two models together.
Please see the model requirements in detail at: https://coral.withgoogle.com/docs/edgetpu/models-intro/#compatibility-overview
Please read about co-compiling multiple models at: https://coral.withgoogle.com/docs/edgetpu/compiler/#co-compiling-multiple-models. Please also note that co-compiling n models produces n models, not just one. The benefit is that the compiler can cache the parameter data of all the models together in the Edge TPU's RAM.
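Once you have the two CPU .tflite files, co-compiling is just a matter of passing both files to the compiler in one invocation (the filenames below are assumptions based on the archives mentioned above); it writes out one *_edgetpu.tflite file per input model:
$ edgetpu_compiler \
  mobilenet_ssd_v2_coco_quant_postprocess.tflite \
  mobilenet_ssd_v2_face_quant_postprocess.tflite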
In pydrake, the following line successfully locates an SDF file:
my_sdf = FindResourceOrThrow("drake/examples/multibody/cart_pole/cart_pole.sdf")
Given the structure of the GitHub repository and this example, I would expect the following line to work as well,
my_sdf = FindResourceOrThrow("drake/examples/multibody/four_bar/four_bar.sdf")
but it fails with RuntimeError: Could not find Drake resource_path....
Why is this? Are only some of the SDF files included with the python bindings? If so, is there a list of such available files anywhere?
Are only some of the SDF files included with the python bindings?
Yes, that's correct.
If so, is there a list of such available files anywhere?
If you installed using https://drake.mit.edu/pip.html, then you can list the installed SDFormat files for your current version of the Drake wheel like so:
$ find env/lib/python*/site-packages/pydrake/share/drake -name '*.sdf'
...
env/lib/python3.8/site-packages/pydrake/share/drake/examples/multibody/cart_pole/cart_pole.sdf
...
If you installed via some other mechanism, the command would be similar but you'd need to change the find path to wherever Drake is installed.
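If you're not sure where that is, one option (just a sketch, not official Drake tooling; the share/drake layout shown here matches the wheel install above, and other install mechanisms may place the share directory elsewhere) is to ask Python where the pydrake package lives and search under it:
$ SHARE=$(python3 -c "import os, pydrake; print(os.path.join(os.path.dirname(pydrake.__file__), 'share', 'drake'))")
$ find "$SHARE" -name '*.sdf'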
Why is this?
Drake is primarily a library of stable code, not a library of models. We generally expect users to create their own models, possibly by copying and modifying some of the example models to get started.
Some model files are very large (e.g., meshes or textures). If we included those in our wheels, the wheel would exceed the default size allowed by PyPI.
We currently do install some models along with our wheels to facilitate our tutorials, but we plan to stop installing those and instead download them at runtime for the tutorials.
The set of installed models for a given version of Drake is somewhat random, and will generally shrink from one release to the next. If you need a stable version of Drake model(s), you should copy the model file(s) into your own project directly.
I built eight different machine learning models back in October 2021 using Tidymodels. Back then, I carefully saved the input data, my outputs, and my .R files. Now, when I run my code again, I get totally different outputs. I have updated my packages since October, and the only explanation I can think of is that there have been some updates that cause the discrepancies. I wonder if others have experienced such issues and whether they have been able to resolve them.
Edit: I am using the same machine and the same random seeds.
Following Julia Silge's clue, I installed an older version of the rsample package (version 0.0.9) and then I could reproduce all my results.
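In case it helps someone else, one way to pin that older version from the command line (assuming the remotes package is available; adjust the repository URL to taste) is:
$ Rscript -e 'remotes::install_version("rsample", version = "0.0.9", repos = "https://cloud.r-project.org")'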
I am using Ghostscript to convert pdf1.3 to pdf/a-1b using this command:
gs -dPDFA -dBATCH -dNOPAUSE -dNOOUTERSAVE -sColorConversionStrategy=sRGB -sDEVICE=pdfwrite -sOutputFile=output.pdf PDFA_def.ps input.pdf
The PDFA_def.ps is customized to use the sRGB ICC profile. Apart from that change, it is the standard def file that comes with GS 9.26.
Now comes the tricky part:
1. Running this conversion locally on Ubuntu 18.10 with GS 9.26, it works fine and I get a valid PDF/A.
2. Running the same command in a Docker container (Ubuntu 18.10, GS 9.26) creates a PDF/A as well, which is also considered valid.
However, in the first scenario I can process the file using Mustang (https://github.com/ZUGFeRD/mustangproject) to create a valid electronic invoice. In the second scenario (Docker container) this fails, since the file is not considered a valid PDF by Mustang.
Checking both PDF files, I would have expected them to be identical since I am running the same conversion on them. However, they are not. The PDF created in the Docker container is 10 bytes smaller and shows some different meta-information in the file itself.
I suspect that there must be some "hidden dependencies" that make GS act differently on my host system compared to the Docker container, but that feels entirely wrong and I am running out of means to debug further.
Does anyone know whether GS has additional dependencies that might cause the same command to produce different results?
The answer is 'maybe'. It depends on how Ghostscript was built for starters.
I assume you are using a package, and not building from source yourself. In that case there are a number of dependencies, including: FreeType, LibJPEG, JBIG2dec, LibTIFF, JPEG-XR, LCMS2, LibPNG, OpenJPEG, Expat, zlib, and potentially IJS, CUPS and X-Windows, depending on what devices were built in.
Any or all of these could be system shared libraries instead of being built using the specific version shipped by Artifex. They could also be different versions on the two systems.
That said, I think it is 'unlikely' that this is your problem. However, without seeing the PDF files I can't tell you why there is a difference. Differences in the metadata are to be expected, since that includes a date/time stamp.
I'd really need to see examples of the original and the two output PDF files to be able to comment further.
[Edit]
Looking at the files, they have been produced compressed (unsurprisingly), which can obviously lead to differences in size if there are small differences in the input streams. So the first task was to decompress the files.
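If you want to reproduce that comparison yourself, one convenient way (an assumption on my part; any tool that expands the streams will do) is qpdf's QDF mode, which rewrites a PDF with uncompressed, diff-friendly objects:
$ qpdf --qdf --object-streams=disable output.pdf output-uncompressed.pdf
The two uncompressed files can then be compared with an ordinary text diff.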
That done, I see there are, essentially, no differences between them. One of the operating systems is using a time zone 7 hours behind UTC, the other is in UTC, so where one of the systems is time-stamping with (e.g.)
2019-04-26T19:49:01Z
The other is using
2019-04-26T12:51:35-07:00
So instead of Z (for UTC) you get -07:00, which is where the extra 10 bytes are coming from. Other than that, the Unique IDs are (as you would imagine) different, the Length values are different for the streams containing dates, and the startxref is different because those streams are different lengths.
Both files claim to be compatible with PDF/A-1b. In short, I can see no real differences between them. Since you are using a tool to further process the files, I'd suggest you try taking the PDF file from your working system and processing it on the non-working system, and vice versa; it seems to me that the problem may be the later processing rather than the PDF file itself. Perhaps you have different versions of that tool on the two systems.
For what it may be worth, Ghostscript can be induced into creating a ZUGFeRD file directly, see this bug report and this commit to the repository.
I am a beginner in robotics and I am trying to use Google Cartographer to make my simulated Turtlebot build a map autonomously of its environment.
I have already done all the tutorials in ROS, and I can make the robot build the map using teleoperation, but I don't know how to make it build the map by itself. I want to use Google Cartographer for this.
I can, however, run the demo provided by Google and it works (it builds the map of a museum).
# Launch the 2D depth camera demo.
roslaunch cartographer_turtlebot demo_depth_camera_2d.launch bag_filename:=${HOME}/Downloads/cartographer_turtlebot_demo.bag
The questions:
How can I run it on my own world instead of the bag file of that museum?
Does it need a YAML map like the one I built with teleoperation? What is the command to make it use my YAML map instead of a bag file?
Can I use a .png image with an accompanying YAML file?
Could it use the Gazebo simulated worlds that are .sdf files? What is the command to input those?
These are the specifications I have:
Ubuntu Xenial
ROS Kinetic
Gazebo 7
Turtlebot2
Google Cartographer for turtlebot
Thanks a lot! It's like I understand the concepts but I don't know how to link things together to make them work.
I just installed the Neo4j 2.2 Milestone 1 release on a Windows 64-bit machine and I am unable to locate the file Neo4jImport.bat.
I want to play around with the feature described here. Until now, I have been playing around with the RNeo4j package. It has helped the learning curve quite a bit, but now that I am going beyond toy datasets, importing data using the package is painful.
With that said, I can't seem to locate the file/utility that seemingly makes importing larger datasets a breeze. I was expecting to see the file at C:\Program Files\Neo4j Community\bin.
I imagine this is a really basic question, but I am somewhat stumped.
Thanks in advance.
Sorry for the exclusion, but the binary installation also omits Neo4jShell and the other command-line scripts, as it is intended for UI-only users.
Please use the ZIP download from neo4j.com/download as Mark suggested.
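Once you have the ZIP distribution unpacked, the import tool should be under bin. A rough sketch of an invocation follows; the exact script name, target database path and CSV files here are placeholders, so check the 2.2 import-tool documentation for the precise options:
bin\Neo4jImport.bat --into data\graph.db --nodes nodes.csv --relationships rels.csv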