Which resources are included with pydrake FindResourceOrThrow?

In pydrake, the following line successfully locates an SDF file:
my_sdf = FindResourceOrThrow("drake/examples/multibody/cart_pole/cart_pole.sdf")
Given the structure of the GitHub repository and this example, I would expect the following line to work as well,
my_sdf = FindResourceOrThrow("drake/examples/multibody/four_bar/four_bar.sdf")
but it fails with RuntimeError: Could not find Drake resource_path....
Why is this? Are only some of the SDF files included with the Python bindings? If so, is there a list of such available files anywhere?

Are only some of the SDF files included with the Python bindings?
Yes, that's correct.
If so, is there a list of such available files anywhere?
If you installed using https://drake.mit.edu/pip.html, then you can list the installed SDFormat files for your current version of the Drake wheel like so:
$ find env/lib/python*/site-packages/pydrake/share/drake -name '*.sdf'
...
env/lib/python3.8/site-packages/pydrake/share/drake/examples/multibody/cart_pole/cart_pole.sdf
...
If you installed via some other mechanism, the command would be similar but you'd need to change the find path to wherever Drake is installed.
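Whatever the mechanism, one approach that should work is to ask Python where pydrake itself lives and search from there. A minimal sketch, assuming the share/drake layout that the wheels use (other install mechanisms may place the files elsewhere):
# find the pydrake install directory, then list the SDFormat files shipped next to it
$ PYDRAKE_DIR=$(python3 -c 'import os, pydrake; print(os.path.dirname(pydrake.__file__))')
$ find "$PYDRAKE_DIR/share/drake" -name '*.sdf'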
Why is this?
Drake is primarily a library of stable code, not a library of models. We generally expect users to create their own models, possibly by copying and modifying some example models to get started.
Some model files are very large (e.g., meshes or textures). If we included those in our wheels, the wheel would exceed the default size allowed by PyPI.
We currently do install some models along with our wheels to facilitate our tutorials, but we plan to stop installing those and instead download them at runtime for the tutorials.
The set of installed models for a given version of Drake is somewhat random, and will generally shrink from one release to the next. If you need a stable version of Drake model(s), you should copy the model file(s) into your own project directly.

With Bazel, how do I make sure objects taken from the cache have been built for the right system/libraries?

I got some strange glibc-related linker errors from builds with a distributed build cache configured, on build nodes running different Linux distributions.
I now suspect that build artifacts from machines with different glibc versions are getting mixed up, but I don't know how to investigate this.
How do I find out what Bazel takes into account when building the hash for a certain build artifact?
I know I can explicitly set environment variables which then will affect the hash. But how can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
And how do I check/compare what's been taken into account?
This is a complex topic and a multi-faceted question. I am going to answer in the following order:
How do I check/compare what's been taken into account?
How to investigate which glibc a build linked against?
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
How do I check/compare what's been taken into account?
To answer this, you should look into the execution log; specifically, you can read up on https://bazel.build/remote/cache-remote#compare-logs. The *.json execution log should contain everything you need to know (granted, it might be a bit verbose) and is a little easier to process with shell magic or your editor.
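For example (the flag comes from the page above; the target and file paths are placeholders), record a log for the same build on each machine and diff them:
# record the execution log for a build, once per machine/cache configuration
$ bazel build --execution_log_json_file=/tmp/exec1.json //your:target
# afterwards, compare the two logs to see which input hashes differ
$ diff /tmp/exec1.json /tmp/exec2.json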
How to investigate which glibc a build linked against?
From the execution log, you can get all the required hashes to retrieve cached artifacts/binaries from your remote cache. Given these files, you should be able to use standard tools to get to the glibc version (ldd -r -v binary | grep GLIBC).
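For instance, if you run the plain HTTP cache protocol, a sketch like this (cache URL and digest are placeholders) pulls a cached binary out of the content-addressable store and inspects it:
# fetch an action output from the CAS by its digest, then check the glibc symbols
$ curl -o cached_binary http://your-cache:8080/cas/<sha256-digest>
$ ldd -r -v cached_binary | grep GLIBC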
How can I be sure a given compiler, a certain version of glibc, etc. will lead to different hashes for built artifacts?
This depends on how you have set up your compilation toolchain. The best case is a fully hermetic compilation toolchain, where all necessary files are declared using attributes like https://bazel.build/reference/be/c-cpp#cc_toolchain.compiler_files.
But this also means locking down the compiler sysroot. This should include all libraries you link against if you want full hermeticity. If you want to use some system libraries, you need to tell Bazel where to find them and to factor in their hashes: https://stackoverflow.com/a/43419786/20546409 or https://www.stevenengelhardt.com/2021/09/22/practical-bazel-depending-on-a-system-provided-c-cpp-library/
If you use the auto-detected compiler toolchain, some tricks are used to lock down the sysroot paths, but expect some non-hermeticity. https://github.com/limdor/bazel-examples/tree/master/linux_toolchain is a nice write-up on how to move from the auto-detected toolchain to something more hermetic.
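To see what the auto-detected toolchain actually captured on a given machine, you can inspect the repository it generates; a sketch (the generated file names vary across Bazel versions):
# the auto-configured C++ toolchain is generated into the output base
$ ls "$(bazel info output_base)/external/local_config_cc/"
$ grep -ri sysroot "$(bazel info output_base)/external/local_config_cc/"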
The hack
Of course, you can hack around this (a sketch follows the list). Note, this is inherently a bad idea:
create a script that inspects the system and determines everything important, like the glibc version and maybe the Linux distribution (flavor)
create a string describing this variation and hash it
use that hash as the instance key/name for your remote cache
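A minimal sketch of that hack, assuming a remote cache that supports --remote_instance_name (the fingerprint recipe here is illustrative, not exhaustive):
# fingerprint the host properties that affect binary compatibility
$ FINGERPRINT=$( (ldd --version | head -n1; uname -m) | sha256sum | cut -d' ' -f1)
# keep artifacts from incompatible hosts in separate cache namespaces
$ bazel build --remote_instance_name="glibc-$FINGERPRINT" //your:target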

Including non-standard documents in R package

I am writing an R package for my MSc project, together with some .Rmd analyses that use my package and generate HTML reports.
I consider both the package code and the resulting analyses as deliverables for my project, so I would like to showcase the two in the same location, maybe the same GitHub repository that already hosts my package. The question is: can I just add a /reports directory to the package structure and put my HTML files inside? As the package will be mostly for internal use, I don't plan on submitting it to CRAN anytime soon, if that matters. However, I would also like to avoid distributing my reports to other potential users.
I've just found out about the .Rbuildignore file, which seems like it would do the trick, but I'd be curious to hear what the standard practice is in this case, or any other suggestions you may have.
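For reference, what I have in mind is a single anchored entry in .Rbuildignore (as I understand it, each line is a Perl-compatible regex matched against file paths, so the file would contain just this line):
^reports$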
Thanks,

FunScript: How to access TypeScript.Api<...>

I'm getting started with FunScript with a working example. Using NuGet to add the needed libraries, it works well.
In a 2013 video on Channel 9, they make use of TypeScript.Api<...> to load types from TypeScript definition files.
I'm however unable to find this type provider anywhere.
Where is it located?
I realized that a good number of the type definitions have been compiled into libraries and made available on NuGet, but I can't really use those, since some of the code will be in local TypeScript definition files.
The questions therefore are:
Where is the TypeScript.Api<...> type provider?
If it is not available, or is not the best way to use TypeScript definitions, what other options exist?
As Thomas said, the type provider was removed mainly because it couldn't generate generic types, but the idea is to bring it back at some point.
For the moment, though it's not ideal, you can generate your own bindings by following these steps.
Download or clone the FunScript repository:
git clone https://github.com/ZachBray/FunScript
Build the project
cd FunScript
build.cmd
This needs to be improved, but for now you need to zip the .d.ts files you want to convert and then:
cd build\TypeScript
bin\FunScript.TypeScript.exe C:\Path\to\typedefinitions.zip
cd Output
Please note that the first time you build the definitions it may take several minutes. Once it's done, in the Output folder you'll find the compiled .dll libraries with the bindings.
Also, while you're at it, it's better if you use the FunScript version you just built into build\main\bin, as it will probably be more up to date than the NuGet package.
Good luck and have fun(script)!
There were a bunch of changes in FunScript, so the TypeScript.Api<...> type provider is no longer the recommended way of calling JavaScript libraries from FunScript.
Instead, the bindings for JavaScript libraries are pre-generated and you can find them as packages on NuGet, if you search for the FunScript tag (NuGet search is not very good, so you may need to go through a number of pages to find the one you need...).
If you want to use a local TypeScript definition, then you'll need to run the command line tool to generate the bindings. The F# Atom plugin does this in the build script, so looking there is a good place to start. It has a local copy of various TypeScript bindings in the typings folder (together with the FunScript binaries needed to process them).
I liked the type provider approach much better, but sadly, type providers are somewhat restricted in what kind of types they can provide, so it wasn't all that powerful...

Training Haar and LBP classifiers in OpenCV

I have downloaded OpenCV 2.4.4 in Windows executable form. It has all the needed libraries prebuilt.
Now I am creating an object detection program that requires training a Haar or LBP classifier. I have found that opencv_traincascade is now the preferred utility and the others are deprecated. The problem is that I am not able to find that utility in the opencv2.4.4.exe that I downloaded. There is a traincascade folder, but all it has are some .hpp files.
Is there a way I can get it? I don't want to check out the trunk and build it using CMake.
Also, references to good tutorials explaining this new utility would be helpful.
In my experience, you have two ways to do what you want: use CMake, or create your own project and copy the code you need from OpenCV (this is not a good idea :)).
I don't understand what you mean by:
I don't want to check out the trunk and build it using CMake.
If you already
downloaded OpenCV 2.4.4 in Windows executable form
you have everything you need to use CMake: open CMake, set the source code path to "...\opencv" and a binary path (it may be the same), then press Configure. Next, enable the "build examples" option, press Configure a second time, and press Generate...
That's all. Now open the generated OpenCV project, find the traincascade subproject, and build it :)
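If you prefer the command line to the CMake GUI, a rough equivalent sketch (the default generator and the exact target name may differ between OpenCV versions):
# configure with examples enabled, then build just the traincascade application
$ cd opencv
$ mkdir build && cd build
$ cmake -DBUILD_EXAMPLES=ON ..
$ cmake --build . --target opencv_traincascade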
There are some relevant files in the archive, e.g. cascadeclassifier.cpp (CPU and GPU versions in the samples folder, with the source in opencv/modules) and traincascade.cpp.
There is also this guide. I've heard that training is a bit difficult. If you come up with a complete answer, feel free to share it.

I'm writing a script to install postgresql modules. How can I find the "contrib" directory?

I'm writing a script that will install some of the optional PostgreSQL modules. Is there a way to programmatically figure out where the contrib directory is, or do I have to prompt for the path? I've looked at a few examples and it seems inconsistent; it doesn't appear in pg_config.
(The script might be run on OS X or Linux, and I can't make assumptions about how PostgreSQL was installed.)
There is no easy, works-everywhere answer to this, so it's a matter of how much intelligence you want to try to put into your installer. For some people it will be impossible to do what you hope for; the best you'll be able to do is offer guidance about what's missing. There is a lot of variation in packaging here between Linux distributions, versions of PostgreSQL, and the various ways you can install PostgreSQL on OS X (MacPorts, Homebrew, etc.).
First off, only source-code installs will have a contrib directory with source code in it that allows building the optional modules. In the packaged builds for Linux, the contrib binaries may only be available via an optional package, called something like postgresql-contrib. That's the only way to make the optional modules that come with the database available: install the package the binaries are included in. You may see some variation in the OS X builds here too.
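On Debian/Ubuntu, for example, that looks like the following (package names vary by distribution and PostgreSQL version):
$ sudo apt-get install postgresql-contrib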
If you want to install extensions (as these are officially called as of PostgreSQL 9.1, rather than 'modules') using binaries you provide instead, what you then want to know is where to put the resulting shared libraries and the matching SQL files that reference them. What pg_config returns for pkglibdir tells you where the binaries go, while sharedir points to the default place to put the SQL. Providing binaries is a losing game, though; the job of syncing with every platform to build them is a huge one.
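You can query both locations directly, for example:
$ pg_config --pkglibdir   # where the extension shared libraries go
$ pg_config --sharedir    # base directory for the matching SQL files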
And here are the sorts of additional complications you'll run into in this area if you want to ship source code and try to build things yourself automatically:
PostgreSQL 9.1 now installs these using the CREATE EXTENSION mechanism, so you'll need to handle both the pre-9.1 method and the new one introduced there (a sketch of both follows this list).
Not all PostgreSQL installations will have pg_config. It's considered a development tool, and which package it gets bundled with (and whether that package is mandatory or not) varies. Debian/Ubuntu put it into the optional libpq-dev; RedHat-derived RPMs have it in postgresql-devel or postgresql-[version]-devel.
Due to pg_config being necessary for compiling the new 9.1 extensions, packagers have started reconsidering where pg_config goes; it's considered a lot more important now than it used to be. 9.1 or later packages might alter which package it's contained in. That doesn't really change what you can and can't do, though; it just impacts what advice you might offer for correcting situations your program can't deal with.
I've been describing the standard Linux packaging here when I talk about that OS. There are also installers for both Linux and OS X from EnterpriseDB, what they call their "one-click installers". These use a different standard altogether for what people do and don't get installed in this area. I don't follow the commercial packaging closely enough to know what is actually different, but it's another variable you can expect people to encounter.
Recent OS X versions may have some system PostgreSQL components floating around too. No idea yet how these handle extensions.
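As a sketch of the two mechanisms from the first point above, using hstore as the example module (the database name is illustrative, and pre-9.1 contrib SQL scripts were typically installed under sharedir/contrib):
# PostgreSQL 9.1 and later: the extension mechanism
$ psql -d mydb -c 'CREATE EXTENSION hstore;'
# pre-9.1: run the module's installed SQL script directly
$ psql -d mydb -f "$(pg_config --sharedir)/contrib/hstore.sql"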
Basically, version, packager, and platform can each change how this works, and the idea that you'll find a solution that handles even the majority of those permutations is optimistic. Installing extensions is known to be difficult in PostgreSQL, which is one thing that motivated the 9.1 changes to turn it into a simple CREATE EXTENSION for many of them. But for now, those changes have just added another set of variation into the mix, actually making this harder during the transition period. It will be a while until PostgreSQL versions supporting that are the only ones in popular use.
