Running ROS Inside of Pepper

We are currently working with Pepper 2.5.10 and ROS Kinetic. We want to run ROS with our own applications inside the robot. We have tried some of the existing ROS projects for Pepper, but all of those applications run on our computer. We are thinking of installing and running ROS inside Pepper itself. Do you think this is a practical way to do it, or do you have any other suggestion for this task?
Thanks for your suggestions.

You can check these links for cross-compilation (compile on your PC and send the result to Pepper):
pepper_ros_compiled
pepper_ros_compilation
or this one to use Gentoo Prefix (it installs useful tools such as catkin_make and emerge on Pepper's head, so you can compile directly on the robot):
sbre_robot_ros_gentoo_prefix
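If you go the cross-compilation route, deployment is essentially copying the built workspace to the robot over SSH. A minimal sketch, assuming the default nao user; the host name and paths are made up:

    # Copy the cross-compiled catkin install space to Pepper's head
    scp -r ~/catkin_ws/install nao@pepper.local:/home/nao/ros_ws

    # Log in and source the workspace on the robot before launching nodes
    ssh nao@pepper.local
    source /home/nao/ros_ws/setup.bash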

Related

How to install multiple TensorFlow versions?

I'm trying to run the code from this repository: https://github.com/danielgordon10/thor-iqa-cvpr-2018
It has the following requirements:
Python 3.5
CUDA 8 or 9
cuDNN
TensorFlow 1.4 or 1.5
Ubuntu 16.04, 18.04
an installation of darknet
My system satisfies none of these requirements. I don't want to reinstall tf/cuda/cudnn on my machine, especially if I have to do that every time I try to run deep-learning code with different TensorFlow requirements.
I'm looking for a way to install the requirements and run the code regardless of the host.
To my knowledge, that is exactly what Docker is for.
Looking into this, I found that NVIDIA provides Docker images, for example one called "nvidia/cuda:9.1-cudnn7-runtime". Based on the name, I assumed that any image built with this as its base comes with CUDA installed. That does not seem to be the case: if I try to install darknet, it fails with an error that "cuda_runtime.h" is missing.
So what my question basically boils down to is: how do I keep multiple different versions of CUDA and TensorFlow on the same machine? Ideally with Docker (or something similar), so I won't have to go through the process too many times.
It feels like I'm missing and/or misunderstanding something obvious, because I can't imagine that it is so hard to run TensorFlow code with different version requirements without reinstalling things from scratch all the time.
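One thing that trips people up here: the NVIDIA -runtime images only ship the shared libraries needed to run CUDA applications; compiling something like darknet (which includes cuda_runtime.h) needs the -devel variant of the same image, which also contains the headers and compiler toolchain. With that sorted, keeping several CUDA/TensorFlow combinations on one machine really does come down to picking different image tags, one container per version. A rough sketch, assuming nvidia-docker (the NVIDIA container runtime) is installed; the tags are illustrative, and newer Docker releases use --gpus all instead of --runtime=nvidia:

    # A devel image ships cuda_runtime.h and nvcc, so darknet can be built inside it
    docker run --runtime=nvidia -it nvidia/cuda:9.1-cudnn7-devel bash

    # Different TensorFlow versions live in separate containers, side by side
    docker run --runtime=nvidia -it tensorflow/tensorflow:1.4.1-gpu bash
    docker run --runtime=nvidia -it tensorflow/tensorflow:1.5.0-gpu bash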

Use multiple Docker images at once

I am trying to containerize my ROS + TensorFlow application. The problem is that I want to use the GPU. I can either use the GPU and forget about ROS, or use ROS and forget about the GPU, but I don't know how to enable both of them.
I have tried starting FROM a CUDA image and installing ROS as described here, but ROS could not be installed because apt could not find the package.
I also tried building them one by one in the same Dockerfile and copying all the ROS-related files from a ROS build, but that failed too.
Ideally, I want something that works as if I could "include" both a CUDA image and a ROS image, but resources on building from multiple images like this are hard to find. Any help or pointers would be appreciated.
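The usual reason apt "cannot find the package" in this setup is that the CUDA base image does not have the ROS package repository configured, and the image's Ubuntu release has to match the ROS distribution (Kinetic targets Ubuntu 16.04/xenial). A rough sketch, starting from a 16.04-based CUDA image and following the standard Kinetic installation steps; the image tag and key details are assumptions to verify against the ROS wiki:

    cat > Dockerfile <<'EOF'
    FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
    # Add the ROS apt repository and its key, then install ROS on top of CUDA
    RUN apt-get update && apt-get install -y dirmngr gnupg2 && \
        echo "deb http://packages.ros.org/ros/ubuntu xenial main" \
            > /etc/apt/sources.list.d/ros-latest.list && \
        apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
            --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654 && \
        apt-get update && apt-get install -y ros-kinetic-ros-base
    EOF
    docker build -t ros-kinetic-cuda .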

How do I install luac.cross on a Mac?

The NodeMCU documentation states:
NodeMCU firmware build now automatically generates a luac.cross image
as standard in the firmware root directory; this can be used to
compile and to syntax-check Lua source on the Development machine for
execution under NodeMCU Lua on the ESP8266.
Where do I get luac.cross from and how do I install it?
Do I build the NodeMCU firmware from source on a Mac, and is luac.cross created as part of that process? I have been using the cloud service to create custom firmware. Is luac.cross available via the cloud build?
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory issue. I am hoping compiled code will reduce RAM needs.
Where do I get luac.cross from and how do I install it?
You gave the answer in the quote from the documentation you posted. Specifically this:
NodeMCU firmware build now automatically generates a luac.cross image...
So, if you build the NodeMCU firmware manually on your platform, the build process will also create a luac.cross for your platform. That's the reason you cannot simply download or install luac.cross: it has to match your platform, i.e. your OS and architecture.
The logical next question would then be: how do I manually build NodeMCU on macOS?
I don't know the answer to that, as I build with the Docker image (from yours truly) on macOS. Running the Docker build creates a luac.cross in the firmware root directory. However, as macOS is just the host OS for Docker in this setup, that luac.cross is built for Linux rather than natively for macOS. To use it, you would start the Docker container again and run bash in it to get a shell in which to execute the Lua cross-compilation: docker run --rm -ti -v $(pwd):/opt/nodemcu-firmware marcelstoer/nodemcu-build bash.
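Once inside that shell, the cross compiler is invoked like the standard luac; a small example (file names are placeholders):

    # Run inside the container, in the firmware root directory
    ./luac.cross -o app.lc app.lua   # compiles app.lua into the bytecode file app.lc

The resulting app.lc can then be uploaded to the ESP8266 instead of the plain Lua source.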
Straight Lua code has overwhelmed the MakerFocus NodeMCU board, resulting in a runtime panic with an out-of-memory issue. I am hoping compiled code will reduce RAM needs.
I hate to disillusion you, but if I had to bet, I would expect that the savings won't be significant enough to yield the expected results. As you have already started reading the documentation, I'd like to point you to the relevant FAQ entries: How is NodeMCU Lua different to standard Lua? and Techniques for Reducing RAM.
And maybe using LFS will be your lifesaver.
In case you want to use this tool regardless of the platform, you can use my API to run it:
curl -d @yourscript.lua -X POST https://nodemcu-luacross-run-64l7ehzjta-uc.a.run.app/compile > output.luac

Good alternative to environment modules for Windows?

In the past I have used environment modules extensively on Unix-based systems. The tool proved very useful since we had many different projects, each of them using a potentially different set of tools.
I am now, however, stuck with a Windows machine and need to make the most of it. Does anybody know of a good alternative to environment modules for Windows? I am basically looking for a tool that lets me manipulate the PATH environment variable (or $env:PATH in Windows PowerShell) without having to touch it directly.
You can install Environment Modules on Windows as well. The instructions are in the repository: https://github.com/cea-hpc/modules/blob/master/INSTALL-win.txt
Basically, you have to install ActiveState Tcl and copy the source files.
I use this and it works quite well.
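Once installed, usage is the same as on Unix: you describe each tool in a small Tcl modulefile and load it on demand, so PATH is adjusted for you. A minimal, hypothetical modulefile (tool name and path are made up):

    #%Module1.0
    ## Hypothetical modulefile "mytool/1.0": put the tool's bin directory on PATH
    prepend-path PATH C:/tools/mytool/bin

Loading and unloading it then manipulates PATH without you editing it directly:

    module load mytool/1.0
    module unload mytool/1.0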

Is TensorFlow/Docker Useful for Development, or Just Demos/Tests?

I have been developing in TensorFlow/Python on OSX. Trying to graduate to the big leagues, I bought a big new GPU PC and installed Linux, CUDA, Docker, and TensorFlow (pulling out lots of hair in the process). I thought that Docker-TensorFlow would provide a Linux VM-like environment with TensorFlow, in which I could run my IDE and work from the command line as before, but it just seems to serve Jupyter notebooks. I've found some posts describing what seem like heroic measures to develop in Docker. I suspect that Docker-TensorFlow is just meant for running demos, serving Jupyter notebooks, etc., and that for development I should revert to a conventional TensorFlow installation. Can someone please confirm (or deny) this? Thanks!
Yes, I share your opinion. Instead, I use Anaconda (previously pyenv and virtualenv) to maintain environments and (local) dependencies. In fact, TensorFlow itself has just a few dependencies.
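For instance, each TensorFlow version can live in its own conda environment; the environment names and version numbers below are just examples:

    # One environment per TensorFlow version; switch by (de)activating
    conda create -n tf14 python=3.5 tensorflow-gpu=1.4
    conda create -n tf15 python=3.5 tensorflow-gpu=1.5
    conda activate tf15

A nice side effect is that conda's tensorflow-gpu packages typically pull in matching cudatoolkit and cudnn builds as dependencies, so the CUDA versions stay isolated per environment as well.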
