I'm trying to export gym-gazebo's GazeboCircuit2TurtlebotLidar as a custom env for my algorithm, but I could not import gym-gazebo as a typical gym env - ros

I am trying to connect an agent (GazeboCircuit2TurtlebotLidar-v0) from the gym-gazebo library (gym-gazebo is an extension of the original OpenAI gym for robotics that uses ROS and Gazebo, an advanced 3D modeling and rendering tool) to a deep reinforcement learning algorithm (DQN).
Following the documentation, I created a catkin workspace, cloned the library locally, and ran the Turtlebot env using ROS (Noetic) and Gazebo (version 11.10.2).
BLOCKER - I'm trying to export GazeboCircuit2TurtlebotLidar as a custom env to my algorithm, but I could not import gym-gazebo as a typical gym env.
SAMPLE CODE:
# register as a custom env
import gym
from gym.envs.registration import register
...
register(
    id='GazeboCircuit2TurtlebotLidar-v0',
    entry_point='gym_gazebo.envs.turtlebot:GazeboCircuit2TurtlebotLidarEnv',
    # More arguments here
)
...
# import into the DQN
from gym_gazebo.envs import gazebo_env
env = gym.make('GazeboCircuit2TurtlebotLidar-v0')
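For what it's worth, the usual pattern for an env that a package registers itself is sketched below. It assumes the cloned gym-gazebo is installed into the active Python environment (e.g. pip install -e . from the repo root) and that importing the package runs its register() calls as a side effect, which is how gym-gazebo's __init__.py appears to be laid out:

# minimal sketch, assuming gym-gazebo is pip-installed in this environment
import gym
import gym_gazebo  # assumption: importing the package registers its envs with gym

env = gym.make('GazeboCircuit2TurtlebotLidar-v0')
observation = env.reset()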
Does anyone have any idea about this robotic simulation issue? I am a beginner in this domain.

Related

pydrake is not available for installation anymore through google colab?

I have been using a Google Colab template for iterative LQR that uses Pydrake; however, it seems the code repository was removed and I can't reinstall it on Colab:
try:
    import pydrake
    import underactuated
except ImportError:
    !curl -s https://raw.githubusercontent.com/RussTedrake/underactuated/master/scripts/setup/jupyter_setup.py > jupyter_setup.py
    from jupyter_setup import setup_underactuated
    setup_underactuated()

# Setup matplotlib backend (to notebook, if possible, or inline).
from underactuated.jupyter import setup_matplotlib_backend
plt_is_interactive = setup_matplotlib_backend()
File "/content/jupyter_setup.py", line 1
404: Not Found
^
SyntaxError: invalid syntax
I tried clicking this link https://raw.githubusercontent.com/RussTedrake/underactuated/master/scripts/setup/jupyter_setup.py, and the page is not found... everything was working fine yesterday
Sorry. You're correct... I updated it this morning, don't have a good deprecation policy in place on that repo, and this setup script is two versions old. The path you want is https://raw.githubusercontent.com/RussTedrake/underactuated/master/setup/jupyter_setup.py (the scripts/ directory was dropped from the path). But if you look at that file, you'll see that even it points to an updated setup script, which you might want to point to instead.
This is actually all good news... we are on the path to a much better solution. You can now just pip install drake on Colab (see the Drake installation guide). Once I land pip install underactuated (probably in time for my Spring offering of the class), all of that nasty setup will be gone.
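For reference, a minimal sketch of the newer Colab setup described above, assuming the drake wheel installs cleanly in your runtime (check the Drake installation guide for currently supported Python versions):

# In a Colab cell: install Drake from PyPI, then import the bindings directly
!pip install drake

# No setup script needed anymore; pydrake is importable right away
from pydrake.all import MathematicalProgram
prog = MathematicalProgram()  # quick smoke test that the install worked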

How to run a Lua ML model from Google Colab

I am trying to run this neural style transfer model from GitHub, but on Google Colab because my computer doesn't have enough memory/CPU/anything.
I have mounted my Google Drive to my notebook, cloned the repo onto my drive by following this tutorial, downloaded the models to my drive folder, and, just to test that it works, I'm running the most basic Brad Pitt example from the readme using:
th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
But for some reason it won't work.
Using subprocess.run() just returned No such file or directory.
Using !th neural_style.lua... returns th: command not found.
I've tried like four other things and they all give me a variant of the above two error messages. Any thoughts?
Here is the full notebook code to reproduce my setup on Colab from start to end/error:
# Mount the drive
from google.colab import drive
drive.mount('/content/drive')
# Clone the repo onto the drive
!git clone https://github.com/jcjohnson/neural-style
# Install Pytorch
!pip3 install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
# Download the models per the github repo instructions
!bash models/download_models.sh
!th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
First:
%cd /content/
# Build Torch from nagadomi's maintained fork of the distro
!git clone https://github.com/nagadomi/distro.git torch --recursive
import os
os.chdir('./torch/')
!bash install-deps   # install system dependencies
!./install.sh        # build Torch itself
!. ./install/bin/torch-activate
Now it should work using the absolute path to th. (Each ! line runs in its own subshell, so the torch-activate above doesn't persist into later commands; also make sure the working directory is the cloned neural-style repo, e.g. %cd /content/neural-style first.)
!/content/torch/install/bin/th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg

Running AWS SAM build from within Python script

I'm in the process of migrating entire CloudFormation stacks to Troposphere, including Lambda and Lambda-reliant CFN custom resources.
One of my goals is to circumvent the creation of template files altogether, making the Python code the sole "source of truth" (i.e. without template files that are created and can therefore be edited, causing config drift).
This requires the ability to:
Pass a file-like object to the SAM builder (instead of a file name)
Call the AWS SAM builder from Python rather than the CLI
My first naive idea was that I would be able to import a few modules from aws-sam-cli, put a wrapper of io.StringIO around it (to hold the template as a file-like object) and presto! Then I looked at the source code for sam build and all hope left me:
I may not be able to use Docker/containers for building, as it will map the build environment, including template files.
AWS SAM CLI is not designed to have a purely callable set of library functions, similar to boto3. Close, but not quite.
Here is the core of the Python source:
with BuildContext(template,
                  base_dir,
                  build_dir,
                  clean=clean,
                  manifest_path=manifest_path,
                  use_container=use_container,
                  parameter_overrides=parameter_overrides,
                  docker_network=docker_network,
                  skip_pull_image=skip_pull_image,
                  mode=mode) as ctx:
    builder = ApplicationBuilder(ctx.function_provider,
                                 ctx.build_dir,
                                 ctx.base_dir,
                                 manifest_path_override=ctx.manifest_path_override,
                                 container_manager=ctx.container_manager,
                                 mode=ctx.mode
                                 )
    try:
        artifacts = builder.build()
        modified_template = builder.update_template(ctx.template_dict,
                                                    ctx.original_template_path,
                                                    artifacts)
        move_template(ctx.original_template_path,
                      ctx.output_template_path,
                      modified_template)
        click.secho("\nBuild Succeeded", fg="green")
        msg = gen_success_msg(os.path.relpath(ctx.build_dir),
                              os.path.relpath(ctx.output_template_path),
                              os.path.abspath(ctx.build_dir) == os.path.abspath(DEFAULT_BUILD_DIR))
        click.secho(msg, fg="yellow")
This relies on a number of imports from the aws-sam-cli internal library, with the build-focused ones being:
from samcli.commands.build.build_context import BuildContext
from samcli.lib.build.app_builder import ApplicationBuilder, BuildError, UnsupportedBuilderLibraryVersionError, ContainerBuildNotSupported
from samcli.lib.build.workflow_config import UnsupportedRuntimeException
It's clear that this means it's not as simple as creating something like a boto3 client and away I go! It looks more like I'd have to fork the whole thing and throw out nearly everything to be left with the build command, context and environment.
Interestingly enough, sam package and sam deploy, according to the docs, are merely aliases for aws cloudformation package and aws cloudformation deploy, meaning those can be used in boto3!
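If you go that route, here is a rough sketch of what aws cloudformation deploy does under the hood, using plain boto3 (stack and change-set names are placeholders):

import boto3
from troposphere import Template

# placeholder template; in practice this is your full troposphere stack
t = Template()
template_json = t.to_json()

cfn = boto3.client("cloudformation")

# `deploy` is essentially: create a change set, wait for it, execute it
cfn.create_change_set(
    StackName="my-stack",            # placeholder
    TemplateBody=template_json,
    ChangeSetName="my-changes",      # placeholder
    ChangeSetType="UPDATE",          # or "CREATE" for a new stack
    Capabilities=["CAPABILITY_IAM"],
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="my-stack", ChangeSetName="my-changes"
)
cfn.execute_change_set(StackName="my-stack", ChangeSetName="my-changes")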
Has somebody possibly already solved this issue? I've googled and searched here, but haven't found anything.
I use PyCharm and the AWS Toolkit, which is great for development and debugging, and from there I can run SAM builds, but it's "hidden" in the PyCharm plugins - which are written in Kotlin!
My current work-around is to create the CFN templates as temp files and pass them to the CLI commands which are called from Python - an approach I've always disliked.
I may put in a feature request with the aws-sam-cli team and see what they say, unless one of them reads this.
I've managed to launch sam local start-api from a python3 script.
Firstly, pip3 install aws-sam-cli
Then the individual command can be imported and run.
import sys
from samcli.commands.local.start_api.cli import cli
sys.exit(cli())
... provided there's a template.yaml in the current directory.
What I haven't (yet) managed to do is influence the command-line arguments that cli() would receive, so that I could tell it which -t template to use.
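That said, click commands accept an explicit argument list in place of sys.argv, so something like the following sketch may work (untested here; --template/-t is the option sam local start-api exposes on the command line):

import sys
from samcli.commands.local.start_api.cli import cli

# click's BaseCommand.main() takes an `args` list; passing it through the
# call should bypass sys.argv entirely (assumption based on click's API)
sys.exit(cli(args=["--template", "other-template.yaml"]))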
Edit
Looking at the way aws-sam-cli integration tests work it seems that they actually kick off a process to run the CLI. So they don't actually pass a parameter to the cli() call at all :-(
For example:
class TestSamPython36HelloWorldIntegration(InvokeIntegBase):
    template = Path("template.yml")

    def test_invoke_returncode_is_zero(self):
        command_list = self.get_command_list(
            "HelloWorldServerlessFunction", template_path=self.template_path, event_path=self.event_path
        )
        process = Popen(command_list, stdout=PIPE)
        return_code = process.wait()
        self.assertEquals(return_code, 0)

.... etc
from https://github.com/awslabs/aws-sam-cli/blob/a83aa9e620ff679ca740496a3f1ff4872b88894a/tests/integration/local/invoke/test_integrations_cli.py
See also start_api_integ_base.py in the same repo.
I think on the whole this is to be expected because the whole thing is implemented in terms of the click command-line application framework. Unfortunately.
See for example http://click.palletsprojects.com/en/7.x/testing/ which says "The CliRunner.invoke() method runs the command line script in isolation ..." -- my emphasis.
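Click's testing harness does let you pass arguments in-process, though, so for driving the command from Python (for testing rather than production serving) a sketch like this should work:

from click.testing import CliRunner
from samcli.commands.local.start_api.cli import cli

runner = CliRunner()
# CliRunner.invoke() feeds the argument list straight to the command,
# with no subprocess and no sys.argv involved
result = runner.invoke(cli, ["--template", "other-template.yaml"])
print(result.output)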
I am using the following Python script to run sam cli commands. This should work for you too.
import os

try:
    LAMBDA_S3_BUCKET = "s3-bucket-name-in-same-region"
    AWS_REGION = "us-east-1"
    API_NAME = "YourAPIName"
    BASE_PATH = "/path/to/your/project/code/dir"
    STACK_NAME = "YourCloudFormationStackName"
    BUILD_DIR = "%s/%s" % (BASE_PATH, "build_artifact")

    if not os.path.exists(BUILD_DIR):
        os.mkdir(BUILD_DIR)

    os.system("cd %s && sam build --template template.yaml --build-dir %s" % (BASE_PATH, BUILD_DIR))
    os.system("cd %s && sam package --template-file %s/template.yaml --output-template-file packaged.yaml --s3-bucket %s" % (BASE_PATH, BUILD_DIR, LAMBDA_S3_BUCKET))
    os.system("cd %s && sam deploy --template-file packaged.yaml --stack-name %s --capabilities CAPABILITY_IAM --region %s" % (BASE_PATH, STACK_NAME, AWS_REGION))
except Exception as e:
    print(e)  # Exception objects have no .message attribute in Python 3
    exit(1)
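If you want failures to stop the pipeline rather than be silently ignored (os.system discards exit codes unless you check them), the same calls translate to subprocess; a sketch for the build step, reusing the placeholders above:

import subprocess

# check=True raises CalledProcessError on a non-zero exit status;
# BASE_PATH and BUILD_DIR are the same placeholder values as above
subprocess.run(
    ["sam", "build", "--template", "template.yaml", "--build-dir", BUILD_DIR],
    cwd=BASE_PATH,
    check=True,
)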

travis-ci: path to the shared library, or how to link a shared library to Python

CI professionals,
I cannot figure out why this code cannot find the shared library. Please see the log:
https://pastebin.com/KvJP9Ms3
ImportError while importing test module '/home/travis/build/alexlib/pyptv/tests/test_pyptv_batch.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_pyptv_batch.py:1: in <module>
    from pyptv import pyptv_batch
pyptv/pyptv_batch.py:20: in <module>
    from optv.calibration import Calibration
E   ImportError: liboptv.so: cannot open shared object file: No such file or directory
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.43 seconds ============================
the pull request
https://github.com/alexlib/pyptv/pull/4
and the build https://travis-ci.org/alexlib/pyptv/builds/342237102
We have a C library (http://github.com/openptv/openptv) that we need to compile and, using Cython bindings, expose to Python; then we use it from Python through those bindings. The tests work locally but not on Travis-CI (great service). I think it's a simple issue with paths, but I couldn't figure out how to deal with it.
Thanks in advance
Alex
The answer I found was to also set DYLD_LIBRARY_PATH, which is required on Mac OS X:
export PATH=$PATH:/usr/local/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/lib

How to combine the OpenCV module (virtualenvwrapper) and the TensorFlow module (another virtualenv)?

Following several instructions on the web, I was able to install OpenCV 3.0 and TensorFlow on my Ubuntu 16.04. Each tutorial recommends using a virtual environment. While I agree with that, the problem is that I just followed the tutorials and created separate environments.
(For minor information: the TensorFlow installation was easy, but OpenCV 3.0 was hard.)
I used virtualenv for TensorFlow with the name tf, and virtualenvwrapper for OpenCV with the name cv, i.e., I activate tf by $ source ~/project/tf/bin/activate, and cv by $ workon cv.
In this case, what is the best way of using both?
Should I always activate both?
Should I enter one environment and install the other again?
Should I symlink site-package/cv.so into the tf environment (as sketched below)?
I think cv is now in the Python site-packages folder. I created tf with the --site-package option, but that was before installing cv. I am so confused. Please help.
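For illustration, the symlink option would look something like the following. All paths here are hypothetical (adjust the Python version and environment locations to your machine), and it assumes the OpenCV binding was built as cv2.so, its usual name in OpenCV 3:

import os

# Hypothetical paths - adjust Python version and env locations to your setup
src = os.path.expanduser("~/.virtualenvs/cv/lib/python3.5/site-packages/cv2.so")
dst = os.path.expanduser("~/project/tf/lib/python3.5/site-packages/cv2.so")
os.symlink(src, dst)  # after this, `import cv2` inside tf uses the cv build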
Yes, I had the same issue. The symlink between TensorFlow and OpenCV did not work after making the cv.so inside the tf virtualenv. After some struggle I got them both to work in the same environment, but I suggest uninstalling OpenCV and then installing it back without a virtual environment, for better results.
Cheers.
