I have to run a ROS simulation on an HPC. On my local PC I usually open multiple terminals to run the different ROS commands that work together, like roscore, etc. Now I am not sure how to do this on an HPC, since I need to submit a job file for it.
For running multiple nodes, ROS has roslaunch. You can bundle all the different nodes you want to run, together with their parameters, and it will take care of starting them.
Read more about it here: http://wiki.ros.org/roslaunch
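For illustration, a minimal launch file might look like this (the package and node names below are made-up placeholders, not from any real package):

```xml
<!-- sim.launch: bundles the nodes you would normally start in separate terminals.
     The pkg/type/name values here are hypothetical examples. -->
<launch>
  <node pkg="my_robot_pkg" type="controller_node" name="controller" output="screen">
    <param name="rate" value="10" />
  </node>
  <node pkg="my_robot_pkg" type="planner_node" name="planner" output="screen" />
</launch>
```

roslaunch starts a roscore automatically if none is running, so a single roslaunch command inside your HPC job script can replace the multiple terminals.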
I am very new to Docker and learning about it. I have a question that might be very basic, but I could not find the exact answer yet. We know that using Docker we can containerize our apps, so that one app's dependencies will not affect other apps. Suppose I have two apps on the host machine, each in its own container. Say, for example, one app uses python2 and the other uses python3 (each installed in its own container). And just for the sake of argument, suppose that python3 has some features which are not present in python2, and that I am working on both apps together.

Now my question is: when I work on a particular app, how can I switch between the apps? I mean, for example, inside a database management system we have different databases, and when we want to work on a particular database we write the command use <databaseName> and can then work on that database. If both of my containers are running, when writing code, how can I specify (or how does Docker or my code editor know) that I want to work on the app which uses python2 now, and then switch to the other app that uses python3? Suppose the host machine cannot have both python2 and python3 installed outside of the containers. Thanks in advance.
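One common way to work inside a specific running container is docker exec. A sketch, assuming the two containers are named app-py2 and app-py3 (hypothetical names, as is the script path):

```shell
# Run a command inside a specific running container, selected by name.
# Container names "app-py2"/"app-py3" and the script path are hypothetical.
docker exec -it app-py2 python2 /src/myscript.py   # uses the python2 environment
docker exec -it app-py3 python3 /src/myscript.py   # uses the python3 environment
```

The container name plays roughly the role of "use <databaseName>": each exec targets one container's isolated environment.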
I'm trying to build an Oozie workflow to execute everyday a python script which needs specific libraries to run.
At the moment I created a python virtual environment (using venv) on a node of my cluster (consisting of 11 nodes).
Through Oozie I saw that it is possible to run the script using an SSH Action specifying the node containing the virtual environment. Alternatively it is possible to use a Shell Action to run the python script but this requires creating the virtual environment, with the same dependencies in terms of libraries, on the node where the shell will be executed (any of the cluster nodes).
I would like to avoid sharing keys or configuring all the cluster nodes to make this possible. Looking in the docs I found this section about launching applications using Docker containers, but in the Hadoop version my cluster runs (Hadoop 3.0.0) this feature is experimental and not complete. I suppose that if you can launch Docker containers from a shell, you should also be able to launch them from Oozie.
So my question is: has anyone tried to do this? Is it a hack to use Docker this way?
I came across this question, but as of 2019/09/30 there are no specific answers.
UPDATE: I tried to do it, and it works (you can find more info in my answer to this question). I'm still wondering if it's a correct way to do it.
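For reference, the kind of wrapper script a Shell Action could execute is sketched below; the image name and script path are assumptions, not from the original setup:

```shell
#!/bin/sh
# wrapper.sh - executed by the Oozie Shell Action on whichever node it lands on.
# Pulls a hypothetical image that already contains the Python libraries,
# then runs the script inside it, so no per-node virtualenv is needed.
docker run --rm my-registry/py-env:latest python /scripts/daily_job.py
```

This sidesteps per-node virtual environments: the only per-node requirement becomes a working Docker installation.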
A friend of mine and I are trying to develop a CorDapp for a financial use case. I can run the cordapp-tutorial and the demos, but they only run on localhost.
We would like to create two "real" nodes, and if I understood correctly we should build two Corda nodes, with my PC as one node server and his PC as another. But how can we effectively connect over the internet? On Slack I have been told to enable dev mode, but how do you enable it?
We have a corda.jar and the nodea.conf, but the part I don't really understand from the documentation is:
"Each node server by default must have a node.conf file in the current working directory. After first execution of the node server there will be many other configuration and persistence files created in this workspace directory. The directory can be overridden by the --base-directory= command line argument."
What is meant by the working directory?
I've read this documentation: Corda Nodes
Thanks to all, I think I will be asking a lot of questions in the near future :D
In Corda 3.1, you can use the network bootstrapper to create a dev-mode network of nodes running on two separate machines as follows:
Create the nodes by following the instructions here (e.g. by using gradlew deployNodes)
Navigate to the folder where the nodes were created (e.g. build/nodes)
Open the node.conf file of each node and change the localhost part of its p2pAddress to the IP address of the machine where the node will be run (e.g. p2pAddress="10.18.0.166:10007")
After making these changes, we need to redistribute the updated nodeInfo files to each node, so that they have the updated IP addresses for each node. Use the network bootstrapper tool to automatically update the files and have them distributed to each node:
java -jar network-bootstrapper.jar kotlin-source/build/nodes
Move the node folders to their individual machines (e.g. using a USB key). It is important that none of the nodes - including the notary - end up on more than one machine. Each computer should also have a copy of runnodes and runnodes.bat.
For example, you may end up with the following layout:
Machine 1: Notary, PartyA, runnodes, runnodes.bat
Machine 2: PartyB, PartyC, runnodes, runnodes.bat
After starting each node, the nodes will be able to see one another and agree ledger updates among themselves.
Warning
The bootstrapper must be run after the node.conf files have been modified, but before the nodes are distributed across machines. Otherwise, the nodes will not have the updated IP addresses for each node and will not be able to communicate.
Each of the nodes will have a node.conf file. To enable dev mode, add this line to the node.conf file:
devMode=true
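Putting both edits together, each node's node.conf would then contain something like the following (the address is just the example from above; the rest of the generated file stays as-is):

```
devMode=true
p2pAddress="10.18.0.166:10007"
```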
I want to introduce another camera into my system for visual tracking of mobile robots using reacTIVision on Ubuntu 16.04.
I went into the camera.xml file of reacTIVision, but I could not add another camera and run them at the same time. Do you maybe have a solution for this problem?
Could I install another reacTIVision (maybe different version) on the same laptop and then run both reacTIVisions at the same time?
If anyone has some useful advice or suggestion to try out, it would be really helpful.
For your application scenario you can just start two separate reacTIVision instances. The easiest way would be starting reacTIVision from two different directories, with a separate camera.xml for each of your cameras. Alternatively, you can create two dedicated reacTIVision.xml files, each including a separate camera.xml configuration for one of the two cameras, and then start reacTIVision with the -c option pointing to that config file.
usage: reacTIVision -c [config_file]
the default configuration file is reacTIVision.xml
-n starts reacTIVision without GUI
-l lists all available cameras
-h shows this help message
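Concretely, the second approach could look like this (the directory layout is an example; each config file would reference its own camera.xml):

```shell
# One dedicated config per camera; paths are illustrative.
./reacTIVision -c camera1/reacTIVision.xml &
./reacTIVision -c camera2/reacTIVision.xml
```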
I just came across Docker and was looking through its docs to figure out how to use it to distribute a Java project across multiple nodes, while making the distribution platform independent, i.e. the nodes can be running any platform. Currently I'm sending classes to different nodes and running them there, with the assumption that these nodes have the same environment as the client. I couldn't quite figure out how to do this; any suggestions would be greatly appreciated.
I do something similar. In my humble opinion, whether or not you use Docker is not your biggest problem. However, using Docker images for this purpose can and will save you a lot of headaches.
We have a build pipeline where a very large Java project is built using Maven. The outcome of this is a single large JAR file that contains the software we need to run on our nodes.
But some of our nodes also need to run some 3rd party software such as Zookeeper and Cassandra. So after the Maven build we use packer.io to create a Docker image that contains all the needed components, which ends up on a web server that can be reached only from within our private cloud infrastructure.
If we want to roll out our system, we use a combination of Python scripts that talk to the OpenStack API and create virtual machines on our cloud, and Puppet, which performs the actual software provisioning inside the VMs. Our VMs are CentOS 7 images, so what Puppet actually does is add the Docker yum repos, install Docker through yum, pull the Docker image from our repository server, and finally use a custom bash script to launch our Docker image.
For each of these steps there are certainly even more elegant ways of doing it.
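As an illustration of the image-building step, a minimal Dockerfile for such a fat JAR could look like this (the base image tag and JAR path are placeholders, not the actual pipeline's values):

```dockerfile
# Base image with a JRE; the tag is an example, pick whatever your project needs.
FROM openjdk:8-jre
# Copy the fat JAR produced by the Maven build (path is a placeholder).
COPY target/app-all.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

Since the JRE ships inside the image, the same image runs on any node that has Docker, regardless of the host platform, which is what makes the distribution platform independent.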