Can I have multiple cameras running on one installed reacTIVision?

I want to introduce another camera into my system for visual tracking of mobile robots using reacTIVision on Ubuntu 16.04.
I went into the camera.xml file of reacTIVision, but I could not find a way to add another camera and run both at the same time. Do you maybe have a solution for this problem?
Could I install another reacTIVision (maybe different version) on the same laptop and then run both reacTIVisions at the same time?
If anyone has useful advice or a suggestion to try out, it would be really helpful.

For your application scenario you can just start two separate reacTIVision instances. The easiest way would be to start reacTIVision from two different directories, with a separate camera.xml for each of your cameras. Alternatively, you can create two dedicated reacTIVision.xml files, each including a separate camera.xml configuration for one of the two cameras, and then start reacTIVision with the -c option pointing to that config file.
usage: reacTIVision -c [config_file]
the default configuration file is reacTIVision.xml
-n starts reacTIVision without GUI
-l lists all available cameras
-h shows this help message
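For example, here is a minimal sketch of the two-directory approach, assuming your two cameras show up as separate devices (check with reacTIVision -l; the directory names below are just placeholders):
mkdir cam0 cam1
cp camera.xml reacTIVision.xml cam0/
cp camera.xml reacTIVision.xml cam1/
# edit cam0/camera.xml to select the first camera and cam1/camera.xml to select the second
(cd cam0 && reacTIVision) &
(cd cam1 && reacTIVision) &
You will probably also want each instance to send its TUIO messages to a different port (the TUIO target is set in the respective reacTIVision.xml), so your tracking application can tell the two camera views apart.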

Related

How to use two different versions of a program in Docker

I am very new to Docker and learning about it. I have a question that might be very basic, but I could not find an exact answer yet. We know that using Docker we can containerize our apps, so one app's dependencies will not have any effect on other apps. Suppose I have two apps on the host machine and both of them are in their own containers. Say, for example, one app uses python2 and the other uses python3 (each installed in its own container). And just for the sake of argument, suppose that python3 has some features which are not present in python2, and I am working on both apps together.
Now my question is: when I work on a particular app, how can I switch between the apps? I mean, for example, inside a database management system we have different databases, and when we want to work on a particular database we write use <databaseName> and then we can work on that database. If both of my containers are running, how can I specify, or how does Docker or my code editor know, that I want to work on the app which uses python2 now and then switch to the other app that uses python3? Suppose the host machine cannot have both python2 and python3 installed outside of the containers. Thanks in advance.
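One way to picture the switching, as a minimal sketch (the container names app1 and app2 are assumptions, and each container is presumed to have its interpreter on the PATH): each container keeps its own interpreter, and you simply address the container you want to work in.
docker exec -it app1 python2 script.py   # runs the script with python2 inside the first container
docker exec -it app2 python3 script.py   # runs the script with python3 inside the second container
Some editors can likewise attach to a running container and use the interpreter inside it, so nothing Python-related has to exist on the host.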

Install entire database (including binaries) inside A VOLUME in Docker

I need to containerize a JanusGraph database inside Docker, but I don't know which files/directories need to reside in a volume to become persistent/writable. In order to keep things simple and fast, can I install the entire database in a volume? Not only the data, but the entire app, all the binaries, etc. I think this would be a fast way to containerize some of my apps.
The JanusGraph subdirectories for binaries, data, and logs reside inside a "janusgraph-hadoop" directory.
For example: I would create a volume called /janusgraph-hadoop and run the commands to install all the software inside it (so everything lives in the volume).
Is this considered bad practice, or is there no problem in doing it?
I know there are already some containerized JanusGraph images, but they are not official, and my question is more general: I want to containerize some apps in a more direct way, without having to research which directories need to be in a volume and which do not.
I will not redistribute any of this; it's just for my own use.
At a technical level, nothing would stop you from launching a plain container with an attached volume and installing software there.
docker run -v my_opt:/opt -it --rm ubuntu sh
I wouldn't consider this an especially effective use of Docker. If your colleague wants to use your database installation, you have no way of giving it to them; if you leave the project for six months and come back to it, you'll have no record of how you built this setup. If you were set on this approach, you might find that a more typical virtual machine, with its networking and snapshot tooling, is a better match for it.
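For contrast, a minimal sketch of the more conventional split, where the binaries are baked into the image and only the mutable state lives in volumes (the paths, base image, and start command are assumptions about a typical janusgraph-hadoop layout; adjust to yours):
# Dockerfile (sketch)
FROM openjdk:8-jre
COPY janusgraph-hadoop/ /opt/janusgraph-hadoop/
WORKDIR /opt/janusgraph-hadoop
VOLUME ["/opt/janusgraph-hadoop/data", "/opt/janusgraph-hadoop/log"]
# assumed foreground start command; replace with whatever starts your server in the foreground
CMD ["bin/gremlin-server.sh"]
Built this way, the image itself documents how the installation was produced, and the volumes carry only the data you actually need to keep.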

Running several apps via docker-compose

We are trying to run two apps via docker-compose. These apps are (obviously) in separate folders, each of them having its own docker-compose.yml. On the filesystem it looks like this:
dir/app1/
-...
-docker-compose.yml
dir/app2/
-...
-docker-compose.yml
Now we need a way to compose these together, because they have some nitty-gritty integration over HTTP.
The issue with the default docker-compose behaviour is that it treats all relative paths with respect to the folder it is run from. So if you go to dir from the example above and run
docker-compose -f app1/docker-compose.yml -f app2/docker-compose.yml up
you'll be out of luck if either docker-compose.yml uses relative paths to env files or anything else.
Here's a list of approaches that actually work, but each has its drawbacks:
1. Run those apps separately and use networks.
This is described in full at Communication between multiple docker-compose projects.
I've tested it just now, and it works. Drawbacks:
you have to mention the network in docker-compose.yml and push that to the repository some day, rendering the entire app un-runnable without the app that publishes the network;
you have to come up with some clever way for those apps to actually wait for each other.
2. Use absolute paths. This is just bad and does not need any elaboration.
3. Expose the ports you need on the host machine and make the apps talk to the host without knowing anything about each other. That too is, obviously, unsatisfying.
So, the question is: how can one manage this with just docker-compose?
Thanks to everyone for your feedback. Within our team we have agreed to the following solution:
Use networks & override
Long story short, your original docker-compose.yml files should not change a bit. All you have to do is create a docker-compose.override.yml next to each one, which publishes the network and hooks your services into it.
So, whoever wants to have a standalone app runs
docker-compose -f docker-compose.yml up
But when you need to run the apps side by side and have them communicate with each other, you should go with
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
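For illustration, the two override files could look roughly like this (the service names app1/app2 and the network name apps_shared are assumptions; compose file format 3.5+ is assumed for the name field):
# dir/app1/docker-compose.override.yml, publishes the shared network
version: "3.5"
services:
  app1:
    networks: [shared]
networks:
  shared:
    name: apps_shared
# dir/app2/docker-compose.override.yml, joins the already-created network
version: "3.5"
services:
  app2:
    networks: [shared]
networks:
  shared:
    external: true
    name: apps_shared
With this layout the app that publishes the network (app1 here) has to come up first, which is the same ordering caveat mentioned above.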

Moving from Docker Containers to Cloud Foundry containers

Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry, and therefore I am trying to understand the difference between the two.
I'll describe the application as a novice because I am.
I start the application as a service (from /etc/init.d), and it reads a config file during startup which specifies which modules to load and the IPs of the other services as well as its own (0.0.0.0 does not work, so I have to give the actual IP).
I had to manually update the IP and some other details in the config file whenever the container started, so I wrote a startup script that makes all the changes when the container starts and then runs the service start command.
Now, moving on to Cloud Foundry, the first thing I was not able to find was 'how to deploy a C application'; then I found a C buildpack and a binary buildpack option. I still have to try those, but what I am not able to understand is how I can provide a startup script to a Cloud Foundry container, or in brief, how to achieve what I was doing with Docker.
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
I hope I was clear enough to explain my doubt.
Help appreciated.
An old question, but a lot has changed since this was posted:
Recently I started to practice Docker. Basically, I am running a C application in a Docker container. Now I want to try Cloud Foundry, and therefore I am trying to understand the difference between the two.
...
The last option I have is to use Docker containers in Cloud Foundry, but I want to understand whether I can achieve what I described above.
There's nothing wrong with using Docker containers on CF. If you've already got everything set up to run inside a Docker container, being able to run that on CF gives you yet another place you can easily deploy your workload.
While they are pretty minor, there are a couple of requirements for your Docker container, so it's worth checking those to make sure it's possible to run it on CF.
https://docs.cloudfoundry.org/devguide/deploy-apps/push-docker.html#requirements
Anyways, I am not working on this now, as CF is not suitable for the project. It's a SIP application and CF only accepts HTTP/S requests.
OK, the elephant in the room: this is no longer true. CF has support for TCP routes, which allow you to receive TCP traffic directly to your application. This means it's no longer just HTTP/S apps that are suitable for running on CF.
Instructions to set up your CF environment with TCP routing: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html
Instructions to use TCP routes as a developer: https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route-with-port
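As a rough sketch of the developer side once a TCP domain is available (the domain tcp.example.com and the port number are assumptions; flags as in the cf v6 CLI):
cf map-route my-sip-app tcp.example.com --random-port
cf map-route my-sip-app tcp.example.com --port 61045   # if your operator allows picking a specific port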
Now, moving on to Cloud Foundry, the first thing I was not able to find was 'how to deploy a C application'; then I found a C buildpack and a binary buildpack option.
Picking a buildpack is an important step. The buildpack takes your app and prepares it to run on CF. A C buildpack might sound nice, as it would take your source code, build it, and run it, but it's going to get tricky because your C app likely depends on libraries, which may or may not be installed in the container.
If you're going to go this route, you'll probably need to use CF's multi-buildpack support. This lets you run multiple buildpacks. If you pair this with the Apt buildpack, you can install the packages that you need so that any required libraries are available for your app as it's compiled.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
https://github.com/cloudfoundry/apt-buildpack
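A sketch of what that combination can look like in a manifest (the app name, package name, and start command are assumptions; the final buildpack could be the C buildpack you found instead of binary_buildpack):
# manifest.yml
applications:
- name: my-c-app
  buildpacks:
    - https://github.com/cloudfoundry/apt-buildpack
    - binary_buildpack
  command: ./my-app
# apt.yml, placed at the root of the pushed files and read by the apt buildpack
---
packages:
  - libpcre3   # whatever shared libraries your app needs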
Using the binary buildpack is another option. In this case, you'd build your app locally, perhaps in a Docker container or on an Ubuntu VM (it needs to match the stack being used by your CF provider, i.e. cf stacks, currently Ubuntu Trusty or Ubuntu Bionic). Once you have a binary, or a binary plus a set of libraries, you can simply cf push the compiled artifacts. The binary buildpack will "run" (it actually does nothing) and then your app will be started with the command you specify.
My $0.02 only, but the binary buildpack is probably the easier of the two options.
what I am not able to understand is how I can provide a startup script to a Cloud Foundry container, or in brief, how to achieve what I was doing with Docker.
There are a few ways you can do this. The first is to specify a custom start command. You do this with cf push -c 'command'. This would normally be used just to start your app, like './my-app', but you could also use it to do other things.
Ex: cf push -c './prep-my-app.sh && ./my-app'
Or even just call your start script:
Ex: cf push -c './start-my-app.sh'.
CF also has support for a .profile script. This can be pushed with your app (at the root of the files you push), and it will be executed by the platform prior to your application starting up.
https://docs.cloudfoundry.org/devguide/deploy-apps/deploy-app.html#profile
Normally, you'd want to use a .profile script, since you'd want to let the buildpack decide how to start your app (setting -c will override the buildpack), but in your case with the C or binary buildpacks it's unlikely the buildpack will be able to do that, so you'll end up having to set a custom start command anyway.
For this specific case, I'd suggest using cf push -c as it's slightly easier, but for all other cases and apps deployed with other buildpacks, I'd suggest a .profile script.
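As an illustration, a .profile along the lines of the question's startup script could look like this (the config path and placeholder tokens are assumptions; CF_INSTANCE_INTERNAL_IP and PORT are provided by the platform):
# .profile, pushed at the root of the app, runs before the start command
sed -i "s/__LISTEN_IP__/${CF_INSTANCE_INTERNAL_IP}/g" config/app.conf
sed -i "s/__LISTEN_PORT__/${PORT}/g" config/app.conf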
Hope that helps!

How to bootstrap a docker container?

I created a Docker image with pre-installed packages in it (Apache, MySQL, memcached, Solr, etc.). Now I want to run a command in a container made from this image, and this command relies on all of those packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in Docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post on that: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
Note that another rather common opinion is to split your containers. MySQL generally goes in a separate container, but you can try to get that working later on as well, of course :)
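For reference, the shape of that supervisor setup is roughly the following (the program names and paths are assumptions; they depend on what your image actually installs):
# /etc/supervisord.conf (sketch)
[supervisord]
nodaemon=true
[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND
[program:memcached]
command=/usr/bin/memcached -u memcache
# in the Dockerfile, make supervisord the container's main process
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]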
I see that this is an old topic; however, for someone who just came across it: docker-compose can be used to connect multiple containers, so most of the processes can be split into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously, and the one I would like to mention is Chaperone. I find it really easy to use and slightly better than Supervisor!
docker-compose and docker-sync: you can't go wrong applying this concept.
-Glynn
