Does RinSim natively support invisible, non-colliding agents on a PDP RoadModel?

We are using the map of Leuven as a graph roadmodel. We aim to develop a system using exploration ants, as well as feasibility ants that roam around the map to propagate knowledge about the local environment. Exploration ants are sent out by vehicles and report back when they have found a favourable path to follow to pick up packages, while feasibility ants would be sent out by parcels and randomly roam around to notify vehicles of their existence.
Ideally, these ants would not be visible on the map while roaming around and would not be bound by regular time constraints (i.e. they could move faster than other vehicles).
Is there some kind of support for a delegate MAS system like this, and if not, what would be the best approach to implement it?

This question has been answered here.
As for your second question: instead of adding ants to a RoadModel, you could view the ants as messages that contain some extra data (such as the path they were sent along) and send them throughout the graph using a CommModel. This way, they are invisible and non-colliding.
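RinSim itself is a Java framework, so the CommModel approach above would be written against its Java API. As a language-agnostic sketch of the underlying idea, here is a minimal, hypothetical Python version: ants are plain message objects that the model walks along graph edges, they are never registered in the road model (hence invisible and non-colliding), and they can take many hops per simulation tick (hence "faster" than vehicles). All names here are invented for illustration and are not RinSim's API.

    import random

    # Hypothetical sketch of the delegate-MAS idea; not RinSim's actual API.
    # The road network is an adjacency dict; ants are plain data, not road users.
    class ExplorationAnt:
        def __init__(self, origin, max_hops):
            self.path = [origin]      # the path the ant was sent along
            self.max_hops = max_hops  # how far it roams before reporting back

        def step(self, graph):
            # Move to a random neighbouring node.
            self.path.append(random.choice(graph[self.path[-1]]))

    def tick(graph, ants, hops_per_tick):
        # Ants live only in this list, never in the RoadModel, so they are
        # invisible and non-colliding; many hops per tick makes them "fast".
        reports = []
        for ant in ants:
            for _ in range(hops_per_tick):
                if len(ant.path) >= ant.max_hops:
                    reports.append(ant.path)  # delivered back to the sender
                    break
                ant.step(graph)
        return reports

    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(tick(graph, [ExplorationAnt("a", max_hops=5)], hops_per_tick=10))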

Related

ARKit with multiplayer experience to share same planes [duplicate]

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use the raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
=== Edit 1: Things I've tried ===
1) Feature points
I've played around with the exposed raw feature points, and I'm now convinced that in their current state they are a dead end:
- they are not truly raw feature points: they only expose positions, but none of the attributes typically found in tracked feature points
- their instantiation doesn't carry over from frame to frame, and the positions are not exactly the same
- the reported feature points often change substantially even when the camera input is barely changing, with many appearing or disappearing
So overall I think it's unreasonable to try to use them in any meaningful way, since there's no way to do good point matching even within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and deriving the relative reference-frame change from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where you get the "2 to 6" figure from @andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new Augmented Reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you can create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
    // On the device sharing its map: capture the current world map...
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else {
            showAlert(error)
            return
        }
        // ...archive worldMap and send it to a peer here.
    }

    // On the receiving device: start a session from the received map.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = receivedWorldMap
    session.run(configuration)
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it at the next launch of the app in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has started. So if you can have all players, one after another, put and align their devices at the same physical location and start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available, or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, and (re)init the session.
Medium: Save the player from aligning the phone manually; instead, detect a real-world marker with image analysis, just like @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off the ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so that all devices receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space (a shared reference); specifically, I have in mind devices on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and, using the inputs from all the controllers, the game would theoretically remain in sync on all devices. Still testing. The obvious potential problem is that latency or disruption in the network breaks the sync, and it would be difficult to recover except by restarting the game. The approach and framework may work fairly well for some types of games, such as straightforward arcade-style games, but certainly not for many others, such as any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

Emulate GPS or a serial device

Is it possible to get location data out of Google Gears, the Google Geolocation API, or any other web location API (such as Fire Eagle) in such a format that it appears to other software as a GPS device?
It occurred to me, reading the answers to my question regarding WiFi location finding on Super User, that if I could emulate a GPS unit, many of these web services could act as a 'poor man's' GPS for otherwise less useful software that requires one.
Is GPSD an option?
Preferably OSX & Python, but I would be interested in any implementation.
There is a very similar thread on a Python mailinglist that mentions Windows virtual COM ports and discusses Unix's pseudo-tty capabilities. If the app(s) you want to use let you type in a specific tty device file, this may be the easiest route. (Short of asking the authors to provide a plugin API for what you're trying to do, or buying yourself a $20 bluetooth GPS mouse.)
Are you using OS X?
There is a project macosxvirtualserialport on Google code that provides a graphical wrapper around some of the features of a utility called socat. I'd recommend taking a look at socat if you see potential in the pseudo-tty route. I believe you could use socat to link a pipe from a Python program to a pseudo-tty.
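If you'd rather not shell out to socat, Python can create the pseudo-tty itself. Here's a minimal, Unix-only sketch (the RMC sentence is a canned example from the NMEA documentation; a real implementation would generate sentences from the web service's fixes):

    import os
    import pty
    import time

    # Create a pseudo-tty pair; point the GPS-consuming app at the slave device.
    master_fd, slave_fd = pty.openpty()
    print("Fake GPS device at:", os.ttyname(slave_fd))

    # A canned NMEA RMC sentence; replace with sentences built from live fixes.
    sentence = "$GPRMC,081836,A,3751.65,S,14507.36,E,000.0,360.0,130998,011.3,E*62\r\n"

    while True:
        os.write(master_fd, sentence.encode("ascii"))
        time.sleep(1)  # real receivers typically emit RMC once per second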
Most native Mac apps will be querying IOServiceMatching for a device with kIOSerialBSDRS232Type, and I doubt that a pseudo-tty will show up as an IOKit service.
In this case, unless you can find a project that has already implemented such a thing, you will need to implement a driver as described in this "How to create virtual COM port" thread. If you're going to the trouble of creating a device driver, you would want to base it on IOKit because of that likely IOServiceMatching query. You can find the Apple16X50Serial project mentioned in that post at the top of Apple's open source code list (go to the main page and pick an older OS release if you want to target something pre-10.6).
If your app is most useful with realtime data (e.g. the RouteBuddy app mentioned in the Python mailinglist thread can log current positions) then you will want to fetch updates from your web sources (hopefully they support long-polling) and convert them to basic NMEA RMC sentences. You do not want to do this from inside your driver code. Instead, divide your work up into kernel-land and user-land pieces that can communicate, and put as little of the code as possible into the kernel part.
If you want to let apps both read and write to these web services, your best bet would probably be to simulate a Garmin device. Garmin has more-or-less documented their protocol in the IntfSpec.pdf file included with their Device Interface SDK. Again, you'd want to split as much as you could into user-space code.
I was unable to find a project or utility that implements the kernel side of an IOKit-based virtual serial interface, but I'd be surprised if there wasn't one hiding somewhere out there. Unfortunately, most of the answers I found to that question were like this, with the developer being told to get busy writing a kext.
I'm not exactly sure how to accomplish what you're asking, but I may be able to lend some insight as to how you might begin to get it done. So here goes:
A GPS device shows up to most systems as nothing more than a serial device -- a.k.a. a COM port if you're dealing with Windows, /dev/ttySx if you're in *nix. By definition, a serial port's specific duty is to stream data across a bus, one block at a time. So, it would then follow logically that if you want to emulate the presence of a GPS device, you should gather the data you're consuming and put it into a stream that somehow acts like an active serial port.
There are, however, some complications you might want to consider:
Most GPS devices don't just send out location data; there's also information on satellite locations, fix quality, bearing, and so on. Then again, nobody's made any rules saying you have to make all that data available. There's probably more to this, but I'll admit that I need to do more research in this area myself.
I'm not sure how fast you can receive data when dealing with Google Latitude, etc., but any delays in receiving would definitely result in visible pauses in your "serial port"'s data stream. Again, this may not be as big a complication as it seems, because GPS devices are known to "burst" data across the bus anyway, but I'd definitely keep an eye on that. You want to make sure there's always a surplus of data coming across, not a shortage.
Along the way, you'll also have to transform the coordinates you receive into valid GPS sentences. You can find specifications for those, but I would definitely make friends with the NMEA standard — even though it is a flawed standard, it's the one everyone seems to agree on anyway.
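For instance, the NMEA checksum is just an XOR of the characters between the '$' and the '*', and RMC coordinates use a degrees-plus-decimal-minutes layout. A simplified Python sketch (real RMC sentences carry more fields; speed, course, and date handling are stubbed here):

    def nmea_checksum(payload: str) -> str:
        # XOR of every character between '$' and '*', as two hex digits.
        csum = 0
        for ch in payload:
            csum ^= ord(ch)
        return f"{csum:02X}"

    def rmc_sentence(lat: float, lon: float, hhmmss: str, ddmmyy: str) -> str:
        # Convert decimal degrees to NMEA's ddmm.mmmm / dddmm.mmmm layout.
        lat_h = "N" if lat >= 0 else "S"
        lon_h = "E" if lon >= 0 else "W"
        lat, lon = abs(lat), abs(lon)
        lat_f = f"{int(lat) * 100 + (lat % 1) * 60:09.4f}"
        lon_f = f"{int(lon) * 100 + (lon % 1) * 60:010.4f}"
        payload = f"GPRMC,{hhmmss},A,{lat_f},{lat_h},{lon_f},{lon_h},0.0,0.0,{ddmmyy},,"
        return f"${payload}*{nmea_checksum(payload)}"

    print(rmc_sentence(37.7749, -122.4194, "120000", "010109"))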
Hope this helped you, at least a little bit. Are there any more details specific to your problem that you think could be useful in answering this question?
Take a look at Franson GPS Gate, which allows you to connect to Google Earth among other things (like simulating GPS and so on). It's Windows-only, though, but I think you could get some useful ideas from it.
I haven't looked into it very much, but have you considered using Skyhook's SDK? It might provide you with some of what you are looking for. It's available for every major desktop and mobile OS.

Carmen Robotics

I have been working with Carmen (http://carmen.sourceforge.net/) for a while now, and I really like the software, but I need to make some changes inside the source code.
I am therefore interested in student reports/projects that have worked with Carmen, or any other documentation of the source code.
I have been reading the documentation on the Carmen webpage, but with all due respect, I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen for robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
- A built-in visualizer/simulator, rviz.
- A record function that logs all of the messages passed out of nodes; this lets you take in a lot of raw data, store it in a "ros bag", and play it back later when you need to test your AI but want to sit in your bed.
- Built-in navigation capabilities: all you have to do is write the publishers of data for your sensors and fill out the standard messages so the stack has enough information (a minimal publisher sketch follows this list).
- An Extended Kalman Filter, which is pretty awesome because I didn't want to write one. Currently implementing it; I'll let you know how that turns out.
- Built-in message levels: you can change which severity of print messages gets printed at runtime, fairly handy for debugging.
- A robot monitor node that you can publish the status of your sensors to; it bundles all of that information into a GUI for your viewing pleasure.
- Some basic drivers already written; for example, SICK lidars are supported right out of the box.
- A built-in transform facility to help you move everything into the right coordinate system.
- ROS was made to run across multiple computers, but it can work on just one; data transfer is handled over TCP ports.
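To give a feel for the publisher part, here's a minimal Python (rospy) node that publishes laser data; the topic name, frame id, and the dummy ranges are placeholders for whatever your sensor actually provides:

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import LaserScan

    def run():
        rospy.init_node("my_lidar_driver")          # node name is arbitrary
        pub = rospy.Publisher("scan", LaserScan, queue_size=10)
        rate = rospy.Rate(10)  # publish at 10 Hz
        while not rospy.is_shutdown():
            msg = LaserScan()
            msg.header.stamp = rospy.Time.now()
            msg.header.frame_id = "laser_link"      # placeholder frame
            msg.angle_min, msg.angle_max = -1.57, 1.57
            msg.angle_increment = 0.01
            msg.range_min, msg.range_max = 0.1, 30.0
            msg.ranges = [1.0] * 315                # dummy data; read your sensor here
            pub.publish(msg)
            rate.sleep()

    if __name__ == "__main__":
        run()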
I hope that's more helpful.

Computer vision application for automotive telematics application

What sort of application could be the real business winner for automotive telematics applications related to image processing/computer vision?
Here are the criteria:
1. Innovative
2. Social
3. Fun.
Have you read the articles from the DARPA grand challenge winners?
DARPA site
Google Scholar
I believe the "DARPA Grand Challenge" style of automation meets your first criterion (Innovative), as there is plenty of innovation on that front.
But I still think that we are a good decade away from a fully autonomous vehicle, even though the technology is almost there. The main reason is that people are still very afraid of relinquishing control to the computer, even though it might be the safest choice.
The transition will be slow. More and more models will bring small chunks of automation, such as smarter cruise control systems (that's a big winner right now), autonomous parking (in the market for a while now) and anti collision systems.
Which brings us to your criteria 2 and 3 (Social and Fun).
The above-mentioned systems are not fun; they are necessary [for increased safety]. Nowadays, social media and fun don't really mix with driving because they distract the driver from the main task. In the future, when you're on the freeway in auto-pilot mode, you will be able to open your laptop and be free to do whatever you want, since computers will always be connected to the internet. So I don't believe the car itself needs to provide that aspect of entertainment.
What I do believe is a killer functionality for cars is the enhancement of intelligent comfort systems integrated with biometrics. Nowadays, cars already have things like personal keys that make the car adjust things like seat height according to your preferences, but it would be much nicer if it could automatically identify who the driver is by some biometric feature (iris, etc.) and adjust multiple parameters automatically. That's the end of the key. I'm not talking just about seat and pedal adjustment, but transmission style (husband likes a more aggressive transmission) and performance limiters (daughter cannot exceed 90% of the posted limit... the car knows what the limit is according to where it is).
In my opinion, if you implement biometric recognition + autonomous navigation, the possibilities are endless.
Although none of the applications here use computer vision, they are probably the best ones out there yet. They have received quite a bit of media hype.

Custom robotics for building an auto CD-loading arm

Where would you recommend that I find a company to develop or buy a CD/DVD loading arm similar to: http://www.dextimus.com/
Preferably programmable via USB, but if I can only get one with a serial interface, that would be fine. Drivers don't matter - I can interface directly with the unit, as my situation is very unique.
If you have some experience with electronics, you can give it a shot and build it yourself, like this or this.
I should add that the schematics and the source code are included, and in more details in the first project.
I suppose I might just shorten this by giving a list of resources first:
http://www.embedinc.com/ I trust this company to do good work. Expensive (actually, they are reasonably priced in the design community, but would be considered expensive by most hobbyists and individuals). Not great at people skills, but very very very good at what they do.
You should check out the various microcontroller communities and forums for hobbyists and professionals that can do this. Search for microchip, atmel, msp430, arm, powerpc, etc.
Sparkfun is a supplier to the electronics community - they have great forums where you can post your request, and you'll find people who might do it for fun with only the cost of materials. Might take longer, might not be as 'professional' or well packaged and delivered, but it might be your best low cost option.
There are many electronic design companies that could do this (for instance, I can do this sort of thing).
But there are many questions you haven't answered (and may not have researched) that could prevent success:
Is this patented?
What CD loading/unloading methods are not patented, are out of patent, or otherwise available?
What is your design goal - a one off just for you, or a device that can be built in the hundreds for industrial use, or a device meant for general office workers/consumers that is built in the millions?
Do you realize that this design would surely cost more than simply buying one, if one is all you need?
As an example, assuming you don't need the nice enclosure and don't mind a 'prototype' look, just the mechanicals, electronics, and firmware design (no software on the PC) would likely be 100-250 billable hours for a design firm. At a cheap $90/hr, that's $9k to nearly $25k for one prototype. Add PC software and the nice enclosure, etc and you'll double that.
If you can find a local 'Make' group (techshop, GoTech, or similar) then you might be able to find a hobbyist that is willing to play with this idea for the cost of materials.
But if you define what your goal is, and give us an idea of your resources, you may find a better answer.
-Adam
You can create a very nice simple solution using radio control servos. They come in many sizes, but even the small ones have enough torque to move a big arm to move a cd.
The real bonus with servos is that they normally have 180 degrees of rotation and internally have a variable resistor (rheostat) for positioning feedback. Positioning accuracy is normally within 1 degree of rotation which should be fine for a cd loader.
For picking up the CDs, nothing will beat a vacuum. I recommend a small battery powered vacuum cleaner. Funnel the suction into a 1/4 inch pipe. At the other end of the pipe a one inch diameter cup should provide more than enough lift from the small amount of suction.
As for the pile of blank CDs to be burnt, I would advise moving the pile up rather than moving an arm down to it, probably keeping the top blank CD about 1/4 inch higher than the CD tray. By doing this, the arm only needs to rotate in one axis, and the vacuum should be enough to suck the CD back out of the tray.
Now, for the electronics. For the servo control I suggest an RS-232 serial servo controller. I've used the one from http://www.basicx.com/Products/servo/servo8t.htm as it also gives back torque information from the current draw.
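For illustration, driving a serial servo controller from Python with pyserial might look like the sketch below. The 3-byte command format shown is the common Mini-SSC style (sync byte, channel, position); check your own board's manual, since the servo8t's protocol may differ:

    import serial

    # Open the controller's serial port (port name and baud rate are examples).
    port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

    def set_servo(channel: int, position: int) -> None:
        # Mini-SSC style: sync byte 0xFF, servo number, position 0-254
        # mapped across the servo's travel range.
        position = max(0, min(254, position))
        port.write(bytes([0xFF, channel, position]))

    set_servo(0, 254)  # swing the arm over the spindle
    set_servo(0, 127)  # return to centre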
For the low-sample-rate digital I/O, I suggest (for Windows) inpout32.dll, which is a DLL that gives you direct access to the bits of a parallel port. This will allow you to turn on the vacuum at the correct time and possibly sense when CDs have run out. Note that a parallel port can sink more current than it sources, so for outputs you should connect the load to the 5 V power line and set the output pin to 0 to turn the output on and 1 to turn it off.
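On the Python side, that DLL can be reached through ctypes; a Windows-only sketch (0x378 is the traditional LPT1 base address, and the wiring determines the inverted logic noted above):

    import ctypes

    inpout = ctypes.WinDLL("inpout32.dll")  # exports Inp32 and Out32
    LPT1 = 0x378                            # traditional LPT1 data register

    def vacuum(on: bool) -> None:
        # Inverted logic: the pin sinks current from the +5 V line,
        # so writing 0 turns the load on and 1 turns it off.
        inpout.Out32(LPT1, 0x00 if on else 0x01)

    def spindle_status() -> int:
        # Read the status register (base + 1) to sense e.g. an empty-stack switch.
        return inpout.Inp32(LPT1 + 1)

    vacuum(True)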
The other nice option, which is very, very simple to interface and very cheap, is to get hold of a PICAXE from http://www.rev-ed.co.uk/picaxe/. These use a very simple programming language (a BASIC spin-off) allowing you to read serial data in and control the servos and digital I/O from one chip. Last time I used one, the language was a bit simple - if statements had to jump to labels, and else didn't exist.
If you do use a microcontroller and servos, it is best to use a dual voltage power supply as servos are noisy and can cause the microcontrollers to reset.
As for switching loads such as the vacuum on, you'll need to use a MOSFET or (if money is no object) the simpler option, a solid-state relay.
All digital inputs you use on the microcontroller should be pulled either to +V or ground with say a 5k resistor so they never float.
I cannot stress how simple and cheap the PICAXEs are. They have a built-in interpreter, so although code space is minimal on the small 8-pin units, they are programmable via a simple serial lead.
Good luck. Once you get into automation control, you'll never be able to stop. I'm in the middle of building a 3 axis CNC router so I can cut parts for other projects (I tell my girlfriend it's so she can cut out xmas decorations!).
You might want to contact Aaron Shephard about his Florian project.
I've found that a really easy board for controlling stepper motors or servos is produced by Phidgets - the API is incredibly easy, and available for a vast array of platforms.
