How does scdf realize the multi-output function? - spring-cloud-dataflow

Based on the introduction to fan-in and fan-out on the official website, I designed the flow model shown in the figure below:
(figure: https://i.stack.imgur.com/DynzW.png)
The source has two functions that emit "hello" and "mygod" string messages respectively. I want to use SCDF's destination binding to route the two functions to different topics, each consumed by its own sink, but the function that emits the "mygod" messages cannot run successfully (SCDF cannot recognize the corresponding function).
Is there any solution?

This specific case of having multiple inputs into the same app (the hello-sink app in your case) isn't supported in SCDF OSS, but it is supported in the commercial version of SCDF, where the corresponding documentation can be found.
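For the source side, Spring Cloud Stream's functional model lets a single app expose two suppliers, each bound to its own destination. Below is a minimal sketch, assuming illustrative names (hello, mygod, helloTopic, mygodTopic) and binder defaults:

import java.util.function.Supplier;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class TwoOutputSourceApp {

    public static void main(String[] args) {
        SpringApplication.run(TwoOutputSourceApp.class, args);
    }

    // First output: polled periodically by the binder, emits "hello" messages.
    @Bean
    public Supplier<String> hello() {
        return () -> "hello";
    }

    // Second output: emits "mygod" messages on its own binding.
    @Bean
    public Supplier<String> mygod() {
        return () -> "mygod";
    }
}

Then, in application.properties, both functions have to be activated and each output binding pointed at its own topic; if only one name is listed in the function definition, the other function is exactly the kind that "cannot be recognized":

spring.cloud.function.definition=hello;mygod
spring.cloud.stream.bindings.hello-out-0.destination=helloTopic
spring.cloud.stream.bindings.mygod-out-0.destination=mygodTopic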

Related

How to write a library for multiple devices with similar versions of an API

I am trying to develop a library of shared code for my company.
We are developing on a technology by SICK called AppSpace, which is designed for machine vision. AppSpace is a standalone ecosystem comprising a variety of programmable SICK devices (e.g. programmable cameras, LiDAR sensors) and an IDE with which these can be programmed. Programs are written in Lua, using HTML/CSS for the front end.
AppSpace provides a Lua API for these devices.
In my company, a few of us write applications and it is therefore important that we create a library of shared code to avoid redundancy / rewritten code.
However, each firmware version of each device has a corresponding API version. That is to say, that on a given device the API can change between firmware versions, and also that API versions differ across devices. Two devices will have two different sets of API functions available to them. Functions they share in common may also have slightly different implementations.
I am at a loss as to how such a situation can be properly managed.
I suppose the most "manual" route would be for each device to have its own partial copy of the library, and to manually update each device's library to the same behavior every time a change is made, ensuring that each device conforms to its API. This seems like bad practice, as it is very error prone - the libraries would inevitably fall out of sync.
Another option might be to have a master library and to scrape the API documentation for each device, then build a library manager which parses the Lua code from the library and identifies the functions missing on each device. This seems completely impractical, and probably just as error prone.
What would be the best way to develop and maintain a library of shared code which can be run on multiple devices, if it is even possible?
I would like to answer this and review some of the topics discussed.
First and foremost: functions that devices share in common are implemented differently in the compiled code on the respective device (i.e. PC, 2D camera, 3D camera, LiDAR, etc.), while the functionality remains the same across them all. This way the code can be readily ported from one device to another. That is the principle of the SICK AppEngine, which runs on all SICK AppSpace devices as well as on 3rd-party hardware with AppEngine installed.
The APIs embedded into the devices are called CROWN (Common Reusable Objects Wired by Name) components and can be tested against nil to determine whether they are exposed. Here's an example with a CROWN called 'IMAGE': if it exists, the guarded code runs.
if IMAGE then
  -- The IMAGE CROWN is exposed on this device,
  -- so its functions are safe to call here.
  --do code
end
SICK also has an AppPool to which you can upload your source code; it will test for all the required CROWNs and return a list of all SICK devices that can run the app properly.

Alchemy API / Bluemix not providing consistent results

I am using the Alchemy API (Bluemix) with the Rails wrapper and am getting nil back for blocks of text. For example, consider the text below:
"The Vancouver International Flamenco Festival presents renowned flamenco dancer Mercedes “La Winy” Amaya in an electrifying tribute to flamenco’s vibrant past, featuring the authentic Spanish Gypsy style of flamenco, from sumptuous sway to fierce flourish."
When I call the keyword endpoint, I only get keyword results about half the time. When I search the same block of text multiple times, I get results half the time and nil half the time.
I'm only making calls about once per second so rate capping is not an issue.
What is causing this to happen? Where should I start looking?
Since AlchemyAPI was acquired and integrated into IBM Watson (as accessed via Bluemix), I don't think this question can be answered in its current form. The AlchemyAPI services as used with the old Ruby wrapper mentioned above have been deprecated.
Instead, I suggest getting new credentials on the services for whichever aspect of AlchemyAPI you were using. The new products are mapped as follows:
AlchemyLanguage -> Watson Natural Language Understanding
AlchemyDataNews -> Watson Discovery
AlchemyVision -> Watson Visual Recognition
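For the keyword use case in the original question, the equivalent request against Natural Language Understanding looks roughly like this sketch; treat the host, version date, and credentials as placeholders that depend on your region and service instance:

curl -u "{username}:{password}" \
  -H "Content-Type: application/json" \
  -d '{"text": "The Vancouver International Flamenco Festival presents ...", "features": {"keywords": {}}}' \
  "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27"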

how to do image processing through bluemix

I'm new to IBM Bluemix and I don't know how to use its image processing tools. Please help me out with this, and also please tell me how to load images into the Bluemix image processing tool.
Have a look at the Alchemy API on IBM Bluemix.
AlchemyAPI offers a set of three services that enable businesses and developers to build cognitive applications that understand the content and context within text and images. For instance, using AlchemyAPI, developers can perform tasks such as extracting the people, places, companies, and other entities mentioned in a news article, or analyzing an image to understand the contents of the photo.
AlchemyLanguage
AlchemyLanguage is a collection of 12 APIs that offer text analysis through natural language processing. The AlchemyLanguage APIs can process text and help you to understand its sentiment, keywords, entities, high-level concepts and more.
AlchemyVision
AlchemyVision understands and analyzes complex visual scenes without needing textual clues. Developers can use this API to do tasks like image recognition, scene recognition, and understanding the objects within images.
AlchemyData
AlchemyData provides news and blog content enriched with natural language processing to allow for highly targeted search and trend analysis. Now you can query the world's news sources and blogs like a database.
They have a great Getting Started tutorial covering Step 1.
If you are looking for image processing using Python, there is a great tutorial with simple steps on how to kick off.
More examples or references-
Bluemix - Tutorials Videos
Analyze notes with the AlchemyAPI service on IBM Bluemix
Getting started with the Visual Recognition service
Real Time Analysis of Images Posted on Twitter Using Bluemix
Editors' picks: Top 15 Bluemix tutorials
If you would like to use runtimes, you could use the imagemagick libraries, recently added to Cloud Foundry. The binaries should be at this path:
/var/vcap/packages/imagemagick/bin
Otherwise you can refer to your chosen buildpack's specific options: for example, with the PHP buildpack you could use the GD library, installed through the composer utility:
{ "require": { "ext-gd": "*" } }
Another option is to use a Docker container instead of a runtime, which keeps the scalability benefits of Bluemix while giving you wider configuration options.
Generally speaking, it depends a lot on the technology you would like to use (Java/PHP/Python etc.).

Implementing ospf topology collector

I need to implement a software module that is able to retrieve the topology of an autonomous system.
Looking at the various protocols implemented in Cisco routers, I concluded that the only two alternatives for obtaining the topology are SNMP and OSPF.
The first one is a workaround and I don't want to use it, which leads to OSPF.
I haven't found usable libraries in C, Java, or Python; this one (http://www.ospf.org/) is probably the most complete, but it comes without documentation and I don't have enough time to analyze all the code.
So I found quagga, which implements a software OSPF router; it seems the perfect alternative since it can work both with real networks and with networks simulated in GNS3.
But is it possible to obtain the OSPF routing table from quagga, given that everything is driven from the command line?
These are my conclusions and doubts; if someone can suggest something better or help me with the next step it would be appreciated, since I'm stuck at the moment.
Use quagga's ospfclient feature. There is already an example in the ospfclient directory (see ospfclient.c) which shows how to retrieve the LSA database from a quagga/ospfd instance. For this solution to work, attach a PC to one of your OSPF backbone routers, configure quagga/ospfd on it so that it successfully learns the routes, and then start your ospfclient to retrieve any information you need.
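If command-line access is acceptable rather than a programmatic API, quagga's vtysh can also dump the LSA database and the OSPF routing table directly (a quick sketch, assuming ospfd is already running and has converged):

vtysh -c "show ip ospf database"
vtysh -c "show ip ospf route"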

Carmen Robotics

I have been working with Carmen http://carmen.sourceforge.net/ for a while now, and I really like the software but I need to make some changes inside the source code.
I am therefore interested in any student reports/projects that have worked with Carmen, or any documentation of the source code.
I have been reading the documentation on the Carmen webpage, but with all respect I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen for robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
Built-in visualizer/simulator: rviz.
Record and playback: it has a record function which captures all of the messages published by nodes; this lets you take in a lot of raw data, store it in a "ros bag", and then play it back later when you need to test your AI but want to sit in your bed (see the example commands at the end of this answer).
Built-in navigation capabilities:
- All you have to do is write the publishers of data for your sensors.
- It has standard messages that you need to fill out so that the stack has enough information.
There is an Extended Kalman Filter package, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
It also has built-in message levels: you can change which severity of print messages gets printed at runtime, which is fairly handy for debugging.
There's a robot monitor node that you can publish the status of your sensors to and it bundles all of that information into a GUI for your viewing pleasure.
There are some basic drivers already written. For example SICK lidars are supported right out of the box.
There is also a built-in transform system (tf) to help you move everything into the right coordinate system.
ROS was made to run across multiple computers, but can work on just one.
Data transfer is handled over TCP ports.
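As a concrete flavor of the record/playback workflow mentioned above (a quick sketch; the bag file name is illustrative, and this assumes a running roscore):

# Record every topic currently being published into a bag file.
rosbag record -a -O session.bag

# Later, replay the recorded messages as if the robot were running live.
rosbag play session.bag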
I hope that's more helpful.
