I'm trying to design a system where streams (backed by an external service, e.g. Azure Queue Storage) may be added or removed dynamically. When a new stream appears, a set of cooperating agents is created to handle the new data stream.
All the samples seem to configure the available streams statically (at startup). Is there a way to add and remove streams dynamically?
Sure. You can subscribe and push to any stream without needing to create it explicitly upfront, much in the same way as with virtual actors - the stream will be created (activated) on first use.
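To make that concrete for the Azure-backed scenario in the question, here is the same "created on first use" shape sketched as plain queue polling with the azure-storage-queue Python package, rather than through a streaming framework. The connection string, polling loop, and handler body are all placeholders:

```python
import time
from azure.storage.queue import QueueServiceClient

CONN_STR = "<storage account connection string>"  # placeholder
service = QueueServiceClient.from_connection_string(CONN_STR)
handlers = {}  # queue name -> consumer, created on first sight

def make_handler(queue_name):
    # One "agent" per stream; here just a closure that drains the queue.
    queue = service.get_queue_client(queue_name)
    def handle():
        for msg in queue.receive_messages():
            print(f"[{queue_name}] {msg.content}")
            queue.delete_message(msg)
    return handle

while True:
    # Queues added later simply show up in the listing -- nothing is
    # configured statically at startup.
    for q in service.list_queues():
        if q.name not in handlers:
            handlers[q.name] = make_handler(q.name)
    for handle in list(handlers.values()):
        handle()
    time.sleep(5)
```

A real system would run each handler on its own worker and also drop handlers for queues that disappear from the listing; the sketch only shows the discovery-on-first-use idea.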
I have a system with multiple subsystems communicating over CANopen. There is a main unit with a screen (for the man-machine interface and such) and sub-units for minor operations (such as sampling button status, managing power, taking measurements, ...).
We defined a CANopen-based communication protocol for this system. Subsystems report their status periodically with TPDO messages and act on the main unit's commands sent with RPDO messages, and some NMT commands are in use too.
Now I've been asked to add a new command to this protocol: zeroize. This command shall be sent as a broadcast, and it shall cause every unit to delete its software. What is the right way to do this?
Maybe I can use an RPDO? Are we allowed to define new NMT commands in CANopen? Maybe I can do it with NMT by using a new command code that is not in use already?
It is a bit confusing what you mean by TPDO and RPDO, since the main unit's TPDO is the peripheral units' RPDO and vice versa. But yes, the correct way to send out a custom broadcast message would be with a PDO.
Although, depending on what you mean by "delete software", CANopen might provide a means for it. There are the save (object 1010h) and restore (object 1011h) entries in the object dictionary. Save is used to store all CANopen communication parameters (PDO configuration, mapping, etc.) in non-volatile memory, and restore is used to reset the CANopen parameters to factory defaults. These should, however, not be used to save/load application-specific settings.
You are not allowed to define new NMT commands.
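For illustration, arming the restore object mentioned above could look like this with the community canopen Python package; the node id and CAN interface are assumptions, and the target device must actually implement 1011h:

```python
import canopen

network = canopen.Network()
network.connect(bustype="socketcan", channel="can0")  # assumed interface
node = network.add_node(0x05)  # hypothetical slave node id

# Object 1011h sub-index 1 ("restore all default parameters") only acts
# when the ASCII signature "load" (64616F6Ch) is written; as an SDO
# payload that is the little-endian byte string b"load".
node.sdo.download(0x1011, 1, b"load")
```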
Objects 1010h and 1011h can be used to reset the values in the object dictionary. If you really want to delete the software, the firmware upgrade protocol from CiA 302-3 might help: writing 00h (Stop program) followed by 03h (Clear program) to object 1F51h sub-index 1 on a slave will delete its application. Whether it is actually "zeroed out" depends on the implementation. You'll need two SDO requests per slave for this, though. The standard specifies that object 1F51h cannot be PDO-mapped, although that requirement may not be enforced for your devices, in which case you could achieve broadcast "zeroing" with two PDOs.
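A sketch of those two SDO requests per slave, again with the Python canopen package; the node ids and CAN channel are hypothetical, and the slaves must implement the CiA 302-3 program-control object:

```python
import canopen

SLAVE_IDS = [0x02, 0x03, 0x04]  # hypothetical node ids

network = canopen.Network()
network.connect(bustype="socketcan", channel="can0")

for node_id in SLAVE_IDS:
    node = network.add_node(node_id)
    # Object 1F51h sub-index 1 (program control), per CiA 302-3:
    node.sdo.download(0x1F51, 1, b"\x00")  # 00h = Stop program
    node.sdo.download(0x1F51, 1, b"\x03")  # 03h = Clear program
```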
I've been trying to access cluster variables. Recently I learned that you can't do so using .NET Network Shared Variables, and I found that people usually do this via ActiveX.
Using ActiveX I am able to run any VI I want and change values, but most of the VIs that I am trying to access have UI loops and consumer loops. Changing the value of a control manually fires an event that is detected and leads to certain actions that I am interested in. After reading some old KBs I found out that with ActiveX alone one can't fire those events.
Is it the same in LabVIEW 2015? In some forums people discussed creating a VI within the ActiveX program that fires the user events - a sort of layer. Can someone share examples of such VIs? Are there any other workarounds?
You can programmatically fire a value-change event by using the property node Value (Signaling).
Right-click on the control in the block diagram; it can be found under Create -> Property Node -> Value (Signaling).
Any value written to this node will generate a value-change event for that specific control. You don't specifically need ActiveX to generate these events.
You can fire events with property nodes (as already explained by D.J. Klomp)
You can capture and handle change events with event structures
This can be done even for single controls inside a cluster.
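As for the "layer VI" workaround from the question: the wrapper VI's diagram would do nothing but write its input to the target control's Value (Signaling) property, and the external program would drive it through LabVIEW's ActiveX server. A rough Python/pywin32 sketch, where the VI path and control labels are made up:

```python
import win32com.client

# Connect to the LabVIEW ActiveX automation server.
lv = win32com.client.Dispatch("LabVIEW.Application")
vi = lv.GetVIReference(r"C:\project\FireEvent.vi")  # hypothetical wrapper VI

# The wrapper's front panel has two controls; its diagram forwards
# "New Value" to the named target control through a Value (Signaling)
# property node, so the target VI's event structure sees a real event.
vi.SetControlValue("Target Control", "Numeric 1")
vi.SetControlValue("New Value", 42)
vi.Run(True)  # True = run asynchronously; check the Run() semantics
              # for your LabVIEW version
```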
I'm currently using the DataApi to keep data items synchronized between the handheld and the wearable.
Still, I want to make sure that all data is stored and nothing is lost in the process.
I'm currently reading GPS parameters while the wearable is not connected to the handheld; when they reconnect, they sync the data items.
1. How reliable is the DataApi?
2. Is my idea of creating a local file doubling my effort?
3. How can I create a local file on my Wear device and then access it?
Syncing data using the DataApi is reliable, and I recommend using it; if you come across a scenario where sync is not happening reliably, that should be considered a bug and reported as such. One issue folks run into is that they create the same data item again and don't get the onDataChanged() callback, but that is by design: if the very same data is added multiple times, there is no change, hence no callback triggers.
Another factor you might want to consider is whether the data you create on one node is for consumption by all other nodes or only a targeted one; the DataApi syncs data across all connected nodes, so if I create a data item on watch1 to sync it with my phone and there is a watch2 in the picture as well, watch2 gets the same data too.
If you end up using the DataApi, I strongly recommend putting a policy in place that removes data once it has been synced and consumed; otherwise data will accumulate without supervision and you'll eventually run out of space.
To answer your questions:
1. I don't know how reliable it effectively is, but we had problems where data updates didn't trigger the appropriate listeners on the watch side, so I'm not sure. Maybe someone has an official statement on this?
2. I think it depends on the amount of data you want to store, so I suggest you first get clear about the amount and then choose the format. Keep in mind that there is also the possibility of storing data in SharedPreferences.
3. These guys here tried to save an image on the watch, but it makes no difference whether it is an image file, text, or any other kind of file.
Is there any way, using currently available SDK frameworks on Cocoa (Touch), to create a streaming solution where I would host my mp4 content on some server and stream it to my iOS client app?
I know how to write such a client, but it's a bit confusing on server side.
AFAIK CloudKit is not suitable for that task because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it doesn't take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server and, on the client side, use AVPlayer to fetch the mp4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
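For prototyping without Apache, a Range-aware static server is only a few lines in any language; here is a rough Python sketch. It is not production quality (no suffix-range or multi-range support), but it serves the simple "bytes=start-end" requests AVPlayer typically issues against an mp4:

```python
import os
import re
from http.server import HTTPServer, SimpleHTTPRequestHandler

class RangeHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        path = self.translate_path(self.path)
        rng = self.headers.get("Range")
        if rng is None or not os.path.isfile(path):
            return super().do_GET()  # plain 200 response
        m = re.match(r"bytes=(\d+)-(\d*)$", rng.strip())
        if m is None:
            return super().do_GET()  # unsupported range form
        size = os.path.getsize(path)
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else size - 1
        end = min(end, size - 1)
        # 206 Partial Content with exactly the requested byte window.
        self.send_response(206)
        self.send_header("Content-Type", self.guess_type(path))
        self.send_header("Accept-Ranges", "bytes")
        self.send_header("Content-Range", f"bytes {start}-{end}/{size}")
        self.send_header("Content-Length", str(end - start + 1))
        self.end_headers()
        with open(path, "rb") as f:
            f.seek(start)
            self.wfile.write(f.read(end - start + 1))

HTTPServer(("", 8000), RangeHandler).serve_forever()
```

Run it in the directory containing the file and point AVPlayer at http://host:8000/yourfile.mp4.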
If it is live media content you want to stream, then it is worth reading this guide as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as the player, then I think those two things don't fit together well. AVPlayer needs to buffer different ranges ahead (for some container formats the second/third request reads the end of the stream). As far as I can see, CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking within the stream.
If you have your own player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data. It is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
The client app could then download the files. It couldn't stream them though, as far as I know. It would have to download all the files.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks, and recombine them on the client app.
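The splitting half of that is straightforward; here is a Python sketch (the chunk size and naming scheme are arbitrary, and each piece would become the asset of one CloudKit record):

```python
CHUNK = 1024 * 1024  # 1 MiB per piece -- arbitrary

def split_file(path):
    """Split a media file into fixed-size pieces for per-record upload."""
    parts = []
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            part = f"{path}.part{index:04d}"
            with open(part, "wb") as out:
                out.write(data)
            parts.append(part)
            index += 1
    return parts

# On the client you would download the records in order and concatenate
# the assets back into one file before (or while) playing.
```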
I'm not sure whether this document is up to date, but there is a paragraph "Requirements for Apps" which demands the use of HTTP Live Streaming if you deliver any video over cellular networks exceeding 10 minutes or 5 MB.
I am working on a mobile app to listen in on ongoing Asterisk calls. Asterisk is set up to record calls; however, the inbound and outbound voices get saved to separate wav files. The first obstacle was to stream wav files while they are still being written to - this was achieved using Node.js. Now I need to mix the two files together and stream the result, which would be doable if the files were not being written to at the same time.
The first option would be to figure out how to programmatically mix the two while continuously checking whether the EOF has moved, all while streaming the result. (Feels above my pay grade.)
The second option would be to stream the two files independently to the client iOS application, which would play them at the same time. Even if the challenge of playing two streams simultaneously were solved, this would require a very stable connection, so I don't see it as a viable option.
The third possibility would be to embed a softphone into the iOS app and use it as a client for ChanSpy. Would that be possible, and what library could help me achieve it?
What do you guys suggest, perhaps there are more options out there?
What about using Application_MixMonitor instead?
Why not just build a SIP client on iOS and use ChanSpy to listen to the calls live?
http://www.voip-info.org/wiki/view/Asterisk+cmd+ChanSpy
You can supply the m option to the Monitor application, or use sox to do the mixing yourself.
https://wiki.asterisk.org/wiki/display/AST/Application_Monitor
http://leifmadsen.wordpress.com/tag/mixmonitor-sox-mixing-asterisk-script/
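If you go the sox route, the mix itself is a one-liner once both legs are fully written; for example, invoked from Python (file names hypothetical):

```python
import subprocess

# sox -m mixes the two legs into a single wav file.
subprocess.run(
    ["sox", "-m", "in-leg.wav", "out-leg.wav", "mixed.wav"],
    check=True,
)
```

This only solves the offline case; mixing while the files are still growing is exactly the harder problem described in the question.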