Difference between SWIFT InterAct & FileAct transmissions

Can you please help me understand the difference between the SWIFT InterAct and FileAct protocols?

The two services are designed to handle different types of financial processing.
SWIFT InterAct is designed for solutions where a real-time gross settlement system, better known by its acronym RTGS, is in play; this is near real-time processing. Fedwire in the US, for example, is an RTGS solution.
SWIFT FileAct is used for time-insensitive, i.e. batch, processing. In such a system, financial transactions are queued up and then processed together at a scheduled time; ACH, for example, is a batch-processed system.
Both protocols carry financial transactions; the difference lies in how the processing is handled.
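A rough way to picture the split, in illustrative Python only (Payment and choose_channel are made-up names, not SWIFT APIs): urgent single payments go out as individual InterAct messages, while everything else is queued into a FileAct batch.

    # Illustrative sketch only; these names are not SWIFT APIs.
    from dataclasses import dataclass

    @dataclass
    class Payment:
        amount: float
        urgent: bool  # True -> needs immediate, RTGS-style settlement

    def choose_channel(payments):
        """Urgent payments go to InterAct; the rest are queued for a FileAct batch."""
        interact, fileact_batch = [], []
        for p in payments:
            (interact if p.urgent else fileact_batch).append(p)
        return interact, fileact_batch

    realtime, batch = choose_channel([Payment(1e6, True), Payment(250.0, False)])
    print(len(realtime), 'via InterAct;', len(batch), 'queued for a FileAct file')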

venkat
If you are looking for more information about OpenStack Swift (the object store, not the SWIFT financial network), please see the developer documentation at http://docs.openstack.org/developer/swift/ for details.

Related

Improving Twilio Speech Recognition of Proper Nouns

I am working on an application that gathers a user's voice input for an IVR. The input we're capturing is a limited set of proper nouns, but even though we have added hints for all of the possible options, we very frequently get back unintelligible results, possibly because our users have various accents from all parts of the world. I'm looking for a way to improve the speech recognition results beyond just using hints. The available Google adaptive classes will not be useful, as there are none that match the type of input we're gathering. I see that Twilio recently added something called experimental_utterances that may help, but I'm finding little technical documentation on what it does or how to implement it.
Any guidance on how to improve our speech recognition results?
Google does a decent job of recognizing proper names, but only asynchronously, not in real time. I've not seen a PaaS tool that can do this in real time. I recommend you change your approach: identify callers based on ANI or an account number, or have them record their name for manual transcription.
david
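For reference, wiring up hints together with the experimental_utterances model the question mentions looks roughly like this with the twilio-python helper; the attribute names follow Twilio's documented <Gather> options, and the hint values and action URL are placeholders:

    from twilio.twiml.voice_response import VoiceResponse

    response = VoiceResponse()
    response.gather(
        input='speech',
        hints='Acme, Contoso, Globex',           # your known set of proper nouns
        speech_model='experimental_utterances',  # tuned for short, single utterances
        action='/handle-speech',                 # placeholder callback URL
    )
    print(response)  # emits the <Gather> TwiML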

Tool chain for proprietary communication protocol

I (or rather, the company) have a protocol implementation which uses CAN hardware (a CAN transceiver). The protocol itself is not a standard CAN protocol stack. Is it possible to use any of the off-the-shelf CAN-bus monitors for debugging and investigating the data on the bus? I intend to see the bytes being transmitted and, more importantly, other information like cycle time, frequency, delay, and jitter (if any). Of course, more information is good, and in case some of the above-mentioned parameters are missing, that is still acceptable. The main purpose of the project is to show that the proprietary implementation is better (performance, bandwidth, speed, etc.) than a standard CAN stack while still using the CAN hardware (transceivers).
I think most commercially available CAN analyzers can do what you need. Check whether they offer a programmability option, so that you can supply your own interpretation of the CAN frames.
The CAN-Wiki offers a list:
http://www.emtas.de/en/produkte/can-interpreter
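If your analyzer can export raw timestamps, you can also derive the timing figures yourself. A minimal sketch, assuming you can dump (timestamp, CAN ID) pairs from the capture; the capture format here is made up:

    from collections import defaultdict
    from statistics import mean, pstdev

    def timing_stats(frames):
        """frames: iterable of (timestamp_seconds, can_id) pairs."""
        times = defaultdict(list)
        for ts, can_id in frames:
            times[can_id].append(ts)
        stats = {}
        for can_id, ts in times.items():
            gaps = [b - a for a, b in zip(ts, ts[1:])]  # inter-frame gaps
            if not gaps:
                continue  # need at least two frames per ID
            cycle = mean(gaps)
            stats[can_id] = {
                'cycle_time_s': cycle,
                'frequency_hz': 1.0 / cycle,
                'jitter_s': pstdev(gaps),  # spread of the gaps
            }
        return stats

    capture = [(0.000, 0x101), (0.010, 0x101), (0.021, 0x101), (0.005, 0x202)]
    print(timing_stats(capture))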

Can I bulk-purchase numbers from the Twilio API?

I am trying to purchase multiple numbers using C# with the Twilio API. However, currently we must purchase one number at a time, and it takes a lot of time to purchase 10-15 numbers in a loop.
So how can I pass a list of numbers through the API so that it takes less time to buy numbers from Twilio?
Twilio evangelist here.
Today there is no way to buy numbers in bulk via the API. You have to make one API request per number that you want to buy.
If the library is not performing fast enough for you, first I'd love to know what kind of performance you are seeing and what you expect so I can work on improving the library.
Second, I'd suggest looking at just using the built-in .NET HTTP client libraries instead of the Twilio library. The library is pretty general purpose and tuned more for ease of use than performance. If you can use .NET 4 or higher, you can use the TPL (Task Parallel Library) to get some good performance gains. I've built samples using the HttpClient library and the TPL that resulted in substantially higher requests/sec than the library gives me today.
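The fan-out idea looks roughly like this, sketched in Python against the documented IncomingPhoneNumbers REST endpoint (credentials and numbers below are placeholders); the same pattern maps onto HttpClient plus the TPL in .NET:

    import requests
    from concurrent.futures import ThreadPoolExecutor

    ACCOUNT_SID = 'ACxxxxxxxx'  # placeholder
    AUTH_TOKEN = 'your_token'   # placeholder
    URL = ('https://api.twilio.com/2010-04-01/Accounts/'
           f'{ACCOUNT_SID}/IncomingPhoneNumbers.json')

    def buy(number):
        # One POST per number is still required; concurrency just overlaps the waits.
        r = requests.post(URL, data={'PhoneNumber': number},
                          auth=(ACCOUNT_SID, AUTH_TOKEN))
        r.raise_for_status()
        return r.json()['sid']

    numbers = ['+15551230001', '+15551230002', '+15551230003']  # placeholders
    with ThreadPoolExecutor(max_workers=5) as pool:
        for sid in pool.map(buy, numbers):
            print('purchased', sid)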
Hope that helps.

Getting started - creating an iPhone app that controls another (non-iOS) device via bluetooth commands

All,
Apologies in advance - this question might be too open-ended for SO.
Anyway... A friend of mine (an engineer and entrepreneur) is in the process of building a high-tech piece of lab equipment. He's asked me about the feasibility of building an iPhone/iPad/iPod application that would allow users to control the device via Bluetooth, so I'm helping him gather some information. I'm hoping to get a few pointers on how to get started. Specifically:
Would this require a native app, or could this be accomplished with HTML5 (with or without something like PhoneGap)?
Can you point me to a good primer on Bluetooth networking? Everything I've found assumes a VERY high level of pre-existing knowledge.
What are the basics of how something like this is accomplished? Is there a single, established protocol for how one device "controls" another, or is Bluetooth more like SSL - just a pipe that allows you to convey any type of message?
I realize this question is incredibly broad and detailed - so I'm not really looking for specifics. But obvious Google searches don't turn up much, and I'm otherwise having a hard time finding a good starting point.
Thanks in advance.
You can communicate via Bluetooth in two ways. One is using the Bluetooth Low Energy capabilities of iOS 5 and newer iPhones/iPads:
https://developer.apple.com/library/ios/#documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/_index.html#//apple_ref/doc/uid/TP40011295
Unfortunately the documentation is sparse and will require some hacking away. If you choose this route I would consider starting here and learning as much as you can about how the protocols work before hacking into the framework:
http://developer.bluetooth.org/gatt/services/Pages/ServicesHome.aspx
The limitation of this route is that it might not be best for sending a lot of data. I have only built things that sent simple commands, which it works great for.
The other option is the External Accessory framework. This will require you to get an MFi license from Apple (not fun), and you will also need to pay royalties. But it will do what you want. You won't need to concern yourself much with the underlying protocols if you use this; the framework provides a friendly API for processing streams.
http://developer.apple.com/library/ios/#documentation/ExternalAccessory/Reference/ExternalAccessoryFrameworkReference/_index.html
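On the "just a pipe" question above: with the stream-based External Accessory route, the command protocol on top is yours to define. A minimal, hypothetical length-prefixed framing sketch in Python (the command ID is made up):

    import struct

    def encode_frame(command, payload=b''):
        # 1-byte command, 2-byte big-endian payload length, then the payload.
        return struct.pack('>BH', command, len(payload)) + payload

    def decode_frame(buf):
        command, length = struct.unpack('>BH', buf[:3])
        return command, buf[3:3 + length]

    CMD_SET_TEMP = 0x01  # made-up command ID for the lab device
    frame = encode_frame(CMD_SET_TEMP, struct.pack('>H', 372))  # 37.2 C x 10
    print(decode_frame(frame))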

iOS / C: Algorithm to detect phonemes

I am searching for an algorithm to determine whether real-time audio input matches one of 144 given (and comfortably distinct) phoneme pairs.
Preferably at the lowest level that does the job.
I'm developing radical / experimental musical training software for iPhone / iPad.
My musical system comprises 12 consonant phonemes and 12 vowel phonemes, demonstrated here. That makes 144 possible phoneme pairs. The student has to sing the correct phoneme pair 'laa duu bee' etc in response to visual stimulus.
I have done a lot of research into this; it looks like my best bet may be to use one of the iOS Sphinx wrappers (iPhone App › Add voice recognition? is the best source of information I have found). However, I can't see how I would adapt such a package. Can anyone with experience using one of these technologies give a basic rundown of the steps that would be required?
Would training by the user be necessary? I would have thought not, as it is such an elementary task compared with full language models of thousands of words and a far greater and more subtle phoneme base. However, it would be acceptable (though not ideal) to have the user train 12 phoneme pairs: { consonant1+vowel1, consonant2+vowel2, ..., consonant12+vowel12 }. The full 144 would be too burdensome.
Is there a simpler approach? I feel like using a fully featured continuous speech recogniser is using a sledgehammer to crack a nut. It would be far more elegant to use the minimum technology that would solve the problem.
So really I'm hunting for any open source software that recognises phonemes.
PS: I need a solution which runs pretty much in real time, so even as they are singing the note, it first blinks to show that it picked up the phoneme pair that was sung, and then glows to show whether they are singing the correct note pitch.
If you are looking for a phone-level open source recogniser, then I would recommend HTK. Very good documentation is available with this tool in the form of the HTK Book. It also contains an entire chapter dedicated to building a phone level real-time speech recogniser. From your problem statement above, it seems to me like you might be able to re-work that example into your own solution. Possible pitfalls:
Since you want to build a phone-level recogniser, the amount of data needed to train the phone models would be very large. Also, your training database should be balanced in terms of the distribution of the phones.
Building a speaker-independent system would require data from more than one speaker. And lots of that too.
Since this is open source, you should also check the licensing info for any additional details about shipping the code. A good alternative would be to use the on-phone recorder and then send the recorded waveform over a data channel to a server for recognition, pretty much like what Google does.
I have a little bit of experience with this type of signal processing, and I would say that this is probably not the type of finite question that can be answered definitively.
One thing worth noting is that although you may restrict the phonemes you are interested in, the possibility space remains the same (i.e. infinite-ish). User training might help the algorithms along a bit, but useful training takes quite a bit of time and it seems you are averse to too much of that.
Using Sphinx is probably a great start on this problem. I haven't gotten very far in the library myself, but my guess is that you'll be working with its source code yourself to get exactly what you want. (Hooray for open source!)
...using a sledgehammer to crack a nut.
I wouldn't label your problem a nut, I'd say it's more like a beast. It may be a different beast than natural language speech recognition, but it is still a beast.
All the best with your problem solving.
Not sure if this would help: check out OpenEars' LanguageModelGenerator. OpenEars uses Sphinx and other libraries.
http://www.hfink.eu/matchbox
This page links to both a YouTube video demo and the GitHub source.
I'm guessing it would still be a lot of work to mould it into the shape I'm after, but it definitely does do a lot of the work.
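For the curious, the template-matching idea behind a tool like matchbox can be sketched as dynamic time warping over per-frame feature vectors; feature extraction (e.g. MFCCs) is assumed to happen elsewhere, and all names here are illustrative:

    import math

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) DTW over two sequences of feature vectors."""
        INF = float('inf')
        cost = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
        cost[0][0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d = math.dist(a[i - 1], b[j - 1])  # Euclidean frame distance
                cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
        return cost[len(a)][len(b)]

    def classify(features, templates):
        """templates: dict mapping phoneme-pair name -> template frame sequence."""
        return min(templates, key=lambda name: dtw_distance(features, templates[name]))

    templates = {'laa': [[0.1, 0.2]] * 5, 'duu': [[0.9, 0.8]] * 5}  # toy frames
    print(classify([[0.12, 0.22]] * 4, templates))  # -> 'laa'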
