Which streaming API to implement in Cocoa/iOS/Mac OS X?

There are several streaming functions and/or classes in iOS/Mac OS X. If I wanted to implement my own custom stream, i.e. some object that either provides or consumes sequential bytes of data, which of the following APIs would be best to use?
Unix file descriptors (int)
- No way to implement a custom file descriptor?
FILE*
- Use funopen to implement your own stream.
- A C-level API; not a lot of Cocoa actually uses this, e.g. you cannot bridge this to NSInputStream.
- Data can be consumed by your own code.
CGDataProviderRef, CGDataConsumerRef
- Allow both sequential and direct-access read implementations.
- Used by Core Graphics but nothing else.
- Data cannot be consumed from your own code.
NSInputStream, NSOutputStream
- Used by a lot of Cocoa, e.g. XML and JSON parsing.
- The documentation allows subclassing, but in practice subclassing is tricky because of asynchronous use.
- Data can be consumed from your own code.
It would seem NSInputStream/NSOutputStream is the ideal "Cocoa" way of implementing your own stream. However, I'm not sure how to implement an App Store-friendly subclass of NSInputStream, i.e. one that does not override hidden methods (see e.g. http://bjhomer.blogspot.com.au/2011/04/subclassing-nsinputstream.html), and either:
(a) make a synchronous-only stream (how do I know whether a particular Cocoa API would consume streams asynchronously and thus not work with my synchronous-only stream?) or
(b) make an asynchronous stream?

Related

REST, JSON, Alamofire - Swift

I have a really big problem with these three: I can't tell them apart. I know that they are libraries and that they work together. It seems to me that all three do the same thing, I mean retrieve data from the Internet. Could you explain to me what's going on with these three?
REST (Representational State Transfer): An architectural style that gives you a way of communicating between computers over the Internet, typically via an API call. It consists of an architecture with six constraints (five compulsory and one optional). Read more about it here.
JSON (JavaScript Object Notation): This is a standard representation for data exchange. There are other representations, like XML, which were used for the same purpose. JSON consists of basic data structures used to pass data over a network. It uses Strings, Numbers, Arrays, nested JSON Objects, Booleans and null to efficiently represent data.
The above two concepts are relevant in any area or language of computer science.
Alamofire (HTTP Networking library for Swift): This library is only used in iOS apps written in Swift; it doesn't hold relevance outside that subset. Sure, there is a way of making network calls in Swift without using Alamofire: you can read about NSURL, NSURLSession, etc. to learn the classic method. The problem with plain NSURL-based calls is that they are very elaborate to write and can get messy in no time. Thankfully, there is a way of mitigating that mess: Alamofire handles those async calls efficiently and also lets you do cool stuff with the response easily.
NOTE: These three are not at all the same. REST calls can be made using JSON, XML, URL encoding, etc.; JSON can be used in plain JavaScript and does not necessarily need to be passed over a network; and Alamofire exists just to ease the pain of making network calls in iOS.
Hope this helps!

Is a generic output required to simply write samples to a file?

I want to record audio from an iPhone microphone and write those samples to a file. Looking at the documentation, it's not clear to me whether I can simply perform the write operation inside the render callback function of the Remote IO unit, or whether I instead need to attach a Generic Output audio unit and write the samples coming out of that unit. The latter implies more overhead in terms of setting up an AUGraph, AUNodes, etc., so I'd prefer the former.
You can do it in your Input Callback, using the ExtAudioFile API (and ExtAudioFileWriteAsync in particular). The "async" bit is what makes it viable in the realtime input callback.
See this answer for more, as it's quite a similar setup.
ExtAudioFileWriteAsync docs are here.
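To make that concrete, here is a rough sketch of such an input callback. The RecorderState struct, the file setup (ExtAudioFileCreateWithURL plus the client data format) and the callback registration (kAudioOutputUnitProperty_SetInputCallback) are assumed to happen elsewhere, and the buffer shape assumes interleaved 16-bit stereo; adjust for your stream format.
#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>

struct RecorderState {
    AudioUnit       remoteIOUnit;  // the Remote IO unit, assumed configured elsewhere
    ExtAudioFileRef outputFile;    // assumed opened/configured elsewhere
};

// Input callback installed with kAudioOutputUnitProperty_SetInputCallback.
static OSStatus InputCallback(void*                       inRefCon,
                              AudioUnitRenderActionFlags* ioActionFlags,
                              const AudioTimeStamp*       inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList*            /* ioData is NULL for input */)
{
    RecorderState* state = static_cast<RecorderState*>(inRefCon);

    // Pull the captured samples out of the Remote IO unit's input bus.
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * 2 * sizeof(SInt16);
    bufferList.mBuffers[0].mData           = nullptr;  // NULL lets the unit supply its own buffer

    OSStatus err = AudioUnitRender(state->remoteIOUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &bufferList);
    if (err != noErr) return err;

    // Safe on the realtime thread: the actual file I/O happens on an internal background thread.
    return ExtAudioFileWriteAsync(state->outputFile, inNumberFrames, &bufferList);
}
The docs also suggest priming ExtAudioFileWriteAsync by calling it once with zero frames and a NULL buffer list before the first write from the render thread.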

Efficient and flexible binary data parsing

I have an external device that spits out UDP packets of binary data, and software running on an embedded system that needs to read this data stream, parse it and do something useful. The binary data gets logged to a file as well. I would like to write a parser that can easily take the input directly from either the UDP stream or a file, parse the data into a specific format, and then direct the output to either a file (e.g. a MATLAB dat file) or to another process that will do some real-time processing. Are there any resources that would help me with this, and what is the best way to go about it? I think it might make sense to use C++ streams, but I'm not familiar with creating custom output streams. Does this seem like a good approach to take, or is there a better way to go about it?
Thanks.
The beauty of binary data is that it is generally in a very fixed format.
A typical method of parsing it is to declare a structure that maps onto the received packets, and then to just use type-casts to read the fields as structure elements.
The beauty is that this requires no parsing.
You have to be careful about structure packing rules and endianness to make the structure map in exactly the same way. The C offsetof and sizeof macros are useful for emitting some debug info to check that your structure is indeed mapping to what you think it is mapping to.
Packing rules can typically be altered either by directives (such as #pragmas) or by command-line options. Endianness you are stuck with: if it's different from what your embedded system uses, declare all the fields as bytes, or use something like the ntoh family of macros to do the byte swapping.
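A minimal sketch of that struct-overlay approach (the packet layout and field names here are invented for illustration, and multi-byte fields are assumed to arrive in network byte order):
#include <arpa/inet.h>   // ntohs, ntohl
#include <cstddef>       // offsetof, size_t
#include <cstdint>
#include <cstdio>

#pragma pack(push, 1)    // disable padding so the struct maps the wire layout exactly
struct SamplePacket {    // hypothetical layout: type, length, timestamp, value
    uint16_t type;
    uint16_t length;
    uint32_t timestamp;
    uint32_t value;
};
#pragma pack(pop)

static_assert(sizeof(SamplePacket) == 12, "struct does not match wire size");

void handleDatagram(const uint8_t* buf, size_t len) {
    if (len < sizeof(SamplePacket))
        return;  // truncated datagram, ignore
    // The "type-cast" overlay; a memcpy into a local struct is the alignment-safe variant.
    const SamplePacket* pkt = reinterpret_cast<const SamplePacket*>(buf);
    std::printf("type=%u ts=%u value=%u (value field sits at offset %zu)\n",
                static_cast<unsigned>(ntohs(pkt->type)),
                static_cast<unsigned>(ntohl(pkt->timestamp)),
                static_cast<unsigned>(ntohl(pkt->value)),
                offsetof(SamplePacket, value));
}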
The New Jersey Machine Code Toolkit is a scheme for decoding arbitrary binary patterns. It was originally designed for decoding instruction sets, but it ought to be just fine for decoding message formats. You provide a description of the binary format, and it synthesizes code to access the fields of that format (when valid). Thus you can refer to message fields using generated function calls rather than thinking about where the field is or how it is encoded.

What is a 'Stream', relating to cin and cout?

A tutorial is talking about cin and cout:
"Syntactically these streams are not used as functions: instead, data are written to streams or read from them using the operators <<, called the insertion operator and >>, called the extraction operator."
What is a 'stream'?
Consider a "Stream" as a physical hose, or pipe. At one end, someone may pour some water in. At the other end, it will come out. This is 'reading' and 'writing' to the stream.
A stream is just a place where data goes. It can be a 'socket stream' (over the internet), a 'file stream' (to a file), or perhaps a 'memory stream': just data written to a place in memory (RAM).
A "stream" is an object that represents a source of data, or a place where data can be written.
Examples include file handles and pipes - things that you can read data from or write data to.
An important property of streams is that they share a common interface, so the same code can write to either a file or a pipe (for instance) without needing to be rewritten.
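A tiny C++ sketch of that shared interface (the file name is invented): the same function writes to the console or to a file, because both are std::ostreams.
#include <fstream>
#include <iostream>

// Works with any output stream: the console, a file, a string stream, ...
void writeGreeting(std::ostream& out) {
    out << "hello, stream\n";
}

int main() {
    writeGreeting(std::cout);                // writes to the console
    std::ofstream file("greeting.txt");      // writes to a file instead
    writeGreeting(file);
}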
You should look at streams as abstractions on underlying 'sources' or 'sinks' of data. A source is something you read data from, and a sink is something you write data to.
The concept of streams allows you to perform I/O on various forms of media, network connections, pipes between applications, files, etc.
The stream abstraction is very valuable to us as developers as it allows us to simplify input and output, and it gives us the flexibility to arrange and reconnect the sources and destinations of these streams.
A good analogy is that of a hose. You can send and receive data through hoses, and you can connect these hoses to various things.
By allowing programs to talk through hoses, we allow all sorts of programs to talk to each other, and we increase interoperability and utility vastly.
This is at the heart of the UNIX philosophy, and supports some very powerful programming idioms.

Available Game network protocol definition languages and code generation

I've been looking for a good general purpose binary network protocol definition framework to provide a way to write real-time game servers and clients (think World Of Warcraft or Quake III) in multiple languages (e.g. Java backend server and iPhone front-end client written in Objective-C and Cocoa).
I want to support Java and Flash clients, iPhone clients, and C# clients on Windows (and XNA clients on Xbox).
I'm looking for a way to efficiently send/receive messages over a TCP/IP or UDP socket stream connection. I'm not looking for something that can be sent over an HTTP web service, like JSON- or XML-marshalled objects, although Hessian's binary web service protocol is a very interesting solution.
I want a network protocol format and client/server basic implementation that will allow a client to connect to a server and send any message in the defined protocol and receive any message in the protocol without having to bind to some kind of RPC endpoint. I want a generic stream of any message in my protocol incoming and outgoing. This is so that I can support things like the server sending all clients the positions of various entities in the game every 100 milliseconds.
The network protocol frameworks I've found are as follows:
Google's Protocol Buffers - but it lacks support for things like sending/receiving arbitrary messages from your given protocol.
Apache Thrift - an interesting option but it is geared mainly towards RPC instead of generic game client/server socket type connections where the client or server can send messages at any time and not just in response to a client RPC request.
Raknet Multiplayer - Raknet provides full multiplayer network library (it's free for indie development with revenue under $250k)
UPDATE: Oculus VR acquired RakNet and it's now free/open source. You can find it on GitHub.
Hessian Binary Web Service Protocol - an HTTP web service binary protocol; it is well suited to sending binary data without any need to extend the protocol with attachments.
Raknet provides a good game/simulation oriented multiplayer library.
Apache Thrift and Google's protocol buffers seem to be the simplest approaches to using in a game network protocol client/server architecture.
Hessian seems like a great fit if you want to create a web-based game server with a Java or Flash client, using some type of server-push technology like Comet. Hessian might provide a really interesting way to support real-time games on the web, and even make it possible to host them on VM web solutions like Google's App Engine or Amazon's EC2.
There's some discussion about using various protocol definition frameworks for games and other uses:
Comparison of Various Serialization Frameworks
Thrift vs Protocol Buffers - Thrift is declared the better framework because it has a fully supported RPC client/server implementation
Using Protocol Buffers for client server Game API determining what type of message to decode
Bi-Directional RPC using thrift
DIS
If you do go the route of writing your own protocol, you may want to read the answer I posted here.
In summary it discusses what you should think about when writing a protocol, and list a few tricks for versioning and maintaining backwards and forward compatibility.
If you are really concerned about multiple platforms and languages, be sure to take endian issues into account. A binary protocol designed for this use must use network byte order, so it needs custom per-data-type serialization functions; you cannot just blindly push C structs into network buffers.
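As a tiny illustration of such per-data-type functions (the names are made up), each field is converted to network byte order as it is appended to the outgoing buffer:
#include <arpa/inet.h>   // htonl, ntohl
#include <cstdint>
#include <cstring>
#include <vector>

// Append a 32-bit integer to an outgoing buffer in network byte order.
void writeU32(std::vector<uint8_t>& buf, uint32_t value) {
    const uint32_t be = htonl(value);
    const uint8_t* p  = reinterpret_cast<const uint8_t*>(&be);
    buf.insert(buf.end(), p, p + sizeof be);
}

// Read it back, converting to the host's byte order.
uint32_t readU32(const uint8_t* data) {
    uint32_t be;
    std::memcpy(&be, data, sizeof be);
    return ntohl(be);
}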
A common solution to this problem at game companies is to have a protocol description language or specification in a simple format like XML, Python or Lua, and then have code generation for each target language that generates packet classes with both the data structure and the serialization. The specification could use a type system that starts with basic types, then extends to include game-specific types with semantic information, enumerations or more complex structures. For example, a data file could look like:
Attack = {
source = 'objectId',
target = 'objectId',
weapon = 'weapon::WEAP_MAIN',
seed = 'int'
}
This could generate code like:
#define PT_ATTACK 10002

class PacketAttack : public Packet {
public:
    // m_packetType is assumed to be a member of the Packet base class
    PacketAttack() { m_packetType = s_packetType; }

    ObjectId   m_source;
    ObjectId   m_target;
    WeaponType m_weapon;
    int        m_seed;

    bool Write(Stream* outStream) {
        Packet::Write(outStream);
        *outStream << m_source;
        *outStream << m_target;
        *outStream << m_weapon;
        *outStream << m_seed;
        return true;
    }
    bool Read(Stream* inStream);

    static const int s_packetType;
};
This does require some more infrastructure: streams, packet base classes, safe serialization functions, and so on.
I want to echo Bill K's suggestion. It's not hard to roll your own protocol.
For the iPhone side, have a look at AsyncSocket, which has support for delimiter-based TCP packets built in, and it's not hard to build a solution which uses packet headers.
If you quickly want to have a test server to play against AsyncSocket on the iPhone, you can look at Naga (for the Java server part), which has ready-made support both for delimiter-based packets and for packets with headers. Naga was partially written with networked games in mind.
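Either way, the underlying idea is simple message framing. A rough sketch of the header-based variant in plain C++ (independent of AsyncSocket or Naga; the header layout is just an example): each message is prefixed with its payload length and a type code, so the receiver always knows how many bytes to read next.
#include <arpa/inet.h>   // htonl, htons
#include <cstdint>
#include <cstring>
#include <vector>

// Frame = 4-byte payload length + 2-byte message type (both big-endian) + payload.
std::vector<uint8_t> frameMessage(uint16_t type, const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> frame(6 + payload.size());
    const uint32_t len  = htonl(static_cast<uint32_t>(payload.size()));
    const uint16_t kind = htons(type);
    std::memcpy(frame.data(),     &len,  sizeof len);
    std::memcpy(frame.data() + 4, &kind, sizeof kind);
    if (!payload.empty())
        std::memcpy(frame.data() + 6, payload.data(), payload.size());
    return frame;
}
// The receiver reads the 6-byte header, decodes the length, then reads exactly
// that many payload bytes before dispatching on the type code.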
I disagree with the "roll your own with simple delimited strings" approach: the question is, what exactly would be the benefit? Getting to write and maintain more code?
The only reasons I could see would be lack of tool support (writing for some odd platform), or specific (very) hard performance or message size constraints.
Or, sometimes, really wanting to write a format -- that's ok, but it must be an explicit reason.
Depending on exact needs, I would suggest considering JSON, since it can read and write arbitrary messages, has good object binders for Java (just like XML), is easier to read than binary formats, and is all around "good enough" for many use cases.
If message size is very important, Protobuf can work well -- while its size is not always as small as gzipped alternatives (gzip+xml, gzip+json compress very well), it's usually close.
ASN.1 fits the definition of a "good general purpose binary network protocol definition framework". It's also standardized by the ITU-T, so there are a lot of existing tools and libraries for various languages.
The DER encoding is suitable for efficient network communications, the XER encoding for human-readable (and writable) permanent storage.
Because you want to use different languages and also because you want something clean/small, I suggest Google's Protocol Buffers. You need a pre-compile step for the RPC, but I really think that's the best option when you begin to mix different languages. Here's the link: http://code.google.com/apis/protocolbuffers/docs/overview.html
Why not implement UDP directly? Your question mostly mentions what you don't want. What further form of abstraction do you want on top of UDP?
Download the Quake III source code and see how they frame game updates over UDP.
The IP protocol has been designed to support multiple devices/OSes in a uniform way; isn't this what you're asking for?
What protocol has implementations across a huge range of systems? Hmm, IP perhaps?
