I am able to send data up to 20 KB from one Mac to another using iPhone Simulators successfully. However, for anything larger than that, the send fails. Even fragmenting the data does not work. Help...
Your question is a bit too vague to answer adequately. Without any of your code, output, or examples, we don't have a lot to go on.
However, a great tutorial exists for networking between Mac and iPhone devices, and is available here:
http://mobileorchard.com/tutorial-networking-and-bonjour-on-iphone/
If you're interested in learning how to properly network two devices, it's a fantastic start. The project provides a lot of great classes that add a thin Objective-C layer on top of the standard C socket functions, and can make your networking headaches go away very quickly.
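One common cause of a hard size ceiling like 20 KB is treating a socket write as all-or-nothing: stream writes only accept as many bytes as currently fit in the outgoing buffer, so large payloads have to be written in a loop. Purely as a hedged sketch (the stream setup, the helper name, and the 4 KB chunk size are my assumptions, not from the question), chunked sending over an NSOutputStream looks roughly like this:

    #import <Foundation/Foundation.h>

    // Sketch: write a large NSData over an already-open NSOutputStream.
    // Assumes `stream` was opened elsewhere (e.g. via the tutorial above).
    static void sendPayload(NSOutputStream *stream, NSData *payload) {
        NSUInteger offset = 0;
        while (offset < payload.length) {
            if (!stream.hasSpaceAvailable) continue; // real code should wait for a space-available stream event instead of spinning
            NSUInteger chunk = MIN(payload.length - offset, (NSUInteger)4096);
            NSInteger written = [stream write:(const uint8_t *)payload.bytes + offset
                                    maxLength:chunk];
            if (written <= 0) break;       // error or stream closed
            offset += (NSUInteger)written; // a write may accept fewer bytes than asked for
        }
    }

The receiver then needs some framing, such as a fixed-size length header, to know where one payload ends and the next begins.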
I am making an app that involves sending and receiving files from one iPhone to several other iPhones. I did a lot of googling but got nowhere in finding the classes that support these features. I am wondering whether it is possible, and if so, which classes I can use to do it.
Take a look at my library: https://github.com/abdullahselek/Merhaba
You can send data with Bonjour networking.
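I haven't verified Merhaba's API, so as a neutral reference, here is what advertising a service with plain Foundation Bonjour looks like (the service type "_myfiles._tcp." and port 5000 are illustrative values only):

    #import <Foundation/Foundation.h>

    // Sketch: advertise a TCP service over Bonjour with NSNetService.
    NSNetService *service = [[NSNetService alloc] initWithDomain:@"local."
                                                            type:@"_myfiles._tcp."
                                                            name:@""  // empty string = use the device name
                                                            port:5000];
    [service publish]; // peers can now find it with an NSNetServiceBrowser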
I've integrated a libPd patch in iOS.
When entering a text field and presenting the keyboard, there are some crackling sounds.
How would I go about debugging this?
NB: I've tagged this question with Objective-C and iOS; however, it may require knowledge of all four tags, libPd and Pure Data as well:
What is Pure Data?
Pure Data is a powerful programming language for manipulating audio based on core mathematical concepts. It's widely used in games as well as in DJ and other music-focused applications. Some example apps built with Pure Data and libPd are: the Rj Voyager app from RjDj and the Inception App from Warner Brothers.
libPd is a way of embedding Pure Data patches (developed using the visual interface) within an iOS app. Controlling the Pd patch is done via a publish/subscribe message interface similar to OSC or MIDI.
The GitHub page for libPd is here: https://github.com/libpd
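To make the publish/subscribe idea concrete, sending values from the app into a patch looks roughly like this with the Objective-C bindings (the receiver names are hypothetical and must match [receive] objects in your patch):

    #import "PdBase.h"

    // Sketch: push values into a running patch. "volume" and "trigger"
    // stand for [receive volume] / [receive trigger] objects in the patch.
    [PdBase sendFloat:0.8f toReceiver:@"volume"];
    [PdBase sendBangToReceiver:@"trigger"];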
What help am I looking for?
I'm not sure where to start debugging this. Someone who has integrated and used libPd on iOS could surely share their experience. It could be related to the following:
How threading works, and how it interacts with the main queue
What sample rates work best given the target devices
What debugging tools are available.
Other advice earned through deep experience.
I don't know anything about PD, but it seems likely that the presentation of the keyboard is causing you to be CPU-starved for some reason. You might try:
verifying this still happens when in release and not attached to a debugger (log messages cause long delays when attached to the debugger, which alone can cause hiccups like this)
profiling your code using Instruments to see if you're inadvertently using a whole lot of CPU at once or
increasing buffer sizes so PD doesn't need the CPU as often (see the sketch below).
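On that last point, libPd exposes the buffer duration as "ticks per buffer" (one tick is 64 samples). A minimal sketch, assuming you drive audio through a PdAudioController and allowing that exact method names vary between libpd versions:

    #import "PdAudioController.h"

    // Sketch: configure playback, then enlarge the buffer so Pd's audio
    // callback fires less often. Values are illustrative; larger buffers
    // trade latency for resilience to CPU spikes.
    PdAudioController *audio = [[PdAudioController alloc] init];
    [audio configurePlaybackWithSampleRate:44100
                            numberChannels:2
                              inputEnabled:NO
                             mixingEnabled:YES];
    [audio configureTicksPerBuffer:32]; // 32 * 64 samples per callback
    audio.active = YES;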
I was experiencing the same symptoms in an app I'm working on, and I did manage to ascertain a couple of things early on. My recent changes involved sending a lot of messages to Pd during app init. I noticed when debugging that when I reduced the number of messages sent, the sound improved. Also, I didn't see this in the simulator, only on the device.
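One hypothetical way to thin out init-time traffic is to batch related parameters into a single list message and split it inside the patch with [unpack]; the receiver name and values below are made up:

    #import "PdBase.h"

    // Sketch: one list message instead of three separate float sends.
    // Assumes the patch has [receive init-params] feeding [unpack f f f].
    [PdBase sendList:@[@440.0f, @0.5f, @1.0f] toReceiver:@"init-params"];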
The libpd example PolyPatch was pretty useful in this case, if you increase the number of patches that can be generated. I found that the sound was breaking up with many patches open, in exactly the same way as in my app. This is quite simply where the overhead of using libpd takes its toll on performance. What's also clear is that simplifying a patch (so it contains fewer objects) improves performance. But by far the biggest hit is creating a new, separate patch, so you won't want to be creating huge numbers of patches. Debugging does of course take a toll too.
44.1 kHz works pretty much everywhere as far as sample rates go (it's the Pd standard too). And there's nothing to stop you debugging the libpd code right there in Xcode; I've done that a few times. Other than that, there is the issue of debugging patches. You can either set up your patch with test versions of your objects directly in Pd, or you can set up libpd to show in the console the same output you would normally see in Pd's main window (you just need to make sure that you have something like this
[PdBase setDelegate:_dispatcher];
in your code; it's all in the docs, of course). Then you just pepper your patch with print messages as required...
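For context, a slightly fuller sketch of that setup: once a PdDispatcher is installed as the delegate, output from [print] objects in the patch shows up in the Xcode console just as it would in Pd's main window (the patch name below is an example):

    #import "PdBase.h"
    #import "PdDispatcher.h"

    // Sketch: route [print] output from the patch to the console.
    PdDispatcher *dispatcher = [[PdDispatcher alloc] init];
    [PdBase setDelegate:dispatcher];

    // Open a patch bundled with the app (name is illustrative).
    [PdBase openFile:@"test.pd" path:[[NSBundle mainBundle] resourcePath]];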
Hope it helps, and is still relevant after 3 months...!
As a final school-graduation project, I am trying to develop a kind of spy car: an iPhone is placed on a little LEGO car, and an iPad is used as a "steering wheel" for it. It is also planned to transmit audio and video from the iPhone's microphone/camera to the iPad (considerably more data than the steering data going the other way).
At first, the iOS-to-iOS connection should be established over a local WiFi network, and later, if possible, over 3G (by using the iOS devices' network IPs and a DNS server to deal with frequently changing addresses).
My question is: which technology do you recommend? I read about GameKit, peer-to-peer, and so on, but I think these technologies are too abstract to allow communicating over 3G later on. I guess I need to go a little deeper into the low levels of the communication process. Any suggestion that could bring me a step forward is highly appreciated! (Also regarding other parts of my project.)
One more thing: some users suggested using a third-party service and routing the sent (video) data through an external server. If possible, I'd rather not use any "middle man". It should just be basic server-client communication, where the iPad is the server and the iPhone the client.
It is kind of an open-ended question, but interesting.
First of all, GameKit does have 3G peer-to-peer support; see here:
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/GameKit_Guide/Matchmaking/Matchmaking.html
It will handle the peer-to-peer addressing and the establishment of the socket. It can also handle voice chat, but I have personally never tried that feature, so I can't say whether it is feasible in your case.
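For reference, the basic GameKit flow is matchmaking followed by sending small chunks of arbitrary data to the peers, which could carry the steering commands. This is only a sketch; the payload format is invented:

    #import <GameKit/GameKit.h>

    // Sketch: find a two-player match, then push a control payload to the peer.
    GKMatchRequest *request = [[GKMatchRequest alloc] init];
    request.minPlayers = 2;
    request.maxPlayers = 2;

    [[GKMatchmaker sharedMatchmaker] findMatchForRequest:request
                                   withCompletionHandler:^(GKMatch *match, NSError *error) {
        if (error) return;
        // Hypothetical steering command; keep payloads small and frequent.
        NSData *steer = [@"steer:left:0.4" dataUsingEncoding:NSUTF8StringEncoding];
        NSError *sendError = nil;
        [match sendDataToAllPlayers:steer
                       withDataMode:GKMatchSendDataUnreliable // acceptable loss for frequent control updates
                              error:&sendError];
    }];

Note that GameKit matchmaking requires the local player to be authenticated with Game Center first.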
One idea is to leverage existing video-chat services. This will give you a low-latency audio/video channel with peer-to-peer addressing (well, likely using a central server).
Apple's FaceTime is such a service, but there is no public API to it (AFAIK). Same goes for Skype and Google.
There are some paid services that look like they have nice iOS APIs:
http://tokbox.com/platform
http://docs.weemo.com/sdk/ios/
You have to figure out a way to transmit control commands to the peer iPhone; I did not check whether the services above offer any way of sending text messages or arbitrary data.
Tokbox has a free trial so you could try it out and see if it works for you.
I would go for GameKit if this is a hobby project on a budget and there is time for hacking, and probably look into a more high-level API if there is a deadline...
Sorry for writing this as an answer, but I don't have enough rep to comment...
I'm working on a similar project, and I currently don't have any advice regarding video streaming. However, from what I read (extensively), I came to the conclusion that I need to use a p2p connection between devices for better performance, and to use socket programming to achieve this (although it is not the easiest choice to implement).
I considered using GameKit, which I think will probably answer most of your needs, as Krumelur pointed out. But in my case the app will eventually be cross-platform, so I had to use low-level network programming. You can check out my question here to see the sources I used to try to make the connection between two peers; hopefully you'll have better luck than me...
I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where does the processing of this data take place?
In other words, I want to know whether the data is processed on the PC or on the brick.
Second, what are the exact roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched almost every website, but I can't find this anywhere.
Please help me. Thanks.
You can see it in the package structure.
lejos.nxt.*
This package contains classes running on the NXT brick. All code in this package will be compiled for the brick and will run on the brick.
lejos.pc.*
Here the difference is not as clear. This is Java code you compile for the personal computer, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) only send messages to the NXT brick, which then gives the commands to the motors.
lejos.pc.comm provides APIs that allow you to communicate with and control the NXT robot from the PC.
When importing the libs into an Android project, it allows you to build an instance of the same environment used on a PC, but within Android.
I agree it can be tough finding some things out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.NXJ.
Both provide APIs to do almost the same thing, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know that it is the one that lets you manipulate the robot much more effectively, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both Android's Bluetooth APIs and its own protocols to allow communication via Lego LCP commands.
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update it in a couple of days. It seems not many are interested though, which is a shame.)
I want to port a good OpenCV code to an embedded platform. Earlier, such things were very difficult to do, but now TI has come up with nice embedded platforms that are, as they say, comparatively hassle-free.
I want to know following things:
Given that :
The OpenCV code is already running smoothly on a PC. (Obviously.)
I need to determine these things before purchasing the device.
I can't put the code here on Stack Overflow. :P
The device to choose from Texas Instruments: C6000.
Questions:
How can I make sure that the porting can be done?
What steps should be taken to make sure that, after porting, the code will run (at least)?
How to determine whether the code might require some changes to make it run smoothly.
Point 3 above is optional.
I need info that will at least give me some starting point in this regard.
What I thought I should do?
I plan to list the built-in functions used.
Then I will look for available online benchmarks of those functions for the particular device, like those shown towards the end of this doc.
...
I need to know how to proceed further.
However, the C6-Integra™ DSP+ARM processor seems the best fit.
The best you can do is to try a device simulator (if one is available), but what you'll see there is far from perfect.
Actually, nothing can tell you how fast and how well the app will run on the embedded device before you run your specific app on that specific device.
So:
Step 1: Buy it.
Step 2: Try it.
Things to consider:
Embedded CPU architecture: does your app need a big cache? How big is the embedded cache?
Algorithm: do you use a lot of floating-point operations? How good is the device at floating-point ops?
Memory transfers: do you have many of them? The data bus on a PC is way faster than on an embedded device.
Hardware support: do you use a lot of double-precision calculations? They are emulated on ARMs, and they are going to kill your app (what takes milliseconds on a PC can take seconds on an ARM).
Acceleration: do your functions use SSE? (Many OpenCV functions are SSE-optimized, even if you don't realize it.) Is there a NEON counterpart? (OpenCV does not have much support for that.) The difference between x86 with SSE and an embedded chip without NEON can be orders of magnitude.
and many, many others.
So, again: no one can tell you how it will work. Only the combination of the specific app and the real device tells the truth.
Even a run on a similar device is not conclusive: an app can run smoothly on one processor, and on another with a similar frequency or listed memory it can slow down far too much.
This is an interesting question, but "run" is a very generic word in this context, so I feel the need to break it down into two other questions:
Will it compile in an embedded device?
Will it run as fast/smooth as in a PC?
I've used OpenCV on a lot of different devices, including ARM, SH4, and MIPS, and I found out that sometimes the manufacturer of the device itself provides a compiled version of OpenCV (to my surprise), which is great. That's something you can look into; maybe the manufacturer of your device provides OpenCV binaries.
There's no way to know for sure how smooth your OpenCV application will be on the target device unless you can find some benchmark of OpenCV running there. PCs have far better processing power than embedded devices, so you can expect lower performance from the target device.
There are third-party applications, like opencv-performance, that you can use to test/benchmark the environment once you get your hands on it. And if performance is such a big deal in this project, you might also be interested in this nice article, which explains some timing tests done on a couple of OpenCV features, comparing implementations using the C and C++ interfaces of OpenCV.