Can an iPhone detect a digital signal through the microphone?

I'm working on a project which needs the iPhone to detect a small set of voltage data (two different voltage values representing 0 and 1, respectively). My initial thought is to detect it through the microphone, but I'm not sure whether the iPhone would be able to detect these signals, since they carry no frequency information. I searched the iOS Developer resources and Google, but found nothing clear about this problem. Can anyone help me with this question? Thanks a lot!

As per our discussion, it seems you want to send a digital signal to the iPhone. There are two main ways to do this.
Either sign up for Apple's MFi hardware licensing program, which allows you to create hardware for the iPhone (MFi program). Or,
There is an easier way, but it requires the use of the headphone jack. For demonstration and testing purposes you can use the headphone jack, and if it's a simple on/off signal you can get good results overall without having to build a piece of hardware yourself. Here's a link to that: Grab sensor data and send it through the iPhone headphone jack
In fact, the headphone approach is not as bad as it may sound; you can receive a clean signal if needed, and it will suffice for your purposes. Have a look at what this guy is doing. I suggest you start with the video demonstration to get an overall idea of the approach.
UPDATE 1
Have a look at this link. People are using the headphone port to detect voltage. The reason this works is that the iPhone jack is a combined earpiece (output only) and microphone (input only) connector. The microphone input is a single wire, with a ground shared with the earpieces.

The usual problem when trying to use an audio signal as a digital signal input is that it is high-pass filtered to avoid offsets (long-term pressure changes that could destroy the dynamic range of the sound card's analog-to-digital converters). This means you will only get the changes of the digital signal (like "clicks"). Actually, the higher the frequency of your signal, the better off you will be, but it will then be non-trivial to process it well (this is what ordinary modems did very well).
If you can control the digital signal sent as sound (or to the microphone jack), make sure the signal is modulated with one or more tones, like a Morse transmitter or a modem. Essentially, you want to use your iPhone as an acoustic coupler. This is possible, but of course a bit computationally expensive considering the quite low bandwidth.
Start with some dirt-simple modulation, like 1200/1800 Hz tones, and take an FFT of the input. There's probably a reference implementation for any sound card out there to get started with.
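To illustrate the idea (this is a rough sketch, not tied to any particular iOS API), here is a Python/NumPy snippet that decides whether a short block of samples contains a 1200 Hz or an 1800 Hz tone by comparing FFT magnitudes. The sample rate, block size, band width, and threshold are assumptions you would tune for your hardware.

```python
import numpy as np

SAMPLE_RATE = 44100                # assumed capture rate (Hz)
BLOCK = 1024                       # samples per decision, ~23 ms at 44.1 kHz
FREQ_ZERO, FREQ_ONE = 1200, 1800   # tones representing bit 0 and bit 1

def detect_bit(samples, rate=SAMPLE_RATE):
    """Return 0 or 1 depending on which tone dominates, or None if neither is present."""
    window = np.hanning(len(samples))             # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    def energy_near(f, bw=100.0):                 # sum magnitude in a +/- bw band
        mask = (freqs > f - bw) & (freqs < f + bw)
        return spectrum[mask].sum()

    e0, e1 = energy_near(FREQ_ZERO), energy_near(FREQ_ONE)
    if max(e0, e1) < 5 * np.median(spectrum):     # crude "tone actually present" check
        return None
    return 0 if e0 > e1 else 1

# Quick self-test with a synthetic 1800 Hz tone plus noise.
t = np.arange(BLOCK) / SAMPLE_RATE
tone = np.sin(2 * np.pi * FREQ_ONE * t) + 0.1 * np.random.randn(BLOCK)
print(detect_bit(tone))   # expected: 1
```

The same comparison can be done with the Goertzel algorithm instead of a full FFT if you only care about two frequencies.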
There have been some cool viruses recently that were said to be able to jump air gaps; they used techniques similar to this one.
If you still really want a DC (or slowly changing) digital signal, check out a solution using a separate oscillator that is amplitude-modulated by the incoming signal.
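If you go that amplitude-modulation route, the decoding side is basically envelope detection: rectify the carrier and low-pass the result to recover the slow on/off signal. A rough NumPy sketch, where the carrier frequency, cutoff, and threshold are all illustrative assumptions:

```python
import numpy as np

RATE = 44100            # assumed sample rate
CARRIER = 5000          # assumed oscillator frequency (Hz)

def envelope(samples, rate=RATE, cutoff_hz=50.0):
    """Recover the slowly varying amplitude of an AM carrier."""
    rectified = np.abs(samples)                       # full-wave rectification
    win = max(1, int(rate / cutoff_hz))               # moving-average low-pass,
    kernel = np.ones(win) / win                       # window ~ one cutoff period
    return np.convolve(rectified, kernel, mode="same")

def to_bits(samples, threshold=0.25):
    """Threshold the envelope into a per-sample 0/1 stream."""
    return (envelope(samples) > threshold).astype(int)

# Self-test: carrier switched on for the second half of a 0.2 s buffer.
t = np.arange(int(0.2 * RATE)) / RATE
keying = (t > 0.1).astype(float)                      # the slow "DC" signal we want back
signal = keying * np.sin(2 * np.pi * CARRIER * t)
bits = to_bits(signal)
print(bits[:10], bits[-10:])                          # mostly 0s, then mostly 1s
```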

Related

Symbol Sync parameters

I'm trying to implement a flow graph in GNU Radio that is a basic OFDM transceiver to communicate between two USRP devices (Ettus N210). I'm using an Octoclock from National Instruments to provide the same clock source for both devices. My problem is that I don't want to use this Octoclock to synchronize the devices; instead, I would like to use the Symbol Sync block to achieve synchronization between the devices, but I'm not getting good results from that. Can anyone help me with the parameters, or has anyone ever worked with this block?
There are no Symbol Sync parameters that can solve this!
The largest part of your flow graph makes no sense:
Your USRP takes baseband signals and shifts them to the frequency you want. You shouldn't mix this yourself.
Your receive-side mixer (don't use that at all) is totally broken; you're missing the low-pass filter. Again, don't mix at all. That's your USRP's job.
If you want to only occupy an upper 1/4 of the bandwidth, simply select these carriers with the carrier map parameter. No need for resampling. Really no reason to use the arbitrary resampler for a rate of 4.
Your receive side first throws away half of your signal; don't do that if you want an analytic signal.
These are digital complex signals; if you really wanted to do the mixing (don't!), you could simply multiply by a single complex sine, or use the "rotator" block (see the sketch after this answer). Again, don't mix.
Your low-pass filter is totally wrong; your signal of interest has more than 240 kHz of bandwidth.
Your sync block hurts you – your OFDM receiver has built-in synchronization.
You're completely missing channel coding, which won't turn out in your favor even at really good SNR. That's strange, because you even have a decoder defined in your flow graph; you just don't use it.
Essentially:
everything after the OFDM Transmitter should be removed.
everything before the OFDM Receiver should be removed.
Add channel coding.
I'm a bit confused: this really looks like you took the working ofdm_loopback.grc example that comes with GNU Radio and reworked it until it didn't work anymore. Also, GNU Radio 3.8 has been out for a year – it might be worth updating your system.
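The sketch referenced above: mixing a complex baseband signal to or from a digital IF is just multiplication by a complex exponential, which is all the rotator block does. A small NumPy sketch with assumed sample rate and offset values (in a real flow graph you would simply set the USRP's center frequency instead):

```python
import numpy as np

SAMP_RATE = 1e6        # assumed sample rate (samples/s)
IF_OFFSET = 250e3      # assumed digital IF / frequency shift (Hz)

def rotate(samples, freq_hz, samp_rate=SAMP_RATE):
    """Shift a complex signal by freq_hz, i.e. multiply by exp(j*2*pi*f*n/fs)."""
    n = np.arange(len(samples))
    return samples * np.exp(2j * np.pi * freq_hz * n / samp_rate)

# Up-convert to the IF on the Tx side, down-convert on the Rx side.
baseband = np.exp(2j * np.pi * 10e3 * np.arange(1000) / SAMP_RATE)  # 10 kHz test tone
at_if = rotate(baseband, +IF_OFFSET)
back = rotate(at_if, -IF_OFFSET)
print(np.allclose(back, baseband))   # True: the two rotations cancel exactly
```

Because the signal is complex, the shift is a pure rotation: no image is produced and no low-pass filter is needed, which is exactly why the real-valued cos/sin mixer in the flow graph is both unnecessary and broken.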
Thanks for the answer.
The purpose of those sine and cosine blocks, on both the transmitter and the receiver sides, is to shift the signal to an intermediate frequency before applying the carrier frequency of the USRP sink/source. Basically, I need the signal at an intermediate frequency (on both the Tx and Rx sides) to do some tasks. In the attached file I try to give a better explanation of what I want to do. The variable O is the oversampling factor (O=4).
Tx_Rx_scheme
Best Regards

Is it possible to work with a connection range in BLE?

For one of our projects I'm looking for a way to only let centrals (native iOS or Android apps) connect with the peripheral when they are inside a defined connection range (distance around the peripheral).
I know that BLE is not designed for distance measuring, but I hoped there is a reliable way to distinguish between centrals within 2 m of the peripheral and centrals more than 3.5 m away. This means I do not need to measure the exact distance.
An important thing to mention is that our peripheral can be located in an "open field" situation, but also in situations where it is surrounded by walls or concrete, for example on the entry floor of a building or in a car park.
Another possible issue is that the central can be inside a car, but if this is the case, all centrals for the peripheral in question are inside a car. Of course they can be different cars.
Note that there is at most one peripheral at a time inside the connection range.
In our current version we developed a formula that uses the received RSSI strength to estimate the distance. Unfortunately we cannot get this working reliably enough. Maybe we need to use another formula or calibration method or whatever; we have really tried many things during the last 6 months.
The concrete question is:
Is it technically possible to achieve the target described above, and if yes, what is the way to achieve it? We are open to specific BLE antennas or specially designed casings for the BLE antenna or whatever is needed. It is also okay if we need to build a calibration application or specific hardware to calibrate our peripherals, for each peripheral, so we are really open-minded about any solution as long as it works reliably!
If more info is required to give an answer, please let me know what is missing and I will complete it.
Unfortunately, it is like you said: you cannot use Bluetooth or Bluetooth Low Energy for short distance/range measurements. Bluetooth was just not designed that way. You want an accurate, reliable measurement of a difference between 2 m and 3.5 m; this is way too small for BLE to be capable of. I know that this is not what you want to hear, but I have already tried this and wasted many months on it before.
The only thing I can recommend, if you really need to continue down this path, is that in order to get a more reliable outcome you will need many devices measuring the RSSI simultaneously, and those devices need to talk to each other to compute an average RSSI measurement. You may also want to look at adjusting the Tx power based on the average readings you get, i.e. the closer the device gets, the lower you set the Tx power of both the scanning and advertising devices. Finally, directional antennas can be used if you are planning to use non-Android, non-iOS devices for scanning, but this will be tricky if the only antenna you can change is that of the peripheral.
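For reference, the usual RSSI-to-distance formula is the log-distance path-loss model, combined with some smoothing of the raw readings. A hedged Python sketch; the calibration constant measured_power and the environment exponent n are assumptions you would have to calibrate per peripheral and per environment, which is exactly where this approach tends to break down:

```python
import math

def smooth_rssi(samples, alpha=0.1):
    """Exponential moving average over raw RSSI readings (dBm)."""
    avg = samples[0]
    for s in samples[1:]:
        avg = alpha * s + (1 - alpha) * avg
    return avg

def estimate_distance(rssi_dbm, measured_power=-59.0, n=2.0):
    """Log-distance path-loss model.

    measured_power: RSSI at 1 m (assumed calibration value)
    n: path-loss exponent, ~2 in free space, roughly 2.7-4 indoors
    """
    return 10 ** ((measured_power - rssi_dbm) / (10.0 * n))

readings = [-62, -65, -61, -70, -64, -63]        # example raw RSSI values
avg = smooth_rssi(readings)
print(f"avg RSSI {avg:.1f} dBm -> ~{estimate_distance(avg):.1f} m")
```

Note that with n = 2 the difference between 2 m and 3.5 m is only about 5 dB, and multipath in a car park or near concrete walls can easily swing the RSSI by more than that, which is why the answer above is pessimistic.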

How to connect multiple (>10) wireless sensors to an Arduino

I am working on a small learning project with Arduino in which there are multiple components like motors, a 7-segment display, a temperature sensor, an LCD display, buttons, etc., and all of these need to talk to a single iPhone or iPad.
My first thought is to buy multiple Flora Arduinos, each with one sensor. Then each independent unit can connect via Bluetooth. But I am not sure whether this is a good idea or not. How will they all connect to a single device?
The idea is roughly 10 Arduino devices sending signals to and receiving signals from one iPhone or iPad.
As Piglet and TomServo said, the motors, 7-segment display, temperature sensor, LCD display, and button are not all sensors; they can be considered peripherals!
Even though it's a vast topic and you will need a lot of work to accomplish this, it's not hard if you have embedded C programming experience.
You should look into how microcontrollers work and how to start programming them; the Arduino site should be a good starting point!
https://www.arduino.cc/en/Guide/HomePage
https://www.quora.com/Which-are-the-Best-books-on-learning-to-program-microcontroller
Once you have set up your environment and written a simple program like toggling an LED, you should start looking at communication protocols.
The devices/peripherals communicate with the master/host microcontroller (Arduino) through protocols such as SPI, UART, I2C, etc. You will need to decide on ICs; for example, the HDSP-5503 is a seven-segment display. You can find more at
http://www.mouser.com/ProductDetail/Broadcom-Avago/HDSP-5503/?qs=sGAEpiMZZMsx4%2fFVpd5sGZOLZP3%252bEaxq
You cannot just buy an Arduino with a ton of sensors and connect it to an iPhone; the iPhone will require a custom app of yours to support your board. Check out this one:
https://itunes.apple.com/us/app/hm10-bluetooth-serial-lite/id1030454675?mt=8
Hopefully this will help. Good luck!

iOS-Arduino communication: any cheap solution?

I'd like to have an iPhone and an Arduino-based device talk to each other. Here are the requirements:
I want to fully rely on iPhone's built-in components without any peripherals (for example, HiJack).
The less configuration needed before the two can communicate, the better. This means a Wi-Fi-based solution is not desirable, because we'd need to set up Wi-Fi credentials on the Arduino beforehand.
Bitrate is not important. Only a few bytes are exchanged.
As cheap as possible.
I see that Bluetooth 4.0 LE (for example, Stack Overflow question iPhone - Any examples of communicating with an Arduino board using Bluetooth?) meets my requirements, but are there any cheaper solutions?
One thing that came to mind is sound (the way Chirp used to share data between two iOS devices), but I don't know if it is feasible on an Arduino and, if it is, how much it would cost. Any other solutions?
I can think of a few options:
Bluetooth: you can get a cheap module from eBay for about $10.
Wi-Fi using Electric Imp (costs around $30), which is very easy to set up using the brilliant BlinkUp technique. See the project ElectricImp, control central heating via iPhone for an example.
Chirp is a brilliant idea as well. From a hardware perspective it is feasible to do on an Arduino; you just need a mic circuit ($8) and a speaker.
However, the real challenge is the software side, i.e., the algorithm that you will use to encode data as sound and vice versa. If such an algorithm requires intensive calculation, you might not be able to do it on an Arduino, and you could consider using an ARM-based microcontroller instead.
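To give a feel for the decoding cost: detecting a single tone does not need a full FFT. The Goertzel algorithm detects one frequency with a handful of multiply-adds per sample, which is the kind of load a small microcontroller can usually handle. Here is a Python sketch of the algorithm as a reference; the tone frequency, sample rate, and block size are illustrative assumptions:

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Power of `samples` at `target_hz` using the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)        # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                             # one multiply-add pair per sample
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 from the two final state variables
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Self-test: a 1200 Hz tone sampled at 8 kHz scores far higher at 1200 Hz than at 1800 Hz.
RATE, N, TONE = 8000, 205, 1200
samples = [math.sin(2 * math.pi * TONE * i / RATE) for i in range(N)]
print(goertzel_power(samples, 1200, RATE) > 10 * goertzel_power(samples, 1800, RATE))  # True
```

Running one Goertzel filter per tone of a simple FSK scheme keeps the per-sample cost low enough that even an 8-bit Arduino can often keep up at modest sample rates.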

Shazam for voice recognition on iPhone

I am trying to build an app that allows the user to record individual people speaking, then save the recordings on the device and tag each recording with the name of the person who spoke. Then there is a detection mode, in which I record someone and can tell what their name is if they are in the local database.
First of all - is this possible at all? I am very new to iOS development and not so familiar with the available APIs.
More importantly, which API should I use (ideally free) to correlate the incoming voice with the recordings I have in the local database? This should behave something like Shazam, but much simpler, since the database I am matching against is much smaller.
If you're new to iOS development, I'd start with the core app to record the audio and let people manually choose a profile/name to attach it to and worry about the speaker recognition part later.
You obviously have two options for the recognition side of things: You can either tie in someone else's speech authentication/speaker recognition library (which will probably be in C or C++), or you can try to write your own.
How many people are going to use your app? You might be able to create something basic yourself: if it's the difference between a man and a woman, you could probably figure that out by doing an FFT spectral analysis of the audio and finding where the frequency peaks are. Obviously the frequencies used to enunciate different phonemes are going to vary somewhat, so solving the general case for two people who sound fairly similar is probably hard. You'll need to train the system with a bunch of speech and build some kind of model of frequency distributions. You could try to do clustering or something, but you're going to run into a fair bit of maths fairly quickly (Gaussian mixture models, et al.). There are libraries/projects that'll do this. You might be able to port this from MATLAB, for example: https://github.com/codyaray/speaker-recognition
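As a toy illustration of that "man vs woman" idea (nothing like real speaker recognition), here is a sketch that estimates the dominant fundamental frequency of a voiced frame with an FFT and compares it against a crude pitch threshold; the sample rate, frame size, and the 165 Hz boundary are all assumptions:

```python
import numpy as np

RATE = 16000                 # assumed sample rate of the recording
PITCH_SPLIT_HZ = 165.0       # crude low/high pitch boundary (assumption)

def dominant_pitch(frame, rate=RATE, fmin=60.0, fmax=400.0):
    """Return the strongest spectral peak within the typical speech pitch range."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

def crude_guess(frame):
    pitch = dominant_pitch(frame)
    label = "lower-pitched voice" if pitch < PITCH_SPLIT_HZ else "higher-pitched voice"
    return label, pitch

# Self-test with a synthetic 120 Hz "voiced" frame plus a weaker harmonic.
t = np.arange(4096) / RATE
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
print(crude_guess(frame))    # ('lower-pitched voice', ~120 Hz)
```

Anything beyond this two-class toy (multiple speakers, similar voices) is where the Gaussian mixture model machinery mentioned above becomes unavoidable.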
If you want to take something off-the-shelf, I'd go with a straight C library like mistral, as it should be relatively easy to call into from Objective-C.
The SpeakHere sample code should get you started for audio recording and playback.
Also, it may well take longer for the user to train your app to recognise them than the time it saves over just picking their name from a list. Unless you're intending their voice to be some kind of security passport, it might just not be worth bothering with.
