I am working on a line-following robot that uses computer vision. The image processing is done on an iPhone with OpenCV. For controlling the robot I have two ideas:
Generate sounds of given frequencies (e.g. the higher the frequency, the more the robot should move to the left; the lower the frequency, the more it should move to the right). I have already done this successfully on an Android device. I found this code: Produce sounds of different frequencies in Swift; however, I do not understand how I can play a sound indefinitely until a new frequency is given. Is this possible with this code, and if so, how?
If it were possible (which I don't know) to precisely control the output waveform in stereo (one channel for the left motor, one channel for the right motor), I could in theory drive the motor driver directly from those 'sound' waves. Would this be feasible, or is it too complicated?
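A minimal sketch of what such an endless, re-tunable tone generator could look like, built on AVAudioSourceNode (iOS 13+; this is not the code from the linked answer, and all names here are illustrative). The engine keeps calling the render block for as long as it runs, so the tone plays indefinitely and follows whatever frequencies are currently set:

```swift
import AVFoundation

// Sketch only: an endless stereo tone generator. The unsynchronized
// `frequencies` reads on the audio thread are a simplification; real code
// should use atomics.
final class ToneGenerator {
    private let engine = AVAudioEngine()
    private let sampleRate = 44_100.0
    private var phase = [0.0, 0.0]              // one phase accumulator per channel

    /// Channel 0 = left motor, channel 1 = right motor.
    var frequencies = [440.0, 440.0]

    func start() throws {
        // The standard format is non-interleaved Float32: one buffer per channel.
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 2)!
        let source = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for (channel, buffer) in buffers.enumerated() {
                let samples = buffer.mData!.assumingMemoryBound(to: Float.self)
                let step = 2.0 * Double.pi * self.frequencies[channel] / self.sampleRate
                for frame in 0..<Int(frameCount) {
                    samples[frame] = Float(sin(self.phase[channel]))
                    self.phase[channel] += step
                    if self.phase[channel] > 2.0 * Double.pi { self.phase[channel] -= 2.0 * Double.pi }
                }
            }
            return noErr   // the engine keeps calling this block, so the tone never stops
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)
        try engine.start()
    }
}
```

Since the standard format is non-interleaved, each channel gets its own buffer, so the left and right waveforms for the second idea can be generated independently; whether a motor driver can be fed directly from a headphone-level signal is a separate hardware question (it will likely need amplification).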
Note that I would like to avoid using wireless communication such as a Bluetooth or Wi-Fi module, since the environment in which the robot will be used will have lots of potential interference.
Thank you in advance!
What about infrared communication? Makes more sense than sound.
I would like to construct a platform for my robot that can rotate a full 360 degrees.
I have a stepper motor that can do this.
The problem is that I have some sensors and controllers on that platform, so I need to power them and read data from them.
That means I need wires going from the robot's non-rotating lower level up to the platform.
Any thoughts on how I could achieve something like this?
For your application, you will need something to transmit electrical signals and power to your rotating platform. Thankfully, there is a device called a slip ring that does exactly this.
From Wikipedia:
A slip ring is an electromechanical device that allows the transmission of power and electrical signals from a stationary to a rotating structure. A slip ring can be used in any electromechanical system that requires rotation while transmitting power or signals. It can improve mechanical performance, simplify system operation and eliminate damage-prone wires dangling from movable joints.
You can find them at your favorite electronics vendor, but here is an example from Adafruit, distributed by Digikey.
I want to measure the distance between two or more moving iPhones using ultrasound or some other sound-based method.
How would this be possible?
Please help me.
Regards
The iPhone does not provide any ultrasound capabilities as part of its specs. You would need the iPhone speaker to produce certain types of waves and also a transducer to pick up the sound waves when they bounce back.
Ultrasound pictures are made from sound waves which are too high pitched to be heard by the human ear. The sound waves travel through your skin and are focused on a certain part of your body by a scanning device called a “transducer.” It picks up the sound waves as they bounce back from organs inside the body.
You can get more details from the iPhone specs, and more information about ultrasound here.
Maybe in the future we'll be able to do this with our iPhones?
I am working on my iOS app and I need to detect the sound level within a certain frequency range. Here is a good tutorial for detecting sound level, but how can I do that for a specific frequency range with the iOS SDK?
You need to capture audio from the microphone (AVAudioEngine is a good API for doing that), calculate its Fourier Transform (the Accelerate framework will do that with blazing speed) and examine the amplitude of the frequency bucket corresponding to your frequency. If it's large then you've got a match.
A possibly simpler and more efficient technique would be a Goertzel filter, which is good at detecting a single given frequency.
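For illustration, here is a small self-contained sketch of the Goertzel idea (function and parameter names are my own); feed it buffers captured from an input tap on AVAudioEngine's inputNode and compare the returned power against a threshold:

```swift
import Foundation

// Goertzel filter sketch: returns the power at one target frequency
// for a block of audio samples.
func goertzelPower(samples: [Float], sampleRate: Float, targetHz: Float) -> Float {
    let omega = 2 * Float.pi * targetHz / sampleRate
    let coeff = 2 * cos(omega)
    var s1: Float = 0, s2: Float = 0
    for x in samples {                    // second-order recurrence over the block
        let s0 = x + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    // Magnitude squared of the DFT bin nearest targetHz
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}
```

Comparing that power against a running noise estimate, rather than a fixed threshold, tends to make the detector more robust.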
Without knowing the exact use case, I can imagine that code similar to what's used in a musical instrument tuner app would work. A quick search found this guitar tuning app, which uses fast Fourier transforms.
Note that FFTs are implemented in the Accelerate framework, and in this app they appear to be imported from some other library.
I am trying to figure out how knocktounlock.com is able to detect "knocks" on the iPhone. I am sure they use the accelerometer to achieve this; however, all my attempts produce false positives (if the user moves, jumps, etc., it sometimes fires).
Basically, I want to be able to detect when a user knocks/taps/smacks their phone (and be able to distinguish that from other things that may also cause a rise in the accelerometer readings). So I am looking for sharp, high peaks. The device will be in a pocket, so it will not be moving much.
I have tried things like high-pass/low-pass filters (not sure if there would be a better option).
This is a duplicate of: Detect hard taps anywhere on iPhone through accelerometer. But it has not received any answers.
Any help/suggestions would be awesome! Thanks.
EDIT: Looking for more thoughts before I accept the answer below. I did hear back from Knocktounlock, and they use the fourth derivative (jounce) to get better values to analyse, which is interesting.
I would consider a knock on the iPhone to be exactly the same as bumping two phones against each other. Check out this GitHub repo:
https://github.com/joejcon1/iOS-Accelerometer-visualiser
Build and run the app on an iPhone and check out the spikes on the green line. You can see the value of each spike clearly.
Knocking the iPhone:
As you can see, the actual spike is very short when you knock the phone. The spike patterns differ slightly between a hard knock and a soft knock, but both can be distinguished programmatically.
Now let's see the accelerometer pattern when the iPhone moves freely in space:
As you can see, the spikes are bell-shaped, which means it takes a little while for the value to return to 0.
Using these patterns, it will be easier to detect a knock. Good luck.
Also, this will drain your battery, as the sensor will always be running and the iPhone needs to maintain a persistent Bluetooth connection with the Mac.
P.S.: Also check this answer, https://stackoverflow.com/a/7580185/753603
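As a rough illustration of the spike-width idea above, a CoreMotion loop could look like the following; the 1.5 g threshold and the 1...3 sample window are invented values to tune against your own plots:

```swift
import CoreMotion

// Sketch: a knock exceeds the threshold for only a sample or two,
// while body movement stays above it much longer.
let motion = CMMotionManager()
var samplesAboveThreshold = 0

func startKnockDetection() {
    motion.deviceMotionUpdateInterval = 1.0 / 100.0        // 100 Hz
    motion.startDeviceMotionUpdates(to: .main) { data, _ in
        guard let a = data?.userAcceleration else { return }
        let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
        if magnitude > 1.5 {
            samplesAboveThreshold += 1                     // still inside a spike
        } else {
            // A narrow spike (1-3 samples at 100 Hz) looks like a knock;
            // a wide, bell-shaped one looks like ordinary movement.
            if (1...3).contains(samplesAboveThreshold) { print("knock") }
            samplesAboveThreshold = 0
        }
    }
}
```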
I think the way to go here is using pattern recognition with accelerometer data.
You could (write and) train a classifier (e.g. k-nearest neighbors) with data you have gathered and classified by hand. Neural networks are also an option. There will be many different ways to solve this problem, but probably no completely straightforward one.
Some papers show pattern recognition approaches to similar topics (activity and movement recognition), e.g.:
http://www.math.unipd.it/~cpalazzi/papers/Palazzi-Accelerometer.pdf
(some more, but I am not allowed to post them with my reputation count. You can search for "pattern recognition accelerometer data")
There is also a master thesis about gesture recognition on the iPhone:
http://klingmann.ch/msc_thesis_marco_klingmann_iphone_gestures.pdf
In general you won't achieve 100% correct classification. Depending on the time and knowledge you have, the results will vary between good-and-usable and we-could-have-used-random-classification-instead.
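As a sketch of what the k-nearest-neighbors option might look like (all names, and the choice of raw magnitude windows as features, are mine, just to make the idea concrete):

```swift
// Classify a short window of accelerometer magnitudes against
// hand-labelled example windows.
struct LabelledWindow {
    let features: [Double]   // e.g. 32 consecutive accelerometer magnitudes
    let label: String        // "knock" or "other", labelled by hand
}

func classify(_ window: [Double], examples: [LabelledWindow], k: Int = 3) -> String? {
    // Squared Euclidean distance between two equal-length windows
    func distance(_ a: [Double], _ b: [Double]) -> Double {
        zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    }
    let nearest = examples
        .sorted { distance($0.features, window) < distance($1.features, window) }
        .prefix(k)
    // Majority vote among the k nearest hand-labelled examples
    let votes = Dictionary(grouping: nearest, by: { $0.label })
    return votes.max { $0.value.count < $1.value.count }?.key
}
```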
Just a thought, but it could be useful to add the microphone output to the mix, listening for very short, loud noises at the same time a possible "knock" movement is detected.
I am surprised that the 4th derivative is needed; intuitively, the 3rd ("jerk", the derivative of acceleration) feels like it should be enough. It is a big hint about what to keep an eye on, though.
It seems quite simple to me: collect accelerometer data at a high rate, plot it on a chart, and observe. Calculate the first derivative from that, plot and observe. Then rinse and repeat with the derivative of the last one, and draw conclusions. I highly doubt you will need pattern recognition per se (clustering, classifiers, what have you); I think you will see a very distinct peak on one of your charts and may only need to tune the collection rate and smoothing.
What is more interesting to me is how you don't have to be running the KnockToUnlock app for this to work. And if it runs in the background, who lets it run there for an unlimited time? I don't think the accelerometer qualifies an app for unlimited background execution. After some pondering, I am guessing the reason is that the app uses Bluetooth to connect to the Mac as an accessory, and as such gets a pass from iOS to run in the background (and drain your battery, shhht).
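For the plot-and-differentiate workflow, the derivatives are just repeated finite differences over the sampled magnitudes; a minimal sketch (names are mine):

```swift
// Successive finite differences of the acceleration magnitude give jerk
// (3rd derivative of position) and jounce (4th).
func derivative(_ signal: [Double], dt: Double) -> [Double] {
    guard signal.count > 1 else { return [] }
    return (1..<signal.count).map { (signal[$0] - signal[$0 - 1]) / dt }
}

let dt = 1.0 / 100.0                         // sampling interval at 100 Hz
let acceleration: [Double] = []              // fill with magnitudes from CoreMotion
let jerk   = derivative(acceleration, dt: dt)    // 3rd derivative of position
let jounce = derivative(jerk, dt: dt)            // 4th, what Knocktounlock reportedly uses
```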
To solve this problem you need to select the sampling frequency. A tap (knock) has very high-frequency content, so you should choose an accelerometer sampling rate of no less than 50 Hz (perhaps even 100 Hz) for quality tap detection amid noise from other movements.
Using a classifier is necessary, but to save battery you should not run it very often. You should write a simple algorithm that finds only taps and situations similar to knocks, and reports when your program needs to call the classifier.
Note the gyroscope signal: it also responds to knocks. Moreover, the gyroscope signal does not need to be separated from a constant component (unlike the accelerometer's gravity offset), and it contains less noise.
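Putting this advice together (100 Hz sampling, the gyro folded in, a cheap gate on every sample, the expensive classifier only on candidates), a sketch might look like this; every threshold and name is illustrative:

```swift
import CoreMotion

// Sketch: cheap per-sample gate in front of a separately trained classifier.
let manager = CMMotionManager()
var window: [Double] = []

func classifyWindow(_ w: [Double]) -> Bool { false }   // placeholder for a trained classifier

func startDetection() {
    manager.deviceMotionUpdateInterval = 1.0 / 100.0   // no lower than 50 Hz, as suggested
    manager.startDeviceMotionUpdates(to: .main) { data, _ in
        guard let d = data else { return }
        let a = d.userAcceleration, g = d.rotationRate // the gyro also responds to knocks
        let sample = sqrt(a.x*a.x + a.y*a.y + a.z*a.z) + sqrt(g.x*g.x + g.y*g.y + g.z*g.z)
        window.append(sample)
        if window.count > 64 { window.removeFirst() }
        if sample > 2.0 {                              // cheap gate; hypothetical threshold
            _ = classifyWindow(window)                 // only now pay for the classifier
        }
    }
}
```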
Here is a good video about the basics of working with smartphone sensors: http://talkminer.com/viewtalk.jsp?videoid=C7JQ7Rpwn2k#.UaTTYUC-2Sp
I am designing an information kiosk which incorporates a mobile phone hidden inside the kiosk.
I wonder whether it would be possible to use the VGA camera of the phone as a sensor to detect when somebody is standing in front of the kiosk.
Which software components (e.g. Java, APIs, a Bluetooth stack, etc.) would be required for code that uses the VGA camera for movement detection?
The obvious choice is face detection, but you would have to calibrate it to ensure that the detected face is close enough to the kiosk, perhaps by using the relative size of the face in the picture. This could be done with the widely used OpenCV library. However, since the kiosk would be deployed in places where you have little control over the lighting, there is a good chance of false positives and negatives. You may also want to consider a proximity sensor in combination with face detection.
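If the hidden phone happened to be an iPhone, the relative-face-size idea could be sketched with Apple's Vision framework standing in for the OpenCV detector mentioned above; the 0.25 cutoff is an invented value to calibrate on site:

```swift
import Vision

// Sketch: a tall face bounding box means a close face.
func personIsClose(in pixelBuffer: CVPixelBuffer, completion: @escaping (Bool) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        // boundingBox is normalised to 0...1
        completion(faces.contains { $0.boundingBox.height > 0.25 })
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```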
The options vary depending on what platform the information kiosk is using... But assuming there is Linux somewhere underneath, you should take a look at the OpenCV library. And in case it is of any use, here is a link to my fun experiment with a 'nod-controlled interface' for reading long web pages.
And speaking of false positives, or even worse, false negatives: in bad lighting or at an unusual angle the chances are pretty high. So you would need to complement the face detection with some fallback mechanism, like an on-screen 'press here to start' button that is shown by default, and then use an inactivity timeout alongside the face detection so that you do not rely on just one input vector.
Another idea (depending on the lighting conditions) might be to measure the overall amount of light in the picture: natural light should produce only slow changes, while a person walking up to the kiosk would cause a rapid change.
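A sketch of that brightness heuristic (assuming an 8-bit grayscale frame; the smoothing factor and the jump threshold of 20 grey levels are invented values):

```swift
import CoreVideo

// Track the mean brightness of each frame and flag a rapid change.
var smoothedBrightness = -1.0

func rapidLightChange(in buffer: CVPixelBuffer) -> Bool {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    let bytes = CVPixelBufferGetBaseAddress(buffer)!.assumingMemoryBound(to: UInt8.self)
    let count = CVPixelBufferGetDataSize(buffer)
    var sum = 0.0
    for i in stride(from: 0, to: count, by: 16) { sum += Double(bytes[i]) }  // sparse sample
    let mean = sum / Double(count / 16)
    // Daylight drifts slowly; a person stepping up changes the mean abruptly.
    let jumped = smoothedBrightness >= 0 && abs(mean - smoothedBrightness) > 20
    smoothedBrightness = smoothedBrightness < 0 ? mean : 0.95 * smoothedBrightness + 0.05 * mean
    return jumped
}
```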
In J2ME (Java for mobile phones), you can use the MMAPI (Mobile Media API) to capture images from the camera.
Most phones support this.
@Andrew's suggestion of OpenCV is good; there are a lot of motion detection projects. But I would suggest adding a cheap CMOS camera rather than using the mobile phone's camera.