I am trying to learn how to get some sensors plugged into an Arduino board to talk to an iPhone over Bluetooth LE using a Red Bear Labs BLE Mini board, but I have hit a brick wall.
The sensors take a reading, and this is sent to the phone over BLE. So far I've connected to the device and I get back what appears to be data, but I can't make sense of it.
To simulate the sensor data, I've written a little sketch that looks like this:
#include <SoftwareSerial.h>

SoftwareSerial bluetooth(5, 6); // RX, TX wired to the BLE Mini

void setup() {
  bluetooth.begin(57600);
}

void loop() {
  //int reading = analogRead(2);
  int reading = 123; // fake reading

  byte lowerByte = (byte) (reading & 0xFF);
  byte upperByte = (byte) ((reading >> 8) & 0xFF);

  bluetooth.write(reading);   // write() sends a single byte, so this sends only the low byte
  bluetooth.write(upperByte); // high byte of the reading
  bluetooth.write(lowerByte); // low byte of the reading
  delay(1000);
}
On the iOS side I issue a read, and the data is received by a piece of code that looks something like this:
- (void)peripheral:(CBPeripheral *)peripheral
didUpdateValueForCharacteristic:(CBCharacteristic *)characteristic
             error:(NSError *)error
{
    Byte data[20];
    static unsigned char buf[512];
    static int len = 0;
    NSInteger data_len;

    if (!error && [characteristic.UUID isEqual:[CBUUID UUIDWithString:@RBL_CHAR_TX_UUID]]) {
        data_len = characteristic.value.length;
        [characteristic.value getBytes:data length:data_len];

        if (data_len == 20) {
            memcpy(&buf[len], data, 20);
            len += data_len;

            if (len >= 64) {
                [[self delegate] bleDidReceiveData:buf length:len];
                len = 0;
            }
        } else if (data_len < 20) {
            memcpy(&buf[len], data, data_len);
            len += data_len;

            [[self delegate] bleDidReceiveData:buf length:len];
            len = 0;
        }
    }
}
...
}
But when I look at the data that comes back, it just makes no sense to me at all (I'll dig out an example as soon as I can).
Does anyone know a simple step I'm missing, or a good example I could look at to better understand this?
I finally realised that the data was correct; I just had to 'pull out' the values by bit-shifting:
UInt16 value;
UInt16 pin;

for (int i = 0; i < length; i += 3) {
    pin = data[i];
    value = data[i+2] | (data[i+1] << 8);

    NSLog(@"Pin: %d", pin);
    NSLog(@"Value: %d", value);
}
Related
I recently started getting a libyuv crash.
I have tried a lot of things, with no luck.
Please help, or suggest some ideas for how to track this down. Thanks!
I have an iOS project (Objective-C). One of its functions encodes the video stream.
My idea is:
Step 1: Start a timer (20 FPS; see the sketch after this list)
Step 2: Copy and get the bitmap data
Step 3: Convert the bitmap data to YUV I420 (libyuv)
Step 4: Encode to the H.264 format (OpenH264)
Step 5: Send the H.264 data with RTSP
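For reference, this is roughly what the step-1 timer could look like as a GCD dispatch source. This is only a sketch: copyCurrentBitmapData is a hypothetical step-2 helper, and processSDLFrame: is the method quoted further below.

// A 20 FPS GCD timer driving the frame-processing method.
dispatch_queue_t queue = dispatch_queue_create("encoder.queue", DISPATCH_QUEUE_SERIAL);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

// 1/20 s interval with a small leeway
dispatch_source_set_timer(timer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          NSEC_PER_SEC / 20,
                          NSEC_PER_SEC / 100);

dispatch_source_set_event_handler(timer, ^{
    NSData *frame = [self copyCurrentBitmapData]; // hypothetical step-2 helper
    [self processSDLFrame:frame];                 // steps 3-5 happen inside
});
dispatch_resume(timer);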
All of these functions run in the foreground.
It works well for 3-4 hours, but it always crashes after 4+ hours.
The CPU (39%) and memory (140 MB) are stable (no memory leak, no CPU spikes, etc.).
I have tried a lot of things with no luck (including adding @try/@catch to my project and checking the data size before that line runs).
I found that it runs longer if I decrease the frame rate (20 FPS -> 15 FPS).
Do I need to add something after encoding each frame?
Could someone help me or give me some ideas about this? Thanks!
// This function runs in a GCD timer
- (void)processSDLFrame:(NSData *)_frameData {
    if (mH264EncoderPtr == NULL) {
        [self initEncoder];
        return;
    }

    int argbSize = mMapWidth * mMapHeight * 4;
    NSData *frameData = [[NSData alloc] initWithData:_frameData];

    if ([frameData length] == 0 || [frameData length] != argbSize) {
        NSLog(@"Incorrect frame with size : %ld\n", [frameData length]);
        return;
    }

    SFrameBSInfo info;
    memset(&info, 0, sizeof (SFrameBSInfo));

    SSourcePicture pic;
    memset(&pic, 0, sizeof (SSourcePicture));
    pic.iPicWidth = mMapWidth;
    pic.iPicHeight = mMapHeight;
    pic.uiTimeStamp = [[NSDate date] timeIntervalSince1970];

    @try {
        libyuv::ConvertToI420(
            static_cast<const uint8 *>([frameData bytes]), // sample
            argbSize,             // sample_size
            mDstY,                // dst_y
            mStrideY,             // dst_stride_y
            mDstU,                // dst_u
            mStrideU,             // dst_stride_u
            mDstV,                // dst_v
            mStrideV,             // dst_stride_v
            0,                    // crop_x
            0,                    // crop_y
            mMapWidth,            // src_width
            mMapHeight,           // src_height
            mMapWidth,            // crop_width
            mMapHeight,           // crop_height
            libyuv::kRotateNone,  // rotation
            libyuv::FOURCC_ARGB); // fourcc
    } @catch (NSException *exception) {
        NSLog(@"libyuv::ConvertToI420 - exception:%@", exception.reason);
        return;
    }

    pic.iColorFormat = videoFormatI420;
    pic.iStride[0] = mStrideY;
    pic.iStride[1] = mStrideU;
    pic.iStride[2] = mStrideV;

    pic.pData[0] = mDstY;
    pic.pData[1] = mDstU;
    pic.pData[2] = mDstV;

    if (mH264EncoderPtr == NULL) {
        NSLog(@"OpenH264Manager - encoder not initialized");
        return;
    }

    int rv = -1;
    @try {
        rv = mH264EncoderPtr->EncodeFrame(&pic, &info);
    } @catch (NSException *exception) {
        NSLog(@"NSException caught - mH264EncoderPtr->EncodeFrame");
        NSLog(@"Name: %@", exception.name);
        NSLog(@"Reason: %@", exception.reason);
        [self deinitEncoder];
        return;
    }

    if (rv != cmResultSuccess) {
        NSLog(@"OpenH264Manager - encode failed : %d", rv);
        [self deinitEncoder];
        return;
    }

    if (info.eFrameType == videoFrameTypeSkip) {
        NSLog(@"OpenH264Manager - drop skipped frame");
        return;
    }

    // handle buffer data
    int size = 0;
    int layerSize[MAX_LAYER_NUM_OF_FRAME] = { 0 };

    for (int layer = 0; layer < info.iLayerNum; layer++) {
        for (int i = 0; i < info.sLayerInfo[layer].iNalCount; i++) {
            layerSize[layer] += info.sLayerInfo[layer].pNalLengthInByte[i];
        }
        size += layerSize[layer];
    }

    uint8 *output = (uint8 *)malloc(size);
    size = 0;

    for (int layer = 0; layer < info.iLayerNum; layer++) {
        memcpy(output + size, info.sLayerInfo[layer].pBsBuf, layerSize[layer]);
        size += layerSize[layer];
    }

    // alloc new buffer for streaming
    NSData *newData = [NSData dataWithBytes:output length:size];

    // Send the data with RTSP
    sendData(newData);

    // free output buffer data
    free(output);
}
[Jan/08/2020 Update]
I reported this issue on the Google issue tracker:
https://bugs.chromium.org/p/libyuv/issues/detail?id=853
A Googler gave me the following feedback:
ARGBToI420 does no allocations. It's similar to a memcpy, with a source, a destination and a number of pixels to convert.
The most common issues with it are:
1. The destination buffer has been deallocated. Try adding validation that the YUV buffer is valid: write to the first and last byte of each plane. This often happens on shutdown, when threads don't shut down in the order you were hoping. A mutex to guard the memory could help.
2. The destination is an odd size and the allocator did not allocate enough memory. When allocating the UV planes, use (width + 1) / 2 for the width/stride and (height + 1) / 2 for the height, and allocate stride * height bytes. You could also use an allocator that verifies there are no over-reads or over-writes, or a sanitizer such as ASan/MSan.
When screen casting, windows are usually a multiple of 2 pixels on Windows and Linux, but I have seen macOS use odd pixel counts.
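To illustrate the sizing rule in point 2, here is a minimal plain-C sketch (hypothetical names, not code from the project above) that allocates I420 planes with rounded-up chroma dimensions and touches the first and last byte of each plane:

#include <stdint.h>
#include <stdlib.h>

// Hypothetical I420 buffer: the U and V planes use (width + 1) / 2 and
// (height + 1) / 2 so that odd-sized frames still get enough memory.
typedef struct {
    uint8_t *y, *u, *v;
    int strideY, strideUV;
} I420Buffer;

static int i420_alloc(I420Buffer *buf, int width, int height) {
    int chromaWidth  = (width + 1) / 2;
    int chromaHeight = (height + 1) / 2;

    buf->strideY  = width;
    buf->strideUV = chromaWidth;

    buf->y = malloc((size_t)width * height);
    buf->u = malloc((size_t)chromaWidth * chromaHeight);
    buf->v = malloc((size_t)chromaWidth * chromaHeight);
    if (!buf->y || !buf->u || !buf->v) return -1;

    // Touch the first and last byte of each plane, as suggested above,
    // to catch a bad allocation early.
    buf->y[0] = buf->y[(size_t)width * height - 1] = 0;
    buf->u[0] = buf->u[(size_t)chromaWidth * chromaHeight - 1] = 0;
    buf->v[0] = buf->v[(size_t)chromaWidth * chromaHeight - 1] = 0;
    return 0;
}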
As a test, you could wrap the function with temporary buffers: copy the ARGB into a temporary ARGB buffer, call ARGBToI420 into a temporary I420 buffer, then copy the I420 result into the final I420 buffer.
That should give you a clue as to which buffer/function is failing.
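A rough sketch of that diagnostic wrapper (hypothetical function name; it assumes libyuv's C API ARGBToI420 is available, and it allocates and frees its own temporaries on every call, which is fine for a test):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "libyuv.h"  // assumption: libyuv headers are on the include path

// Stage the conversion through temporary buffers so a crash points at either
// the source copy, the conversion itself, or the final destination copy.
static int ConvertToI420ViaTempBuffers(const uint8_t *argb, int width, int height,
                                       uint8_t *dstY, int strideY,
                                       uint8_t *dstU, int strideU,
                                       uint8_t *dstV, int strideV) {
    size_t argbSize = (size_t)width * height * 4;
    int cw = (width + 1) / 2;
    int ch = (height + 1) / 2;

    uint8_t *tmpArgb = malloc(argbSize);
    uint8_t *tmpY = malloc((size_t)width * height);
    uint8_t *tmpU = malloc((size_t)cw * ch);
    uint8_t *tmpV = malloc((size_t)cw * ch);
    int ret = -1;

    if (tmpArgb && tmpY && tmpU && tmpV) {
        // 1. Copy the source ARGB into a private buffer.
        memcpy(tmpArgb, argb, argbSize);

        // 2. Convert into the temporary I420 planes.
        ret = ARGBToI420(tmpArgb, width * 4,
                         tmpY, width,
                         tmpU, cw,
                         tmpV, cw,
                         width, height);

        // 3. Copy the temporary planes into the caller's buffers, row by row.
        if (ret == 0) {
            for (int r = 0; r < height; r++) memcpy(dstY + (size_t)r * strideY, tmpY + (size_t)r * width, width);
            for (int r = 0; r < ch; r++)     memcpy(dstU + (size_t)r * strideU, tmpU + (size_t)r * cw, cw);
            for (int r = 0; r < ch; r++)     memcpy(dstV + (size_t)r * strideV, tmpV + (size_t)r * cw, cw);
        }
    }

    free(tmpArgb); free(tmpY); free(tmpU); free(tmpV);
    return ret;
}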
I will try them.
Right now I'm investigating the possibility of implementing video streaming through the MultipeerConnectivity framework. For that purpose I'm using NSInputStream and NSOutputStream.
The problem is that I can't receive any picture so far. Right now I'm just trying to pass a simple picture and show it on the receiver. Here's a little snippet of my code.
Sending the picture via NSOutputStream:
- (void)sendMessageToStream
{
    NSData *imgData = UIImagePNGRepresentation(_testImage);

    int img_length = (int)[imgData length];
    NSMutableData *msgData = [[NSMutableData alloc] initWithBytes:&img_length length:sizeof(img_length)];
    [msgData appendData:imgData];
    int msg_length = (int)[msgData length];

    uint8_t *readBytes = (uint8_t *)[msgData bytes];

    uint8_t buf[msg_length];
    (void)memcpy(buf, readBytes, msg_length);

    int stream_len = [_stream writeData:(uint8_t *)buf maxLength:msg_length];

    //int stream_len = [_stream writeData:(uint8_t *)buf maxLength:data_length];
    //NSLog(@"stream_len = %d", stream_len);

    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Sent: %ld", (long)_tmpCounter];
    });
}
The code above works totally fine. After writing, stream_len equals 29627 bytes, which is the expected value because the image's size is around 25-26 KB.
Receiving the picture via NSInputStream:
- (void)readDataFromStream
{
    UInt32 length;

    if (_currentFrameSize == 0) {
        uint8_t frameSize[4];
        length = [_stream readData:frameSize maxLength:sizeof(int)];

        unsigned int b = frameSize[3];
        b <<= 8;
        b |= frameSize[2];
        b <<= 8;
        b |= frameSize[1];
        b <<= 8;
        b |= frameSize[0];
        _currentFrameSize = b;
    }

    uint8_t bytes[1024];
    length = [_stream readData:bytes maxLength:1024];
    [_frameData appendBytes:bytes length:length];

    if ([_frameData length] >= _currentFrameSize) {
        UIImage *img = [UIImage imageWithData:_frameData];
        NSLog(@"SETUP IMAGE!");
        _imgView.image = img;

        _currentFrameSize = 0;
        [_frameData setLength:0];
    }

    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Received: %ld", (long)_tmpCounter];
    });
}
As you can see, I'm trying to receive the picture in several steps, and here's why: when I try to read data from the stream, it always reads at most 1095 bytes, no matter what number I put in the maxLength: parameter. But when I send the picture in the first snippet of code, it sends absolutely fine (29627 bytes; the image's size is around 29 KB).
That's where my question comes up: why is that? Why does sending 29 KB via NSOutputStream work totally fine while receiving causes problems? And is there a solid way to make video streaming work through NSInputStream and NSOutputStream? I just didn't find much information about this technology; all I found were some simple things that I already knew.
Here's an app I wrote that shows you how:
https://app.box.com/s/94dcm9qjk8giuar08305qspdbe0pc784
Build the project with Xcode 9 and run the app on two iOS 11 devices.
To stream live video, touch the Camera icon on one of two devices.
If you don't have two devices, you can run one app in the Simulator; however, you can only use the camera on the real device (the Simulator will display the broadcast video).
Just so you know: this is not the ideal way to stream real-time video between devices (it should probably be your last choice). Data packets (versus streaming) are way more efficient and faster.
Regardless, I'm really confused by your NSInputStream-related code. Here's something that makes a little more sense, I think:
case NSStreamEventHasBytesAvailable: {
    // len is a global variable set to a non-zero value;
    // mdata is an NSMutableData object that is reset when a new input
    // stream is created.
    // displayImage is a block that accepts the image data and a reference
    // to the layer on which the image will be rendered
    uint8_t buf[len];   // a plain byte buffer (not an array of pointers)
    len = [aStream read:buf maxLength:len];
    if (len > 0) {
        [mdata appendBytes:(const void *)buf length:len];
    } else {
        displayImage(mdata, wLayer);
    }
    break;
}
The output stream code should look something like this:
// data is an NSData object that contains the image data from the video
// camera;
// len is a global variable set to a non-zero value
// byteIndex is a global variable set to zero each time a new output
// stream is created

if (data.length > 0 && len >= 0 && (byteIndex <= data.length)) {
    len = (data.length - byteIndex) < DATA_LENGTH ? (data.length - byteIndex) : DATA_LENGTH;
    uint8_t bytes[len];   // a plain byte buffer (not an array of pointers)
    [data getBytes:bytes range:NSMakeRange(byteIndex, len)];
    byteIndex += [oStream write:(const uint8_t *)bytes maxLength:len];
}
There's a lot more to streaming video than setting up the NSStream classes correctly—a lot more. You'll notice in my app, I created a cache for the input and output streams. This solved a myriad of issues that you would likely encounter if you don't do the same.
I have never seen anyone successfully use NSStreams for video streaming...ever. It's highly complex, for one reason.
There are many different (and better) ways to stream video; I wouldn't go this route. I just took it on because no one else has been able to do it successfully.
I think the problem is in your assumption that all the data will be available in the NSInputStream the whole time you are reading it. An NSInputStream has an asynchronous nature and should be accessed accordingly, using NSStreamDelegate. You can look at the example in the README of POSInputStreamLibrary.
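For illustration, a minimal NSStreamDelegate sketch (hypothetical class and ivar names, reusing _frameData from the question) that only reads when bytes are actually available and drains the stream in a loop:

// Assumes _inputStream has been opened and scheduled on a run loop, and
// _frameData / _currentFrameSize play the same roles as in the question.
- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    switch (eventCode) {
        case NSStreamEventHasBytesAvailable: {
            uint8_t chunk[1024];
            // read:maxLength: returns however many bytes are buffered right now
            // (often less than maxLength), so keep draining in a loop.
            while ([(NSInputStream *)aStream hasBytesAvailable]) {
                NSInteger n = [(NSInputStream *)aStream read:chunk maxLength:sizeof(chunk)];
                if (n <= 0) break;
                [_frameData appendBytes:chunk length:(NSUInteger)n];
            }
            // Hand off to the same frame-assembly logic as in the question
            // once _frameData holds at least _currentFrameSize bytes.
            break;
        }
        case NSStreamEventErrorOccurred:
        case NSStreamEventEndEncountered:
            [aStream close];
            [aStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
            break;
        default:
            break;
    }
}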
I've recently purchased this: http://redbearlab.com/bleshield/
I've connected it to my Arduino and I'm trying to run the very first test program they tell me to run, which is the BLE Controller sketch. I've resolved the compile errors I initially got, and now it uploads. When I upload it, though, my iPhone is unresponsive to the shield. I'm trying to figure out whether the problem is in the code or with the shield itself. If it's the code, how could I fix it? I'm relatively new to Arduino and completely new to making it work with Bluetooth. Here's the entire sketch that the guide told me to download from GitHub.
BLEControllerSketch.ino
/*
Copyright (c) 2012, 2013 RedBearLab
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#include <Servo.h>
#include <SPI.h>
#include <EEPROM.h>
#include <boards.h>
#include <RBL_nRF8001.h>
#include "Boards.h"
#define PROTOCOL_MAJOR_VERSION 0 //
#define PROTOCOL_MINOR_VERSION 0 //
#define PROTOCOL_BUGFIX_VERSION 2 // bugfix
#define PIN_CAPABILITY_NONE 0x00
#define PIN_CAPABILITY_DIGITAL 0x01
#define PIN_CAPABILITY_ANALOG 0x02
#define PIN_CAPABILITY_PWM 0x04
#define PIN_CAPABILITY_SERVO 0x08
#define PIN_CAPABILITY_I2C 0x10
// pin modes
//#define INPUT 0x00 // defined in wiring.h
//#define OUTPUT 0x01 // defined in wiring.h
#define ANALOG 0x02 // analog pin in analogInput mode
#define PWM 0x03 // digital pin in PWM output mode
#define SERVO 0x04 // digital pin in Servo output mode
byte pin_mode[TOTAL_PINS];
byte pin_state[TOTAL_PINS];
byte pin_pwm[TOTAL_PINS];
byte pin_servo[TOTAL_PINS];
Servo servos[MAX_SERVOS];
void setup()
{
Serial.begin(57600);
Serial.println("BLE Arduino Slave");
/* Default all to digital input */
for (int pin = 0; pin < TOTAL_PINS; pin++)
{
// Set pin to input with internal pull up
pinMode(pin, INPUT);
digitalWrite(pin, HIGH);
// Save pin mode and state
pin_mode[pin] = INPUT;
pin_state[pin] = LOW;
}
// Default pins set to 9 and 8 for REQN and RDYN
// Set your REQN and RDYN here before ble_begin() if you need
//ble_set_pins(3, 2);
// Set your BLE Shield name here, max. length 10
//ble_set_name("My Name");
// Init. and start BLE library.
ble_begin();
}
static byte buf_len = 0;
void ble_write_string(byte *bytes, uint8_t len)
{
if (buf_len + len > 20)
{
for (int j = 0; j < 15000; j++)
ble_do_events();
buf_len = 0;
}
for (int j = 0; j < len; j++)
{
ble_write(bytes[j]);
buf_len++;
}
if (buf_len == 20)
{
for (int j = 0; j < 15000; j++)
ble_do_events();
buf_len = 0;
}
}
byte reportDigitalInput()
{
if (!ble_connected())
return 0;
static byte pin = 0;
byte report = 0;
if (!IS_PIN_DIGITAL(pin))
{
pin++;
if (pin >= TOTAL_PINS)
pin = 0;
return 0;
}
if (pin_mode[pin] == INPUT)
{
byte current_state = digitalRead(pin);
if (pin_state[pin] != current_state)
{
pin_state[pin] = current_state;
byte buf[] = {'G', pin, INPUT, current_state};
ble_write_string(buf, 4);
report = 1;
}
}
pin++;
if (pin >= TOTAL_PINS)
pin = 0;
return report;
}
void reportPinCapability(byte pin)
{
byte buf[] = {'P', pin, 0x00};
byte pin_cap = 0;
if (IS_PIN_DIGITAL(pin))
pin_cap |= PIN_CAPABILITY_DIGITAL;
if (IS_PIN_ANALOG(pin))
pin_cap |= PIN_CAPABILITY_ANALOG;
if (IS_PIN_PWM(pin))
pin_cap |= PIN_CAPABILITY_PWM;
if (IS_PIN_SERVO(pin))
pin_cap |= PIN_CAPABILITY_SERVO;
buf[2] = pin_cap;
ble_write_string(buf, 3);
}
void reportPinServoData(byte pin)
{
// if (IS_PIN_SERVO(pin))
// servos[PIN_TO_SERVO(pin)].write(value);
// pin_servo[pin] = value;
byte value = pin_servo[pin];
byte mode = pin_mode[pin];
byte buf[] = {'G', pin, mode, value};
ble_write_string(buf, 4);
}
byte reportPinAnalogData()
{
if (!ble_connected())
return 0;
static byte pin = 0;
byte report = 0;
if (!IS_PIN_DIGITAL(pin))
{
pin++;
if (pin >= TOTAL_PINS)
pin = 0;
return 0;
}
if (pin_mode[pin] == ANALOG)
{
uint16_t value = analogRead(pin);
byte value_lo = value;
byte value_hi = value>>8;
byte mode = pin_mode[pin];
mode = (value_hi << 4) | mode;
byte buf[] = {'G', pin, mode, value_lo};
ble_write_string(buf, 4);
}
pin++;
if (pin >= TOTAL_PINS)
pin = 0;
return report;
}
void reportPinDigitalData(byte pin)
{
byte state = digitalRead(pin);
byte mode = pin_mode[pin];
byte buf[] = {'G', pin, mode, state};
ble_write_string(buf, 4);
}
void reportPinPWMData(byte pin)
{
byte value = pin_pwm[pin];
byte mode = pin_mode[pin];
byte buf[] = {'G', pin, mode, value};
ble_write_string(buf, 4);
}
void sendCustomData(uint8_t *buf, uint8_t len)
{
uint8_t data[20] = "Z";
memcpy(&data[1], buf, len);
ble_write_string(data, len+1);
}
byte queryDone = false;
void loop()
{
while(ble_available())
{
byte cmd;
cmd = ble_read();
Serial.write(cmd);
// Parse data here
switch (cmd)
{
case 'V': // query protocol version
{
byte buf[] = {'V', 0x00, 0x00, 0x01};
ble_write_string(buf, 4);
}
break;
case 'C': // query board total pin count
{
byte buf[2];
buf[0] = 'C';
buf[1] = TOTAL_PINS;
ble_write_string(buf, 2);
}
break;
case 'M': // query pin mode
{
byte pin = ble_read();
byte buf[] = {'M', pin, pin_mode[pin]}; // report pin mode
ble_write_string(buf, 3);
}
break;
case 'S': // set pin mode
{
byte pin = ble_read();
byte mode = ble_read();
if (IS_PIN_SERVO(pin) && mode != SERVO && servos[PIN_TO_SERVO(pin)].attached())
servos[PIN_TO_SERVO(pin)].detach();
/* ToDo: check the mode is in its capability or not */
/* assume always ok */
if (mode != pin_mode[pin])
{
pinMode(pin, mode);
pin_mode[pin] = mode;
if (mode == OUTPUT)
{
digitalWrite(pin, LOW);
pin_state[pin] = LOW;
}
else if (mode == INPUT)
{
digitalWrite(pin, HIGH);
pin_state[pin] = HIGH;
}
else if (mode == ANALOG)
{
if (IS_PIN_ANALOG(pin)) {
if (IS_PIN_DIGITAL(pin)) {
pinMode(PIN_TO_DIGITAL(pin), LOW);
}
}
}
else if (mode == PWM)
{
if (IS_PIN_PWM(pin))
{
pinMode(PIN_TO_PWM(pin), OUTPUT);
analogWrite(PIN_TO_PWM(pin), 0);
pin_pwm[pin] = 0;
pin_mode[pin] = PWM;
}
}
else if (mode == SERVO)
{
if (IS_PIN_SERVO(pin))
{
pin_servo[pin] = 0;
pin_mode[pin] = SERVO;
if (!servos[PIN_TO_SERVO(pin)].attached())
servos[PIN_TO_SERVO(pin)].attach(PIN_TO_DIGITAL(pin));
}
}
}
// if (mode == ANALOG)
// reportPinAnalogData(pin);
if ( (mode == INPUT) || (mode == OUTPUT) )
reportPinDigitalData(pin);
else if (mode == PWM)
reportPinPWMData(pin);
else if (mode == SERVO)
reportPinServoData(pin);
}
break;
case 'G': // query pin data
{
byte pin = ble_read();
reportPinDigitalData(pin);
}
break;
case 'T': // set pin digital state
{
byte pin = ble_read();
byte state = ble_read();
digitalWrite(pin, state);
reportPinDigitalData(pin);
}
break;
case 'N': // set PWM
{
byte pin = ble_read();
byte value = ble_read();
analogWrite(PIN_TO_PWM(pin), value);
pin_pwm[pin] = value;
reportPinPWMData(pin);
}
break;
case 'O': // set Servo
{
byte pin = ble_read();
byte value = ble_read();
if (IS_PIN_SERVO(pin))
servos[PIN_TO_SERVO(pin)].write(value);
pin_servo[pin] = value;
reportPinServoData(pin);
}
break;
case 'A': // query all pin status
for (int pin = 0; pin < TOTAL_PINS; pin++)
{
reportPinCapability(pin);
if ( (pin_mode[pin] == INPUT) || (pin_mode[pin] == OUTPUT) )
reportPinDigitalData(pin);
else if (pin_mode[pin] == PWM)
reportPinPWMData(pin);
else if (pin_mode[pin] == SERVO)
reportPinServoData(pin);
}
queryDone = true;
{
uint8_t str[] = "ABC";
sendCustomData(str, 3);
}
break;
case 'P': // query pin capability
{
byte pin = ble_read();
reportPinCapability(pin);
}
break;
case 'Z':
{
byte len = ble_read();
byte buf[len];
for (int i=0;i<len;i++)
buf[i] = ble_read();
Serial.println("->");
Serial.print("Received: ");
Serial.print(len);
Serial.println(" byte(s)");
Serial.print(" Hex: ");
for (int i=0;i<len;i++)
Serial.print(buf[i], HEX);
Serial.println();
}
}
// send out any outstanding data
ble_do_events();
buf_len = 0;
return; // only do this task in this loop
}
// process text data
if (Serial.available())
{
byte d = 'Z';
ble_write(d);
delay(5);
while(Serial.available())
{
d = Serial.read();
ble_write(d);
}
ble_do_events();
buf_len = 0;
return;
}
// No input data, no commands, process analog data
if (!ble_connected())
queryDone = false; // reset query state
if (queryDone) // only report data after the query state
{
byte input_data_pending = reportDigitalInput();
if (input_data_pending)
{
ble_do_events();
buf_len = 0;
return; // only do this task in this loop
}
reportPinAnalogData();
ble_do_events();
buf_len = 0;
return;
}
ble_do_events();
buf_len = 0;
}
Any and all help is greatly appreciated.
I've been working with these quite a bit in the last 8 months, so I'll do what I can to help. I don't have the reputation to comment and ask for clarification on specific things, so I'm just gonna try to cover everything I can.
Let's first lay out a few things:
Because your sketch uploaded, I'm assuming you have added all of the necessary libraries to Arduino. If you haven't, I don't know why it is uploading, but be sure to do that.
The problem almost certainly won't be with the Arduino code. I ran the code you provided, as well as the sketch provided by RBL (Red Bear Lab) that is on my computer. I was able to connect to my iPhone using both sketches.
I'm going to lay out everything I think could potentially be the source of the problem:
Make sure that all of your header pins (you should have a bunch of headers sticking out of the board in rows of 3 next to the digital pins) are connected correctly, as RBL shows in their instructions. If you want me to provide a picture of mine I can do that.
Make sure that the white power light on your shield is on. If it isn't, power isn't getting to the shield itself.
Be sure that you actually have bluetooth enabled on your phone.
You didn't mention that you downloaded the app. Be sure to do that (it is called "BLE Controller" by RedBear), as you cannot connect to the iPhone without the app (Apple's bluetooth menu will not show the BLE shield).
If you have downloaded the app, be sure that you have selected the correct setting from the choices using the button on the top left of the screen (3 lines on top of each other). For the sketch you provided, you should select BLE Controller.
If you have tried everything and nothing else is working, try one of the other sketches provided by RBL, such as SimpleChat. This uses the serial monitor on Arduino to communicate back and forth with the iPhone. If this doesn't work, upload a picture of your specific shield (the top of it) so I can take a look at it. Best of luck.
I am currently writing a game in which I intend to transfer a .caf sound file from the iPhone to my C++ server. Right now I am getting an EXC_BAD_ACCESS when I message the sendGameUpdateWithFile function, and I really have no idea why.
Although this is not related to any uni assignments, please keep in mind that I am a CS student and not (yet) a professional network programmer, so judge my code accordingly.
Here's the code I use to transmit the data to the server (which crashes right now):
- (void)sendGameUpdateWithFile:(NSString *)filePath gameID:(NSInteger)gameID {
    NSMutableData *data = [[NSMutableData alloc] init];
    data = [NSMutableData dataWithContentsOfFile:filePath];

    fileheadPacket head;
    head.msgtype = 0x12;
    strncpy(head.data1, [myUsername cStringUsingEncoding:NSASCIIStringEncoding], [myUsername length]);
    int followingPackets = (([data length] % 1024 == 0) || ([data length] < 1024)) ? ([data length]/1024) : ([data length]/1024)+1;
    head.following = followingPackets;
    head.fileid = gameID;
    head.size = sizeof(packet);
    [mySock writeData:[NSData dataWithBytes:&head length:sizeof(packet)] withTimeout:-1 tag:7];

    filePacket sendPackets[followingPackets];
    for (int i = 0; i < followingPackets; i++) {
        NSRange thisRange;
        thisRange.location = i*1024;
        thisRange.length = (i+1)*1024;

        filePacket tmp;
        tmp.msgtype = 0x13;
        tmp.size = sizeof(filePacket);
        memset(tmp.data1, 0, sizeof(tmp.data1));
        [data getBytes:tmp.fileBuffer range:thisRange];
        strncpy((char *)&sendPackets[i], (char *)&tmp, sizeof(tmp));
    }

    for (int i = 0; i < followingPackets; i++) {
        [mySock writeData:[NSData dataWithBytes:&sendPackets[i] length:sizeof(filePacket)] withTimeout:-1 tag:3];
    }
}
The structs I use for data look like this:
typedef struct file_packet {
    int msgtype:8;
    int size:16;
    int nul:8;
    int following:24;
    int emp:8;
    char data1[64];
    char fileBuffer[1024];
} filePacket;

typedef struct filehead_packet {
    int msgtype:8;
    int size:16;
    int nul:8;
    int following:24;
    int emp:8;
    char data1[64];
    int fileid;
    char rest[60];
} fileheadPacket;
The server expects the given msgtypes -- the problem lies on the client side.
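For comparison, here is a minimal sketch of the chunking arithmetic (hypothetical helper, not code from the project above): each NSRange is at most 1024 bytes long, and the final range is clamped to the bytes that actually remain, so getBytes:/subdataWithRange: never reaches past the end of the data.

// Splitting an NSData into fixed-size chunks (hypothetical names).
static const NSUInteger kChunkSize = 1024;

static void enumerateChunks(NSData *payload, void (^handler)(NSData *chunk, NSUInteger index)) {
    NSUInteger total = [payload length];
    NSUInteger count = (total + kChunkSize - 1) / kChunkSize;   // round up

    for (NSUInteger i = 0; i < count; i++) {
        NSUInteger offset = i * kChunkSize;
        NSUInteger length = MIN(kChunkSize, total - offset);    // clamp the last chunk
        handler([payload subdataWithRange:NSMakeRange(offset, length)], i);
    }
}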
I'm studying an example from the Linux Device Drivers book (http://lwn.net/Kernel/LDD3/), and I don't understand the use and usefulness of the memset call in this context; I hoped someone could explain it to me. I understand that we allocate memory for our device structure using kmalloc, and that with memset we put 0's in front of the memory address? Here is the example nonetheless:
int scull_p_init(dev_t firstdev)
{
    int i, result;

    result = register_chrdev_region(firstdev, scull_p_nr_devs, "scullp");
    if (result < 0) {
        printk(KERN_NOTICE "Unable to get scullp region, error %d\n", result);
        return 0;
    }

    scull_p_devno = firstdev;
    scull_p_devices = kmalloc(scull_p_nr_devs * sizeof(struct scull_pipe), GFP_KERNEL);
    if (scull_p_devices == NULL) {
        unregister_chrdev_region(firstdev, scull_p_nr_devs);
        return 0;
    }
    memset(scull_p_devices, 0, scull_p_nr_devs * sizeof(struct scull_pipe));

    for (i = 0; i < scull_p_nr_devs; i++) {
        init_waitqueue_head(&(scull_p_devices[i].inq));
        init_waitqueue_head(&(scull_p_devices[i].outq));
        init_MUTEX(&scull_p_devices[i].sem);
        scull_p_setup_cdev(scull_p_devices + i, i);
    }
The memset is not putting 0 "in front of" scull_p_devices. It overwrites the memory starting at the address in scull_p_devices, for the full size of the allocated region, with zeros.
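To make that concrete, here is a small user-space C sketch (a hypothetical struct standing in for struct scull_pipe) showing that memset zeroes every byte of the allocated block itself, not anything before it. In kernel code the same effect can also be had in one step with kzalloc, which is kmalloc plus zeroing.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A stand-in for struct scull_pipe; the fields are hypothetical. */
struct demo_dev {
    int readers;
    int writers;
    char name[16];
};

int main(void) {
    int ndevs = 4;
    struct demo_dev *devs = malloc(ndevs * sizeof(struct demo_dev));
    if (devs == NULL)
        return 1;

    /* Freshly kmalloc'd/malloc'd memory is NOT guaranteed to be zero.
     * memset overwrites the whole region (ndevs * sizeof(struct demo_dev)
     * bytes starting at devs) with 0, so every field starts out cleared. */
    memset(devs, 0, ndevs * sizeof(struct demo_dev));

    printf("readers=%d writers=%d name='%s'\n",
           devs[2].readers, devs[2].writers, devs[2].name);  /* all zero/empty */

    free(devs);
    return 0;
}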