Background
I have a small battery-powered system running FreeRTOS. I need to run OTA updates periodically, as any properly managed internet-connected device should. The problem is that, being battery powered, the device spends 99.9% of its life in deep sleep.
When the device is awake, it's possible to post an OTA update from AWS by publishing to the device's OTA/update topic. From the console you can only use QoS 0, but from inside, say, a Lambda, I believe it's possible to use QoS 1. The payload looks like this:
{
    "state": {
        "desired": {
            "ota_url": "https://s3-ap-southeast-2.amazonaws.com/my.awesome.bucket/signed_binary_v21.bin"
        }
    }
}
Questions
How can I modify this approach to successfully update a device that sleeps for 15 minutes at a time and wakes for maybe 10 s? During the waking period it sends a message. Is there some way of leaving the desired OTA/update in the shadow so that it is somehow included in the response from AWS? I've not managed to figure out how shadows really work. Or can you specify a retry period and a length of time to keep trying?
Is this approach essentially consistent with current best practice from a security perspective: signed binary, encrypted flash, secure boot, etc.?
Many thanks.
IDF comes with an OTA example that is pretty straightforward. As suggested in the comments on your question, the example downloads the firmware update, and it will automatically revert to the previous image if the updated version has errors that cause a reset. This is the operative part of the example:
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_log.h"
#include "esp_http_client.h"
#include "esp_https_ota.h"
#include "esp_system.h"

static const char *TAG = "ota";

// Certificate embedded into the binary via CMake (see below)
extern const uint8_t server_cert_pem_start[] asm("_binary_ca_cert_pem_start");
extern const uint8_t server_cert_pem_end[]   asm("_binary_ca_cert_pem_end");

esp_err_t _ota_http_event_handler(esp_http_client_event_t* evt);

#define FIRMWARE_UPGRADE_URL "https://yourserver.com/somefolder/somefirmware.bin"

void simple_ota_task(void* pvParameter)
{
    esp_http_client_config_t config = {
        .url = FIRMWARE_UPGRADE_URL,
        .cert_pem = (char*)server_cert_pem_start,
        .event_handler = _ota_http_event_handler,
    };
    esp_err_t ret = esp_https_ota(&config);
    if (ret == ESP_OK)
        ESP_LOGW(TAG, "Upgrade success");
    else
        ESP_LOGE(TAG, "Upgrade failure");
    esp_restart();
}

void foo()
{
    // after initializing WiFi, create a task to execute OTA:
    //
    xTaskCreate(&simple_ota_task, "ota_task", 8192, NULL, 5, NULL);
}

// Event handler callback referenced above; it has no functional purpose,
// it just dumps debug log output.
esp_err_t _ota_http_event_handler(esp_http_client_event_t* evt)
{
    switch (evt->event_id) {
    case HTTP_EVENT_ERROR:
        ESP_LOGD(TAG, "HTTP_EVENT_ERROR");
        break;
    case HTTP_EVENT_ON_CONNECTED:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_CONNECTED");
        break;
    case HTTP_EVENT_HEADER_SENT:
        ESP_LOGD(TAG, "HTTP_EVENT_HEADER_SENT");
        break;
    case HTTP_EVENT_ON_HEADER:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_HEADER, key=%s, value=%s", evt->header_key, evt->header_value);
        break;
    case HTTP_EVENT_ON_DATA:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_DATA, len=%d", evt->data_len);
        break;
    case HTTP_EVENT_ON_FINISH:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_FINISH");
        break;
    case HTTP_EVENT_DISCONNECTED:
        ESP_LOGD(TAG, "HTTP_EVENT_DISCONNECTED");
        break;
    default:
        break;
    }
    return ESP_OK;
}
To set up embedding the cert, obtain the certificate for the site on which the bin file will reside (a web browser will let you examine and export the SSL certificate), then convert it to Base64, aka PEM format (I used an online converter). In my case I created a directory called /server_certs at the same level as the /main directory and saved the Base64 conversion there as ca_cert.pem.
Then I added these lines to CMakeLists.txt in the /main directory:
# Embed the server root certificate into the final binary
set(COMPONENT_EMBED_TXTFILES ${IDF_PROJECT_PATH}/server_certs/ca_cert.pem)
register_component()
The thing that was unclear to me was how to determine that a newer version is available; if there is a built-in way, I couldn't discern it, and I never want to download an update for nothing. So I rolled my own: I embedded a version string in the firmware, created a WebAPI entry point on my server that returns the current version (both maintained by hand until I think of a better way that I feel like implementing), and I call the OTA functionality when the strings don't match. It looks a lot like this:
// The WebAPI returns data in JSON format,
// so if a method returns a string it is quoted.
#define FW_VERSION "\"00.09.015\""

// The HTTP request has just completed, payload stored in recv_buf.
//
// Tack on a null terminator using an index maintained by the HTTP transfer.
recv_buf[pos + 1] = 0;

// test for success
if (strncmp(recv_buf, "HTTP/1.1 200 OK", 15) == 0)
{
    // find the end of the headers
    char *p = strstr(recv_buf, "\r\n\r\n");
    if (p)
    {
        ESP_LOGI("***", "version: %s content %s", FW_VERSION, (p + 4));
        if (strcmp((p + 4), FW_VERSION) > 0)
        {
            // execute OTA task
            foo();
        }
        else
        {
            // Assumes the new version has run far enough to be
            // considered working, commits the update. It doesn't
            // hurt anything to call this any number of times.
            esp_ota_mark_app_valid_cancel_rollback();
        }
    }
}
If you're not using the CMake/Ninja build tools, consider checking them out; they are far faster than the MinGW32-based tool set.
Since you mention that you don't fully understand how shadows work, let me explain how shadows can fit into a scenario like yours.
Typically, a device that sleeps for long periods and wakes only briefly requests its device shadow at wake-up. If the shadow contains something like
{"state": {"desired": {"ota_url": "xxx", "do_ota": true, ...}}},
the device should initiate its OTA update code.
Further, the approach mentioned here has nothing to do with security best practices (signed binary, encrypted flash and secure boot); all three of those should be handled by the OTA update program itself. The shadow is only used to indicate that an OTA update is available. Using the shadow helps you avoid making unnecessary GET requests to the OTA server, in turn saving precious power and compute resources.
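For illustration, here is a minimal sketch of what that wake-up check might look like on an ESP-IDF device using the esp-mqtt client and the standard AWS IoT shadow topics. The thing name, the already-connected client handle, and the parse_ota_url() helper are assumptions made for the sketch, not something from the original answer:

#include <stdbool.h>
#include <stddef.h>
#include "esp_log.h"
#include "mqtt_client.h"

static const char *TAG = "shadow_check";

#define THING_NAME "my-thing"   // hypothetical thing name

// Hypothetical helper that pulls state.desired.ota_url out of the shadow
// JSON document (e.g. with cJSON); returns true if a URL was found.
bool parse_ota_url(const char *json, int len, char *out, size_t out_len);

// Called from the MQTT event handler when a message arrives on
// $aws/things/<thing>/shadow/get/accepted.
static void on_shadow_document(const char *payload, int len)
{
    char url[256];
    if (parse_ota_url(payload, len, url, sizeof(url))) {
        ESP_LOGI(TAG, "OTA requested: %s", url);
        // hand the URL to the OTA task shown earlier instead of
        // going straight back to deep sleep
    }
}

// Run once per wake cycle, after the MQTT client has connected.
static void request_shadow(esp_mqtt_client_handle_t client)
{
    // Subscribe first so the reply is not missed.
    esp_mqtt_client_subscribe(client,
        "$aws/things/" THING_NAME "/shadow/get/accepted", 1);
    // An empty publish to .../shadow/get asks AWS IoT to return the full
    // shadow document, including any desired state that was set while
    // the device was asleep.
    esp_mqtt_client_publish(client,
        "$aws/things/" THING_NAME "/shadow/get", "", 0, 1, 0);
}

Because the desired state persists in the shadow until it is reported or cleared, a request published while the device sleeps is still delivered in the /get/accepted response on the next wake-up, which is what makes this work with a 15-minute sleep cycle.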
Related
In short
We have a mobile app that streams fairly high volumes of data to and from a server through various bidirectional streams. The streams need to be closed on occasion (for example when the app is backgrounded). They are then reopened as needed. Sometimes when this happens, something goes wrong:
From what I can tell, the stream is up and running on the device's side (the status of both the GRPCProtocall and the GRXWriter involved is either started or paused)
The device sends data on the stream fine (the server receives the data)
The server seems to send data back to the device fine (the server's Stream.Send calls return as successful)
On the device, the result handler for data received on the stream is never called
More detail
Our code is heavily simplified below, but this should hopefully provide enough detail to indicate what we're doing. A bidirectional stream is managed by a Switch class:
class Switch {

    /** The protocall over which we send and receive data */
    var protocall: GRPCProtoCall?

    /** The writer object that writes data to the protocall. */
    var writer: GRXBufferedPipe?

    /** A static GRPCProtoService as per the .proto */
    static let service = APPDataService(host: Settings.grpcHost)

    /** A response handler. APPData is the datatype defined by the .proto. */
    func rpcResponse(done: Bool, response: APPData?, error: Error?) {
        NSLog("Response received")
        // Handle response...
    }

    func start() {
        // Create a (new) instance of the writer
        // (A writer cannot be used on multiple protocalls)
        self.writer = GRXBufferedPipe()
        // Setup the protocall
        self.protocall = Switch.service.rpcToStream(withRequestWriter: self.writer!, eventHandler: self.rpcResponse(done:response:error:))
        // Start the stream
        self.protocall?.start()
    }

    func stop() {
        // Stop the writer if it is started.
        if self.writer?.state == .started || self.writer?.state == .paused {
            self.writer?.finishWithError(nil)
        }
        // Stop the proto call if it is started
        if self.protocall?.state == .started || self.protocall?.state == .paused {
            protocall?.cancel()
        }
        self.protocall = nil
    }

    private var needsRestart: Bool {
        if let protocall = self.protocall {
            if protocall.state == .notStarted || protocall.state == .finished {
                // protocall exists, but isn't running.
                return true
            } else if writer?.state == .notStarted || writer?.state == .finished {
                // writer isn't running
                return true
            } else {
                // protocall and writer are running
                return false
            }
        } else {
            // protocall doesn't exist.
            return true
        }
    }

    func restartIfNeeded() {
        guard self.needsRestart else { return }
        self.stop()
        self.start()
    }

    func write(data: APPData) {
        self.writer?.writeValue(data)
    }
}
Like I said, heavily simplified, but it shows how we start, stop, and restart streams, and how we check whether a stream is healthy.
When the app is backgrounded, we call stop(). When it is foregrounded and we need the stream again, we call start(). And we periodically call restartIfNeeded(), e.g. when screens that use the stream come into view.
As I mentioned above, what happens occasionally is that our response handler (rpcResponse) stops getting called when the server writes data to the stream. The stream appears to be healthy (the server receives the data we write to it, and protocall.state is neither .notStarted nor .finished), but not even the log on the first line of the response handler is executed.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
Second question: How do we debug this? Everything we can query for a status tells us that the stream is up and running, but it feels like the objc gRPC library keeps a lot of its mechanics hidden from us. Is there a way to see whether responses from the server do reach us but fail to trigger our response handler?
Third question: As per the code above, we use the GRXBufferedPipe provided by the library. Its documentation advises against using it in production because it doesn't have a push-back mechanism. To our understanding, the writer is only used to feed data to the gRPC core in a synchronised, one-at-a-time fashion, and since server receives data from us fine, we don't think this is an issue. Are we wrong though? Is the writer also involved in feeding data received from server to our response handler? I.e. if the writer broke due to overload, could that manifest as a problem reading data from the stream, rather than writing to it?
UPDATE: Over a year after asking this, we have finally found a deadlock bug in our server-side code that was causing this behaviour on client-side. The streams appeared to hang because no communication sent by the client was handled by server, and vice-versa, but the streams were actually alive and well. The accepted answer provides good advice for how to manage these bi-directional streams, which I believe is still valuable (it helped us a lot!). But the issue was actually due to a programming error.
Also, for anyone running into this type of issue, it might be worth investigating whether you're experiencing this known issue where a channel gets silently dropped when iOS changes its network. This readme provides instructions for using Apple's CFStream API rather than TCP sockets as a possible fix for that issue.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
From what I can tell by looking at your code, the start() function seems to be right. In the stop() function, you do not need to call cancel() on self.protocall; the call will be finished by the preceding self.writer.finishWithError(nil).
needsRestart is where it gets a bit messy. First, you are not supposed to poll/set the state of protocall yourself; that state is altered by the call itself. Second, setting those states does not close your stream; it only pauses a writer, and if the app is in the background, pausing a writer is essentially a no-op. If you want to close a stream, you should use finishWithError to terminate the call, and maybe start a new call later when needed.
Second question: How do we debug this?
One way is to turn on gRPC logging (GRPC_TRACE and GRPC_VERBOSITY). Another way is to set a breakpoint at the point where the gRPC Objective-C library receives a gRPC message from the server.
Third question: Is the writer also involved in feeding data received from server to our response handler?
No. If you create a buffered pipe and feed it in as the request of your call, it only feeds data to be sent to the server. The receiving path is handled by another writer (which is in fact your protocall object).
I don't see where the usage of GRXBufferedPipe in production is discouraged. The known drawback of this utility is that if you pause the writer but keep writing data to it with writeValue, you end up buffering a lot of data without being able to flush it, which may cause memory issues.
I just wanted to start getting experience with the DHT22/AM2302 (a temperature and humidity sensor), but I have no idea how to initialize it and read data from it... I tried to use GpioPin:
gpioController = GpioController.GetDefault();
if (gpioController == null)
{
    Debug.WriteLine("GpioController Initialization failed.");
    return;
}
sensorPin = gpioController.OpenPin(7); // Exception thrown here
sensorPin.SetDriveMode(GpioPinDriveMode.Input);
Debug.WriteLine(sensorPin.Read());
but I get the exception: "A resource required for this operation is disabled."
After that I took a look at this library for Unix-like systems:
https://github.com/technion/lol_dht22/blob/master/dht22.c
But I have no idea how to implement the same thing in C# on Windows 10. Does anyone have an idea or experience with this?
Thank you very much in advance!
UPDATE:
I got the hint that there is no GPIO pin 7, and this is true, so I re-tried it, but the GPIO output seems to be just HIGH or LOW... So I have to use I2C or SPI. Based on this project, I decided to try SPI: http://microsoft.hackster.io/windowsiot/temperature-sensor-sample and I am making progress... The difficulty now is translating the C library linked above to the C# SDK in order to receive the right data...
private async void InitSPI()
{
    try
    {
        var settings = new SpiConnectionSettings(SPI_CHIP_SELECT_LINE);
        settings.ClockFrequency = 500000;
        settings.Mode = SpiMode.Mode0;

        string spiAqs = SpiDevice.GetDeviceSelector(SPI_CONTROLLER_NAME);
        var deviceInfo = await DeviceInformation.FindAllAsync(spiAqs);
        SpiDisplay = await SpiDevice.FromIdAsync(deviceInfo[0].Id, settings);
    }
    catch (Exception ex)
    {
        Debug.WriteLine("SPI Initialization failed: " + ex.Message);
    }
}
This doesn't work so well. To be clear: it works just once after booting the Raspberry Pi 2 and starting/remote-debugging the application, but after exiting and restarting the application, the SPI initialization fails.
I'm now working on reading the data from the pin and will show some code in a future update. Any comments, answers, or advice are still welcome.
The DHT22 requires very precise timing. Although the Raspberry Pi running Windows 10 IoT Core is extremely fast, it is an operating system where other things need to happen, so unless you write some sort of low-level driver (not in C#) you won't be able to generate the timing necessary to communicate with a DHT22.
What I do is use a cheap Arduino Mini Pro (about $5) whose sole purpose is to generate the correct timing and talk to the DHT22, then set up some sort of communication channel between the Arduino Mini Pro and the Raspberry Pi (I2C, serial) to pull the data from the Arduino; a sketch of the single-wire read the Arduino has to do is shown below.
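To make the timing constraint concrete, here is a rough C sketch of what reading a DHT22 involves on a bare-metal microcontroller. The gpio_* / delay_us / micros functions are hypothetical HAL helpers, and the pulse widths are the nominal values from the DHT22 datasheet; treat this as an illustration of why microsecond-accurate scheduling is needed, not as drop-in code.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bare-metal HAL assumed for this sketch. */
void     gpio_mode_output(int pin);
void     gpio_mode_input(int pin);
void     gpio_write(int pin, bool level);
bool     gpio_read(int pin);
void     delay_us(uint32_t us);
uint32_t micros(void);

/* Busy-wait until the line reaches `level`; return the time spent waiting
 * in microseconds, or UINT32_MAX on timeout. */
static uint32_t wait_for(int pin, bool level, uint32_t timeout_us)
{
    uint32_t start = micros();
    while (gpio_read(pin) != level) {
        if (micros() - start > timeout_us)
            return UINT32_MAX;
    }
    return micros() - start;
}

/* Read the 40 bits (humidity, temperature, checksum) from a DHT22. */
bool dht22_read(int pin, uint8_t data[5])
{
    for (int i = 0; i < 5; i++)
        data[i] = 0;

    /* Host start signal: pull the line low for >1 ms, then release it. */
    gpio_mode_output(pin);
    gpio_write(pin, false);
    delay_us(1100);
    gpio_mode_input(pin);                    /* external pull-up takes over */

    /* Sensor response: ~80 us low followed by ~80 us high. */
    if (wait_for(pin, false, 200) == UINT32_MAX) return false;
    if (wait_for(pin, true,  200) == UINT32_MAX) return false;
    if (wait_for(pin, false, 200) == UINT32_MAX) return false;

    /* Each of the 40 bits starts with ~50 us low, then a high pulse of
     * ~26-28 us (logic 0) or ~70 us (logic 1). Telling those two pulse
     * widths apart is what demands microsecond-accurate timing. */
    for (int i = 0; i < 40; i++) {
        if (wait_for(pin, true, 200) == UINT32_MAX) return false;
        uint32_t high_us = wait_for(pin, false, 200);
        if (high_us == UINT32_MAX) return false;

        data[i / 8] <<= 1;
        if (high_us > 40)                    /* longer than ~40 us -> 1 */
            data[i / 8] |= 1;
    }

    /* Checksum is the low byte of the sum of the first four bytes. */
    return (uint8_t)(data[0] + data[1] + data[2] + data[3]) == data[4];
}

This is essentially what the common Arduino DHT libraries do internally; once the Arduino has the five bytes, it can hand them to the Raspberry Pi over I2C or serial at whatever pace Windows 10 IoT Core is comfortable with.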
Background Info:
I've implemented a Bluetooth LE Peripheral for OSX which exposes two characteristics (using CoreBluetooth). One is readable, and one is writable (both with indications on). I've implemented a Bluetooth LE Central on iOS which will read from the readable characteristic and write to the writable characteristic. I've set it up so that every time the characteristic value is read, the value is updated (in a way similar to this example). The transfer rates I get with this set up are pathetically slow (topping out at a measured sustained speed of roughly 340 bytes / second). This speed is the actual data, and not a measure including the packet details, ACKs and so on.
Problem:
This sustained speed is too slow. I've considered two solutions:
There's some parameter in CoreBluetooth that I've missed that will help me increase the speed.
I'll need to implement a custom Bluetooth LE service using the IOBluetooth classes instead of CoreBluetooth.
I believe, I've exhausted option 1. I don't see any other parameters I can tweak. I'm limited to sending 20 bytes per message. Anything else and I get cryptic errors on the iOS device concerning Unknown Errors, Unlikely Errors, or the value being "Not Long". Since the demo project also indicates a 20 byte MTU, I'll accept that this likely isn't possible.
So I'm left with option 2. I'm trying to somehow modify the connection parameters for Bluetooth LE on OSX to hopefully allow me to increase the transfer speed (by setting the min and max conn intervals to be 20ms and 40ms respectively - as well as sending multiple BT packets per connection interval). It looks like providing my own SDP Service on IOBluetooth is the only way to achieve this on OSX. The problem with this is the documentation for how to do this is negligible to non-existent.
This tells me how to implement my own service (albeit using a deprecated API); however, it doesn't explain the required parameters for registering an SDP service. So I'm left wondering:
Where can I find the required parameters for this dictionary?
How do I define these parameters in a way to offer a Bluetooth LE service?
Is there any alternative to providing a Bluetooth LE Peripheral on OSX via another framework (Python library? Linux in a VM with access to the Bluetooth stack? I'd like to avoid this altogether.)
I decided my best course of action was to attempt to use Linux in a VM as there is more documentation available and access to the source code would hopefully guarantee that I could find a solution. For anyone who is also facing this problem, here's how you can issue a Connection Parameter Update Request on OS X (sort of).
Step 1
Install a Linux VM. I used Virtual Box with Linux Mint 15 (64-bit Cinnamon).
Step 2
Allow usage of the OS X Bluetooth device in your VM. Attempting to forward the Bluetooth USB Controller to your VM will give an error message. To allow this, you need to stop everything that is using the controller. On my machine, that included issuing the following commands from the command line:
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.blued.plist
This will kill the OS X Bluetooth daemon. Attempting to kill blued from the Activity Monitor will just cause it to be automatically relaunched.
sudo kextunload -b com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport
On my MacBook, I've got a Broadcom controller and this is the kernel module that OS X uses for it. Don't worry about issuing these commands. To undo the changes, you can power down and reboot your machine (note, in some cases when playing with the BT controller and it got into a bad state, I had to actually leave the machine powered down for ~10 seconds before rebooting to clear volatile memory).
If after running these two commands you still can't mount the BT controller, you can run kextstat | grep Bluetooth and see other Bluetooth related kernel modules and then try to unload them as well. I've got ones named IOBluetoothFamily and IOBluetoothSerialManager that don't need to be unloaded.
Step 3
Launch your VM and get your Linux BT stack. I checked out the bluez Git repo from here. I specifically grabbed the 5.14 release tag using git checkout tags/5.14 just to be sure it was at least a tagged version and less likely to be broken. 5.14 is the newest tag as of writing this answer.
Step 4
Build bluez. This was done using bootstrap, then configure, then make and make install. I used the --prefix=/opt/bluez flag on configure to prevent overwriting the installed Bluetooth stack. I also used the --enable-maintainer-mode configure flag for the reason stated in the next step. You might also need --disable-systemd to get it to configure. Bluez has a bunch of tools and utilities you can use for various things. In order to use the built Bluetooth daemon, you need to stop the system daemon using sudo service bluetooth stop. You can then launch the built one using sudo /opt/bluez/libexec/bluetooth/bluetoothd -n -d (this launches it in non-daemon mode with debug output).
Step 5
Get your LE service running via bluez. You can look at bluez/plugins/gatt-example.c for how to do this. I directly modified this file, removing the unnecessary code and using the battery service code as a template for my own service and characteristics. You need to recompile bluez for this code to be added to the Bluetooth daemon. One thing to note (which cost me a day or two of trouble getting this working) is that iOS caches the GATT service listing, and it is not read/refreshed on each connection. If you add a service or characteristic or change a UUID, you'll need to disable Bluetooth on your iOS device and then re-enable it. This is undocumented in Apple's docs and there is no programmatic way to do it.
Step 6
Unfortunately, this is where things get tricky. Bluez doesn't have support built-in for issuing the Connection Parameters Update Request using any of its utilities. I had to write it myself. I'm currently seeing if they want my code to be included in the bluez stack. I can't post the code currently as I'd need to first see if the bluez devs are interested in the code and then get approval from my workplace to give the code. However, I can currently explain what I did to enable support.
Step 7
Prime yourself on the Bluetooth Standard. Any version 4.0 or greater will have the details you need. Read the following sections.
See Vol. 2, Part E, 4.1 for Host to Controller HCI flow.
See Vol. 2, Part E, 5.4.2 for HCI ACL Data Packet format.
See Vol. 3, Part A, 4 for Signalling Packet format.
See Vol. 3, Part A, 4.20 for Connection Parameter Update Request format.
You're basically going to need to write the code to format the packets and then write them to the HCI device. The HCI ACL Data Packet header is 4 bytes. This is followed by 4 bytes for the L2CAP header (the signalling payload's length and the channel ID), which is then followed by your signalling payload, which in my case was 12 bytes (the Connection Parameter Update Request).
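For orientation, here is a sketch in C of the three layers just described. The struct and field names are mine, and the values are paraphrased from the spec sections listed above rather than taken from bluez, so treat it as an illustration:

#include <stdint.h>

#define L2CAP_CID_LE_SIGNALING       0x0005  /* LE signalling channel */
#define L2CAP_CONN_PARAM_UPDATE_REQ  0x12    /* signalling command code */

/* HCI ACL Data Packet header (Vol. 2, Part E, 5.4.2): 4 bytes. */
struct __attribute__((packed)) hci_acl_hdr {
    uint16_t handle_flags;   /* 12-bit connection handle + PB/BC flags */
    uint16_t data_len;       /* length of everything that follows */
};

/* L2CAP basic header (Vol. 3, Part A, 4): another 4 bytes. */
struct __attribute__((packed)) l2cap_hdr {
    uint16_t payload_len;    /* length of the signalling command below */
    uint16_t cid;            /* L2CAP_CID_LE_SIGNALING */
};

/* Connection Parameter Update Request (Vol. 3, Part A, 4.20): 12 bytes. */
struct __attribute__((packed)) conn_param_update_req {
    uint8_t  code;           /* L2CAP_CONN_PARAM_UPDATE_REQ */
    uint8_t  identifier;     /* matches the request to its response */
    uint16_t cmd_len;        /* 8 = size of the four fields below */
    uint16_t interval_min;   /* units of 1.25 ms, e.g. 16 = 20 ms */
    uint16_t interval_max;   /* units of 1.25 ms, e.g. 32 = 40 ms */
    uint16_t slave_latency;  /* connection events the slave may skip */
    uint16_t timeout;        /* supervision timeout, units of 10 ms */
};

/* Note: all multi-byte fields are little-endian on the wire. */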
You can then write them to the device similar to hci_send_cmd in bluez/lib/hci.c. I did each packet header as its own struct and wrote them as iovecs to the device. I put my new function in the hci.c file and exposed it with a function prototype in bluez/lib/hci_lib.h. I then modified bluez/tools/hcitool.c to allow me to call this method from the command line. In my case, I made the command nearly identical to the lecup command, as it requires the same parameters (lecup itself can't be used because it's meant to be called on the master side, not the slave).
Recompiled all of this and then, voila, I can use my new command on hcitool to send the parameters to the bluetooth controller. After sending my command, it then re-negotiates with the iOS device as expected.
Comments
This process is not for the faint of heart. Hopefully either this, or some other method of setting the connection parameters, gets added to bluez to simplify the process. Ideally, Apple will allow doing this via CoreBluetooth or IOBluetooth at some point as well (it could be possible, but it is undocumented/difficult; I gave up on the Apple libraries). I've journeyed down the rabbit hole and learned much more about the Bluetooth spec than I thought I'd have to, simply to change the connection parameters between a MacBook and an iPhone. Hopefully this will be helpful to somebody at some point (even if it's just me checking back on how I did this).
I know I've left out a lot of details in this in order to keep it somewhat brief (i.e. usage on the bluez tools). Please comment if something isn't clear.
If you are implementing your peripheral using CoreBluetooth, you can request somewhat customized connection parameters by calling -[CBPeripheralManager setDesiredConnectionLatency:forCentral:] with Low, Medium, or High (where Low latency means higher bandwidth). The documentation does not specify what these mean, so we have to test it ourselves.
On an OSX Peripheral, when you set the desired latency to Low, the interval is still 22.5ms which is far from the minimum of 7.5ms.
On OSX Yosemite 10.10.4, this is what the CBPeripheralManagerConnectionLatency values mean:
Low: Min Interval: 18 (22.5ms), Max Interval: 18 (22.5ms), Slave Latency: 4 events, Timeout: 200 (2s).
Medium: Min Interval: 32 (40ms), Max Interval: 32 (40ms), Slave Latency: 6 events, Timeout: 200 (2s)
High: Min Interval: 160 (200ms), Max Interval: 160 (200ms), Slave Latency: 2 events, Timeout: 300 (3s)
Here is the code that I used to run a CBPeripheralManager on OSX. I used an Android device as central using BLE Explorer and dumped the Bluetooth traffic to a Btsnoop file.
// clang main.m -framework Foundation -framework IOBluetooth
#import <Foundation/Foundation.h>
#import <IOBluetooth/IOBluetooth.h>
@interface MyPeripheralManagerDelegate : NSObject <CBPeripheralManagerDelegate>
@property (nonatomic, assign) CBPeripheralManager* peripheralManager;
@property (nonatomic) CBPeripheralManagerConnectionLatency nextLatency;
@end

@implementation MyPeripheralManagerDelegate

+ (NSString*)stringFromCBPeripheralManagerState:(CBPeripheralManagerState)state {
    switch (state) {
        case CBPeripheralManagerStatePoweredOff:   return @"PoweredOff";
        case CBPeripheralManagerStatePoweredOn:    return @"PoweredOn";
        case CBPeripheralManagerStateResetting:    return @"Resetting";
        case CBPeripheralManagerStateUnauthorized: return @"Unauthorized";
        case CBPeripheralManagerStateUnknown:      return @"Unknown";
        case CBPeripheralManagerStateUnsupported:  return @"Unsupported";
    }
}

+ (CBUUID*)LatencyCharacteristicUuid {
    return [CBUUID UUIDWithString:@"B81672D5-396B-4803-82C2-029D34319015"];
}

- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral {
    NSLog(@"CBPeripheralManager entered state %@", [MyPeripheralManagerDelegate stringFromCBPeripheralManagerState:peripheral.state]);
    if (peripheral.state == CBPeripheralManagerStatePoweredOn) {
        NSDictionary* dict = @{CBAdvertisementDataLocalNameKey: @"ConnLatencyTest"};
        // Generated with uuidgen
        CBUUID *serviceUuid = [CBUUID UUIDWithString:@"7AE48DEE-2597-4B4D-904E-A3E8C7735738"];
        CBMutableService* service = [[CBMutableService alloc] initWithType:serviceUuid primary:TRUE];
        // value:nil makes it a dynamic-valued characteristic
        CBMutableCharacteristic* latencyCharacteristic = [[CBMutableCharacteristic alloc] initWithType:MyPeripheralManagerDelegate.LatencyCharacteristicUuid properties:CBCharacteristicPropertyRead value:nil permissions:CBAttributePermissionsReadable];
        service.characteristics = @[latencyCharacteristic];
        [self.peripheralManager addService:service];
        [self.peripheralManager startAdvertising:dict];
        NSLog(@"startAdvertising. isAdvertising: %d", self.peripheralManager.isAdvertising);
    }
}

- (void)peripheralManagerDidStartAdvertising:(CBPeripheralManager *)peripheral
                                       error:(NSError *)error {
    if (error) {
        NSLog(@"Error advertising: %@", [error localizedDescription]);
    }
    NSLog(@"peripheralManagerDidStartAdvertising %d", self.peripheralManager.isAdvertising);
}

+ (CBPeripheralManagerConnectionLatency)nextLatencyAfter:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow:    return CBPeripheralManagerConnectionLatencyMedium;
        case CBPeripheralManagerConnectionLatencyMedium: return CBPeripheralManagerConnectionLatencyHigh;
        case CBPeripheralManagerConnectionLatencyHigh:   return CBPeripheralManagerConnectionLatencyLow;
    }
}

+ (NSString*)describeLatency:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow:    return @"Low";
        case CBPeripheralManagerConnectionLatencyMedium: return @"Medium";
        case CBPeripheralManagerConnectionLatencyHigh:   return @"High";
    }
}

- (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request {
    if ([request.characteristic.UUID isEqualTo:MyPeripheralManagerDelegate.LatencyCharacteristicUuid]) {
        [self.peripheralManager setDesiredConnectionLatency:self.nextLatency forCentral:request.central];
        NSString* description = [MyPeripheralManagerDelegate describeLatency:self.nextLatency];
        request.value = [description dataUsingEncoding:NSUTF8StringEncoding];
        [self.peripheralManager respondToRequest:request withResult:CBATTErrorSuccess];
        NSLog(@"didReceiveReadRequest:latencyCharacteristic. Responding with %@", description);
        self.nextLatency = [MyPeripheralManagerDelegate nextLatencyAfter:self.nextLatency];
    } else {
        NSLog(@"didReceiveReadRequest: (unknown) %@", request);
    }
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        MyPeripheralManagerDelegate *peripheralManagerDelegate = [[MyPeripheralManagerDelegate alloc] init];
        CBPeripheralManager* peripheralManager = [[CBPeripheralManager alloc] initWithDelegate:peripheralManagerDelegate queue:nil];
        peripheralManagerDelegate.peripheralManager = peripheralManager;
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}
I have an iPhone app which is pulling just about all of its data from ASP.NET MVC services.
Basically just returning JSON.
When I run the app in the simulator, the data is pulled down very fast; however, when I use an actual device (3G or WiFi) it's extremely slow, to the point that the app crashes for taking too long.
a) Should I not be calling a service from the FinishedLaunching method in AppDelegate?
b) Am I calling the service incorrectly?
The method I'm using goes something like this:
public static JsonValue GetJsonFromURL(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        using (var streamReader = new StreamReader(response.GetResponseStream()))
        {
            return JsonValue.Load(streamReader);
        }
    }
}
Is there a better or quicker way I should be querying the service? I've read about doing things on different threads or performing async calls so as not to lock the UI, but I'm not sure what the best approach is or how that code would work.
a) Should I not be calling a service from the FinishedLaunching method in AppDelegate?
You get limited time to get your application up and running, i.e. to return from FinishedLaunching, or the iOS watchdog will kill your application. That's about 17 seconds total (but it can vary between devices/iOS versions).
Anything that takes some time is better done on another thread, launched from FinishedLaunching. This is even more important if you use networking services, as you cannot be sure how long you'll wait for an answer (or whether you'll get one at all).
b) Am I calling the service incorrectly?
That looks fine. However, remember that the simulator (likely) has faster access to the network and much more RAM and CPU power. A large data set can take a lot of memory/CPU time to decode.
Running from another thread will, at least, cover the extra time required. It can be as simple as adding the code (below) inside your FinishedLaunching.
ThreadPool.QueueUserWorkItem (delegate {
    // slow work (network call, JSON parsing) runs here on a pool thread
    window.BeginInvokeOnMainThread (delegate {
        // run your code that touches the UI here, back on the main thread
    });
});
You can have a look at how Touch.Unit does it by looking at its TouchRunner.cs source file.
Note: you might want to test not asking for compressed data, since the time/memory needed to decompress it might not be a win on devices (compared to the simulator). Actual testing is needed to confirm ;)
The following code throws an exception...
private void EnsureDiskSpace()
{
    using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForSite())
    {
        const long NEEDED = 1024 * 1024 * 100;
        if (file.AvailableFreeSpace < NEEDED)
        {
            if (!file.IncreaseQuotaTo(NEEDED))
            {
                throw new Exception();
            }
        }
    }
}
But this code does not (it displays the silverlight "increase quota" dialog)...
private void EnsureDiskSpace()
{
    using (IsolatedStorageFile file = IsolatedStorageFile.GetUserStoreForSite())
    {
        const long NEEDED = 1024 * 1024 * 100;
        if (file.Quota < NEEDED)
        {
            if (!file.IncreaseQuotaTo(NEEDED))
            {
                throw new Exception();
            }
        }
    }
}
The only difference in the code is that the first one checks file.AvailableFreeSpace and the second checks file.Quota.
Are you not allowed to check the available space before requesting more? It seems like I've seen a few examples on the web that test the available space first. Is this no longer supported in SL3? My application allows users to download files from a server and store them locally. I'd really like to increase the quota by 10% whenever the user runs out of space. Is this possible?
I had the same issue. The solution for me was something noted in the help files: the increase of disk quota must be initiated from a user interaction, such as a button click event. I was requesting the increased disk quota from an asynchronous WCF call; moving the space-increase request to a button click made the code work.
In my case, if the WCF detected there was not enough space, the silverlight app informed the user they needed to increase space by clicking a button. When the button was clicked, and the space was increased, I called the WCF service again knowing I now had more space. Not as good a user experience, but it got me past this issue.
There is a subtle bug in your first example.
There may not be enough free space to hold your new storage, which triggers the request, but the amount you're asking for may be less than the existing quota. This throws the exception and doesn't show the dialog.
The correct line would be
file.IncreaseQuotaTo(file.Quota + NEEDED);
I believe that there were some changes to the behavior in Silverlight 3, but not having worked directly on these features, I'm not completely sure.
I did take a look at this MSDN page on the feature and the recommended approach is definitely the first example you have; they're suggesting:
Get the user store
Check the AvailableFreeSpace property on the store
If needed, call IncreaseQuotaTo
It isn't ideal, since you can't implement your own growth algorithm (grow by 10%, etc.), but you should be able to at least unblock your scenario using the AvailableFreeSpace property, like you say.
I believe that reading the amount of total space available (the Quota) for the user store could in theory be an issue: imagine a "rogue" control or app that simply wants to fill every last byte it can in the isolated storage space, eventually forcing the user to request more space even when it isn't really needed.
It turns out that both code blocks work... unless you set a breakpoint. For that matter, both code blocks fail if you do set a breakpoint.