OSX Bluetooth LE Peripheral transfer rates are slow - ios

Background Info:
I've implemented a Bluetooth LE Peripheral for OSX which exposes two characteristics (using CoreBluetooth). One is readable, and one is writable (both with indications on). I've implemented a Bluetooth LE Central on iOS which will read from the readable characteristic and write to the writable characteristic. I've set it up so that every time the characteristic value is read, the value is updated (in a way similar to this example). The transfer rates I get with this setup are pathetically slow (topping out at a measured sustained speed of roughly 340 bytes / second). This speed counts only the actual data, not packet overhead, ACKs, and so on.
Problem:
This sustained speed is too slow. I've considered two solutions:
There's some parameter in CoreBluetooth that I've missed that will help me increase the speed.
I'll need to implement a custom Bluetooth LE service using the IOBluetooth classes instead of CoreBluetooth.
I believe I've exhausted option 1. I don't see any other parameters I can tweak, and I'm limited to sending 20 bytes per message. Anything more and I get cryptic errors on the iOS device concerning Unknown Errors, Unlikely Errors, or the value being "Not Long". Since the demo project also indicates a 20 byte MTU, I'll accept that this likely isn't possible.
So I'm left with option 2. I'm trying to somehow modify the connection parameters for Bluetooth LE on OSX to hopefully increase the transfer speed (by setting the min and max connection intervals to 20ms and 40ms respectively, as well as sending multiple BT packets per connection interval). It looks like providing my own SDP service on IOBluetooth is the only way to achieve this on OSX. The problem is that the documentation for how to do this is negligible to non-existent.
This tells me how to implement my own service (albeit using a deprecated API); however, it doesn't explain the required parameters for registering an SDP service. So I'm left wondering:
Where can I find the required parameters for this dictionary?
How do I define these parameters in a way to offer a Bluetooth LE service?
Is there any alternative for providing a Bluetooth LE Peripheral on OSX via another framework? (A Python library? Linux in a VM with access to the Bluetooth stack? I'd like to avoid this altogether.)

I decided my best course of action was to attempt to use Linux in a VM, as there is more documentation available and access to the source code would hopefully guarantee that I could find a solution. For anyone else facing this problem, here's how you can issue a Connection Parameter Update Request on OS X (sort of).
Step 1
Install a Linux VM. I used Virtual Box with Linux Mint 15 (64-bit Cinnamon).
Step 2
Allow usage of the OS X Bluetooth device in your VM. Attempting to forward the Bluetooth USB controller to your VM will give an error message until you stop everything that is using the controller. On my machine, that included issuing the following commands from the command line:
sudo launchctl unload /System/Library/LaunchDaemons/com.apple.blued.plist
This will kill the OS X Bluetooth daemon. Attempting to kill blued from the Activity Monitor will just cause it to be automatically relaunched.
sudo kextunload -b com.apple.iokit.BroadcomBluetoothHostControllerUSBTransport
On my MacBook, I've got a Broadcom controller, and this is the kernel module that OS X uses for it. Don't worry about issuing these commands: to undo the changes, you can power down and reboot your machine (note that in some cases, when the BT controller got into a bad state, I had to leave the machine powered down for ~10 seconds before rebooting to clear volatile memory).
If after running these two commands you still can't mount the BT controller, run kextstat | grep Bluetooth to see other Bluetooth-related kernel modules and try to unload them as well. I've got ones named IOBluetoothFamily and IOBluetoothSerialManager that didn't need to be unloaded.
Step 3
Launch your VM and get your Linux BT stack. I checked out the bluez Git repo from here. I specifically grabbed the 5.14 release tag using git checkout tags/5.14, just to be sure it was a tagged version and less likely to be broken. 5.14 is the newest tag as of writing this answer.
Step 4
Build bluez. This is done using bootstrap, then configure, then make and make install. I used the --prefix=/opt/bluez flag on configure to prevent overwriting the installed Bluetooth stack, and the --enable-maintainer-mode configure flag for the reason stated in the next step. You also might need --disable-systemd to get it to configure. Bluez has a bunch of tools and utilities you can use for various things. In order to use the built Bluetooth daemon, you need to stop the system daemon using sudo service bluetooth stop. You can then launch the built one using sudo /opt/bluez/libexec/bluetooth/bluetoothd -n -d (this launches it in non-daemon mode with debug output). The full sequence is collected below.
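Putting the steps above together, the command sequence looks like this (paths assume the --prefix used here):
./bootstrap
./configure --prefix=/opt/bluez --enable-maintainer-mode --disable-systemd
make
sudo make install
sudo service bluetooth stop
sudo /opt/bluez/libexec/bluetooth/bluetoothd -n -d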
Step 5
Get your LE service running via bluez. You can view bluez/plugins/gatt-example.c for how to do this. I directly modified it, removing the unnecessary code and using the battery service code as a template for my own service and characteristics. You need to recompile bluez for this code to be added to the Bluetooth daemon. One thing to note (that cost me a day or two of trouble getting this working) is that iOS caches the GATT service listing, and it is not re-read on each connection. If you add a service or characteristic, or change a UUID, you'll need to disable Bluetooth on your iOS device and then re-enable it. This is undocumented in Apple's docs, and there is no programmatic way to do it.
Step 6
Unfortunately, this is where things get tricky. Bluez doesn't have built-in support for issuing the Connection Parameter Update Request from any of its utilities, so I had to write it myself. I'm currently seeing if the bluez devs want the code included in the stack. I can't post the code yet, as I'd first need to see if they are interested and then get approval from my workplace. However, I can explain what I did to enable support.
Step 7
Prime yourself on the Bluetooth Core Specification. Any version 4.0 or greater will have the details you need. Read the following sections:
See Vol. 2, Part E, 4.1 for Host to Controller HCI flow.
See Vol. 2, Part E, 5.4.2 for HCI ACL Data Packet format.
See Vol. 3, Part A, 4 for Signalling Packet format.
See Vol. 3, Part A, 4.20 for Connection Parameter Update Request format.
You're basically going to need to write code to format the packets and then write them to the HCI device. The HCI ACL Data Packet header contains 4 bytes. This is followed by 4 bytes for the signalling command's length and channel ID, which is then followed by your signal payload, in my case 12 bytes (for the Connection Parameter Update Request).
You can then write them to the device, similar to hci_send_cmd in bluez/lib/hci.c. I made each packet header its own struct and wrote them as iovecs to the device. I put my new function in the hci.c file and exposed it with a function prototype in bluez/lib/hci_lib.h. I then modified bluez/tools/hcitool.c so I could call this method from the command line. I made the command nearly identical to the lecup command, as it requires the same parameters (lecup itself can't be used, as it's meant to be called on the master side, not the slave). A sketch of the byte layout follows.
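As an illustration of that layout, here is a minimal sketch of the three pieces as packed C structs. The field names are mine, the constants come from the spec sections above, and the example intervals are the 20ms/40ms values mentioned earlier; verify everything against your bluez version before use:
#include <stdint.h>

typedef struct __attribute__((packed)) {
    uint16_t handle_flags;  /* 12-bit connection handle + PB/BC flags */
    uint16_t data_len;      /* length of the L2CAP payload that follows */
} acl_hdr_t;                /* 4 bytes (Vol. 2, Part E, 5.4.2) */

typedef struct __attribute__((packed)) {
    uint16_t len;           /* length of the signalling command below */
    uint16_t cid;           /* 0x0005 = LE signalling channel */
} l2cap_hdr_t;              /* 4 bytes (Vol. 3, Part A, 4) */

typedef struct __attribute__((packed)) {
    uint8_t  code;          /* 0x12 = Connection Parameter Update Request */
    uint8_t  ident;         /* echoed back in the response */
    uint16_t len;           /* 8 = size of the four fields below */
    uint16_t min_interval;  /* e.g. 16 * 1.25ms = 20ms */
    uint16_t max_interval;  /* e.g. 32 * 1.25ms = 40ms */
    uint16_t slave_latency; /* in connection events */
    uint16_t timeout;       /* supervision timeout, units of 10ms */
} conn_param_update_t;      /* 12 bytes (Vol. 3, Part A, 4.20) */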
I recompiled all of this and then, voila, I could use my new hcitool command to send the parameters to the Bluetooth controller. After sending my command, it re-negotiates with the iOS device as expected.
Comments
This process is not for the faint of heart. Hopefully either this, or some other method of setting the connection parameters, is added to bluez to simplify the process. Ideally, Apple will allow the ability to do so via CoreBluetooth or IOBluetooth at some point as well (it may be possible, but undocumented / difficult; I gave up on the Apple libraries). I've journeyed down the rabbit hole and learned much more about the Bluetooth Spec than I thought I'd have to, just to change the connection parameters between a MacBook and an iPhone. Hopefully this will be helpful to somebody at some point (even if it's me checking back on how I did this).
I know I've left out a lot of details in order to keep this somewhat brief (e.g. usage of the bluez tools). Please comment if something isn't clear.

If you are implementing your Peripheral using CoreBluetooth, you can request somewhat customized connection parameters by calling -[CBPeripheralManager setDesiredConnectionLatency:forCentral:] with Low, Medium, or High (where Low latency means higher bandwidth). The documentation does not specify what these values mean, so we have to test it ourselves.
On an OSX Peripheral, when you set the desired latency to Low, the interval is still 22.5ms, which is far from the minimum of 7.5ms.
On OSX Yosemite 10.10.4, this is what the CBPeripheralManagerConnectionLatency values mean:
Low: Min Interval: 18 (22.5ms), Max Interval: 18 (22.5ms), Slave Latency: 4 events, Timeout: 200 (2s)
Medium: Min Interval: 32 (40ms), Max Interval: 32 (40ms), Slave Latency: 6 events, Timeout: 200 (2s)
High: Min Interval: 160 (200ms), Max Interval: 160 (200ms), Slave Latency: 2 events, Timeout: 300 (3s)
Here is the code that I used to run a CBPeripheralManager on OSX. I used an Android device as the central (running BLE Explorer) and dumped the Bluetooth traffic to a btsnoop file.
// clang main.m -framework Foundation -framework IOBluetooth
#import <Foundation/Foundation.h>
#import <IOBluetooth/IOBluetooth.h>

@interface MyPeripheralManagerDelegate : NSObject <CBPeripheralManagerDelegate>
@property (nonatomic, assign) CBPeripheralManager* peripheralManager;
@property (nonatomic) CBPeripheralManagerConnectionLatency nextLatency;
@end

@implementation MyPeripheralManagerDelegate

+ (NSString*)stringFromCBPeripheralManagerState:(CBPeripheralManagerState)state {
    switch (state) {
        case CBPeripheralManagerStatePoweredOff:   return @"PoweredOff";
        case CBPeripheralManagerStatePoweredOn:    return @"PoweredOn";
        case CBPeripheralManagerStateResetting:    return @"Resetting";
        case CBPeripheralManagerStateUnauthorized: return @"Unauthorized";
        case CBPeripheralManagerStateUnknown:      return @"Unknown";
        case CBPeripheralManagerStateUnsupported:  return @"Unsupported";
    }
}

+ (CBUUID*)LatencyCharacteristicUuid {
    return [CBUUID UUIDWithString:@"B81672D5-396B-4803-82C2-029D34319015"];
}

- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral {
    NSLog(@"CBPeripheralManager entered state %@",
          [MyPeripheralManagerDelegate stringFromCBPeripheralManagerState:peripheral.state]);
    if (peripheral.state == CBPeripheralManagerStatePoweredOn) {
        NSDictionary* dict = @{CBAdvertisementDataLocalNameKey: @"ConnLatencyTest"};
        // Generated with uuidgen
        CBUUID *serviceUuid = [CBUUID UUIDWithString:@"7AE48DEE-2597-4B4D-904E-A3E8C7735738"];
        CBMutableService* service = [[CBMutableService alloc] initWithType:serviceUuid primary:TRUE];
        // value:nil makes it a dynamic-valued characteristic
        CBMutableCharacteristic* latencyCharacteristic =
            [[CBMutableCharacteristic alloc] initWithType:MyPeripheralManagerDelegate.LatencyCharacteristicUuid
                                               properties:CBCharacteristicPropertyRead
                                                    value:nil
                                              permissions:CBAttributePermissionsReadable];
        service.characteristics = @[latencyCharacteristic];
        [self.peripheralManager addService:service];
        [self.peripheralManager startAdvertising:dict];
        NSLog(@"startAdvertising. isAdvertising: %d", self.peripheralManager.isAdvertising);
    }
}

- (void)peripheralManagerDidStartAdvertising:(CBPeripheralManager *)peripheral
                                       error:(NSError *)error {
    if (error) {
        NSLog(@"Error advertising: %@", [error localizedDescription]);
    }
    NSLog(@"peripheralManagerDidStartAdvertising %d", self.peripheralManager.isAdvertising);
}

+ (CBPeripheralManagerConnectionLatency)nextLatencyAfter:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow:    return CBPeripheralManagerConnectionLatencyMedium;
        case CBPeripheralManagerConnectionLatencyMedium: return CBPeripheralManagerConnectionLatencyHigh;
        case CBPeripheralManagerConnectionLatencyHigh:   return CBPeripheralManagerConnectionLatencyLow;
    }
}

+ (NSString*)describeLatency:(CBPeripheralManagerConnectionLatency)latency {
    switch (latency) {
        case CBPeripheralManagerConnectionLatencyLow:    return @"Low";
        case CBPeripheralManagerConnectionLatencyMedium: return @"Medium";
        case CBPeripheralManagerConnectionLatencyHigh:   return @"High";
    }
}

- (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request {
    if ([request.characteristic.UUID isEqualTo:MyPeripheralManagerDelegate.LatencyCharacteristicUuid]) {
        // Each read cycles to the next latency setting and reports it back.
        [self.peripheralManager setDesiredConnectionLatency:self.nextLatency forCentral:request.central];
        NSString* description = [MyPeripheralManagerDelegate describeLatency:self.nextLatency];
        request.value = [description dataUsingEncoding:NSUTF8StringEncoding];
        [self.peripheralManager respondToRequest:request withResult:CBATTErrorSuccess];
        NSLog(@"didReceiveReadRequest:latencyCharacteristic. Responding with %@", description);
        self.nextLatency = [MyPeripheralManagerDelegate nextLatencyAfter:self.nextLatency];
    } else {
        NSLog(@"didReceiveReadRequest: (unknown) %@", request);
    }
}
@end

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        MyPeripheralManagerDelegate *peripheralManagerDelegate = [[MyPeripheralManagerDelegate alloc] init];
        CBPeripheralManager* peripheralManager =
            [[CBPeripheralManager alloc] initWithDelegate:peripheralManagerDelegate queue:nil];
        peripheralManagerDelegate.peripheralManager = peripheralManager;
        [[NSRunLoop currentRunLoop] run];
    }
    return 0;
}


OTA for FreeRTOS for device in deep sleep from AWS

Background
I have a small battery-powered system running on FreeRTOS. I need to periodically run OTA updates, as per any proper internet-connected device. The problem is that, being battery powered, the device spends 99.9% of its life in deep sleep.
When the device is awake, it's possible to post an OTA update from AWS by publishing to the device's OTA/update topic. From the console you can only use QoS = 0, but from inside, say, a lambda, I believe it's possible to use QoS = 1.
{
  "state": {
    "desired": {
      "ota_url": "https://s3-ap-southeast-2.amazonaws.com/my.awesome.bucket/signed_binary_v21.bin"
    }
  }
}
Questions
How can I modify this approach to successfully update a device that sleeps for 15 minutes at a time and wakes for maybe 10 s? During the waking period, it sends a message. Is there some way of leaving the desired OTA/update in the shadow so that it is somehow included in the response from AWS? I've not managed to figure out how shadows really work. Or can you specify a retry period and a time to keep trying, perhaps?
Is this approach essentially consistent with the most recent best practice from a security perspective: signed binary, encrypted flash & secure boot, etc.?
Many thanks.
IDF comes with an OTA example that is pretty straightforward. As suggested in the comments on your question, the example downloads the firmware update, and it will automatically revert to the previous image if the updated version has errors that cause a reset. This is the operative part of the example:
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_http_client.h"
#include "esp_https_ota.h"
#include "esp_log.h"
#include "esp_system.h"

static const char* TAG = "ota";

extern const uint8_t server_cert_pem_start[] asm("_binary_ca_cert_pem_start");
extern const uint8_t server_cert_pem_end[]   asm("_binary_ca_cert_pem_end");

esp_err_t _ota_http_event_handler(esp_http_client_event_t* evt);

#define FIRMWARE_UPGRADE_URL "https://yourserver.com/somefolder/somefirmware.bin"

void simple_ota_task(void* pvParameter)
{
    esp_http_client_config_t config = {
        .url = FIRMWARE_UPGRADE_URL,
        .cert_pem = (char*)server_cert_pem_start,
        .event_handler = _ota_http_event_handler,
    };
    esp_err_t ret = esp_https_ota(&config);
    if (ret == ESP_OK)
        ESP_LOGW(TAG, "Upgrade success");
    else
        ESP_LOGE(TAG, "Upgrade failure");
    esp_restart();
}

void foo()
{
    // after initializing WiFi, create a task to execute OTA:
    xTaskCreate(&simple_ota_task, "ota_task", 8192, NULL, 5, NULL);
}

// event handler callback referenced above; it has no functional purpose,
// it just dumps info log output
esp_err_t _ota_http_event_handler(esp_http_client_event_t* evt)
{
    switch (evt->event_id) {
    case HTTP_EVENT_ERROR:
        ESP_LOGD(TAG, "HTTP_EVENT_ERROR");
        break;
    case HTTP_EVENT_ON_CONNECTED:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_CONNECTED");
        break;
    case HTTP_EVENT_HEADER_SENT:
        ESP_LOGD(TAG, "HTTP_EVENT_HEADER_SENT");
        break;
    case HTTP_EVENT_ON_HEADER:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_HEADER, key=%s, value=%s", evt->header_key, evt->header_value);
        break;
    case HTTP_EVENT_ON_DATA:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_DATA, len=%d", evt->data_len);
        break;
    case HTTP_EVENT_ON_FINISH:
        ESP_LOGD(TAG, "HTTP_EVENT_ON_FINISH");
        break;
    case HTTP_EVENT_DISCONNECTED:
        ESP_LOGD(TAG, "HTTP_EVENT_DISCONNECTED");
        break;
    }
    return ESP_OK;
}
To set up embedding the cert, obtain the certificate by examining the SSL cert for the site on which the bin file will reside with a web browser, save it, then convert it to Base64, aka PEM format. (I used some web page.) In my case, I created a directory at the same level as the /main directory, called /server_certs, and saved the Base64 conversion in that directory as ca_cert.pem. (That was way too pedantic, wasn't it?)
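If you'd rather not use a web page, something like this openssl invocation should produce the same PEM file (assuming yourserver.com is the host from the URL above; treat it as a sketch and verify the output):
openssl s_client -connect yourserver.com:443 </dev/null | openssl x509 -outform PEM > server_certs/ca_cert.pem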
Then add these lines to CMakeLists.txt in the /main directory:
# Embed the server root certificate into the final binary
set(COMPONENT_EMBED_TXTFILES ${IDF_PROJECT_PATH}/server_certs/ca_cert.pem)
register_component()
The thing that was unclear to me was how to determine that a newer version is available; if there is a built-in way, I couldn't discern it, and I did not want to download the update for nothing, ever. So I rolled my own: I embedded a version string in the firmware, created a WebAPI entry point on my server that returns the current version (both maintained by hand until I can think of a better way... that I feel like implementing), and call the OTA functionality when the strings don't match. It looks a lot like this:
// The WebAPI returns data in JSON format,
// so if a method returns a string it is quoted.
#define FW_VERSION "\"00.09.015\""

// the HTTP request has just completed, payload stored in recv_buf
//
// tack a null terminator on using an index maintained by the HTTP transfer
recv_buf[pos + 1] = 0;

// test for success
if (strncmp(recv_buf, "HTTP/1.1 200 OK", 15) == 0)
{
    // find the end of the headers
    char *p = strstr(recv_buf, "\r\n\r\n");
    if (p)
    {
        ESP_LOGI("***", "version: %s content %s", FW_VERSION, (p + 4));
        if (strcmp((p + 4), FW_VERSION) > 0)
        {
            // execute OTA task
            foo();
        }
        else
        {
            // Assumes the new version has run far enough to be
            // considered working, commits the update. It doesn't
            // hurt anything to call this any number of times.
            esp_ota_mark_app_valid_cancel_rollback();
        }
    }
}
If you're not using the CMake/Ninja build tools, consider checking them out; they're way, way faster than the MinGW32-based tool set.
As you have mentioned, you do not fully understand how shadows work, so let me explain how using shadows may fit a scenario similar to yours.
Normally, a device that sleeps for long periods and wakes up for a short duration makes a request to the device shadow at wake-up. Suppose the shadow contains:
{"state": {"desired": {"ota_url": "xxx", "do_ota": true, ...}}}
In this case, the device should initiate its OTA update code.
Further, the approach mentioned here has nothing to do with security and best practices (signed binary, encrypted flash and secure boot); all three of those should be handled by the OTA update program. The shadow is only used to indicate that an OTA update is available. Using the shadow helps you avoid making unnecessary GET requests to the OTA server, in turn saving precious power and compute resources.
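To make that flow concrete, here is a hedged sketch in C of the wake-up logic described above. shadow_get_desired(), json_get_bool(), json_get_string(), and start_ota() are hypothetical stand-ins for your MQTT shadow client, your JSON parser, and a wrapper around the esp_https_ota() call from the earlier answer:
#include <stddef.h>
#include <stdbool.h>

int  shadow_get_desired(char *buf, size_t len);          /* hypothetical */
bool json_get_bool(const char *doc, const char *key);    /* hypothetical */
int  json_get_string(const char *doc, const char *key,
                     char *out, size_t outlen);          /* hypothetical */
void start_ota(const char *url);      /* e.g. wraps esp_https_ota() */

void check_shadow_on_wake(void)
{
    char doc[512];
    char url[256];

    /* On wake-up, fetch the desired state left in the shadow. */
    if (shadow_get_desired(doc, sizeof(doc)) != 0)
        return; /* no document or request failed: go back to sleep */

    /* Only hit the OTA server when the shadow says an update exists. */
    if (json_get_bool(doc, "do_ota") &&
        json_get_string(doc, "ota_url", url, sizeof(url)) == 0) {
        start_ota(url);
    }
}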

ESP8266 5v Relay USB Disconnection issue

Issue
- When using the ESP8266 wired up in this way, it will randomly disconnect the USB interface when it powers the relay. It may then re-connect, but this is sporadic.
- The code can be viewed below, but essentially the relay is powered for 300ms, then waits 10 seconds to loop.
Wiring Diagram https://i.stack.imgur.com/4mycx.png
Tests:
I have swapped out the relay, pump, and ESP8266, as well as re-wiring the circuit multiple times to check for a short. I also have an integer incrementing every loop cycle; when the ESP8266 is able to re-connect, it prints this variable, which shows the board is not crashing:
Serial output
https://i.stack.imgur.com/ziM8g.png
I then modified the diagram so the 5v power was not in parallel but instead came from two different power sources, one for the ESP8266 and one for the pump circuit; however, the same issue was observed:
Test Wiring Diagram https://i.stack.imgur.com/7S0aP.png
Question:
Why does the USB disconnect when sending the control signal to the relay?
Is there a way to mitigate this?
Code:
int relayInput = 5;  // the input to the relay pin
int debug_test = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(115200);
  pinMode(relayInput, OUTPUT); // initialize pin as OUTPUT
}

void loop() {
  // put your main code here, to run repeatedly:
  debug_test++;
  Serial.println(debug_test);
  digitalWrite(relayInput, HIGH); // turn relay on
  Serial.println("Water on!");
  delay(300);
  digitalWrite(relayInput, LOW); // turn relay off
  Serial.println("Water off!");
  Serial.println("Waiting 10 seconds");
  delay(10000);
}
Parts:
Pump - https://www.ebay.co.uk/itm/Mini-Water-Pump-DC-3V-4-5V-Fish-Tank-Fountain-Aquarium-Submersible-White-Parts/174211676084?hash=item288fd337b4:g:128AAOSwfQteYWF3
ESP8266 - https://www.amazon.co.uk/gp/product/B07F5FJSYZ/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Relay - https://www.amazon.co.uk/gp/product/B07BVXT1ZK/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
OK, so researching into this, it seems that when the pump is on, it pulls more current (amps) than the PC's USB port can provide.
This will be used connected to an external power source, which should supply enough current; however, I also wanted the flexibility to connect it to a PC with a serial connection to troubleshoot.
So in the end something like this:
https://i.stack.imgur.com/MKD1h.png
You are driving a 5v relay module with a 3.3v output. This works perfectly for some people, but it depends on the relay module and the board, so that might be the problem. Alternatively, the relay may draw more than 12mA, which is the maximum current the ESP8266's GPIO can deliver.
So I suggest you use an external power source for the relay and control it through the pin (D1 in your case).
Or just use a generic 5v relay with an external 5v power source and control it using a transistor; here is a circuit.
Additional information: https://electronics.stackexchange.com/questions/213051/how-do-i-use-a-5v-relay-with-a-3-3v-arduino-pro-mini?

Delphi - Is it possible to detect if the Screen monitor is ON or OFF by software? [duplicate]

Does anyone know if there is an API to get the current monitor state (on or off) in Windows (XP/Vista/2000/2003)?
All of my searches seem to indicate there is no real way of doing this.
This thread tries to use GetDevicePowerState, which according to Microsoft's docs does not work for display devices.
In Vista I can listen to GUID_MONITOR_POWER_ON, but I do not seem to get events when the monitor is turned off manually.
In XP I can hook into WM_SYSCOMMAND SC_MONITORPOWER, looking for status 2. This only works when the system triggers the power-off.
The WMI Win32_DesktopMonitor class does not seem to help out either.
Edit: Here is a discussion on comp.os.ms-windows.programmer.win32 indicating there is no reliable way of doing this.
Anyone else have any other ideas?
GetDevicePowerState sometimes works for monitors. If it's present, you can open the \\.\LCD device. Close it immediately after you've finished with it.
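For what it's worth, a minimal sketch of that probe in C (assuming the \\.\LCD device exists on your machine; it often doesn't):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the \\.\LCD device if present. */
    HANDLE lcd = CreateFileA("\\\\.\\LCD", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             NULL, OPEN_EXISTING, 0, NULL);
    if (lcd == INVALID_HANDLE_VALUE) {
        printf("\\\\.\\LCD not available (error %lu)\n", GetLastError());
        return 1;
    }
    BOOL on = FALSE;
    if (GetDevicePowerState(lcd, &on))
        printf("monitor reports %s\n", on ? "on" : "off");
    CloseHandle(lcd); /* close it immediately, as noted above */
    return 0;
}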
Essentially, you're out of luck—there is no reliable way to detect the monitor power state, short of writing a device driver and filtering all of the power IRPs up and down the display driver chain. And that's not very reliable either.
You could hook up a webcam, point it at your screen and do some analysis on the images you receive ;)
Before doing anything based on the monitor state, just remember that users can use a machine through remote desktop or other systems that don't require a monitor connected to the machine, so don't turn off any visualization based on the monitor state.
You can't.
It looks like all monitor power capabilities are tied to "power save mode".
After searching, I found code (post number 2) that connects the SC_MONITORPOWER message with the system power values.
I used that code to test whether the system values change when I manually switch off the monitor:
int main()
{
    while (monitorOff() != 1)
        Sleep(500);
    return 0;
}//main
The code never stopped, no matter how long I left my monitor switched off.
Here is the code of the monitorOff function:
int monitorOff()
{
    const GUID MonitorClassGuid =
        {0x4d36e96e, 0xe325, 0x11ce,
         {0xbf, 0xc1, 0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18}};
    list<DevData> monitors;
    ListDeviceClassData(&MonitorClassGuid, monitors);
    list<DevData>::iterator it = monitors.begin(),
                            it_end = monitors.end();
    for (; it != it_end; ++it)
    {
        //it->PowerData.PD_PowerStateMapping
        if (it->PowerData.PD_MostRecentPowerState != PowerDeviceD0)
        {
            return 1;
        }
    }//for
    return 0;
}//monitorOff
Conclusion: when you manually switch off the monitor, you can't catch it from Windows (unless there is some unusual driver interface for this), because all of Windows' capabilities are tied to "power save mode".
In Windows XP or later you can use the IMSVidDevice interface. See
http://msdn.microsoft.com/en-us/library/dd376775(VS.85).aspx
(not sure if this works in Server 2003)
With Delphi code, you can detect invalid monitor geometry while standby is in progress:
// ShowMessage is used here just to display the values; any output works.
i := 0;
ShowMessage('Monitor' + IntToStr(i) + ': ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Left) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Top) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Right) + ', ' +
  IntToStr(Screen.Monitors[i].BoundsRect.Bottom));
Results:
Monitor geometry before standby:
Monitor0: 0, 0, 1600, 900
Monitor geometry while standby in Delphi 7:
Monitor0: 1637792, 4210405, 31266576, 1637696
Monitor geometry while standby in Delphi XE:
Monitor0: 4211194, 40, 1637668, 1637693
This is a really old post, but if it can help someone: I have found a solution to detect whether a screen is available or not, the Connecting and Configuring Displays (CCD) API of Windows.
It's part of User32.dll, and the interesting functions are GetDisplayConfigBufferSizes and QueryDisplayConfig. They give us all the information that can be viewed in the display Control Panel of Windows.
In particular, each path info contains a TargetInfo structure that has a targetAvailable flag. This flag seems to be correctly updated on all the configurations I have tried so far.
This allows you to know the state of every screen connected to the PC and to set their configurations.
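As a rough sketch of what that looks like in C (link against user32; error handling trimmed):
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    UINT32 numPaths = 0, numModes = 0;
    if (GetDisplayConfigBufferSizes(QDC_ALL_PATHS, &numPaths, &numModes) != ERROR_SUCCESS)
        return 1;
    DISPLAYCONFIG_PATH_INFO *paths = calloc(numPaths, sizeof(*paths));
    DISPLAYCONFIG_MODE_INFO *modes = calloc(numModes, sizeof(*modes));
    if (QueryDisplayConfig(QDC_ALL_PATHS, &numPaths, paths,
                           &numModes, modes, NULL) == ERROR_SUCCESS) {
        /* Dump the targetAvailable flag mentioned above for every path. */
        for (UINT32 i = 0; i < numPaths; ++i)
            printf("path %u: targetAvailable=%d\n",
                   i, paths[i].targetInfo.targetAvailable);
    }
    free(paths);
    free(modes);
    return 0;
}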
Here is a CCD wrapper for .NET.
If your monitor has some sort of built-in USB hub, you could try to use that to detect whether the monitor is off or on.
This will of course only work if the USB hub doesn't stay connected when the monitor is considered "off".

get keyboard input contiki

I want to know how I can get keyboard input in Contiki OS.
I have already tried getchar(), getch(), scanf(), and gets(), and none of them worked, so I hope somebody can help me.
getchar, getch, scanf, and gets are sort-of-POSIX things that read from files (e.g. stdin); these don't exist in Contiki (although you could probably use them with the native platform).
So the first question to ask is what platform you are using and what you mean by "keyboard". If keyboard means typing characters that are sent via a serial port from a computer, then you have to know where they are received on the thing running Contiki. A typical arrangement is to receive characters on a UART, say uart1.
In this case, Contiki uses a callback such as uart1_input_handler that is defined by the application. Platform main loops will check if there are characters to send to the input handler and then check that an input handler is defined. If so, they will call something like uart1_input_handler(c).
You can see this code for the various platforms by grepping for uart1_input_handler:
platform/redbee-econotag/contiki-mc1322x-main.c: uart1_input_handler(uart1_getc());
cpu/msp430/dev/uart1x.c: if(uart1_input_handler(c)) {
cpu/stm32w108/dev/uart1.c: uart1_input_handler(c);
etc...
Some examples that register an input handler and process the characters:
examples/shell:
/* set up the shell */
uart1_set_input(serial_line_input_byte);
serial_line_init();
serial_shell_init();
slip, in examples/ipv6/rpl-border-router/slip-bridge.c
slip_set_input_callback(slip_input_callback);
My guess for what you want to do would be to start with the shell examples and try to get those working; a minimal registration sketch follows.
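For reference, a tiny sketch of registering your own handler (the names are mine; the uart1_set_input hook is the same one the shell example uses, and the header location varies by platform):
#include "contiki.h"
#include "dev/uart1.h"

/* Stash the most recent byte; the handler runs in interrupt context on
 * most platforms, so heavy work like printf is better deferred. */
static volatile unsigned char last_byte;

static int
stash_byte(unsigned char c)
{
  last_byte = c;
  return 1; /* non-zero wakes the CPU on some platforms */
}

void
register_stash_handler(void)
{
  uart1_set_input(stash_byte);
}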
The example cited below is from the Contiki wiki pages on GitHub. It demonstrates how the Contiki-specific mechanism for serial input works. As mariano mentioned above, a callback has to be defined for the serial drivers specific to the platform you are using. I have used, for example, rs232_set_input(RS232_PORT_0, serial_line_input_byte); for my atmega128 MCU. The serial I/O drivers use this callback mechanism to post input characters to the serial_line_process defined in serial-line.c. This process then broadcasts serial_line_event_message to all processes, along with the data read on the serial line. A process like the example below can catch this event and process the input as required.
The callback mentioned above is defined in $(CONTIKI)/core/dev/serial-line.c; check that out.
Once you initialise it using serial_line_init(), you are good to go.
#include "contiki.h"
#include "dev/serial-line.h"
#include <stdio.h>
PROCESS(test_serial, "Serial line test process");
AUTOSTART_PROCESSES(&test_serial);
PROCESS_THREAD(test_serial, ev, data)
{
PROCESS_BEGIN();
for(;;) {
PROCESS_YIELD();
if(ev == serial_line_event_message) {
printf("received line: %s\n", (char *)data);
}
}
PROCESS_END();
}
I assume you use COOJA (or maybe you connected a keyboard to your device, in which case my answer will not apply).
COOJA is an emulator, not a simulator.
If you want a responsive design, use the button sensor (on the sky platform, for example):
SENSORS_ACTIVATE(button_sensor);

/* Wait until we get a sensor event with the button sensor as data. */
PROCESS_WAIT_EVENT_UNTIL(ev == sensors_event &&
                         data == &button_sensor);
Hope it helped.

UDPClient not receiving packets on HTC G2

I'm attempting to receive a UDP broadcast under Mono for Android, and I am seeing no data come in. This is somewhat perplexing because it works fine on the Galaxy Tab 7 and Galaxy Tab 10 (Android v3.2) devices I have, but fails on an HTC G2 (Android v2.3.4).
The code is straightforward:
public void BeginDiscover()
{
    var packet = new DiscoverPacket();
    lock (m_syncRoot)
    {
        var localEndpoint = new IPEndPoint(m_local, 0);
        using (var udp = new UdpClient(localEndpoint))
        {
            var remoteEndpoint = new IPEndPoint(IPAddress.Broadcast, DiscoverPort);
            udp.Send(packet.Data, packet.Data.Length, remoteEndpoint);
            Thread.Sleep(100);
        }
    }
}
I have verified that the manifest includes this line:
<uses-permission android:name="android.permission.INTERNET" />
Though this is happening in Debug, so that permission should be set implicitly anyway.
Other very strange observations:
Again, this is working just fine on another type of device.
The handler listening for UDP broadcasts (which is listening for the response) does see this outbound packet. The code for this listener is also straightforward:
[listener code]
private void Start()
{
    m_discoverListener = new UdpClient(DiscoverPort);
    m_discoverListener.BeginReceive(DiscoverCallback, m_discoverListener);
}

private void DiscoverCallback(IAsyncResult result)
{
    try
    {
        var ep = new IPEndPoint(IPAddress.Any, DiscoverPort);
        var data = m_discoverListener.EndReceive(result, ref ep);

        // filter out what we send
        var add = AddressWithoutPort(ep.Address);
        if (add == m_local.ToString()) return;

        // parse discover response
        // [clipped for clarity]
    }
    finally
    {
        m_discoverListener.BeginReceive(DiscoverCallback, m_discoverListener);
    }
}
Wireshark running on a separate PC on the same network does see the discover request packet (from above)
The "discovered" device is also seeing it, because Wireshark is also seeing the reply
The Android device UDP listener is not receiving the response packet
The only major difference between devices that I can think of (other than different OEMs implementing the platform) is that the G2 has a cellular radio built in and the Galaxy Tab does not. In my specific test case I have no SIM card in the phone, though, so no cellular connection is being made. Note that the code above explicitly uses the local endpoint that is on the WiFi network.
Is there a known issue with UDP on the G2 specifically or generally on older implementations of the Android platform?
It took a bit of work, as the UDP response in question comes from a microcontroller on the device, and I wanted to make absolutely certain that it wasn't an issue on the micro end (though I suspected it wasn't). I created a PC-based simulator for the microcontroller device that handles my Android UDP request and sends back the exact same UDP response that the microcontroller does, then verified that all of the traffic looks fine with Wireshark.
The net result is that I see the exact same behavior with the simulator. The Galaxy Tab 7 and 10 devices receive the UDP response no problem; the HTC G2 never does. This leads me to conclude that one of the following is true:
a) The HTC G2 specifically has an implementation bug preventing it from receiving (or at least passing along) UDP broadcasts on the network, or
b) the older Android build has this bug.
Until I find different hardware with the same Android version as the G2 (v2.3), I can't tell which is the case. In either event, it's a bug that makes this (and potentially other) hardware unusable for my specific solution.
I have a couple of applications on the market based on UDP communication.
I have problems with HTC phones not receiving UDP broadcast packets sent from another device; if sent from the same device, the packets arrive.
So I think the problem is in HTC, and I found a possible solution online (even though I have not tried it):
http://www.flattermann.net/2010/09/fix-udp-broadcasts-on-htc-phones-running-stock-firmware/
