I have an application that uses an ESP8266 running ESP_RTOS_SDK version 3.4 and an STM8. It is solar powered, so minimising current consumption is crucial. It works in three modes:
between events: the ESP8266 is in deep sleep and the STM8 is collecting data
during an event: the ESP8266 is in light sleep and the STM8 wakes it up every 10 seconds with some data
after an event: the ESP8266 wakes up fully, connects to wifi, sends all of the collected data.
If I disable light sleep, everything works fine. With light sleep enabled, the light sleep itself works, but the ESP8266 then fails to connect to wifi.
ESP-IDF light sleep is documented here. This is my light sleep function:
// -------------------------------------------------------------------
// in light sleep, the processor is stopped.
// we wake up on a WAKE=low
void light_sleep (void) {
    gpio_wakeup_enable (GPIO_WAKE_PIN, GPIO_INTR_LOW_LEVEL);
    esp_sleep_enable_gpio_wakeup ();
    esp_sleep_enable_timer_wakeup (10000000L);
    esp_light_sleep_start ();
    vTaskDelay (1);
    esp_sleep_disable_wakeup_source (ESP_SLEEP_WAKEUP_GPIO);
    esp_sleep_disable_wakeup_source (ESP_SLEEP_WAKEUP_TIMER);
}
This is the code that I use to start the wifi:
// ------------------------------------------------------------------------------
static void app_wifi_start (void) {
    wifi_config_t config = {};
    ESP_ERROR_CHECK(esp_event_handler_register(WIFI_EVENT, ESP_EVENT_ANY_ID, &wifi_event_handler, NULL));
    ESP_ERROR_CHECK(esp_event_handler_register(IP_EVENT, IP_EVENT_STA_GOT_IP, &ip_event_handler, NULL));
    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
    strncpy((char *)&config.sta.ssid, wifi_config.remote_ssid, sizeof (config.sta.ssid));
    strncpy((char *)&config.sta.password, wifi_config.remote_password, sizeof (config.sta.password));
    if (strlen((char *)config.sta.password)) {
        config.sta.threshold.authmode = WIFI_AUTH_WPA2_PSK;
    }
    config.sta.pmf_cfg.capable = true;
    config.sta.pmf_cfg.required = false;
    ESP_ERROR_CHECK(esp_wifi_set_config(ESP_IF_WIFI_STA, &config));
    ESP_ERROR_CHECK(esp_wifi_start());
    esp_wifi_connect();
}
The return code from esp_wifi_connect () is ESP_OK.
My question is: how do I make wifi start after a light sleep?
Update: this is how I stop the wifi.
// ----------------------------------------------------------
static void app_wifi_stop (void) {
    ESP_ERROR_CHECK(esp_event_handler_unregister(IP_EVENT, IP_EVENT_STA_GOT_IP, &ip_event_handler));
    ESP_ERROR_CHECK(esp_event_handler_unregister(WIFI_EVENT, ESP_EVENT_ANY_ID, &wifi_event_handler));
    switch (current_mode) {
        case WIFI_MODE_STA:
        case WIFI_MODE_APSTA:
            esp_wifi_disconnect ();
            break;
        case WIFI_MODE_AP:
            break;
        default:
            break;
    }
    ESP_ERROR_CHECK(esp_wifi_stop ());
    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_NULL));
    current_mode = WIFI_MODE_NULL;
    sta_connected = false;
    ap_connections = 0;
}
I don't know if you ever solved this — it's been a long time — but before going into light sleep you should put the wifi into power-saving mode, and you must call esp_wifi_stop() to stop the wifi task. You can only call esp_wifi_stop() once you are sure no service is still using the wifi in the background: for example, if you are connected to an MQTT broker, make sure your MQTT queue is empty and call esp_mqtt_client_stop() first. After that you are ready to call esp_wifi_stop() and enter light sleep with esp_light_sleep_start().
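For example, the overall sequence could look roughly like this (a sketch only: app_wifi_start()/app_wifi_stop() and light_sleep() are the functions from the question, and mqtt_client is a hypothetical MQTT handle, only relevant if MQTT is in use):

// Sketch: the exact shutdown steps depend on which services are running.
static void sleep_between_events (void) {
    // make sure nothing is still using the radio
    esp_mqtt_client_stop (mqtt_client);   // hypothetical handle; only if MQTT is running
    app_wifi_stop ();                     // esp_wifi_disconnect() / esp_wifi_stop()

    light_sleep ();                       // configures the wakeup sources and sleeps

    // after wakeup, bring the wifi stack back up from scratch
    app_wifi_start ();                    // esp_wifi_set_config() / esp_wifi_start() / esp_wifi_connect()
}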
I tried to use a hardware timer to read data from an external device periodically.
More specifically, I wrote a custom driver that uses GPIO to bit-bang the SPI protocol: whenever a hardware timer interrupt fires, the driver is called to read the GPIO state. The timer is set to 2 kHz.
When an interrupt happens, the ISR puts the sample into a buffer. When the buffer is full, the application pauses the timer and sends the data out over MQTT. Using a signal generator and an oscilloscope, I confirmed the data was good; the whole process worked as expected.
The problem is that sampling is not continuous: while data is being sent out over wifi, the timer is paused and no samples can be read into the buffer.
To solve this, I created a dedicated task responsible for transmitting the data, and I use ping-pong buffers to store the samples. When one buffer is full, the sending task is notified to send it out, while the timer ISR keeps putting data into the other buffer.
At first I wanted to notify the sending task directly from the ISR (using xQueueSendFromISR()), but that proved unreliable: only a few notifications ever reached the sending task. So I had to use a flag instead. When one buffer is full, the flag is set to true, and a separate task polls this flag; whenever it finds the flag true, it notifies the sending task.
timer_isr()
{
    read_data_using_gpio;
    if(one buffer is full)
    {
        set the flag to true
    }
}

task_1()
{
    while(1)
    {
        if(the flag is true)
        {
            set the flag to false;
            xQueueSend;
        }
        vTaskDelay(50ms) // it takes about 200 ms to fill up the buffer
    }
}

task_2()
{
    while(1)
    {
        xStatus = xQueueReceive;
        if(xStatus==pdPASS) // A message from other tasks is received.
        {
            transmitting data out using mqtt protocol.
        }
    }
}
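In concrete terms the pattern is roughly the following (a simplified sketch, not my exact code; BUF_LEN, send_queue and read_gpio_sample() are illustrative names):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"
#include "esp_attr.h"

#define BUF_LEN 400

extern uint16_t read_gpio_sample(void);          /* the bit-banged SPI read */

static volatile bool buf_full_flag = false;
static uint16_t ping[BUF_LEN], pong[BUF_LEN];
static uint16_t *active_buf = ping;
static volatile size_t sample_idx = 0;
static QueueHandle_t send_queue;                 /* created with xQueueCreate(4, sizeof(uint16_t *)) */

static void IRAM_ATTR timer_isr(void *arg)       /* 2 kHz hardware timer interrupt */
{
    active_buf[sample_idx++] = read_gpio_sample();
    if (sample_idx == BUF_LEN) {
        sample_idx = 0;
        active_buf = (active_buf == ping) ? pong : ping;  /* swap ping-pong buffers */
        buf_full_flag = true;                             /* tell the polling task */
    }
}

static void task_1(void *arg)                    /* polls the flag, notifies the sender */
{
    for (;;) {
        if (buf_full_flag) {
            buf_full_flag = false;
            uint16_t *full = (active_buf == ping) ? pong : ping;
            xQueueSend(send_queue, &full, 0);
        }
        vTaskDelay(pdMS_TO_TICKS(50));           /* a buffer takes ~200 ms to fill */
    }
}

static void task_2(void *arg)                    /* sends full buffers out over MQTT */
{
    uint16_t *buf;
    for (;;) {
        if (xQueueReceive(send_queue, &buf, portMAX_DELAY) == pdPASS) {
            /* publish BUF_LEN samples from buf over MQTT here */
        }
    }
}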
Then I got terrible data, as shown below:
[image: terrible data]
I used an oscilloscope to check the GPIO operation in the ISR:
[images: oscilloscope captures]
So it seems like some ISR invocations are not being triggered? But what happened?
Something even weirder: I added another task to get data from an audio chip over I2S. Again I used ping-pong buffers and sent notifications to the same sending task.
timer_isr()
{
    read_data_using_gpio;
    if(one buffer is full)
    {
        set the flag to true
    }
}

task_1()
{
    while(1)
    {
        if(the flag is true)
        {
            set the flag to false;
            xQueueSend;
        }
        vTaskDelay(50ms)
    }
}

task_3()
{
    while(1)
    {
        i2s_read_to_buffer;
        xQueueSend;
    }
}
task_2()
{
    while(1)
    {
        xStatus = xQueueReceive;
        if(xStatus==pdPASS) // A message from other tasks is received.
        {
            if(data from task_1)
            {
                do something;
                transmitting data out using mqtt protocol
            }
            if(data from task_3)
            {
                do something;
                transmitting data out using mqtt protocol
            }
        }
    }
}
And this time the data from the first task came out fine!
[image: data OK]
What's more, after I commented out the task_3-related code in the sending task, the data became bad again!
So what is happening? Can somebody give me a hint?
task_2()
{
    while(1)
    {
        xStatus = xQueueReceive;
        if(xStatus==pdPASS) // A message from other tasks is received.
        {
            if(data from task_1)
            {
                do something;
                transmitting data out using mqtt protocol
            }
            if(data from task_3)
            {
                // do something;
                // transmitting data out using mqtt protocol
            }
        }
    }
}
I have solved this problem.
If you enable power management (idf.py menuconfig --> Component config --> Power Management), the APB (advanced peripheral bus) clock, which is the clock source of the hardware timer, will lower its frequency automatically. That is why the timer interrupt is not stable.
Just disable power management.
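Alternatively, if you need power management elsewhere in the application, ESP-IDF lets you hold the APB frequency at maximum with a pm lock while the timer is running. Roughly (an untested sketch; the function and lock names are illustrative):

#include "esp_pm.h"

static esp_pm_lock_handle_t apb_lock;

void sampling_start (void) {
    esp_pm_lock_create (ESP_PM_APB_FREQ_MAX, 0, "sampling", &apb_lock);
    esp_pm_lock_acquire (apb_lock);   // APB (the timer clock source) stays at full speed
    // ... start the hardware timer here ...
}

void sampling_stop (void) {
    // ... stop the timer ...
    esp_pm_lock_release (apb_lock);   // allow frequency scaling again
}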
How can I avoid the Soft WDT reset error in this loop?
The error consistently occurs when the count reaches 3190.
unsigned long TimeFrame = 10000;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long StartTime = millis();
  while (millis() - StartTime <= TimeFrame){
    Serial.println(millis() - StartTime);
  }
}
I could count to 2500 four times instead, but would that be the correct approach to this error?
Thanks for the explanation. I added a delay(10) to the code and it works.
void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long StartTime = millis();
  while (millis() - StartTime <= TimeFrame){
    Serial.println(millis() - StartTime);
    delay(10);
  }
}
WDT is the "watchdog timer". Watchdog timers are used to get control back when something goes wrong in a system - say, an infinite loop or some other unexpected condition. When the underlying system gets control back it resets these timers so that they start counting up from zero again. When they hit their maximum value they trigger a hardware reset on the chip.
Your code measures the duration of the ESP8266 software watchdog timer - in this case 3.19 seconds.
The ESP8266 has both hardware and software watchdog timers. loop() isn't intended to run indefinitely - it's intended to do a small amount of work and then return. When it returns, the ESP8266 SDK gets to reset the watchdog timers.
Both the delay() and yield() functions give the SDK a chance to say "things are okay" and reset the timers. If you need to have long running code in loop() you should call one of those occasionally to give the rest of the system a chance to run.
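For example, the busy-wait loop from the question stops triggering the soft WDT once it yields inside the wait (same TimeFrame variable as above):

void loop() {
  unsigned long StartTime = millis();
  while (millis() - StartTime <= TimeFrame) {
    Serial.println(millis() - StartTime);
    yield();   // lets the SDK reset the software watchdog and service the network stack
  }
}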
Keeping loop() brief isn't just about the watchdog timer, either. It also gives the network stack a chance to run and do processing that it needs to do.
You should always design your program so that loop() does a small batch of repetitive processing and then returns. It should never contain an infinite loop or long loop of code.
For instance, suppose you need to do something every 20 seconds. This is the wrong way to do it:
void loop() {
  unsigned long start = millis();
  while(millis() - start < 20*1000) ;
  do_something();
}
That breaks the way the software is designed to work - with loop() executing briefly. It doesn't allow any other software to run while it's waiting. The watchdog timer will fire and reset your CPU.
This is better:
void loop() {
  delay(20*1000);
  do_something();
}
because delay() lets the underlying system get control back, reset the watchdog timer and also do network-related processing.
In my opinion, this is best:
static unsigned long start_time;

void setup() {
  start_time = millis();
}

void loop() {
  if(millis() - start_time > 20*1000) {
    do_something();
    start_time = millis();
  }
}
because it does the least possible work inside loop(), only when it's time to do the work.
I am updating an existing iOS VOIP application to use CallKit with PJSIP 2.6 and PJSUA2.
After some effort, the CallKit implementation seems to be working as expected. Incoming calls can be accepted or declined, and if accepted, will be connected and controlled with an in-app active call view controller.
The audio, however, does not appear to be properly connected at the pjsip end. There is no audio coming in from, or going out to the remote caller. The microphone audio appears to be routed back to the iPhone speaker.
The SIP audio ports should be connected in the callback function onCallMediaState:
virtual void onCallMediaState(OnCallMediaStateParam &prm) {
    CallInfo ci = getInfo();
    AudioMedia *audio_media = 0;
    for (unsigned i = 0; i < ci.media.size(); i++) {
        if (ci.media[i].type == PJMEDIA_TYPE_AUDIO && (ci.media[i].status == PJSUA_CALL_MEDIA_ACTIVE ||
                                                       ci.media[i].status == PJSUA_CALL_MEDIA_REMOTE_HOLD)) {
            try {
                audio_media = static_cast<AudioMedia *>(getMedia(i));
                if (audio_media != 0) {
                    Endpoint::instance().audDevManager().getCaptureDevMedia().startTransmit(*audio_media);
                    audio_media->startTransmit(Endpoint::instance().audDevManager().getPlaybackDevMedia());
                }
            } catch (const std::exception &ex) {
                continue;
            }
        }
    }
}
As described in ticket #1941 at https://trac.pjsip.org/repos/ticket/1941:
I set the audio devices using:
ep->audDevManager().setNullDev();
immediately after the initialization of the Endpoint class (ep->libInit(epConfig);), and then:
I attempt to set the devices using pjsua_set_snd_dev() in CXProvider’s didActivate function, like this:
-(void) setSipSoundDevices {
    pj_status_t status;
    int captDev, playDev;
    pjsua_get_snd_dev(&captDev, &playDev);
    Endpoint::instance().audDevManager().setPlaybackDev(playDev);
    Endpoint::instance().audDevManager().setCaptureDev(captDev);
}
pjsua_get_snd_dev(&captDev, &playDev) returns -99, -99 and the audio does not connect.
My question is this. How can I properly hook up the remote audio sources or ports, on an incoming call using PJSIP 2.6 and CallKit?
Might 2.5.5 work better in this regard?
Any insights are appreciated.
By and by I got the incoming call audio working properly. The crux of the matter was that even though the documentation from both Apple and PJSIP says the audio has to be handled on the iOS end, you still have to set the SIP audio devices in the SIP layer, in the provider delegate's 'didActivate' and 'didDeactivate' callbacks. Because I use the PJSUA C++ layer, I had to drill down through the Objective-C++ bridging layer to provide this functionality, i.e.:
-(void) activateSipSoundDevices {
    pj_status_t status = pjsua_set_snd_dev(0, 0);
}

-(void) deactivateSipSoundDevices {
    pj_status_t status = pjsua_set_null_snd_dev();
}
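If you prefer to stay in the PJSUA2 C++ layer rather than dropping to the pjsua_* C API, the equivalent should look roughly like this (an untested sketch):

// PJSUA2-level equivalent of the two helpers above
void activateSipSoundDevices () {
    AudDevManager &adm = Endpoint::instance().audDevManager();
    adm.setCaptureDev (0);     // same effect as pjsua_set_snd_dev(0, 0)
    adm.setPlaybackDev (0);
}

void deactivateSipSoundDevices () {
    Endpoint::instance().audDevManager().setNullDev();   // same as pjsua_set_null_snd_dev()
}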
When initializing the SIP Account, be sure to set the null sound devices like:
ep->audDevManager().setNullDev();
Hope this helps.
I have a Raspberry Pi 3 with Windows 10 IoT. I would like to get data from a sensor that sends pulses, namely the Swiss Flow SF800. The sensor sends out a number of pulses proportional to the flow through it, and the datasheet says it will send pulses at up to 2 kHz.
My question is: will the GPIO on the Raspberry Pi handle an interrupt frequency this high? I have looked into the Lightning provider (https://developer.microsoft.com/en-us/windows/iot/docs/lightningproviders), which is supposed to be a huge performance gain, but I cannot find any documentation about what kind of performance to expect.
There are no official benchmarks for GPIO interrupts at the moment.
Here is Windows IoT Lightning Performance Testing. It tested GPIO performance by toggling GPIO 5 between 0 and 1 at the fastest possible speed; it seems at least 17.4 kHz can be achieved.
GPIO interrupt events are pushed into a queue, so they should not be lost.
So, based on the above information, at 2 kHz your app should be able to handle the interrupt events in time and without missing any.
Feel free to try it, and if there is any concern please let me know.
Initially I suspected that I would need to use the lightning driver in order to achieve the interrupt frequency that I needed. It turns out that the standard Inbox Driver is adequate for what I need.
Here are the steps to reproduce my situation:
I created a simple Arduino sketch that would send out pulses at the rate of 10,000 Hz.
int dataPin = 12;

void setup() {
  pinMode(dataPin, OUTPUT);
}

void loop() {
  int count = 0;
  while (count < 400)
  {
    // pulse
    digitalWrite(dataPin, HIGH);
    digitalWrite(dataPin, LOW);
    // this delay presumably makes the pulses 10000 Hz
    delayMicroseconds(100);
    count++;
  }
  delay(5000);
}
Created a UWP app with a simple UI that had a TextBlock in the center of the page.
public sealed partial class MainPage : Page
{
    private GpioController gpio;
    private const int inputPinNumber = 17;
    private GpioPin inputPin;
    private int count;
    private I2cController i2cController;
    private SpiController spiController;

    public MainPage()
    {
        this.InitializeComponent();
        this.Setup();
    }

    private void Setup()
    {
        if (LightningProvider.IsLightningEnabled)
        {
            LowLevelDevicesController.DefaultProvider = LightningProvider.GetAggregateProvider();
        }
        this.gpio = GpioController.GetDefault();
        this.inputPin = this.gpio.OpenPin(inputPinNumber);
        if (this.inputPin.IsDriveModeSupported(GpioPinDriveMode.InputPullUp))
        {
            this.inputPin.SetDriveMode(GpioPinDriveMode.InputPullUp);
        }
        else
        {
            this.inputPin.SetDriveMode(GpioPinDriveMode.Input);
        }
        this.inputPin.ValueChanged += InputPinOnValueChanged;
    }

    private void InputPinOnValueChanged(GpioPin sender, GpioPinValueChangedEventArgs args)
    {
        var task = Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => {
            if (args.Edge == GpioPinEdge.FallingEdge)
            {
                this.count++;
                this.CountBlock.Text = this.count.ToString();
            }
            else
            {
            }
        });
    }
}
Set Windows IoT to use the Direct Memory Mapped Driver.
The next step was to connect the pin on the Arduino to the pin on the Pi through a transistor. I did this so that I could take advantage of the built-in pull-up resistor on the GPIO pins of the Pi.
When both applications were run at the same time I was only collecting about 30 pulses per cycle.
I went back into the Windows IoT setup, reset the driver back to the Inbox Driver, and reran both applications. This time I did not miss a pulse.
In conclusion, the Inbox Driver should be sufficient to give me up to 10 kHz without any issue.
I'm developing a BlackJack game for iOS. Keeping track of the current state and what needs to be done is becoming difficult. For example, I have a C++ class which keeps track of the current Game:
class Game {
    queue<Player> playerQueue;
public:
    void hit();
    void stand();
};
Currently I'm implementing it using events (Method A):
- (void)hitButtonPress:(id)sender {
    game->hit();
}

void Game::hit() {
    dealCard(playerQueue.front());
}

void Game::stand() {
    playerQueue.pop();
    goToNextPlayersTurn();
}
As more and more options are added to the game, creating events for each one is becoming tedious and hard to keep track of.
Another way I thought of implementing it is like so (Method B):
void Game::playersTurn(Player *player) {
    dealCards(player);
    while (true) {
        string choice = waitForUserChoice();
        if (choice == "stand") break;
        if (choice == "hit")
            dealCard(player);
        // etc.
    }
    playerQueue.pop();
    goToNextPlayersTurn();
}
Where waitForUserChoice is a special function that lets the user interact with the UIViewController and once the user presses a button, only then returns control back to the playersTurn function. In other words, it pauses the program until the user clicks on a button.
With method A, I need to split my functions up every time I need user interaction. Method B keeps everything a bit more under control.
Essentially the difference between method A and B is the following:
A:
function A() {
    initialize();
    // now wait for user interaction by waiting for a call to CompleteA
}

function CompleteA() {
    finalize();
}
B:
function B() {
    initialize();
    waitForUserInteraction();
    finalize();
}
Notice how B keeps the code more organized. Is there even a way to do this with Objective-C? Or is there a different approach, one I haven't mentioned, that is recommended instead?
A third option I can think of is using a finite state machine. I have heard a little about them, but I'm not sure whether that will help me in this case or not.
What is the recommended design pattern for my problem?
I understand the dilemma you are running into. When I first started iOS I had a very hard time wrapping my head around relinquishing control to and from the operating system.
In general iOS would encourage you to go with method A. Usually you have variables in your ViewController which are set in method A(), and then they are checked in CompleteA() to verify that A() ran first etc.
Regarding your question about finite state machines, I think one may help you solve your problem. The very first thing I wrote in iOS was an FSM (therefore it is pretty bad code), but you can take a look here (near the bottom of FlipsideViewController.m):
https://github.com/esromneb/ios-finite-state-machine
The general idea is that you put this in your .h file inside the @interface block:
static int state = 0;
static int running = 0;
And in your .m you have this:
- (void) tick {
    switch (state) {
        case 0:
            // this case only runs once for the fsm, so set up one-time initializations
            // next state
            state = 1;
            break;
        case 1:
            navBarStatus.topItem.title = @"Connecting...";
            state = 2;
            break;
        case 2:
            // if something happened we move on, if not we wait in the connecting stage
            if( something )
                state = 3;
            else
                state = 1;
            break;
        case 3:
            // respond to something
            // next state
            state = 4;
            break;
        case 4:
            // wait for user interaction
            navBarStatus.topItem.title = @"Press a button!";
            state = 4;
            globalCommand = userInput;
            // if user did something
            if( globalCommand != 0 )
            {
                // go to state to consume user interaction
                state = 5;
            }
            break;
        case 5:
            if( globalCommand == 6 )
            {
                // respond to command #6
            }
            if( globalCommand == 7 )
            {
                // respond to command #7
            }
            // go back and wait for user input
            state = 4;
            break;
        default:
            state = 0;
            break;
    }

    if( running )
    {
        [self performSelector:@selector(tick) withObject:nil afterDelay:0.1];
    }
}
In this example (modified from the one on GitHub), globalCommand is an int representing the user's input. If globalCommand is 0, the FSM just spins in state 4 until globalCommand becomes non-zero.
To start the FSM, simply set running to 1 and call [self tick] from the viewController. The FSM will "tick" every 0.1 seconds until running is set to 0.
In my original FSM design I had to respond to user input AND network input from a Windows computer running its own software. In my design the Windows PC was also running a similar but different FSM. For this design, I built two FIFO queue objects of commands using an NSMutableArray. User interactions and network packets would enqueue commands into the queues, while the FSM would dequeue items and respond to them. I ended up using https://github.com/esromneb/ios-queue-object for the queues.
Please comment if you need any clarification.