My first PIC32MX ISR isn't firing, and the code hangs - MPLAB X

I'm just getting started with a PIC32MX340F512H and MPLAB X. My first attempt was to write a timer interrupt, so I worked from the datasheet, the compiler manual, and examples and came up with the code below. But it doesn't work: the interrupt never fires, and in fact if I leave both the timer interrupt enable (T1IE = 1) and the global interrupt enable (the "ei" instruction) active, it runs for a few seconds and then hangs ("target halted" in debug mode). If I remove either of those, it runs indefinitely, but still with no timer interrupt. So I appear to have a pretty bad problem somewhere in my ISR syntax. Does it jump out at anyone?
Like I said, I'm just getting started, so I'm sure it's a pretty dumb oversight. As you may notice, I like to work as directly as possible with registers and compiler directives (rather than manufacturer-supplied functions); I feel like I learn the most that way.
Thanks!
#include <stdio.h>
#include <stdlib.h>
#include "p32mx340f512h.h"
#include <stdint.h>
#include <stdbool.h>

int x = 0;

int main(int argc, char** argv)
{
    INTCONbits.MVEC = 1;    // turn on multi-vector interrupts

    T1CON = 0;              // set timer control to 0
    T1CONbits.TCKPS = 1;    // set T1 prescaler to 1:8
    PR1 = 62499;            // set T1 period
    TMR1 = 0;               // initialize the timer
    T1CONbits.ON = 1;       // activate the timer

    IPC1bits.T1IP = 5;      // T1 priority to 5
    IPC1bits.T1IS = 0;      // T1 sub-priority to 0
    IFS0bits.T1IF = 0;      // clear the T1 flag
    IEC0bits.T1IE = 1;      // enable the T1 interrupt
    asm volatile("ei");     // enable interrupts globally

    while (1)
    {
        x++;
        if (x > 10000)
        {
            x = 0;
        }
    }
    return (EXIT_SUCCESS);
}

bool zzz = false;

void __attribute__((interrupt(IPL5AUTO))) T1Handler(void)
{
    IFS0bits.T1IF = 0;
    zzz = true;
}

Embedded systems are somewhat specialized, and this is a specific one I'm not familiar with.
However, from working with other systems, you may have to associate the interrupt handler's address (T1Handler) with the interrupt vector it is handling. (Unless the framework you are using does that for you under the covers when building?)
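For what it's worth, here is a minimal sketch of how that association is usually written for a PIC32 with Microchip's XC32 compiler, assuming Timer 1 and the priority-5 setup above; the __ISR macro comes from <sys/attribs.h> and supplies the vector attribute:

#include <xc.h>
#include <sys/attribs.h>   // provides the __ISR(vector, ipl) macro

// Place the handler on the Timer 1 vector; without a vector attribute the
// function is never wired to the interrupt, so it can never fire.
void __ISR(_TIMER_1_VECTOR, IPL5AUTO) T1Handler(void)
{
    IFS0bits.T1IF = 0;   // clear the Timer 1 flag inside the handler
}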
Are all those names you are using automatically mapped for you by the build system?
If not, you may need to call some kind of HW init or framework init at the top of main, before using them.
Some HW init/reset may also be needed before the HW can be programmed.
Hope some of this helps.

Related

What are the Malloc memory leaks shown in Instruments in Xcode, and how can I fix them?

I recently started working on optimizing the memory usage in my Swift application. When I started using the Leaks Instrument, I got over 100 "Malloc" leaks with no descriptions. I've looked around, but cannot find an explanation.
I'm running iOS 12.0 and Xcode 10.2
I went as far as commenting out all of the functions that were being called in my ViewDidLoad, and I'm still getting around 50 Malloc leaks.
I researched what causes memory leaks, and there's nothing in my code to suggest a leak, but I'm fairly new to memory management.
It's important for my app to not have leaks, so any help would be appreciated!
Let's say you have some very simple code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int mem(long int mem) {
    // The buffer is allocated but never freed, so every call leaks it;
    // these are exactly the allocations the Leaks instrument will flag.
    char *buffer = malloc(sizeof(char) * mem);
    if (buffer == NULL) {
        return 1;
    } else {
        return 0;
    }
}

int main(int argc, const char * argv[]) {
    long int mem_to_alloc = 2;
    for (int i = 0; i < 30; i++, mem_to_alloc *= 2) {
        printf("Allocation: %d Allocating: %ld\n", i, mem_to_alloc);
        if (mem(mem_to_alloc) == 1)
            break;
        sleep(1);
    }
    return 0;
}
First of all, make sure the proper configuration is set in your Build Scheme.
Once in Instruments, choose the Leaks template.
After you have run your code, you should see the suspicious allocations, with the call stack (Stack Trace) on the right.
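For contrast, here is a sketch of the same helper with the allocation released (mem_fixed is just an illustrative name, not from the original code); with the free() in place, Leaks should no longer report these blocks:

int mem_fixed(long int mem) {
    char *buffer = malloc(sizeof(char) * mem);
    if (buffer == NULL) {
        return 1;
    }
    free(buffer);   // release the allocation so it is not reported as a leak
    return 0;
}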

My ISR executes only once even though I attempt to clear the timer flag

I'm learning the PIC32 and running some tests to make sure my ISRs work properly. My test code just turns an LED on and off at a time interval controlled by the ISR.
The problem is that the ISR executes only once, and I don't get why.
I've tried the actual code that turns the LED on and off in the main function, with a timer between the turning on and off, and it works fine.
I've tried different ways of clearing the interrupt flag to make sure it's cleared, but I can't say whether it's actually OK or not.
Here is the code:
#include <p32xxxx.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <plib.h>
#include <xc.h>

#define PIN_bypassled       PORTEbits.RE4
#define PIN_bypassledwrite  LATEbits.LATE4

//-------- ISRs -----------------
// low speed timer
void __ISR(_TIMER_2_VECTOR, ipl7) Swiching_loop(void)
{
    // INTDisableInterrupts();
    mT2ClearIntFlag();      // first attempt to clear interrupt flag
    IFS0bits.T2IF = 0;      // second attempt to clear interrupt flag
    PIN_bypassledwrite = !PIN_bypassled;
    // INTEnableInterrupts();
}

void main() {
    TRISE = 0x000000;
    SYSTEMConfigPerformance(80000000L); // INTEnableSystemSingleVectoredInt();

    // T1CON = 0X0010;      // TMR1 off, prescale 1:256, 16-bit mode
    PR1 = 100000;
    mT1SetIntPriority(7);   // set priority interrupt level for timer 1
    T1CON = 0x8030;         // TMR1 on, prescale 1:256, 16-bit mode
    mT1IntEnable(1);

    // T2CON = 0X0030;      // TMR2 off, prescale 1:256, 16-bit mode
    PR2 = 100000;
    mT2SetIntPriority(7);   // set priority interrupt level for timer 2
    T2CON = 0x8070;         // TMR2 on, prescale 1:256, 16-bit mode
    mT2IntEnable(1);

    INTEnableInterrupts();

    while (1) {
    }
}
The expected result would be my LED turning on and off, but instead it just turns on. I interpret this as the ISR executing only once, but maybe I'm missing something, since I'm under the impression that I'm clearing the interrupt flag correctly.
What am I missing?

How to implement a ticker callback at 100 microseconds on the ESP8266?

I have an ESP8266 NodeMCU 12E development board and I'm using the Arduino IDE. I'm trying to use Ticker.h to sample an analog input consistently at a frequency of 10 kHz, which is one sample every 100 us. I noticed that Ticker sampler; sampler.attach(0.0001, callbackfunc); didn't work, because attach() won't take the value 0.0001.
So then I wrote the following code based on some guides that I saw:
#include <ESP8266WiFi.h>
#include <Ticker.h>

bool s = true;

void getSample()
{
    s = !s;
}

Ticker tickerObject(getSample, 100, 0, MICROS_MICROS);

const char *ssid = "___"; // Change it
const char *pass = "___"; // Change it

void setup()
{
    Serial.begin(115200);
    Serial.println(0); // start
    WiFi.mode(WIFI_STA);
    WiFi.begin(ssid, pass);
    tickerObject.start();
}

void loop()
{
    if (s == true)
    {
        Serial.println("True");
    }
    else
    {
        Serial.println("False");
    }
}
However, this did not compile because the tickerObject.start() method did not exist. So what I did next was:
1. Downloaded the latest Ticker package as a zip file
2. Unzipped the package from point 1
3. Made a backup of C:\Users\john\Documents\ArduinoData\packages\esp8266\hardware\esp8266\2.5.0-beta2\libraries\Ticker
4. Replaced the folder mentioned in point 3 with the Ticker folder from point 2
5. Restarted my Arduino IDE
6. Compiled and ran the code
7. Opened up the Serial Monitor
However, when I inspect the Serial Monitor, all it prints is "True". I was expecting the value of s to toggle between true and false at a 10 kHz frequency.
What did I do wrong?
From the documentation of this library:
The library use no interupts of the hardware timers and works with the micros() / millis() function.
This library implements timers in software by polling the micros() and millis() functions. It requires the update() method to be called in loop().
So loop() needs to call update() on every pass:
void loop()
{
    tickerObject.update();   // poll the software timer
    if (s == true)
        Serial.println("True");
    else
        Serial.println("False");
}
I'm trying to use a Ticker.h to sample an analog input consistently at a frequency of 10khz
It is worth a go, but this is a software-based solution that is prone to jitter, depending on how often the event loop can be called.

pthread: locking mutex with timeout

I'm trying to implement the following logic (a kind of pseudo-code) using pthreads:
pthread_mutex_t mutex;

threadA()
{
    lock(mutex);
    // do work
    timed_lock(mutex, current_abs_time + 1 minute);
}

threadB()
{
    // do work in more than 1 minute
    unlock(mutex);
}
I expect threadA to do the work and then wait until threadB signals, but for no longer than 1 minute. I have done similar things many times in Win32 but am stuck with pthreads: the timed_lock part returns immediately (not after 1 minute) with code ETIMEDOUT.
Is there a simple way to implement the logic above?
Even the following code returns ETIMEDOUT immediately:
pthread_mutex_t m;

// Thread A
pthread_mutex_init(&m, 0);
pthread_mutex_lock(&m);

// Thread B
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
struct timespec time = {now.tv_sec + 5, now.tv_nsec};
pthread_mutex_timedlock(&m, &time); // immediately returns ETIMEDOUT
Does anyone know why? I have also tried it with the gettimeofday function.
Thanks
I implemented my logic with condition variables, following the usual rules (a wrapping mutex, a bool flag, etc.).
Thank you all for the comments.
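For completeness, a rough sketch of that condition-variable pattern (the names wait_for_done and signal_done are illustrative, not from the original code):

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool done = false;

// Thread A: wait up to 60 seconds for thread B to finish its work.
int wait_for_done(void)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);  // pthread_cond_timedwait uses CLOCK_REALTIME by default
    deadline.tv_sec += 60;

    int rc = 0;
    pthread_mutex_lock(&lock);
    while (!done && rc == 0)
        rc = pthread_cond_timedwait(&cond, &lock, &deadline);
    pthread_mutex_unlock(&lock);
    return rc;                                 // 0 on success, ETIMEDOUT on timeout
}

// Thread B: signal completion when the work is done.
void signal_done(void)
{
    pthread_mutex_lock(&lock);
    done = true;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}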
For the second piece of code: AFAIK pthread_mutex_timedlock only works with CLOCK_REALTIME.
CLOCK_REALTIME counts seconds since 01/01/1970.
CLOCK_MONOTONIC typically counts seconds since boot.
Under these premises, the timeout you set is a few seconds into 1970 and therefore already in the past.
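Under that assumption, the snippet should behave once the deadline is built from CLOCK_REALTIME; a minimal sketch (lock_with_timeout is an illustrative name):

#include <errno.h>
#include <pthread.h>
#include <time.h>

// Build the absolute timeout from the clock pthread_mutex_timedlock expects.
int lock_with_timeout(pthread_mutex_t *m)
{
    struct timespec abs_timeout;
    clock_gettime(CLOCK_REALTIME, &abs_timeout);  // wall-clock time, not time since boot
    abs_timeout.tv_sec += 5;                      // block for at most 5 seconds

    int rc = pthread_mutex_timedlock(m, &abs_timeout);
    if (rc == ETIMEDOUT) {
        // the mutex was not released within 5 seconds
    }
    return rc;
}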
Try something like this:
class CmyClass
{
    boost::mutex mtxEventWait;
    bool WaitForEvent(long milliseconds);
    boost::condition cndSignalEvent;
};

bool CmyClass::WaitForEvent(long milliseconds)
{
    boost::mutex::scoped_lock mtxWaitLock(mtxEventWait);
    boost::posix_time::time_duration wait_duration = boost::posix_time::milliseconds(milliseconds);
    boost::system_time const timeout = boost::get_system_time() + wait_duration;
    return cndSignalEvent.timed_wait(mtxEventWait, timeout); // wait until the event is signaled
}

// So, in order to wait, call the WaitForEvent method:
WaitForEvent(1000); // it will time out after 1 second

// This is how an event could be signaled:
cndSignalEvent.notify_one();

Design pattern for waiting for user interaction in iOS?

I'm developing a BlackJack game for iOS. Keeping track of the current state and what needs to be done is becoming difficult. For example, I have a C++ class which keeps track of the current Game:
class Game {
    queue<Player> playerQueue;
    void hit();
    void stand();
};
Currently I'm implementing it using events (Method A):
- (void)hitButtonPress:(id)sender {
    game->hit();
}

void Game::hit() {
    dealCard(playerQueue.top());
}

void Game::stand() {
    playerQueue.pop();
    goToNextPlayersTurn();
}
As more and more options are added to the game, creating events for each one is becoming tedious and hard to keep track of.
Another way I thought of implementing it is like so (Method B):
void Game::playersTurn(Player *player) {
    dealCards(player);
    while (true) {
        string choice = waitForUserChoice();
        if (choice == "stand") break;
        if (choice == "hit")
            dealCard(player);
        // etc.
    }
    playerQueue.pop();
    goToNextPlayersTurn();
}
Where waitForUserChoice is a special function that lets the user interact with the UIViewController and once the user presses a button, only then returns control back to the playersTurn function. In other words, it pauses the program until the user clicks on a button.
With method A, I need to split my functions up every time I need user interaction. Method B lets everything stay a bit more in control.
Essentially the difference between method A and B is the following:
A:

function A() {
    initialize();
    // now wait for user interaction by waiting for a call to CompleteA
}

function CompleteA() {
    finalize();
}

B:

function B() {
    initialize();
    waitForUserInteraction();
    finalize();
}
Notice how B keeps the code more organized. Is there even a way to do this with Objective-C? Or is there a different method which I haven't mentioned recommended instead?
A third option I can think of is using a finite state machine. I have heard a little about them, but I'm not sure whether that will help me in this case or not.
What is the recommended design pattern for my problem?
I understand the dilemma you are running into. When I first started iOS I had a very hard time wrapping my head around relinquishing control to and from the operating system.
In general iOS would encourage you to go with method A. Usually you have variables in your ViewController which are set in method A(), and then they are checked in CompleteA() to verify that A() ran first etc.
Regarding your question about finite state machines, I think one may help you solve your problem. The very first thing I wrote in iOS was an FSM (therefore this is pretty bad code), but you can take a look here (near the bottom of FlipsideViewController.m):
https://github.com/esromneb/ios-finite-state-machine
The general idea is that you put this in your .h file inside an @interface block:
static int state = 0;
static int running = 0;
And in your .m you have this:
- (void) tick {
    switch (state) {
        case 0:
            // this case only runs once for the fsm, so set up one-time initializations
            // next state
            state = 1;
            break;
        case 1:
            navBarStatus.topItem.title = @"Connecting...";
            state = 2;
            break;
        case 2:
            // if something happened we move on, if not we wait in the connecting stage
            if( something )
                state = 3;
            else
                state = 1;
            break;
        case 3:
            // respond to something
            // next state
            state = 4;
            break;
        case 4:
            // wait for user interaction
            navBarStatus.topItem.title = @"Press a button!";
            state = 4;
            globalCommand = userInput;
            // if user did something
            if( globalCommand != 0 )
            {
                // go to state to consume user interaction
                state = 5;
            }
            break;
        case 5:
            if( globalCommand == 6 )
            {
                // respond to command #6
            }
            if( globalCommand == 7 )
            {
                // respond to command #7
            }
            // go back and wait for user input
            state = 4;
            break;
        default:
            state = 0;
            break;
    }

    if( running )
    {
        [self performSelector:@selector(tick) withObject:nil afterDelay:0.1];
    }
}
In this example (modified from the one on GitHub), globalCommand is an int representing the user's input. If globalCommand is 0, the FSM just spins in state 4 until globalCommand is non-zero.
To start the FSM, simply set running to 1 and call [self tick] from the view controller. The FSM will "tick" every 0.1 seconds until running is set to 0.
In my original FSM design I had to respond to user input AND network input from a Windows computer running its own software. In my design the Windows PC was also running a similar but different FSM. For this design, I built two FIFO queue objects of commands using an NSMutableArray. User interactions and network packets would enqueue commands into the queues, while the FSM would dequeue items and respond to them. I ended up using https://github.com/esromneb/ios-queue-object for the queues.
Please comment if you need any clarification.