I am using a BeagleBone Black in a project and wanted to ask if anyone knows the limits of the internal WDT (watchdog timer). What can it do, and what can it not do? I'm a beginner with the BeagleBone and WDTs...
Thanks!
Quoting from "AM335x Sitara™ Processors - Technical Reference Manual":
The watchdog timer is an upward counter capable of generating a pulse on the reset pin and an interrupt
to the device system modules following an overflow condition. The watchdog timer serves resets to the
PRCM module and serves watchdog interrupts to the host ARM. The reset of the PRCM module causes a
warm reset of the device.
Essentially the WDT is a timer: a hardware register whose value is incremented automatically at a precise frequency. There is also a hardware comparator whose job is to raise an IRQ every time the counter overflows. The difference from a traditional timer lies in the default action taken on that IRQ: in the WDT's case, it is to reset the board.
The main purpose of the WDT is to react to error situations in which the runtime environment (or the kernel) has frozen and is no longer responding. When this happens the runtime does not refresh the WDT, so it overflows, raises an IRQ, and the board is reset so that the runtime environment can regain control of it.
To use this feature (and you must use it if you don't want your board to be reset every x seconds) you have to write a new value (different from the last one written) to the WDT_WTGR register (physical address 0x44E35030) to reload the counter and avoid a reset. I noticed that the WDT overflows after approximately 50 seconds on the BeagleBone Black, so you'll have to write a value every x < 50 seconds.
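For a bare-metal application, the refresh can be a single register write. Below is a minimal sketch, assuming the AM335x WDT1 instance at base address 0x44E35000 with WDT_WTGR at offset 0x30 (i.e. 0x44E35030 as above), that the watchdog clocks are already enabled, and that the address is accessible from your code; the function name is just illustrative:

#include <stdint.h>

#define WDT1_BASE 0x44E35000u
#define WDT_WTGR  (*(volatile uint32_t *)(WDT1_BASE + 0x30u))

/* Reload ("kick") the watchdog: the counter is reloaded whenever a value
   different from the previously written one is stored in WDT_WTGR, so
   simply increment the last value. */
static void wdt_kick(void)
{
    static uint32_t kick_value = 0u;
    WDT_WTGR = ++kick_value;
}

Call something like this at least once per ~50-second window, e.g. from your main loop.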
However, this only applies if you plan to implement a bare-metal application to be loaded on the board. In other words, the WDT is already handled by U-Boot (the BBB's default boot loader) and by the Linux kernel, so normally you won't have to worry about it.
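Under Linux, the usual way for a userspace program to take over feeding the watchdog is the standard /dev/watchdog interface. A minimal sketch, assuming the kernel's OMAP watchdog driver exposes /dev/watchdog and that a ~10 s kick interval is acceptable (both are assumptions, not taken from the answer above):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Opening the device arms the watchdog on most drivers. */
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0)
        return 1;

    for (;;) {
        (void)write(fd, "k", 1);  /* any write refreshes the countdown */
        sleep(10);                /* comfortably below the ~50 s timeout */
    }
}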
I hope this clears up your doubts! :-)
Further reading:
http://www.ti.com/lit/ug/spruh73m/spruh73m.pdf - section 20.4
I'm using FreeRTOS 10.0.1 and have a really hard problem that I've been trying to solve for days: getting my code to run on a CC1310 (Arm Cortex-M3).
I use the TI SDK and read data from an I2C device. The first read is successful, but the second gets stuck in vListInsert(), with pxIterator->pxNext pointing to itself, so the for loop is infinite.
The driver is waiting in SemaphoreP_pend(); if I set a breakpoint, I can see that the post gets called, but the kernel is just stuck.
I have set the SysTick and PendSV ISR priority to 7 (lowest).
The I2C interrupt is priority 6.
configMAX_SYSCALL_INTERRUPT_PRIORITY is set to 1.
There is no stack overflow as far as I can tell.
Please help, how do I debug this problem?
Best regards
Jakob
This is almost certainly a problem with interrupt priorities and the list getting corrupted. The interrupt priority is stored in the top 3 bits in your case (as there are 3 priority bits), so 7 is stored as 7 << 5 (11100000b); you can pad the lower bits with 1 if you like, so priority 7 == 255. This is handled by FreeRTOS.
What I suspect is happening is that your I2C interrupt of priority 6 is not being shifted left by 5, so you have 00000110b, which gives a priority of 0 (the highest, as it's the top 3 bits that count).
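To make the bit layout concrete, here is a small sketch for a Cortex-M3 with 3 implemented priority bits (the printf is only there so the snippet runs standalone; check how the TI SDK actually sets the I2C interrupt priority in your project):

#include <stdint.h>
#include <stdio.h>

#define PRIO_BITS 3u  /* Cortex-M3 on the CC1310 implements 3 priority bits */

int main(void)
{
    uint8_t raw     = 6u;                                 /* 00000110b: top 3 bits are 000 -> logical priority 0 (highest!) */
    uint8_t shifted = (uint8_t)(6u << (8u - PRIO_BITS));  /* 11000000b: logical priority 6 */

    /* With CMSIS, NVIC_SetPriority(IRQn, 6) performs this shift internally,
       so pass the unshifted 6 there; only raw priority-register writes need
       the << 5 done by hand. */
    printf("raw=0x%02X shifted=0x%02X\n", raw, shifted);
    return 0;
}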
I solved the issue. After getting help from #realtime-rik, I decided to check all my interrupt priorities again. They were all OK, but in the process I discovered two things.
The TI SDK had structs with buffers in some of the drivers which were RTOS-dependent, so their size has to be set manually for each driver depending on the RTOS usage.
I called the board init function in main() before the scheduler was started, and inside board init one of the drivers was using FreeRTOS queues. I have now moved board init into my thread.
I ran into an issue with FreeRTOS getting stuck in vListInsert() when I accidentally disabled interrupts and tried to disable interrupts again. Make sure you don't have a call to taskENTER_CRITICAL() followed by portDISABLE_INTERRUPTS().
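For reference, this is a hedged sketch of the pattern being warned about (the function names are made up; the FreeRTOS macros are the real ones). Mixing a bare portDISABLE_INTERRUPTS() into a taskENTER_CRITICAL()/taskEXIT_CRITICAL() pair bypasses the critical-nesting bookkeeping and can lead to the kind of list corruption described above:

#include "FreeRTOS.h"
#include "task.h"

void suspect_pattern(void)
{
    taskENTER_CRITICAL();      /* masks interrupts and increments the nesting count */
    portDISABLE_INTERRUPTS();  /* masks interrupts again, with no nesting bookkeeping */
    /* ... */
    taskEXIT_CRITICAL();       /* the interrupt state may no longer be what you expect */
}

void safer_pattern(void)
{
    taskENTER_CRITICAL();
    /* ... protected work ... */
    taskEXIT_CRITICAL();       /* nesting handled consistently by the kernel */
}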
I inserted multiple identical boards into the system. The PCIe endpoint is implemented with a Xilinx IP core. After each FPGA is programmed, I manually refresh the Device Manager to check whether the device and driver are working properly.
My confusion is that this approach only seems to work with two boards at the same time. After the third board is programmed and I refresh the Device Manager, the system reports insufficient resources: "This device cannot find enough free resources that it can use (Code 12)".
I tried disabling the other two boards, but the device still reported conflicts. I don't know how to find out which resources are conflicting.
My boards have 2 BARs (BAR0: 2 KB, BAR1: 16 MB) and 1 IRQ.
I did a few experiments and it seems the problem is caused by a conflict of memory resources. The conflicting party is the AMD integrated graphics card that comes with the motherboard.
"1st", "2nd" and "3rd" refer to my board numbers; the 3rd board is not recognized because of the conflict.
After I shut the machine down, I plugged in all three boards and turned it on again. During startup the system suddenly lost power and restarted, and after that all three boards were working normally. At that point I found that the memory address of the graphics card had changed.
How can I resolve the conflict? Should I modify my driver code or the FPGA configuration?
PCIe enumeration should resolve memory allocation issues; however, there are a couple of implementation issues to be aware of. Case in point: I have used Xilinx XDMAs with a 64-bit BAR of 2 GB in size and literally bricked a DELL XPS motherboard, yet I have done the same on an IBM system and it just worked. The point here is that enumeration can be a firmware-, hardware- or OS-driven event. If you are working through the Device Manager, that sounds OS-driven, but when I toasted the XPS board it was some kind of FW issue related to BAR size that resulted in a permanent failure. 16 MB isn't big and should not be a problem, but I would recommend going with the Xilinx defaults first and showing reliability there; I think it's one BAR at 1 MB. I have run 3x 64-bit BARs at 1 MB apiece without issue, but keep it simple and show reliability, then move up. This will help isolate whether it is system flakiness or not.
I have seen some systems use FW-based enumeration that comes up really fast, before the FPGA has configured, in which case there is no PCIe target to ID. If you frequently find that your FPGA is not detected on power-up but is detected on a rescan, this can be a symptom. Resolving this is a bit of a pain; we ended up using partial reconfiguration: start with the PCIe interface, then reconfigure to load the remaining image. Let's hope it isn't this problem.
The next thing to be aware of is your reset mechanism within the FPGA. You probably hooked the PCIe IP reset to the bus reset, which is great; however, I have in the past also hooked that reset to internal PLL locked signals that may not be up. For troubleshooting purposes, keep that reset simple and get rid of everything else, to show that the PCIe IP by itself is reliable first.
You also have to be careful here. If you strip things down, make sure it is clean. If you ignore the PLL lock and try to use a Xilinx driver such as the XDMA driver, it has a routine where it tries to identify the XDMA with data transactions, looking for the DMA control BAR one BAR at a time. When it does this, the transaction it attempts may go out on the AXI bus if the BAR isn't the XDMA control BAR. If the AXI bus isn't out of reset or clocked when that happens, you will lock up the AXI bus; I have locked up a Linux box this way on several occasions. AXI requires that transactions complete, otherwise it just sits there waiting.
BTW, on a Linux box, you can look at the enumeration output in the kernel log. I'm not sure whether Windows shows you the same thing, but it can be helpful to see whether the device was initially probed and then something invalid was detected in the config registers, versus not being seen at all.
So, a couple of things to look at.
We have added custom code to run on Z1 motes, simulate everything in Cooja, and track power consumption using Powertrace.
Since we added additional code to run we would expect the power consumption to go up, but instead it goes down.
Based on the CPU time reported by Powertrace, the CPU seems to be idle for longer periods.
So we have some questions for a better understanding:
When enabling Powertrace, will the CPU time of our custom code be tracked automatically?
Is it possible that our code requires too many resources and Powertrace is therefore not able to report correctly?
Consider the following scenario:
In a POSIX system, a thread from a user program is running and the timer interrupt has been disabled.
To my understanding, unless it terminates, the currently running thread won't willingly give up control of the CPU.
My question is as follows: will calling pthread_yield() from within the thread give the kernel control over the CPU?
Any kind of help with this question would be greatly appreciated.
Turning off the operating system's timer interrupt would change it into a cooperative multitasking system. This is how Windows 1, 2 and 3 and Mac OS 9 worked. The running task only changed when the program made a system call.
Since pthread_yield results in a system call, yes, the kernel would get control back from the program.
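As a small illustration (a sketch only: sched_yield() is the standardized POSIX call, and pthread_yield() is a non-standard alias for it on some systems; the worker function is hypothetical), a long-running thread can explicitly hand control back to the kernel:

#include <sched.h>
#include <stddef.h>

void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < 1000000; i++) {
        /* ... one unit of work ... */
        sched_yield();  /* a system call: trap into the kernel and let it reschedule */
    }
    return NULL;
}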
If you are writing programs on a cooperative multitasking system, it is very important to not hog the CPU. If your program does hog the CPU the entire system comes to a halt.
This is why Windows MFC has the idle message in its message loop. Programs doing long term tasks would do it in that message handler by operating on one or two items and then returning to the operating system to check if the user had clicked on something.
It can easily relinquish control by issuing system calls that perform blocking inter-thread communication or request I/O. The timer interrupt is not absolutely required for a multithreaded OS, though it is very useful for providing timeouts for such system calls and for helping out if the system is overloaded with ready threads.
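For example (a minimal sketch; the pipe descriptor and function name are placeholders), a thread that blocks in a system call lets the scheduler run other threads even without a timer tick:

#include <unistd.h>

void wait_for_byte(int pipe_read_fd)
{
    char byte;
    /* The calling thread sleeps inside the kernel until data arrives;
       other ready threads can run in the meantime. */
    (void)read(pipe_read_fd, &byte, 1);
}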
What coding tricks, compilation flags and software-architecture considerations can be applied to keep power consumption low in an AIR for iOS application (or to reduce power consumption in an existing application that burns too much battery)?
One of the biggest things you can do is adjust the framerate based on the app state.
You can do this by adding handlers inside your App.mxml
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                            xmlns:s="library://ns.adobe.com/flex/spark"
                            activate="activate()" deactivate="close()" />
Inside your activate and close methods:
// activate
private function activate():void {
    FlexGlobals.topLevelApplication.frameRate = 24;
}
// deactivate
private function close():void {
    FlexGlobals.topLevelApplication.frameRate = 2;
}
You can also play around with this number depending on what your app is currently doing. If you're just displaying text, try lowering your fps further. This should give you the most bang for your buck in power savings.
Generally, high power consumption can be the result of:
intensive network usage
no sleep mode for the display while the app is idle
unneeded location services usage
continuously high CPU usage
Regarding (Flex/Flash) AIR, I would suggest the following:
First, use the Flex profiler plus the task manager and monitor CPU and memory usage. Try to reduce them as much as possible. Once they are low on Windows/Mac, they should (theoretically) be even lower on mobile devices.
The next step would be to use a network monitor and reduce the number and size of the network (web service) calls. Try to identify unneeded network activity and eliminate it.
Try to detect any idle state of the app (possible in Flex, not sure about Flash) and maybe put the whole app into an idle mode (if you have a fireworks animation running, just call stop()).
Also, I am not sure about this, but it should reduce CPU usage in favor of the GPU: use Stage3D (now available with AIR 3.2 for mobile as well) when you do complex animations. It may reduce execution time since hardware acceleration is used, and therefore power consumption may be lower.
If I am wrong about something, please comment/downvote (as you like), but this is my personal impression.
Update 1
As prompted in the comments, there is no 100% link between CPU usage on a desktop and on a mobile device, but theoretically, at the low level, we should see at least the same CPU usage trend.
My tips:
As a first step, profile your app with the profiler in Flash Builder
If you have a Mac, profile your app with Instruments from Xcode
And important:
The behaviors of Simulator, IPA-Interpreter packages and IPA-Test builds are different:
Simulator - pro forma optimizations
IPA-Interpreter - get a feeling for performance
IPA-Test - "real" performance behavior
And finally, test the App Store build; it is the fastest packaging mode in terms of performance.
Additionally, we saw that all of these modes can vary.