I have been fighting this problem for a long time now.
There are two MCP2515 CAN interface chips connected to each other. One is controlled by an Arduino, the other by an STM32 board.
Scheme: (-> := send)
Arduino->MCP2515->MCP2515->STM32
If I set the baud rate on the Arduino to 50k and on the STM32 to 50k, there is no receive interrupt on the second MCP2515.
When I double the baud rate on the Arduino to 100k, there is an interrupt and the data is transferred correctly.
The strange thing is: the CFG1, CFG2, and CFG3 register settings are identical on both MCP2515 chips!
Sure, I could keep doubling the rate, but baud rates like 31.25k would then need 62.5k, which is not in the library.
I hope someone has encountered the same issue or can help out with this.
I also tried this code as a reference for the baud rate settings:
https://github.com/latonita/arduino-canbus-monitor/blob/master/mcp_can.cpp
By the way: both run on 8 MHz crystal oscillators.
Problem partially solved: the doubled frequency was because the Arduino IDE was using the headers from its lib directory, not the custom copy outside of that folder!
If I go down to 10k baud or below, the interrupt no longer fires. Is it maybe too low to be handled?
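For reference, the timing registers for rates that aren't in the library can be worked out by hand. Here is a rough sketch (assuming the 8 MHz crystals mentioned above and 16 time quanta per bit: 1 sync + 7 prop + 4 PS1 + 4 PS2, i.e. a 75% sample point); the register layouts are taken from the MCP2515 datasheet, so double-check that the resulting BRP fits in 6 bits before using the values:

#include <stdio.h>
#include <stdint.h>

#define F_OSC      8000000UL  /* 8 MHz crystal on both boards */
#define TQ_PER_BIT 16UL       /* 1 sync + PROP_SEG + PS1 + PS2 */
#define PROP_SEG   7
#define PS1        4
#define PS2        4
#define SJW        1

int main(void)
{
    unsigned long rates[] = { 50000, 31250, 10000 };
    for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++) {
        /* bitrate = F_OSC / (2 * (BRP + 1) * TQ_PER_BIT) */
        unsigned long div = 2UL * rates[i] * TQ_PER_BIT;
        if (F_OSC % div != 0) {
            printf("%lu bps: no exact prescaler with %lu TQ/bit\n", rates[i], TQ_PER_BIT);
            continue;
        }
        unsigned long brp = F_OSC / div - 1;
        uint8_t cnf1 = (uint8_t)(((SJW - 1) << 6) | brp);                   /* SJW | BRP */
        uint8_t cnf2 = (uint8_t)(0x80 | ((PS1 - 1) << 3) | (PROP_SEG - 1)); /* BTLMODE=1 */
        uint8_t cnf3 = (uint8_t)(PS2 - 1);                                  /* PHSEG2    */
        printf("%lu bps: BRP=%lu CNF1=0x%02X CNF2=0x%02X CNF3=0x%02X\n",
               rates[i], brp, cnf1, cnf2, cnf3);
    }
    return 0;
}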
First of all: I'm using ESP-IDF 4.2 with the ESP-ADF and have two CMM4030D microphones connected to an ESP32-WROVER-E on a custom board. These microphones should record a WAV file to an SPI-connected SD card.
And that works flawlessly! But not when PSRAM is enabled, set to initialize on boot (which it does), and allocatable using heap_caps_malloc(). The frequency of the PSRAM, as well as that of the SPI flash, is set to 80 MHz, and nothing is connected to pins 16 and 17.
The SDK configuration most likely isn't the issue, as I used the configuration from the Wi-Fi/BLE coexistence example as a base.
So, to conclude: when PSRAM is enabled (initializes on startup and is allocatable using heap_caps_malloc()), the recording is choppy, but when it's disabled (while still running the same code) it works fine... What on earth could be the cause of this issue?
Kind regards,
A confused Jochem
I've managed to implement a workaround for this problem that consists of disabling the initialization of PSRAM on boot. One can then initialize the PSRAM first thing in main with the following function:
#include "esp_spiram.h"

// Bring up external PSRAM manually instead of letting it initialize on boot.
static void psram_init(void)
{
    esp_spiram_init();              // initialize the SPI RAM chip
    esp_spiram_init_cache();        // map it into the cache/address space
    esp_spiram_test();              // optional sanity check of the RAM
    esp_spiram_add_to_heapalloc();  // make it available to heap_caps_malloc
}
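Calling it before anything else in app_main() (the standard ESP-IDF entry point) is enough; roughly:

void app_main(void)
{
    psram_init();   // bring up PSRAM manually before the ADF pipeline is created
    // ... the rest of the recording setup stays unchanged ...
}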
Disabling the initialize-on-boot option in the SDK config prevents the ESP-ADF from using PSRAM for the allocation of its buffers. This of course results in less memory being available (which is undesirable), but it at least enables me to work towards an MVP.
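If it turns out only a few specific buffers are the problem, another option (untested on my board, so just a sketch) would be to leave PSRAM enabled and pin those buffers to internal, DMA-capable RAM explicitly:

#include <stddef.h>
#include <stdint.h>
#include "esp_heap_caps.h"

// Allocate a recording buffer in internal, DMA-capable RAM so it never lands in PSRAM.
static uint8_t *alloc_internal_rec_buf(size_t len)
{
    return heap_caps_malloc(len, MALLOC_CAP_INTERNAL | MALLOC_CAP_DMA);
}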
One can take a look at the issue on GitHub for more details.
Kind regards,
Jochem
I have been struggling for some time now trying to get my ESP8266 ESP-12 to work. I was able to load it with the NodeMCU firmware. Now the board constantly restarts itself: whether or not I have a script loaded, the module seems to restart continually. I am using ESPlorer and can see it connect to NodeMCU; then the board restarts anywhere from several seconds to several minutes later. I have tried various pinouts, capacitors, etc., with no luck in solving this problem. I have been searching all over and have had no luck finding a solution. Any help is greatly appreciated. Here is my current pinout:
ESP-12 ----------- TTY 3.3v Serial
================================================
TX ----------------------------- RX
RX ----------------------------- TX
GND, GPIO15 -------------------- GND
VCC, CH_PD, GPIO0, (RST) ------- LD1117v33 voltage regulator +3.3v
GND, GPIO15 -------------------- LD1117v33 voltage regulator GND
Thanks so much in advance for any help!
Assuming the hardware is okay and the right binary is loaded, it's almost surely a power problem.
1) Make sure whatever voltage regulator you're using is rated for 200 mA or more. In your case the LD1117 can source 800 mA, so that's good.
2) Make sure your upstream power supply can source 200 mA or more. If you're powering from a USB hub, make sure the hub is powered.
3) Make sure you have some large low-ESR capacitors across GND and 3.3 V. Two capacitors, 10 µF and 100 µF, worked for me (there's nothing magic about these exact values; 10-100 µF should work). The ESP8266 can draw relatively huge amounts of current for short periods while booting or transmitting. This can cause a bad transient on the power supply, which will cause the system to reboot, which can lead to an infinite reboot cycle.
An ESP8266 running Lua goes into panic mode if the program loaded on it has a bug.
Look at your code again. Reflash the firmware and upload the code again. Try uploading the code bit by bit so that you know which part is causing the issue.
Fix your setup in such a way that flashing the firmware is super easy. Trust me, you will need to reflash it many times if you want to play with code on it.
I had a NodeMCU dev board which worked fine for some hours, then suddenly restarted and wouldn't stay up. I tried adding power-supply capacitors and using a different power supply, to no avail.
What fixed it for me was resetting the watchdog timer every second:
tmr.alarm(6, 1000, 1, function() tmr.wdclr() end)
The watchdog timer needs to be reset periodically; I don't know how often. My device was resetting after about 35-40 seconds of uptime. My code (which ran every 30 seconds from a timer) was resetting the watchdog itself, but somehow that was not enough.
Use a pull-up resistor on the RST line rather than just connecting it directly to VCC. I used 4.7 kΩ, but the actual value is not critical.
Get the serial terminal program named "Terminal v1.9b by Br@y++". While writing this answer I was not able to find the download link; when I find it I'll add it in a comment.
Run the program, set the baud rate to custom, and enter the value 74880 or 74400. With this you'll be able to see the firmware boot messages, which contain the reboot reason code. The codes are:
0 -> normal startup by power on
1 -> hardware watch dog reset
2 -> software watch dog reset (From an exception)
3 -> software watch dog reset system_restart (Possibly unfed wd got angry)
4 -> soft restart (Possibly with a restart command)
5 -> wake up from deep-sleep
Looking at the reported reason code, you can work out why the chip is restarting.
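(For completeness: these codes match the NON-OS SDK's rst_info reason field, and NodeMCU exposes the same value through node.bootreason(). If you're building in C against the NON-OS SDK instead of running Lua, a minimal check looks roughly like this:)

#include "osapi.h"
#include "user_interface.h"   // ESP8266 NON-OS SDK

// Print the reason code for the last reset over the serial port.
static void print_reset_reason(void)
{
    struct rst_info *ri = system_get_rst_info();
    os_printf("reset reason: %u\n", ri->reason);
}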
If your hardware is good, then the problem should be inside your code.
And sometimes your code takes too long to finish; that triggers the watchdog and causes a restart.
I suggest that you connect your reset pin to 3.3 V via a 10 kΩ resistor and to ground via a push button. This way your reset pin is always pulled high, preventing random resets. I assume that your code has no bugs.
First of all, this is my first question on SO - if I made any mistakes, please do not tar and feather me ;)
I have a simple test application to play with the Mitov AudioLab components (www.mitov.com) version 7 in Delphi XE6. On my form, there is a TALWavePlayer, a TALSpeexCompressor, a TALSpeexDecompressor, a TALAudioMixer and a TALAudioOut, building a simple audio processing chain. I can connect the inputs and outputs visually at design time (in the OpenWire view). When I run my test application, I can hear the wave file through the speaker - without a single line of code. That's the easy (working) part.
(grrrr... can't post images, would have made things much clearer ;)
Now I disconnect the TALSpeexDecompressor output pin from the TALAudioMixer input pin visually at design time (OpenWire view). I want to replace this same connection in code at run time. (For the sake of simplicity I keep the single input pin and channel of the TALAudioMixer, so they do not need to be created in code).
I tried exactly the same options that work to connect other AudioLab components at run time (audio output pin -> audio input pin):
1.) decomp.OutputPin.Connect(mixer.InputPins[0]);
2.) decomp.OutputPin.Connect(mixer.Channels.Items[0].InputPin);
But with the TALSpeexDecompressor, this does not work - there is no signal leaving the decompressor. I do not have the source code of the components, so I cannot debug the application to find out what's going wrong.
Solution:
Stop and then start the wave player again after connecting the decompressor and the mixer dynamically. This somehow solves the issue. I do not know what happens under the hood, but after restarting the TALWavePlayer, the signal leaves the TALSpeexDecompressor and enters the TALAudioMixer. I stumbled upon the solution when I set the "filename" property of the TALWavePlayer component in code instead of in the property editor: because of another (default) setting, "RestartOnNewFile" = True, the wave player was restarted internally and the signal flow worked.
procedure TForm1.Button1Click(Sender: TObject);
var
  channel: TALAudioMixerChannelItem;
begin
  channel := mixer.Channels.Add;               // create a mixer channel at run time
  waveplayer.Stop;                             // stop the source before rewiring
  channel.InputPin.Connect(decomp.OutputPin);  // connect decompressor -> mixer
  waveplayer.Start;                            // restart so the signal flows again
end;
It is obvious that the AudioLab components can make simple tasks even simpler, but due to the poor documentation in their DocuWiki you often have to follow the trial-and-error path, sometimes even for days. Unfortunately, my real issue is more complicated than the simple test case I provided. I have a UDP client and server in the chain, so I have no control over the wave player on the client side when I dynamically connect the decompressor to the mixer on the server side. Obviously a deeper knowledge of these components is required, perhaps coming from experience. So this will be my next question here on SO.
Apologies to everyone for the insufficient documentation in the components :-( .
We are working to get a new release out in the next 3-4 weeks that will again contain the F1 help, and we are working to make it as complete as possible.
Unfortunately we had to release the 7.0 without documentation in order to have it available on time for the RAD Studio XE6 :-( .
Please contact me directly at mitov@mitov.com so I can help you with the Speex issue and with connecting the pins.
With best regards,
Boian Mitov
I am a complete CAN bus newbie. I'm hoping someone with CAN experience can point me in the right direction. I was given a Vector VN1610 USB to CAN adapter and a Continental ARS-308 radar sensor. The goal is to read some velocity and distance information from the sensor. Right now I am just trying to see any data but all I get are messages with an id of 0 or 0x80000000. The data payloads all report as 8 bytes of 0.
What Works
I have been able to use the provided sample .NET code to set up the VN1610. The ARS-308 has a single CAN channel, so in the Vector Hardware Config for my application I just map "CAN 1" to VN1610 Channel 1. (I leave CAN 2 unassigned.) I then assume I use that one channel for both transmit and receive. The code reports that the channel sets up and activates, and no errors are reported.
I then have a thread looking for incoming messages. If I don't debug out the two IDs mentioned above, I can actually process all of them and then I get XL_ERR_QUEUE_IS_EMPTY messages. So it looks like it's all working; I'm just not getting any real data.
What Doesn't
I would expect a slew of data messages in the 0x200 - 0x702 range to be coming in from the Continental ARS device. I'm more used to Ethernet-type protocols, where I would send a command and then read a response. None of my docs explain how CAN works, so I am ASSUMING that in CAN the device just sends data. I certainly can't find any commands that tell the device to send me the particular message ID I'm interested in.
Am I missing some basic CAN configuration step that informs the device it should start sending data? Any suggestions at all would be appreciated.
If it matters I'm writing in VS2013, .NET on a Win 7 64 Ultimate machine.
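For reference, my receive thread boils down to something like this, sketched here against the plain C API in vxlapi.h (the .NET wrapper mirrors these calls; portHandle is assumed to be a port that has already been opened and activated):

#include <stdio.h>
#include "vxlapi.h"   // Vector XL Driver Library

// Drain the RX queue, skipping anything that isn't an actual received CAN frame.
static void drain_rx_queue(XLportHandle portHandle)
{
    for (;;) {
        XLevent ev;
        unsigned int count = 1;
        XLstatus st = xlReceive(portHandle, &count, &ev);
        if (st == XL_ERR_QUEUE_IS_EMPTY)
            break;                        // nothing left to read right now
        if (st != XL_SUCCESS)
            break;                        // real driver error
        if (ev.tag != XL_RECEIVE_MSG)
            continue;                     // skip chip-state and error events
        printf("id=0x%08lX dlc=%u\n",
               (unsigned long)ev.tagData.msg.id,
               (unsigned)ev.tagData.msg.dlc);
    }
}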
The answer is no: it turns out that CAN devices will indeed just start streaming out messages when you turn them on (well, at least this one does). The messages with IDs of 0x0 and 0x80000000 are bogus; even with the radar sensor turned off I continued to see those messages.
It turns out I had a hardware problem. The CAN bus requires a 120 Ω termination resistor, which was installed, but when the shell was put back on the cable the resistor got cracked. Once we repaired this, everything started working as expected.
I have a PIC32MX340F512 board that another company developed for us. The board has a DS1338 RTCC, a 24LC32A EEPROM, and a display unit on an I2C bus; on this bus I added a TSL2561 I2C light sensor. I wrote C code to poll the light sensor continuously, and when the light level reaches a certain value I save the time, date, and light sensor reading to an SD card. This all works fine, but if I leave the system without exposure to light inside the tunnel (incident light at one end of the tunnel is what is to be monitored), the system becomes unresponsive no matter how much light you then apply; if I switch power off and back on again, everything starts working normally.
I am a one-man development team and have been trying to find the problem for months. I activated the watchdog timer to prevent the system from hanging, but the problem persisted. I then decided to check whether the problem is with the sensor by adding a push button to trigger a light measurement, but still, after 4-5 hours the PIC can't even detect a change on the input pin. Under the impression that a hardware reset overrides anything going on, I added a reset button; it too works fine for the first few hours, but after that the PIC doesn't seem to respond to anything, including a reset. I was becoming convinced that there is nothing wrong with the firmware. Also, while all this is happening, the display unit (a PIC16F1933 and LCD) on the same I2C bus, which shares power with the main unit, doesn't seem to be affected and keeps alternating between its different messages.
Does anybody have an idea what could be wrong (hardware, firmware, or my sensor)? I am using a separately purchased 24 V DC power supply. The PIC seems to go into a deep sleep, although I did not implement any kind of sleep mode in my code.
NB: We use the same board for many other projects and I haven't come across such a problem before. Thanks in advance.
I think you need to (if you haven't already) explore the wonderful world of in-circuit-debugging (such as with the ICD3 or PICkit 2/3). It allows you to run the processor in a special mode that lets you pause execution, see exactly which line of code is being executed, inspect variable values, and step through the code to see which parts are running and not running, or see exactly where execution takes a wrong turn. If the problem takes hours to reproduce, that's okay. You can just leave it overnight running in debug mode and hopefully it will be locked-up or 'sleeping' in the morning. At this point, you will be able to pause the processor and poke around to see if you got caught in some kind of infinite loop or something. This is often the only way to dig inside a running piece of code to see why things aren't working as you expect. But as you say, those bugs that take hours or days to manifest are the trickiest. Good luck!
It sounds like you can break up your design into three main parts: SD card interfacing, reading the RTC, and reading the light sensor. If it were me, I would upload a version of the code that mimics reading the light sensor but only returns fake data, and see if that cures the problem, as sketched below. Additionally, do the same with the other two modules separately and see whether any of the three versions of your project stops showing the problem. From there just keep narrowing it down until you find the block of code that's causing problems.
If two or more versions of your debug code show the same problem, then my guess is it has to do with one of the communication protocols. I had a problem with a PIC32 silicon revision blocking when using DMA in conjunction with the SPI peripherals, so I would suggest checking the errata for your chip.
If you still can't find the problem, my only other suggestion would be to check for memory leaks or arrays that are growing into reserved memory.
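Something like the following compile-time stub is what I have in mind; read_light_sensor() and tsl2561_read_lux() are placeholder names, so substitute whatever your polling routine is actually called:

// Debug aid: swap the real TSL2561 read for canned data at compile time.
#define USE_FAKE_LIGHT_SENSOR 1

static unsigned int read_light_sensor(void)
{
#if USE_FAKE_LIGHT_SENSOR
    static unsigned int fake = 0;
    fake = (fake + 10) % 1000;    // slowly varying fake light level
    return fake;
#else
    return tsl2561_read_lux();    // your existing I2C read (placeholder name)
#endif
}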
Hope that helps, good luck!