Can I Test a CAN Transceiver with External Loopback Mode? - can-bus

Hello, I am using an STM32H753 on a custom PCB.
I have no CAN analyzer, only an oscilloscope, a multimeter, and a logic analyzer.
We have a problem with CAN bus communication.
I suspect the CAN transceiver, but we couldn't find the root cause yet.
The STM32H7 has external and internal loopback modes for testing CAN; these modes also let us see the messages we send on CAN TX.
I have two different products with different STM32H753 variants. One works and the other does not. Their transceivers are different parts, but with almost the same pinout and behavior.
My assumption was: if I see my messages on CAN TX, then I should also see them on CAN H and CAN L.
I wrote test code.
I ran the test code on both devices.
On the working one, I saw the messages on CAN TX and also saw them on CAN H/CAN L in an attenuated form, which I attributed to the missing termination resistor at the end of the bus.
On the non-working one, I also saw the messages on CAN TX, but there was no activity at all on CAN H or CAN L.
My question: is my test valid, could I still be missing something, or does this test not prove anything?
HAL_Delay(100);

/* Write message to dedicated Tx buffer 0 */
if (HAL_FDCAN_AddMessageToTxBuffer(&hfdcan1, &TxHeader, TxData_Node1_To_Node2, FDCAN_TX_BUFFER0) != HAL_OK)
{
    Error_Handler();
}

/* Send Tx buffer message */
if (HAL_FDCAN_EnableTxBufferRequest(&hfdcan1, FDCAN_TX_BUFFER0) != HAL_OK)
{
    Error_Handler();
}

/* Wait until the transmission completes */
while (HAL_FDCAN_IsTxBufferMessagePending(&hfdcan1, FDCAN_TX_BUFFER0) == 1);

/* Polling for reception complete on buffer index 0 */
while (HAL_FDCAN_IsRxBufferMessageAvailable(&hfdcan1, FDCAN_RX_BUFFER0) == 0);

/* Retrieve the looped-back message from Rx buffer 0 */
if (HAL_FDCAN_GetRxMessage(&hfdcan1, FDCAN_RX_BUFFER0, &RxHeader, RxData_From_Node2) != HAL_OK)
{
    Error_Handler();
}
hfdcan1.Instance = FDCAN1;
hfdcan1.Init.FrameFormat = FDCAN_FRAME_CLASSIC;
hfdcan1.Init.Mode = FDCAN_MODE_EXTERNAL_LOOPBACK;
hfdcan1.Init.AutoRetransmission = DISABLE;
hfdcan1.Init.TransmitPause = DISABLE;
hfdcan1.Init.ProtocolException = DISABLE;
hfdcan1.Init.NominalPrescaler = 1;
hfdcan1.Init.NominalSyncJumpWidth = 13;
hfdcan1.Init.NominalTimeSeg1 = 86;
hfdcan1.Init.NominalTimeSeg2 = 13;
hfdcan1.Init.DataPrescaler = 2;
hfdcan1.Init.DataSyncJumpWidth = 12;
hfdcan1.Init.DataTimeSeg1 = 12;
hfdcan1.Init.DataTimeSeg2 = 12;
hfdcan1.Init.MessageRAMOffset = 0;
hfdcan1.Init.StdFiltersNbr = 1;
hfdcan1.Init.ExtFiltersNbr = 0;
hfdcan1.Init.RxFifo0ElmtsNbr = 0;
hfdcan1.Init.RxFifo0ElmtSize = FDCAN_DATA_BYTES_8;
hfdcan1.Init.RxFifo1ElmtsNbr = 0;
hfdcan1.Init.RxFifo1ElmtSize = FDCAN_DATA_BYTES_8;
hfdcan1.Init.RxBuffersNbr = 1;
hfdcan1.Init.RxBufferSize = FDCAN_DATA_BYTES_8;
hfdcan1.Init.TxEventsNbr = 0;
hfdcan1.Init.TxBuffersNbr = 1;
hfdcan1.Init.TxFifoQueueElmtsNbr = 32;
hfdcan1.Init.TxFifoQueueMode = FDCAN_TX_FIFO_OPERATION;
hfdcan1.Init.TxElmtSize = FDCAN_DATA_BYTES_8;
if (HAL_FDCAN_Init(&hfdcan1) != HAL_OK)
{
Error_Handler();
}


Sending CAN frames after receiving an ACK in CAPL (using a delay/timer in CAPL)

I wrote CAPL code that has to send CAN frames to control the steps of a stepper motor. The stepper motor used is a TMCM-1311 module.
variables{
  message step myMsg1;
  message llmmodule myMsg2;
  message llmmodule myMsg3;
}
on start{
  int i = 0;
  for (i = 0; i <= 5000; i++)
  {
    myMsg1.DLC = 8;
    myMsg1.byte(0) = 0x04;
    myMsg1.byte(1) = 0x01;
    myMsg1.byte(2) = 0x00;
    myMsg1.byte(3) = 0x00;
    myMsg1.byte(4) = 0x00;
    myMsg1.byte(5) = 0x00;
    myMsg1.byte(6) = 0x0A;
    myMsg1.byte(7) = 0x00;
    output(myMsg1);
    myMsg2.DLC = 8;
    myMsg2.byte(0) = 0x07;
    myMsg2.byte(1) = 0x01;
    myMsg2.byte(2) = 0x03;
    myMsg2.byte(3) = 0x11;
    output(myMsg2);
    myMsg3.DLC = 8;
    myMsg3.byte(0) = 0x07;
    myMsg3.byte(1) = 0x03;
    myMsg3.byte(2) = 0x01;
    myMsg3.byte(3) = 0x00;
    output(myMsg3);
  }
}
In the for loop of the above code, the CAN frames are sent continuously without waiting for an ACK. The first data frame, myMsg1, is sent to rotate the stepper motor by a certain step size. Once that frame is sent, the stepper motor takes some time to reach that position; only once it is there should the other two CAN frames, myMsg2 and myMsg3, be sent. After ACKs are received for all three data frames, i in the for loop should increment and the next iteration should take place. But in the above code the for loop executes without waiting for the ACK of each CAN frame, so I think a timer should be used to add a delay and run the next iteration only after the ACK is received. I tried using timers but could not get the required behavior, so it would be a great help if anyone could tell me how this should be done.
You have to use the on message event handler for this use case.
variables
{
  message step myMsg1;
  message llmmodule myMsg2;
  message llmmodule myMsg3;
}
on start
{
  myMsg1.DLC = 8;
  myMsg1.byte(0) = 0x04;
  myMsg1.byte(1) = 0x01;
  myMsg1.byte(2) = 0x00;
  myMsg1.byte(3) = 0x00;
  myMsg1.byte(4) = 0x00;
  myMsg1.byte(5) = 0x00;
  myMsg1.byte(6) = 0x0A;
  myMsg1.byte(7) = 0x00;
  myMsg2.DLC = 8;
  myMsg2.byte(0) = 0x07;
  myMsg2.byte(1) = 0x01;
  myMsg2.byte(2) = 0x03;
  myMsg2.byte(3) = 0x11;
  myMsg3.DLC = 8;
  myMsg3.byte(0) = 0x07;
  myMsg3.byte(1) = 0x03;
  myMsg3.byte(2) = 0x01;
  myMsg3.byte(3) = 0x00;
  output(myMsg1);
}
on message ACK_Msg1
{
  output(myMsg2);
}
on message ACK_Msg2
{
  output(myMsg3);
}
on message ACK_Msg3
{
  output(myMsg1);
}
This chain will go on until the ACK messages stop. So, if you want to run it a definite number of times, you will have to keep a counter or some kind of flag in the events.

LibSVM: adding features using the Java API

I have a text and I want to train on it by adding features using the Java API. Looking at the examples, the main class for building the training set is svm_problem. It appears that svm_node represents a feature (the index is the feature and the value is the weight of the feature).
What I have done is to keep a map (just to simplify the problem) holding an association between each feature and an index. For each weighted feature of my example I create a new node:
svm_node currentNode = new svm_node();
int index = feature.getIndexInMap();
double value = feature.getWeight();
currentNode.index = index;
currentNode.value = value;
Is my intuition correct? What does svm_problem.y refer to? Does it refer to the index of the label? Is svm_problem.l just the length of the two vectors?
Your intuition is very close, but each entry of svm_problem.x is a pattern: an array of svm_node index/value pairs terminated by an index of -1, not a single feature. The variable svm_problem.y is an array containing the label of each pattern, and svm_problem.l is the size of the training set.
Also, beware of svm_parameter.nr_weight, which is the number of per-label penalty weights you pass in (useful if you have an unbalanced training set); if you are not going to use it, you must set that value to zero.
Let me show you a simple example in C++:
#include "svm.h"
#include <iostream>
using namespace std;
int main()
{
svm_parameter params;
params.svm_type = C_SVC;
params.kernel_type = RBF;
params.C = 1;
params.gamma = 1;
params.nr_weight = 0;
params.p = 0.0001;
params.degree = 3;        /* unused by RBF, but must hold a valid value */
params.coef0 = 0;
params.nu = 0.5;
params.cache_size = 100;  /* required: kernel cache size in MB */
params.eps = 1e-3;        /* required: stopping tolerance */
params.shrinking = 1;
params.probability = 0;
svm_problem problem;
problem.l = 4;
problem.y = new double[4]{1,-1,-1,1};
problem.x = new svm_node*[4];
{
problem.x[0] = new svm_node[3];
problem.x[0][0].index = 1;
problem.x[0][0].value = 0;
problem.x[0][1].index = 2;
problem.x[0][1].value = 0;
problem.x[0][2].index = -1;
}
{
problem.x[1] = new svm_node[3];
problem.x[1][0].index = 1;
problem.x[1][0].value = 1;
problem.x[1][1].index = 2;
problem.x[1][1].value = 0;
problem.x[1][2].index = -1;
}
{
problem.x[2] = new svm_node[3];
problem.x[2][0].index = 1;
problem.x[2][0].value = 0;
problem.x[2][1].index = 2;
problem.x[2][1].value = 1;
problem.x[2][2].index = -1;
}
{
problem.x[3] = new svm_node[3];
problem.x[3][0].index = 1;
problem.x[3][0].value = 1;
problem.x[3][1].index = 2;
problem.x[3][1].value = 1;
problem.x[3][2].index = -1;
}
for(int i=0; i<4; i++)
{
cout << problem.y[i] << endl;
}
svm_model * model = svm_train(&problem, &params);
svm_save_model("mymodel.svm", model);
for(int i=0; i<4; i++)
{
double d = svm_predict(model, problem.x[i]);
cout << "Prediction " << d << endl;
}
/* We should free the memory at this point.
But this example is large enough already */
}

PortAudio MME device behaviour issue

I am using the multiple-output-device feature provided by the paMME host API to output audio through multiple stereo devices. I also need to use a single multichannel input device with MME.
- When I configure just the output devices and play internally generated audio, there is no problem.
- However, problems start when I configure both the input device and the multiple stereo output devices. The application crashes when I try to use more than two channels on the output. That is, if I increment the 'out' pointer by more than 2*frames_per_buffer, it crashes, which suggests that the buffer has been allocated for only two output channels.
Can anybody throw some light on what the problem could be? The configuration code is given below:
outputParameters.device = paUseHostApiSpecificDeviceSpecification;
outputParameters.channelCount = 8;
outputParameters.sampleFormat = paInt16;
outputParameters.hostApiSpecificStreamInfo = NULL;
wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo);
wmmeStreamInfo.hostApiType = paMME;
wmmeStreamInfo.version = 1;
wmmeStreamInfo.flags = paWinMmeUseMultipleDevices;
wmmeDeviceAndNumChannels[0].device = selectedDeviceIndex[0];
wmmeDeviceAndNumChannels[0].channelCount = 2;
wmmeDeviceAndNumChannels[1].device = selectedDeviceIndex[1];
wmmeDeviceAndNumChannels[1].channelCount = 2;
wmmeDeviceAndNumChannels[2].device = selectedDeviceIndex[2];
wmmeDeviceAndNumChannels[2].channelCount = 2;
wmmeDeviceAndNumChannels[3].device = selectedDeviceIndex[3];
wmmeDeviceAndNumChannels[3].channelCount = 2;
wmmeStreamInfo.devices = wmmeDeviceAndNumChannels;
wmmeStreamInfo.deviceCount = 4;
outputParameters.suggestedLatency = Pa_GetDeviceInfo( selectedDeviceIndex[0] )->defaultLowOutputLatency;
outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo;
inputParameters.device = selectedInputDeviceIndex; /* selected input device */
inputParameters.channelCount = 8;                  /* multichannel input */
inputParameters.sampleFormat = paInt16;            /* 16-bit integer samples */
inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputParameters.device )->defaultLowInputLatency;
inputParameters.hostApiSpecificStreamInfo = NULL;

Data appearing as Ethernet trailer in self-made SKB

I'm trying to build a custom skb in a Linux kernel module and then send it over the network.
I succeed in making the SKB, but when I send it over the network it does not reach the destination.
If I run Wireshark on the local machine that is sending my SKB, it shows my packet. However, if I examine the contents of the packet, it shows that the data is being placed as an 'Ethernet Trailer'.
Also, if I remove all data from my SKB and send a header-only SKB, it still does not reach its destination.
Here is the code:
u_int32_t local_ip;
u_int32_t remote_ip;
struct udphdr *udph;
struct iphdr *iph;
struct ethhdr *eth;
unsigned short udp_len;
char remote_mac[6];
char local_mac[6];
Allocate a skb:
int header_len = sizeof(*iph) + sizeof(*udph) + sizeof(*eth);
skb = sock_wmalloc(sock->sk, /*payload len*/ len + header_len + LL_RESERVED_SPACE(pfr->ring_netdev->dev), 0, GFP_KERNEL);
Since I am using skb_push, I move the data and tail pointers all the way down:
skb_reserve(skb,
len + header_len + LL_RESERVED_SPACE(pfr->ring_netdev->dev));
Push UDP header:
skb_push(skb, sizeof(*udph));
Reset the transport_pointer accordingly:
skb_reset_transport_header(skb);
Set and populate udp header:
udph = udp_hdr(skb);
udph->source = htons(5123);
udph->dest = htons(5123);
udp_len = 14;
udph->len = htons(udp_len);
udph->check = 0;
local_ip = htonl(0xCB873F2A); /*203.135.63.42*/
remote_ip = htonl(0xCB873F29); /*203.135.63.41*/
udph->check = csum_tcpudp_magic(local_ip,
remote_ip,
udp_len, IPPROTO_UDP,
csum_partial(udph, udp_len, 0));
if (udph->check == 0) {
printk("mangled checksum\n");
udph->check = CSUM_MANGLED_0;
}
Now to push IP header:
skb_push(skb, sizeof(*iph));
Reset the network_pointer:
skb_reset_network_header(skb);
Set and populate the network header:
iph = ip_hdr(skb);
put_unaligned(0x45, (unsigned char *)iph);
iph->tos = 0;
ip_len = 40;
put_unaligned(htons(ip_len), &(iph->tot_len));
//iph->id = htons(atomic_inc_return(&ip_ident));
iph->frag_off = 0;
iph->ttl = 64;
iph->protocol = IPPROTO_UDP;
iph->check = 0;
put_unaligned(local_ip, &(iph->saddr));
put_unaligned(remote_ip, &(iph->daddr));
iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
Push the Ethernet Header:
eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);
Reset the mac_pointer accordingly:
skb_reset_mac_header(skb);
Set and populate the mac_header:
skb->protocol = eth->h_proto = htons(ETH_P_IP);
remote_mac[0] = 0x4C;
remote_mac[1] = 0x72;
remote_mac[2] = 0xB9;
remote_mac[3] = 0x24;
remote_mac[4] = 0x14;
remote_mac[5] = 0x1E;
local_mac[0] = 0x00;
local_mac[1] = 0x1E;
local_mac[2] = 0xE3;
local_mac[3] = 0xED;
local_mac[4] = 0xD4;
local_mac[5] = 0xA9;
memcpy(eth->h_source, remote_mac, ETH_ALEN);
memcpy(eth->h_dest, remote_mac, ETH_ALEN);
Set device and protocol:
skb->protocol = htons(ETH_P_IP);
skb->dev = pfr->ring_netdev->dev;
skb->priority = sock->sk->sk_priority;
if(!err)
goto out_free;
Now send it:
if (dev_queue_xmit(skb) != NETDEV_TX_OK) {
err = -ENETDOWN; /* Probably we need a better error here */
goto out;
}

Error returned when trying to copy the render target's back buffer

I have a WDDM user-mode display driver for DX9. Now I would like to dump the
render target's back buffer to a BMP file. Since the render target resource is
not lockable, I have to create a resource from a system-memory buffer, bitblt from
the render target to the system buffer, and then save the system buffer to the BMP
file. However, calling the bitblt always returns the error code E_FAIL. I also
tried calling pfnCaptureToSysMem, which returned the same error code.
Is anything wrong here?
D3DDDI_SURFACEINFO nfo;
nfo.Depth = 0;
nfo.Width = GetRenderSize().cx;
nfo.Height = GetRenderSize().cy;
nfo.pSysMem = NULL;
nfo.SysMemPitch = 0;
nfo.SysMemSlicePitch = 0;
D3DDDIARG_CREATERESOURCE resource;
resource.Format = D3DDDIFMT_A8R8G8B8;
resource.Pool = D3DDDIPOOL_SYSTEMMEM;
resource.MultisampleType = D3DDDIMULTISAMPLE_NONE;
resource.MultisampleQuality = 0;
resource.pSurfList = &nfo;
resource.SurfCount = 1;
resource.MipLevels = 1;
resource.Fvf = 0;
resource.VidPnSourceId = 0;
resource.RefreshRate.Numerator = 0;
resource.RefreshRate.Denominator = 0;
resource.hResource = NULL;
resource.Flags.Value = 0;
resource.Flags.Texture = 1;
resource.Flags.Dynamic = 1;
resource.Rotation = D3DDDI_ROTATION_IDENTITY;
HRESULT hr = m_pDevice->m_deviceFuncs.pfnCreateResource(m_pDevice->GetDrv(), &resource);
HANDLE hSysSpace = resource.hResource;
D3DDDIARG_BLT blt;
blt.hSrcResource = m_pDevice->m_hRenderTarget;
blt.hDstResource = hSysSpace;
blt.SrcRect.left = 0;
blt.SrcRect.top = 0;
blt.SrcRect.right = GetRenderSize().cx;
blt.SrcRect.bottom = GetRenderSize().cy;
blt.DstRect = blt.SrcRect;
blt.DstSubResourceIndex = 0;
blt.SrcSubResourceIndex = 0;
blt.Flags.Value = 0;
blt.ColorKey = 0;
hr = m_pDevice->m_deviceFuncs.pfnBlt(m_pDevice, &blt);
You are on the right track, but I think you can use the DirectX functions for this.
In order to copy the render target from video memory to system memory you should use the IDirect3DDevice9::GetRenderTargetData() function.
This function requires that the destination surface is an offscreen plain surface created with pool D3DPOOL_SYSTEMMEM. This surface also must have the same dimensions as the render target (no stretching allowed). Use IDirect3DDevice9::CreateOffscreenPlain() to create this surface.
Then this surface can be locked and the color data can be accessed by the CPU.
