I am using the library <flash.h> to erase/write/read flash memory, but unfortunately the data I am trying to save doesn't seem to be written. I am using a PIC18F87J11 with the MPLAB XC8 compiler. When I read the program memory from the PIC after attempting to write to it, there is no data at address 0x1C0CA. What am I doing wrong?
char read[1];
/* set FOSC clock to 8MHZ */
OSCCON = 0b01110000;
/* turn off 4x PLL */
OSCTUNE = 0x00;
TRISDbits.TRISD6 = 0; // set as ouput
TRISDbits.TRISD7 = 0; // set as ouput
LATDbits.LATD6 = 0; // LED 1 OFF
LATDbits.LATD7 = 1; // LED 2 ON
EraseFlash(0x1C0CA, 0x1C0CA);
WriteBytesFlash(0x1C0CA, 1, 0x01);
ReadFlash(0x1C0CA, 1, read[0]);
if (read[0] == 0x01)
LATDbits.LATD6 = 1; // LED 1 ON
while (1) {
}
I don't know what WriteBytesFlash does, but the page size for your device is 64 bytes, and after writing you need to write an unlock sequence to the EECON2 and EECON1 registers to start programming the flash memory.
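For reference, a minimal sketch of the typical PIC18 unlock sequence (this is not the internals of the <flash.h> library, just the standard self-programming sequence, assuming the write latches have already been loaded with table writes and EECON1 is set up for a flash write):

// Hedged sketch: standard PIC18 self-programming unlock sequence.
// Assumes TBLPTR points into the erased block, the holding registers
// were loaded with TBLWT instructions, and EECON1 has WREN = 1, FREE = 0.
INTCONbits.GIE = 0;   // the unlock sequence must not be interrupted
EECON2 = 0x55;        // unlock byte 1
EECON2 = 0xAA;        // unlock byte 2
EECON1bits.WR = 1;    // start programming; the CPU stalls until it finishes
INTCONbits.GIE = 1;   // re-enable interrupts
EECON1bits.WREN = 0;  // disable further writes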
I have some questions regarding the flash memory on a dsPIC33EP512MU810.
I'm aware of how it should be done: set all the registers for the address, latches, etc., then run the sequence to start the write procedure, or call the builtin functions.
But I find there are some small differences between what I'm experiencing and what is in the documentation.
1) Writing the flash in WORD mode. In the documentation it is pretty straightforward; the following is the example code from the documentation:
int varWord1L = 0xXXXX;
int varWord1H = 0x00XX;
int varWord2L = 0xXXXX;
int varWord2H = 0x00XX;
int TargetWriteAddressL; // bits<15:0>
int TargetWriteAddressH; // bits<22:16>
NVMCON = 0x4001; // Set WREN and word program mode
TBLPAG = 0xFA; // write latch upper address
NVMADR = TargetWriteAddressL; // set target write address
NVMADRU = TargetWriteAddressH;
__builtin_tblwtl(0,varWord1L); // load write latches
__builtin_tblwth(0,varWord1H);
__builtin_tblwtl(0x2,varWord2L);
__builtin_tblwth(0x2,varWord2H);
__builtin_disi(5); // Disable interrupts for NVM unlock sequence
__builtin_write_NVM(); // initiate write
while(NVMCONbits.WR == 1);
But that code doesn't work, depending on the address where I want to write. I found a fix to write one WORD, but I can't write 2 WORDs where I want. I store everything in auxiliary memory, so the upper address (NVMADRU) is always 0x7F for me; NVMADR is the address I change. What I'm seeing is that if the address where I want to write modulo 4 is not 0, then I have to put my value in the last two latches, otherwise I have to put the value in the first latches.
If the address modulo 4 is not zero, it doesn't work like the documentation code above: the value that ends up at the address is whatever is in the second set of latches.
I fixed it for writing only one word at a time like this:
if(Address % 4)
{
__builtin_tblwtl(0, 0xFFFF);
__builtin_tblwth(0, 0x00FF);
__builtin_tblwtl(2, ValueL);
__builtin_tblwth(2, ValueH);
}
else
{
__builtin_tblwtl(0, ValueL);
__builtin_tblwth(0, ValueH);
__builtin_tblwtl(2, 0xFFFF);
__builtin_tblwth(2, 0x00FF);
}
That is my first question: why am I seeing this behavior?
2) I also want to write a full row.
That also doesn't seem to work for me, and I don't know why, because I'm doing what is in the documentation.
I tried a simple row-write and at the end I just read back the first 3 or 4 elements I wrote, to see if it works:
NVMCON = 0x4002; //set for row programming
TBLPAG = 0x00FA; //set address for the write latches
NVMADRU = 0x007F; //upper address of the aux memory
NVMADR = 0xE7FA;
int latchoffset;
latchoffset = 0;
__builtin_tblwtl(latchoffset, 0);
__builtin_tblwth(latchoffset, 0); //current = 0, available = 1
latchoffset+=2;
__builtin_tblwtl(latchoffset, 1);
__builtin_tblwth(latchoffset, 1); //current = 0, available = 1
latchoffset+=2;
.
. all the way to 127(I know I could have done it in a loop)
.
__builtin_tblwtl(latchoffset, 127);
__builtin_tblwth(latchoffset, 127);
INTCON2bits.GIE = 0; //stop interrupt
__builtin_write_NVM();
while(NVMCONbits.WR == 1);
INTCON2bits.GIE = 1; //start interrupt
int testaddress;
testaddress = 0xE7FA;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
testaddress += 2;
status = NVMemReadIntH(testaddress);
status = NVMemReadIntL(testaddress);
What I see is that the value stored at address 0xE7FA is 125, at 0xE7FC is 126, and at 0xE7FE is 127; the rest are all 0xFFFF.
Why is it taking only the last 3 latches and writing them to the first 3 addresses?
Thanks in advance for your help.
The dsPIC33 program memory space is treated as 24 bits wide; it is more appropriate to think of each address of the program memory as a lower and upper word, with the upper byte of the upper word being unimplemented.
(dsPIC33EPXXX datasheet)
There is a phantom byte every two program words.
Your code
if(Address % 4)
{
__builtin_tblwtl(0, 0xFFFF);
__builtin_tblwth(0, 0x00FF);
__builtin_tblwtl(2, ValueL);
__builtin_tblwth(2, ValueH);
}
else
{
__builtin_tblwtl(0, ValueL);
__builtin_tblwth(0, ValueH);
__builtin_tblwtl(2, 0xFFFF);
__builtin_tblwth(2, 0x00FF);
}
...will be fine for writing a bootloader if you are generating the values from a valid Intel HEX file, but it doesn't make storing data structures simple, because the phantom byte is not taken into account.
If you create a uint32_t variable and look at the compiled HEX file, you'll notice that it actually uses the least significant words of two 24-bit program words. That is, the 32-bit value is placed into a 64-bit range, but only 48 of those 64 bits are programmable; the others are phantom bytes (or zeros). This leaves three bytes that are actually programmable out of every four.
What I tend to do if writing data is to keep everything 32-bit aligned and do the same as the compiler does.
Writing:
UINT32 value = ....;
:
__builtin_tblwtl(0, value.word.word_L); // least significant word of 32-bit value placed here
__builtin_tblwth(0, 0x00); // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word of 32-bit value placed here
__builtin_tblwth(2, 0x00); // phantom byte + unused byte
Reading:
UINT32 *value;
:
value->word.word_L = __builtin_tblrdl(offset);
value->word.word_H = __builtin_tblrdl(offset+2);
UINT32 structure:
typedef union _UINT32 {
uint32_t val32;
struct {
uint16_t word_L;
uint16_t word_H;
} word;
uint8_t bytes[4];
} UINT32;
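Tying this together, here is a rough usage sketch for writing one 32-bit value at a 32-bit-aligned auxiliary-memory address (the register setup mirrors the question's code; targetAddress is a placeholder and is assumed to satisfy targetAddress % 4 == 0):

UINT32 value;
value.val32 = 0x12345678;

NVMCON  = 0x4001;        // WREN + word program mode (as in the doc example)
TBLPAG  = 0xFA;          // write latch upper address
NVMADRU = 0x7F;          // aux memory upper address (as in the question)
NVMADR  = targetAddress; // assumed 32-bit aligned

__builtin_tblwtl(0, value.word.word_L); // least significant word
__builtin_tblwth(0, 0x00);              // phantom byte + unused byte
__builtin_tblwtl(2, value.word.word_H); // most significant word
__builtin_tblwth(2, 0x00);              // phantom byte + unused byte

__builtin_disi(5);                      // protect the NVM unlock sequence
__builtin_write_NVM();                  // initiate the write
while(NVMCONbits.WR == 1);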
I am trying to read data from channels 0, 1, 2 and 3 of ADC1. The issue is that when I read from channel 1 and channel 2 and run the code in debug mode, it shows the correct values, but when I modified it to do the same for channels 3 and 4 as well, it does not show any value. Following is the code and a screenshot of the memory map:
#include "stm32f4xx.h"
#include "stm32f4_discovery.h"
uint16_t ADC1ConvertedValue[4] = {0,0,0,0};//Stores converted vals [2] = {0,0}
void DMA_config()
{
RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_DMA2 ,ENABLE);
DMA_InitTypeDef DMA_InitStruct;
DMA_InitStruct.DMA_Channel = DMA_Channel_0;
DMA_InitStruct.DMA_PeripheralBaseAddr = (uint32_t) 0x4001204C;//ADC1's data register
DMA_InitStruct.DMA_Memory0BaseAddr = (uint32_t)&ADC1ConvertedValue;
DMA_InitStruct.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStruct.DMA_BufferSize = 4;//2
DMA_InitStruct.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStruct.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStruct.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;//Reads 16 bit values _HalfWord
DMA_InitStruct.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;//Stores 16 bit values _Halfword
DMA_InitStruct.DMA_Mode = DMA_Mode_Circular;
DMA_InitStruct.DMA_Priority = DMA_Priority_High;
DMA_InitStruct.DMA_FIFOMode = DMA_FIFOMode_Enable;
DMA_InitStruct.DMA_FIFOThreshold = DMA_FIFOThreshold_HalfFull;//_HalfFull
DMA_InitStruct.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStruct.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
DMA_Init(DMA2_Stream0, &DMA_InitStruct);
DMA_Cmd(DMA2_Stream0, ENABLE);
}
void ADC_config()
{
/* Configure GPIO pins ******************************************************/
ADC_InitTypeDef ADC_InitStruct;
ADC_CommonInitTypeDef ADC_CommonInitStruct;
GPIO_InitTypeDef GPIO_InitStruct;
RCC_AHB1PeriphClockCmd( RCC_AHB1Periph_GPIOA, ENABLE);
RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);//ADC1 is connected to the APB2 peripheral bus
GPIO_InitStruct.GPIO_Pin = GPIO_Pin_0 | GPIO_Pin_1 | GPIO_Pin_2 | GPIO_Pin_3;// PA0, PA1, PA2, PA3
GPIO_InitStruct.GPIO_Mode = GPIO_Mode_AN;//The pins are configured in analog mode
GPIO_InitStruct.GPIO_PuPd = GPIO_PuPd_NOPULL ;//We don't need any pull up or pull down
GPIO_Init(GPIOA, &GPIO_InitStruct);//Initialize GPIOA pins with the configuration
/* ADC Common Init **********************************************************/
ADC_CommonInitStruct.ADC_Mode = ADC_Mode_Independent;
ADC_CommonInitStruct.ADC_Prescaler = ADC_Prescaler_Div2;
ADC_CommonInitStruct.ADC_DMAAccessMode = ADC_DMAAccessMode_Disabled;
ADC_CommonInitStruct.ADC_TwoSamplingDelay = ADC_TwoSamplingDelay_5Cycles;
ADC_CommonInit(&ADC_CommonInitStruct);
/* ADC1 Init ****************************************************************/
ADC_InitStruct.ADC_Resolution = ADC_Resolution_12b;//Input voltage is converted into a 12bit int (max 4095)
ADC_InitStruct.ADC_ScanConvMode = ENABLE;//The scan is configured in multiple channels
ADC_InitStruct.ADC_ContinuousConvMode = ENABLE;//Continuous conversion: input signal is sampled more than once
ADC_InitStruct.ADC_ExternalTrigConv = DISABLE;
ADC_InitStruct.ADC_ExternalTrigConvEdge = ADC_ExternalTrigConvEdge_None;
ADC_InitStruct.ADC_DataAlign = ADC_DataAlign_Right;//Data converted will be shifted to right
ADC_InitStruct.ADC_NbrOfConversion = 4;
ADC_Init(ADC1, &ADC_InitStruct);//Initialize ADC with the configuration
/* Select the channels to be read from **************************************/
ADC_RegularChannelConfig(ADC1, ADC_Channel_0, 1, ADC_SampleTime_144Cycles);//PA0
ADC_RegularChannelConfig(ADC1, ADC_Channel_1, 2, ADC_SampleTime_144Cycles);//PA1
ADC_RegularChannelConfig(ADC1, ADC_Channel_2, 3, ADC_SampleTime_144Cycles);//PA2
ADC_RegularChannelConfig(ADC1, ADC_Channel_3, 4, ADC_SampleTime_144Cycles);//PA3
/* Enable DMA request after last transfer (Single-ADC mode) */
ADC_DMARequestAfterLastTransferCmd(ADC1, ENABLE);
/* Enable ADC1 DMA */
ADC_DMACmd(ADC1, ENABLE);
/* Enable ADC1 */
ADC_Cmd(ADC1, ENABLE);
}
int main(void)
{
DMA_config();
ADC_config();
while(1)
{
ADC_SoftwareStartConv(ADC1);
//value=ADC_Read();
}
return 0;
}
You shouldn't start the ADC conversion multiple times (in your while(1) loop).
Since you've configured the ADC with .ADC_ContinuousConvMode = ENABLE, it is enough to start it once before the loop and do nothing within it.
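A minimal sketch of main() with that change (same DMA_config()/ADC_config() as above; in circular mode the DMA keeps ADC1ConvertedValue up to date on its own):

int main(void)
{
    DMA_config();
    ADC_config();

    /* Continuous + scan mode: a single software start is enough */
    ADC_SoftwareStartConv(ADC1);

    while (1)
    {
        /* ADC1ConvertedValue[0..3] are refreshed by DMA2 Stream0 */
    }
}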
I measure distance using an ultrasonic signal. An STM32F1 generates the ultrasound signal, and an STM32F4 records it with a microphone. Both STM32s are synchronized by a signal generated by another device; they are connected by one wire.
Question: why does the signal arrive at different times, even though I don't move the receiver or the transmitter? It gives errors of 50 mm.
(Figure: dispersion of the measured signals)
Code of receiver is here:
while (1)
{
if(HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5) != 0x00)
{
Get_UZ_Signal(uz_signal);
Send_Signal(uz_signal);
}
}
void Get_UZ_Signal(uint16_t* uz_signal)
{
int i,j;
uint16_t uz_buf[10];
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_buf, 300000);
for(i = 0; i<lenght_signal; i++)
{
j=10;
while(j>0)
{
j--;
}
uz_signal[i] = uz_buf[0];
}
HAL_ADC_Stop_DMA(&hadc1);
}
hadc1.Instance = ADC1;
hadc1.Init.ClockPrescaler = ADC_CLOCK_SYNC_PCLK_DIV4;
hadc1.Init.Resolution = ADC_RESOLUTION_12B;
hadc1.Init.ScanConvMode = DISABLE;
hadc1.Init.ContinuousConvMode = ENABLE;
hadc1.Init.DiscontinuousConvMode = DISABLE;
hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_NONE;
hadc1.Init.ExternalTrigConv = ADC_SOFTWARE_START;
hadc1.Init.DataAlign = ADC_DATAALIGN_RIGHT;
hadc1.Init.NbrOfConversion = 1;
hadc1.Init.DMAContinuousRequests = DISABLE;
hadc1.Init.EOCSelection = ADC_EOC_SINGLE_CONV;
More information here:
https://github.com/BooSooV/Indoor-Ultrasonic-Positioning-System/tree/master/Studying_ultrasonic_signals/Measured_lengths_dispersion
PROBLEM RESOLVED
(Figure: final dispersion of the measured signals)
The time errors were caused by the transmitter, so I made some changes to it. The sync signal is now captured with the help of EXTI, the PWM is generated all the time, and I gate the signal with the Enable/Disable pin on the driver. Now I have a dispersion of 5 mm, which is enough for me.
The final programs are here:
https://github.com/BooSooV/Indoor-Ultrasonic-Positioning-System/tree/master/Studying_ultrasonic_signals/Measured_lengths_dispersion
Maybe it is a problem of processing time. The speed of sound is 343 m/s, so 50 mm corresponds to about 0.15 ms.
Have you thought of calling HAL_ADC_Start_DMA() from main(),
uint16_t uz_buf[10];
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_buf, 10 );//300000);
// Sizeof Buffer ---------------------^
and then calling:
void Get_UZ_Signal(uint16_t* uz_signal) {
    int i;
    // For debugging - get the processing time.
    // Read the time from the System Tick. This counter wraps around
    // from SysTick->LOAD to 0 every 1 msec.
    int32_t startTime = SysTick->VAL;
    __HAL_ADC_ENABLE(&hadc1); // Start the DMA
    // __HAL_DMA_GET_COUNTER returns the remaining data units in the current DMA transfer
    while(__HAL_DMA_GET_COUNTER(&hdma_adc1) != 0)
        ;
    for(i = 0; i < lenght_signal; i++) {
        // uz_signal[i] = uz_buf[0];
        // 0 or i ------------^
        uz_signal[i] = uz_buf[i];
    }
    __HAL_ADC_DISABLE(&hadc1); // Stop the DMA
    int32_t endTime = SysTick->VAL;
    // SysTick counts down, so the elapsed time is start - end;
    // if the result is negative, the counter wrapped around
    int32_t difTime = startTime - endTime;
    if ( difTime < 0 )
        difTime += SysTick->LOAD;
    __HAL_DMA_SET_COUNTER(&hdma_adc1, 10); // Reset the counter
    // If the DMA buffer is 10, the COUNTER will start at 10 and decrement.
    // Ref. Manual: 9.4.4 DMA channel x number of data register (DMA_CNDTRx)
}
In your code
MX_USART1_UART_Init();
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_signal, 30000);
// Must be stopped because it is running now
// How long is the buffer: uint16_t uz_signal[ 30000 ] ?
__HAL_ADC_DISABLE(&hadc1); // must be disabled
// reset the counter
__HAL_DMA_SET_COUNTER(&hdma_adc1, 30000);
while (1)
{
if(HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_5) != 0x00)
{
__HAL_ADC_ENABLE(&hadc1);
while(__HAL_DMA_GET_COUNTER(&hdma_adc1) != 0)
;
__HAL_ADC_DISABLE(&hadc1);
__HAL_DMA_SET_COUNTER(&hdma_adc1, 30000);
// 30,000 is the sizeof your buffer?
...
}
}
I made some tests with an STM32F407 µC @ 168 MHz. I'll show you the code below.
The speed of sound is 343 meters / second.
During the test, I calculated the time to process the ADC conversion. Each conversion takes about 0.35 uSec (60 ticks).
Result:
Array size    Time        Corresponding distance
100           0.041 ms    14 mm
1600          0.571 ms    196 mm
10000         3.57 ms     1225 mm
In the code, you'll see the start time. Be careful: the SysTick is a decrementing counter running at the µC speed (at 168 MHz it counts down from 168000 to 0 every millisecond). It could be a good idea to get the milliseconds with HAL_GetTick() and the microseconds from SysTick.
int main(void)
{
...
MX_DMA_Init();
MX_ADC1_Init();
// Configure the channel in the way you want
ADC_ChannelConfTypeDef sConfig;
sConfig.Channel = ADC_CHANNEL_0; //ADC1_CHANNEL;
sConfig.Rank = 1;
sConfig.SamplingTime = ADC_SAMPLETIME_3CYCLES;
sConfig.Offset = 0;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);
// Start the DMA channel and Stop it
HAL_ADC_Start_DMA(&hadc1, (uint32_t*)&uz_signal, sizeof( uz_signal ) / sizeof( uint16_t ));
HAL_ADC_Stop_DMA( &hadc1 );
// The SysTick is a decrement counter
// Can be use to count usec
// https://www.sciencedirect.com/topics/engineering/systick-timer
tickLoad = SysTick->LOAD;
while (1)
{
// Set the buffer to zero
for( uint32_t i=0; i < (sizeof( uz_signal ) / sizeof( uint16_t )); i++)
uz_signal[i] = 0;
// Reset the counter ready to restart
DMA2_Stream0->NDTR = (uint16_t)(sizeof( uz_signal ) / sizeof( uint16_t ));
/* Enable the Peripheral */
ADC1->CR2 |= ADC_CR2_ADON;
/* Start conversion if ADC is effectively enabled */
/* Clear regular group conversion flag and overrun flag */
ADC1->SR = ~(ADC_FLAG_EOC | ADC_FLAG_OVR);
/* Enable ADC overrun interrupt */
ADC1->CR1 |= (ADC_IT_OVR);
/* Enable ADC DMA mode */
ADC1->CR2 |= ADC_CR2_DMA;
/* Start the DMA channel */
/* Enable Common interrupts*/
DMA2_Stream0->CR |= DMA_IT_TC | DMA_IT_TE | DMA_IT_DME | DMA_IT_HT;
DMA2_Stream0->FCR |= DMA_IT_FE;
DMA2_Stream0->CR |= DMA_SxCR_EN;
//===================================================
// The DMA is ready to start
// Your if(HAL_GPIO_ReadPin( ... ) will be here
HAL_Delay( 10 );
//===================================================
// Get the time
tickStart = SysTick->VAL;
// Start the DMA
ADC1->CR2 |= (uint32_t)ADC_CR2_SWSTART;
// Wait until the conversion is completed
while( DMA2_Stream0->NDTR != 0)
;
// Get end time
tickEnd = SysTick->VAL;
/* Stop potential conversion on going, on regular and injected groups */
ADC1->CR2 &= ~ADC_CR2_ADON;
/* Disable the selected ADC DMA mode */
ADC1->CR2 &= ~ADC_CR2_DMA;
/* Disable ADC overrun interrupt */
ADC1->CR1 &= ~(ADC_IT_OVR);
// Get processing time
tickDiff = tickStart - tickEnd;
//===================================================
// Your processing will go here
HAL_Delay( 10 );
//===================================================
}
}
So you'll have the start and end times. I think you have to base your formula on the start time.
Good luck.
If you want to know the execution time, you could use this little function; it returns microseconds:
static uint32_t timeMicroSecDivider = 0;
extern uint32_t uwTick;
// The SysTick->LOAD matches the uC speed / 1000.
// If the uC clock is 80 MHz, then the LOAD is 80000.
// The SysTick->VAL is a decrementing counter from (LOAD-1) to 0.
//====================================================
uint64_t getTimeMicroSec()
{
if ( timeMicroSecDivider == 0)
{
// Number of clock by micro second
timeMicroSecDivider = SysTick->LOAD / 1000;
}
return( (uwTick * 1000) + ((SysTick->LOAD - SysTick->VAL) / timeMicroSecDivider));
}
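A possible usage sketch (Process_Under_Test is just a placeholder for the code being timed):

uint64_t t0 = getTimeMicroSec();
Process_Under_Test();                          // placeholder for the measured code
uint64_t elapsedUs = getTimeMicroSec() - t0;   // elapsed time in microseconds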
I am pretty new to DirectX and I am stumbling over resource handling.
First, I created a texture that I can read/write on the GPU, and it worked well.
Now, as you can see in my code, I wanted to read this texture from the CPU as well (from the application side), so I changed the usage from USAGE_DEFAULT to USAGE_DYNAMIC.
Microsoft::WRL::ComPtr<ID3D11Texture2D> outputTexture;
D3D11_TEXTURE2D_DESC outputTex_desc;
outputTex_desc.Format = DXGI_FORMAT_R32_FLOAT;
outputTex_desc.Width = 3;
outputTex_desc.Height = 3;
outputTex_desc.MipLevels = 1;
outputTex_desc.ArraySize = 1;
outputTex_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
D3D11_BIND_SHADER_RESOURCE;
outputTex_desc.SampleDesc.Count = msCount;
outputTex_desc.SampleDesc.Quality = msQuality;
outputTex_desc.Usage = D3D11_USAGE_DYNAMIC;
outputTex_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
outputTex_desc.MiscFlags = 0;
// CREATE 'TEXTURE'
device->CreateTexture2D( // FAIL HERE !!!
&outputTex_desc,
nullptr,
outputTexture.GetAddressOf());
// CREATE 'SRV'
...
// CREATE 'UAV'
...
It starts failing exactly when it executes device->CreateTexture2D().
Any advice would be amazing.
The first two steps in debugging any Direct3D 11 program are:
Make sure you are checking every HRESULT for either success (SUCCEEDED macro) or failure (FAILED macro). If it is safe to ignore the return value at runtime, then the function returns void. See ThrowIfFailed.
Enable the Direct3D debug device and look for debug output (a.k.a. use D3D11_CREATE_DEVICE_DEBUG in your Debug configuration).
If you enable the Direct3D debug device, you'll get detailed information on why the API returns failure codes in many cases.
If you did that, you'd see the error for this code:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DYNAMIC Resource
may only have the D3D11_CPU_ACCESS_WRITE CPUAccessFlags set.
[ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]
In order to READ it on the CPU, you must copy to a D3D11_USAGE_STAGING Resource first. For example source code on doing that, see the ScreenGrab code from DirectX Tool Kit.
You don't mention what your msCount or msQuality values are here. I assumed 1, and 0 respectively. If you use any other values, you'll get:
D3D11 ERROR: ID3D11Device::CreateTexture2D: Multisampling is not supported
with the D3D11_BIND_UNORDERED_ACCESS BindFlag. SampleDesc.Count must be 1
and SampleDesc.Quality must be 0.
[ STATE_CREATION ERROR #99: CREATETEXTURE2D_INVALIDBINDFLAGS]
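As an illustration of the two debugging steps above, here is a hedged sketch of creating the device with the debug layer enabled in Debug builds and checking the HRESULT (not your exact device-creation code; <d3d11.h> and <wrl/client.h> are assumed):

UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG;   // turn on the debug layer output
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
    nullptr, 0, D3D11_SDK_VERSION,
    device.GetAddressOf(), nullptr, context.GetAddressOf());
if (FAILED(hr))
{
    // handle the failure (or throw, in the style of ThrowIfFailed)
}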
In order to access a texture from the CPU (read mode), you need to create a separate staging texture, then copy your texture into it.
These are the flags for the GPU-only texture (please note I force the sample count to 1, as UAV access is not allowed for multisampled textures):
D3D11_TEXTURE2D_DESC gpuTexDesc;
gpuTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gpuTexDesc.Width = 3;
gpuTexDesc.Height = 3;
gpuTexDesc.MipLevels = 1;
gpuTexDesc.ArraySize = 1;
gpuTexDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS |
D3D11_BIND_SHADER_RESOURCE;
gpuTexDesc.SampleDesc.Count = 1;
gpuTexDesc.SampleDesc.Quality = 0;
gpuTexDesc.Usage = D3D11_USAGE_DEFAULT;
gpuTexDesc.CPUAccessFlags = 0; // no CPU access for the GPU-only texture
gpuTexDesc.MiscFlags = 0;
Then you create a second texture for reading
D3D11_TEXTURE2D_DESC readTexDesc;
readTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
readTexDesc.Width = 3;
readTexDesc.Height = 3;
readTexDesc.MipLevels = 1;
readTexDesc.ArraySize = 1;
readTexDesc.BindFlags = 0; //No bind flags allowed for staging
readTexDesc.SampleDesc.Count = 1;
readTexDesc.SampleDesc.Quality = 0;
readTexDesc.Usage = D3D11_USAGE_STAGING; //need staging flag for read
readTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
readTexDesc.MiscFlags = 0;
then you can use CopyResource:
deviceContext->CopyResource(readTex, gpuTex);
Once done, you can finally access texture data for reading using Map
D3D11_MAPPED_SUBRESOURCE MappedResource;
deviceContext->Map(readTex, 0, D3D11_MAP_READ, 0, &MappedResource);
MappedResource gives you access to the data on the CPU; once you're done processing, don't forget to Unmap the resource:
deviceContext->Unmap(readTex, 0);
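Before the Unmap call above, the mapped 3x3 R32_FLOAT data can be read out along these lines (a sketch; note that MappedResource.RowPitch is in bytes and may be larger than width * sizeof(float)):

const uint8_t* src = static_cast<const uint8_t*>(MappedResource.pData);
float texels[3][3];
for (UINT y = 0; y < 3; ++y)
{
    const float* row = reinterpret_cast<const float*>(src + y * MappedResource.RowPitch);
    for (UINT x = 0; x < 3; ++x)
        texels[y][x] = row[x];   // copy one texel out of the staging texture
}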
I am writing a CAPL script to automate diagnostic services. I read some DIDs that are bigger than 8 bytes. Up to 8 bytes I can capture the data correctly in my CAPL script, but when the data size exceeds 8 bytes, I get garbage values (00) for the remaining bytes.
I can see the complete read data in the CANoe trace, but I am not able to capture it in my CAPL script. If someone has any ideas or a solution, please share it with me.
In the script below, the issue is that I can capture values up to this.byte(7) correctly, but for this.byte(8) and this.byte(9) I read 00, although the actual values in the CANoe trace are 0x54 and 0x66. So it seems I cannot read more than 8 bytes from CAN in CAPL.
My script looks like:
variables
{
//Please insert your code below this comment
byte test_num;
message DTOOL_to_UPA msg_tester;
mstimer readTimerDID_2001;
mstimer defaultSession;
byte readBuf2001[8];
}
// read request
on key 'd'
{
test_num = 0;
msg_tester.dlc = 8;
msg_tester.dir = tx;
msg_tester.can = 1;
settimer(defaultSession, 2000);
}
on timer defaultSession // Request DID: 10 01
{
msg_tester.byte(0) = 0x02;
msg_tester.byte(1) = 0x10;
msg_tester.byte(2) = 0x01;
output(msg_tester);
settimer(readTimerDID_2001, 100);
canceltimer(defaultSession);
}
on timer readTimerDID_2001 // Read Request DID: 22 20 01
{
msg_tester.byte(0) = 0x03;
msg_tester.byte(1) = 0x22;
msg_tester.byte(2) = 0x20;
msg_tester.byte(3) = 0x01;
output(msg_tester);
canceltimer(readTimerDID_2001);
}
on message UPA_to_DTOOL
{
if (this.DIR == RX)
{
// Response Data for DID 2001
if (
(this.byte(0)== 0x04)&&(this.byte(1)== 0x62)&&(this.byte(2)==0x20)&&
(this.byte(3)== 0x01)&&(this.byte(4)== 0x23) &&(this.byte(5)== 0x00)&&
(this.byte(6)== 0x44)&&(this.byte(7)== 0x22) &&(this.byte(8)==0x54)&&
(this.byte(9)== 0x66)
)
{
readDID2001();
}
}
}
on message UPA_to_DTOOL
reacts to the CAN message UPA_to_DTOOL, and there you can only access the 8 data bytes of the CAN frame.
If you want to react on diagnostic messages you should use
on diagResponse <serviceName>
Inside this handler you can then access the complete data of the diagnostic message.
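A rough sketch of such a handler (the service qualifier ReadDataByIdentifier_2001 is an assumption and must match the qualifier in your diagnostic description/CDD; the diagGetPrimitive* calls are one way to get at the raw bytes):

on diagResponse ReadDataByIdentifier_2001 // hypothetical qualifier from your CDD
{
  byte data[64];
  long size;

  size = diagGetPrimitiveSize(this);               // full response length, not limited to 8 bytes
  diagGetPrimitiveData(this, data, elcount(data)); // copy the raw response bytes
  write("DID 2001: length=%d, byte 8=0x%02X, byte 9=0x%02X", size, data[8], data[9]);
}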
I had a similar problem accessing J1939 PGNs with a data length code (DLC) > 8 bytes. These messages were transmitted as a J1939 frame (DLC > 8 bytes) instead of a CAN frame (DLC = 8 bytes) in the trace window. I had to use the getThisMessage(pg pg_variable, int length) function in an on pg event, like this:
on pg UPA_to_DTOOL {
  pg UPA_to_DTOOL UPA_to_DTOOL_pg;
  getThisMessage(UPA_to_DTOOL_pg, UPA_to_DTOOL_pg.dlc);
  write("byte 9 = %X", UPA_to_DTOOL_pg.byte(9));
}
Because messages with DLC > 8 are transmitted in a special way, getThisMessage had to be used in my case, which let me access all the message bytes. I am not sure this solution for J1939 PGNs helps you, because I do not know whether you have a J1939 license in your CANoe installation.