Starting more than two tasks in an RTOS - FreeRTOS

I'm new to RTOS programming, and my problem is that I cannot start more than two tasks at the same time.
I'm using FreeRTOS.
The tasks are all created with the same priority,
and configTOTAL_HEAP_SIZE is set to 8192 bytes.
Could anyone help me with this, or point me in the right direction?

I have 3 tasks with the same structure:
#define configTOTAL_HEAP_SIZE    ( ( size_t ) ( 2 * 1024 ) )

xTaskCreate( Task3, ( signed char * ) "T3", ( ( unsigned short ) 100 ), NULL, 2, NULL );
vTaskStartScheduler();

static void Task3( void *pvParameters )
{
    portTickType xNextWakeTime;

    xNextWakeTime = xTaskGetTickCount();
    for( ;; )
    {
        vTaskDelayUntil( &xNextWakeTime, ( 3 * mainQUEUE_SEND_FREQUENCY_MS ) );
    }
}
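One common cause (an assumption on my part, since the other two xTaskCreate() calls are not shown): with configTOTAL_HEAP_SIZE at only 2 * 1024 bytes, as in the snippet, the third task's stack and TCB no longer fit in the FreeRTOS heap, so the third xTaskCreate() fails and the failure goes unnoticed because its return value is ignored. A minimal diagnostic sketch, reusing the task name and stack depth from the snippet above:

/* Check every xTaskCreate() result instead of ignoring it. */
if( xTaskCreate( Task3, ( signed char * ) "T3",
                 ( ( unsigned short ) 100 ), NULL, 2, NULL ) != pdPASS )
{
    /* Not enough FreeRTOS heap for this task's stack and TCB:
       increase configTOTAL_HEAP_SIZE or reduce the stack depth. */
    for( ;; );
}

/* With configUSE_MALLOC_FAILED_HOOK set to 1, FreeRTOS calls this hook
   whenever pvPortMalloc() runs out of heap, which also catches a
   failing task creation. */
void vApplicationMallocFailedHook( void )
{
    for( ;; );
}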

How to automatically calculate fibonnacci levels from yesterday/prev day in MQL4?

How do I calculate the Fibo levels from yesterday / the previous day?
This is how far I have got:
int shift = iBarShift( NULL, PERIOD_D1, Time[0] ) + 1; // yesterday
HiPrice   = iHigh( NULL, PERIOD_D1, shift );
LoPrice   = iLow ( NULL, PERIOD_D1, shift );
StartTime = iTime( NULL, PERIOD_D1, shift );
if ( TimeDayOfWeek( StartTime ) == 0 /* Sunday */ )
{  // Add Friday's high and low
   HiPrice = MathMax( HiPrice, iHigh( NULL, PERIOD_D1, shift + 1 ) );
   LoPrice = MathMin( LoPrice, iLow ( NULL, PERIOD_D1, shift + 1 ) );
}
Range = HiPrice - LoPrice;
I think now I should have all values necessary for calculating it.
I am not sure how I can now calculate the different levels:
23.6 38.2 50.0 61.8 76.4 and -23.6 -38.2 -50.0 -61.8 -76.4 -100
The easiest way I know of is to put all the Fibo levels you need into an array and then simply loop over that array:
levels above the high are ( high + array[i] / 100 * range ),
levels below the low are ( low - array[i] / 100 * range ),
where
array[] = { 23.6, 38.2, .. } ( only the positive values are needed ).
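For example, a minimal MQL4 sketch of that loop, reusing HiPrice, LoPrice and Range from the question above (the array contents and the Print() output are just an illustration):

double fibLevels[] = { 23.6, 38.2, 50.0, 61.8, 76.4 };

for ( int i = 0; i < ArraySize( fibLevels ); i++ )
{
   double above = HiPrice + fibLevels[i] / 100.0 * Range;   // level above the high
   double below = LoPrice - fibLevels[i] / 100.0 * Range;   // level below the low
   Print( "Fibo +", fibLevels[i], ": ", above, "   -", fibLevels[i], ": ", below );
}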
Fibonacci levels need a direction, so in your code above you will either want to switch to using the open and close values of the previous bar, or impose a direction onto the high and low. That will tell you which way to draw the extensions and retracements.
Here is a function I wrote previously for this kind of question. It assumes price1 is at an earlier point in time than price2, then calculates the direction and levels, returning a FibLevel structure.
struct FibLevel {
   double retrace38;
   double retrace50;
   double retrace61;
   double extension61;
   double extension100;
   double extension138;
   double extension161;
};

void FibLevel( double price1, double price2, FibLevel &fiblevel )
{
   double range = MathAbs( price1 - price2 );

   fiblevel.retrace38    = ( price1 < price2 ) ? price2 - range * 0.382 : price1 + range * 0.382;
   fiblevel.retrace50    = ( price1 < price2 ) ? price2 - range * 0.500 : price1 + range * 0.500;
   fiblevel.retrace61    = ( price1 < price2 ) ? price2 - range * 0.618 : price1 + range * 0.618;
   fiblevel.extension61  = ( price1 < price2 ) ? price2 + range * 0.618 : price1 - range * 0.618;
   fiblevel.extension100 = ( price1 < price2 ) ? price2 + range         : price1 - range;
   fiblevel.extension138 = ( price1 < price2 ) ? price2 + range * 1.382 : price1 - range * 1.382;
   fiblevel.extension161 = ( price1 < price2 ) ? price2 + range * 1.618 : price1 - range * 1.618;
}

Is it possible to MMAP a PCI BAR memory?

I want to access the memory of a PCIe board from user space; the board exposes 1 GB of memory through BAR0.
Currently I only use the read and write functionality of my character device driver, which is VERY slow (1 MB/s read and 16 MB/s write) on an x8 PCIe Gen3 link.
static ssize_t
MPD_read(
    struct file *filp,
    char        *buffer,
    size_t       bufferSize,
    loff_t      *offset )
{
    /* Copy the whole request straight out of the BAR 0 mapping into the user buffer. */
    unsigned long unusedBytes = copy_to_user(
        ( void * ) buffer,
        MPD_AdapterBoard.bars[ 0 ].barHWAddress,
        bufferSize );

    return 0;
}

static ssize_t
MPD_write(
    struct file *filp,
    const char  *buffer,
    size_t       bufferSize,
    loff_t      *offset )
{
    /* Copy from the user buffer straight into the BAR 0 mapping. */
    unsigned long unusedBytes = copy_from_user(
        MPD_AdapterBoard.bars[ 0 ].barHWAddress,
        ( void * ) buffer,
        bufferSize );

    return 0;
}
Is it possible to use mmap (via the .mmap file operation) to get more speed?
Or is DMA the only option?
Thanks in advance!
/Jesko
I found out how to make it work:
static int
MPD_mmap(
    struct file           *filp,
    struct vm_area_struct *vma )
{
    unsigned long offset;

    /* Byte offset into BAR 0 requested through the caller's mmap() offset. */
    offset = vma->vm_pgoff << PAGE_SHIFT;

    /* Refuse mappings that would run past the end of the BAR. */
    if (( offset + ( vma->vm_end - vma->vm_start )) > MPD_AdapterBoard.bars[ 0 ].barSizeInBytes )
    {
        return -EINVAL;
    }

    /* Translate the offset into a physical address inside the BAR. */
    offset += ( unsigned long ) MPD_AdapterBoard.bars[ 0 ].mmioStart;

    /* MMIO must not be cached by the CPU. */
    vma->vm_page_prot = pgprot_noncached( vma->vm_page_prot );

    if ( io_remap_pfn_range( vma, vma->vm_start, offset >> PAGE_SHIFT,
                             vma->vm_end - vma->vm_start, vma->vm_page_prot ))
    {
        return -EAGAIN;
    }

    return 0;
}
Attention: this is a work in progress, so error checking is fairly limited.
In the hope of helping someone here, the complete code, including a test program, can be downloaded from: https://github.com/jesko42/minipci
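For reference, here is a minimal user-space sketch of how such a mapping could be used. The device node name /dev/mpd0 and the 1 GB mapping length are assumptions for illustration, not part of the driver above:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main( void )
{
    size_t length = ( size_t ) 1 << 30;              /* assumed BAR 0 size: 1 GB */
    int    fd     = open( "/dev/mpd0", O_RDWR );     /* hypothetical device node */

    if ( fd < 0 )
        return 1;

    /* Map BAR 0, starting at offset 0, into this process. */
    void *map = mmap( NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 );
    if ( map == MAP_FAILED )
        return 1;

    volatile uint32_t *bar = ( volatile uint32_t * ) map;

    bar[ 0 ] = 0xdeadbeef;                           /* plain 32-bit MMIO write */
    printf( "first word: 0x%08x\n", ( unsigned ) bar[ 0 ] );

    munmap( map, length );
    close( fd );
    return 0;
}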

Can't make iteration with #property strict

I have this code working without errors. Basically, it shows the values of the moving averages on the five previous bars of the M5 (5-minute) chart; the MAs' current-bar values are omitted.
int TrendMinDurationBar = 5,
    SlowPeriod          = 14,
    FastPeriod          = 7;

void OnTick()
{
   if ( NewBar( PERIOD_M5 ) == true ) MA( PERIOD_M5 );
}

void MA( int TF )
{
   double Slow[], Fast[];
   ArrayResize( Slow, TrendMinDurationBar + 1 );
   ArrayResize( Fast, TrendMinDurationBar + 1 );

   for ( int i = 1; i <= TrendMinDurationBar; i++ )
   {
      Slow[i] = NormalizeDouble( iMA( Symbol(), TF, SlowPeriod, 0, MODE_EMA, PRICE_OPEN, i ), Digits );
      Fast[i] = NormalizeDouble( iMA( Symbol(), TF, FastPeriod, 0, MODE_EMA, PRICE_OPEN, i ), Digits );
      Alert( "DataSlow" + ( string )i + ": " + DoubleToStr( Slow[i], Digits ) );
   }
}

bool NewBar( int TF )
{
   static datetime lastbar = 0;
   datetime curbar = iTime( Symbol(), TF, 0 );

   if ( lastbar != curbar )
   {
      lastbar = curbar;
      return( true );
   }
   else return( false );
}
When #property strict is included, the code only works once after compilation. Once a new bar appears on the M5 chart, it doesn't run the iteration any more.
What is the solution if I insist on using #property strict?
Works perfectly well with #property strict as an EA in MT4 Build 950.
Are you sure you are running it as an EA and not as a Script or an Indicator?
Welcome to another New-MQL4.56789 Catch-22.
My candidate from Help > MQL4 Reference > Updated MQL4 is this rule ( column [New MQL4 with #property strict] ):
Functions of any type should return a value
and one more to review, since the code simply loses its logic there, and even a static double alternative would be extremely inefficient under these circumstances:
Local arrays are released when exiting the {} block
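For illustration only (my own sketch, not the accepted fix): a NewBar() variant with a single, unconditional return satisfies the "functions should return a value" rule quoted above while keeping the same behaviour:

bool NewBar( int TF )
{
   static datetime lastbar = 0;
   datetime        curbar  = iTime( Symbol(), TF, 0 );
   bool            isNew   = ( lastbar != curbar );   // true once per new bar
   if ( isNew ) lastbar = curbar;                     // remember the bar just seen
   return( isNew );                                   // every code path returns a value
}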

Output for sample code for an upcoming exam concerning pthread

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;

int token = 2;
int value = 3;

void * red ( void *arg ) {
    int myid = * ((int *) arg);
    pthread_mutex_lock( &mutex );
    while ( myid != token) {
        pthread_cond_wait( &cond, &mutex );
    }
    value = value + (myid + 3);
    printf( "RED: id is %d \n", value);
    token = (token + 1) % 3;
    pthread_cond_broadcast( &cond );
    pthread_mutex_unlock( &mutex );
    return NULL;
}

void * blue ( void *arg ) {
    int myid = * ((int *) arg);
    pthread_mutex_lock( &mutex );
    while ( myid != token) {
        pthread_cond_wait( &cond, &mutex );
    }
    value = value * (myid + 2);
    printf( "BLUE: id is %d \n", value);
    token = (token + 1) % 3;
    pthread_cond_broadcast( &cond );
    pthread_mutex_unlock( &mutex );
    return NULL;
}

void * white ( void *arg ) {
    int myid = * ((int *) arg);
    pthread_mutex_lock( &mutex );
    while ( myid != token) {
        pthread_cond_wait( &cond, &mutex );
    }
    value = value * (myid + 1);
    printf( "WHITE: id is %d \n", value);
    token = (token + 1) % 3;
    pthread_cond_broadcast( &cond );
    pthread_mutex_unlock( &mutex );
    return NULL;
}

int main( int argc, char *argv[] ) {
    pthread_t tid;
    int count = 0;
    int id1, id2, id3;
    int n;

    id1 = count;
    n = pthread_create( &tid, NULL, red, &id1);
    id2 = ++count;
    n = pthread_create( &tid, NULL, blue, &id2);
    id3 = ++count;
    n = pthread_create( &tid, NULL, white, &id3);
    if ( ( n = pthread_join( tid, NULL ) ) ) {
        fprintf( stderr, "pthread_join: %s\n", strerror( n ) );
        exit( 1 );
    }
    return 0;
}
I am just looking for comments and/or notes on what the output would be. This is for an exam and was offered as an example; it is not homework and will not be used for any kind of submission. I am just trying to understand what is going on. Any help is greatly appreciated.
I'm going to assume that you know what the locks, condition variables, and waits do. Basically you have three threads that run red, blue, and white. token is initially 2, and value is initially 3.
red is called with id1 = 0, but it will stay in the while loop calling wait() until token == 0.
blue is called with id2 = 1, and will stay in the while loop calling wait() until token == 1.
white is called with id3 = 2, and will stay in the while loop calling wait() until token == 2.
So white enters the critical section first, since it's the only one that skips the while loop: value = 3 * ( 2 + 1 ) = 9; token = ( 2 + 1 ) % 3 = 0.
The broadcast wakes every waiting thread, but the only one that will enter the critical section is red. It adds ( 0 + 3 ) to value, giving 12; token = ( 0 + 1 ) % 3 = 1. The broadcast wakes blue.
blue enters the critical section: value = 12 * ( 1 + 2 ) = 36; token = 2 ( but it doesn't matter anymore ).
That is the order in which the threads would execute, which I assume is what the exam is really asking. However, what actually comes out is just:
WHITE: id is 9
This is because there is only one pthread_t tid, which each pthread_create() overwrites, so pthread_join( tid, NULL ) only waits for the last thread created (white). white runs first, the join returns, and main() exits immediately, taking the other threads with it. If you used a different pthread_t for each pthread_create() and joined all of them, then all of the threads would print.
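A minimal sketch of that last point (my own illustration, not part of the exam code): give each thread its own handle in main() and join all three, so every thread gets to print before main() returns:

pthread_t tids[3];
int ids[3] = { 0, 1, 2 };

pthread_create( &tids[0], NULL, red,   &ids[0] );
pthread_create( &tids[1], NULL, blue,  &ids[1] );
pthread_create( &tids[2], NULL, white, &ids[2] );

/* Wait for all three threads instead of only the last one created. */
for ( int i = 0; i < 3; i++ )
    pthread_join( tids[i], NULL );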

cudaFree is not freeing memory

The code below calculates the dot product of two vectors a and b. The correct result is 8192. The first time I run it the result is correct; when I run it again the result is the previous result + 8192, and so on:
1st iteration: result = 8192
2nd iteration: result = 8192 + 8192
3rd iteration: result = 8192 + 8192 + 8192
and so on.
I checked by printing it on screen: the device variable dev_c is not freed. What's more, writing to it behaves like a running sum, the result being the previous value plus the new one written to it. I guess that could have something to do with the atomicAdd() operation, but cudaFree(dev_c) should erase it nonetheless.
#define N 8192
#define THREADS_PER_BLOCK 512
#define NUMBER_OF_BLOCKS (N/THREADS_PER_BLOCK)

#include <stdio.h>

__global__ void dot( int *a, int *b, int *c ) {
    __shared__ int temp[THREADS_PER_BLOCK];
    int index = threadIdx.x + blockIdx.x * blockDim.x;

    temp[threadIdx.x] = a[index] * b[index];
    __syncthreads();

    if( 0 == threadIdx.x ) {
        int sum = 0;
        for( int i = 0; i < THREADS_PER_BLOCK; i++ ) {
            sum += temp[i];
        }
        atomicAdd( c, sum );
    }
}

int main( void ) {
    int *a, *b, *c;
    int *dev_a, *dev_b, *dev_c;
    int size = N * sizeof( int );

    cudaMalloc( (void**)&dev_a, size );
    cudaMalloc( (void**)&dev_b, size );
    cudaMalloc( (void**)&dev_c, sizeof(int) );

    a = (int*)malloc( size );
    b = (int*)malloc( size );
    c = (int*)malloc( sizeof(int) );

    for( int i = 0 ; i < N ; i++ ) {
        a[i] = 1;
        b[i] = 1;
    }

    cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );

    dot<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );

    cudaMemcpy( c, dev_c, sizeof(int), cudaMemcpyDeviceToHost );
    printf( "Dot product = %d\n", *c );

    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );
    free( a );
    free( b );
    free( c );
    return 0;
}
cudaFree doesn't erase anything; it simply returns memory to a pool to be re-allocated. cudaMalloc doesn't guarantee the value of the memory it allocates. You need to initialize any memory (both global and shared) that your program uses, in order to get consistent results. The same is true for malloc and free, by the way.
From the documentation of cudaMalloc():
The memory is not cleared.
That means that dev_c is not initialized, and your atomicAdd(c,sum) adds to whatever value happens to be stored at the returned location.
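One way to apply this answer (sketch only, error checking omitted) is to zero dev_c before the kernel launch, so atomicAdd() starts from a known value:

/* Zero the accumulator before every launch. */
cudaMemset( dev_c, 0, sizeof(int) );

dot<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );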
