Low priority task needs to start first in FreeRTOS

I am using FreeRTOS and have multiple tasks in my application, two of which have the lowest priority but need to be executed first.
Let's name them Task1, Task2, Task3 and Task4.
xTaskCreate(MyTask1, "Task1", 100, NULL, 1, &TaskHandle_1);
xTaskCreate(MyTask2, "Task2", 150, NULL, 1, &TaskHandle_2);
xTaskCreate(MyTask3, "Task3", 256, NULL, 2, &TaskHandle_3);
xTaskCreate(MyTask4, "Task4", 1024, NULL, 3, &TaskHandle_4);
Task1 and Task2 have the lowest priority, but they need to be executed first since Task4 contains a condition that depends on parameters set by Task1.
Since Task4 has the highest priority, it starts executing immediately and Task1 only runs some time later.
The possible solutions I can think of are:
Make Task1 the highest priority and then change its priority back to the lowest.
Suspend the current task, start Task1, then resume the suspended task.
How can I make Task1 run before Task4?

Two options that I can think of:
Call vTaskSuspend() at the beginning of Task4 and resume it from Task1 once the given condition is met
Block Task4 on a semaphore at the start of its task function and give the semaphore from Task1 (a sketch of this option follows)
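A minimal sketch of the semaphore option, assuming a binary semaphore created in main() with xSemaphoreCreateBinary() before the xTaskCreate() calls; the handle name xTask1Ready and the task bodies are placeholders, not code from the question:

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

SemaphoreHandle_t xTask1Ready;   /* hypothetical handle; created in main() before the scheduler starts */

void MyTask1(void *pvParameters)
{
    /* ... produce the parameters that Task4's condition depends on ... */
    xSemaphoreGive(xTask1Ready);                 /* signal Task4 that it may proceed */
    for (;;)
    {
        /* normal Task1 work */
    }
}

void MyTask4(void *pvParameters)
{
    /* Despite its higher priority, Task4 blocks here until Task1 gives the semaphore. */
    xSemaphoreTake(xTask1Ready, portMAX_DELAY);
    for (;;)
    {
        /* normal Task4 work */
    }
}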

Maybe, once Task1 is created and running, its code could create Task4 after all the conditions it needs are satisfied, instead of creating Task4 right from the start.
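A minimal sketch of that idea, reusing the task name, stack size and priority from the question; the setup work inside Task1 is a placeholder:

void MyTask1(void *pvParameters)
{
    /* ... do the initialisation work that Task4's condition depends on ... */

    /* Only now create Task4, at its normal (highest) priority. */
    xTaskCreate(MyTask4, "Task4", 1024, NULL, 3, &TaskHandle_4);

    for (;;)
    {
        /* normal Task1 work */
    }
}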

If these tasks are created in main(), I believe you can suspend the higher-priority tasks before starting the scheduler (https://www.freertos.org/a00130.html) and then resume them after the other tasks have done what they need to do.
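A minimal sketch of that approach, using the task handles from the question; where exactly Task1 resumes the suspended tasks is up to you:

int main(void)
{
    xTaskCreate(MyTask1, "Task1", 100, NULL, 1, &TaskHandle_1);
    xTaskCreate(MyTask2, "Task2", 150, NULL, 1, &TaskHandle_2);
    xTaskCreate(MyTask3, "Task3", 256, NULL, 2, &TaskHandle_3);
    xTaskCreate(MyTask4, "Task4", 1024, NULL, 3, &TaskHandle_4);

    /* Keep the higher-priority tasks out of the ready state until Task1 is done. */
    vTaskSuspend(TaskHandle_3);
    vTaskSuspend(TaskHandle_4);

    vTaskStartScheduler();

    for (;;);   /* should never be reached */
}

/* Later, in Task1, once the parameters Task4 needs are ready: */
/* vTaskResume(TaskHandle_3); */
/* vTaskResume(TaskHandle_4); */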

Related

How to set up Airflow DAG scheduling

I'm trying out Airflow DAG scheduling.
I set up the dependencies as in the code below.
Task1 >> [Task2, Task3] >> Task4
I expected Task4 to run once, after Task2 and Task3 finished.
But I think Task4 ran twice:
(task1 -> task2 -> task4) and (task1 -> task3 -> task4)
The reason I think so is that I looked at the Airflow DAG Tree View.
How can I make Task4 run only once?
The Tree View in the Airflow UI shows all distinct branches from root to leaf in the DAG. Based on the screenshot you provided, there are 2 branches:
print_date >> sleep >> print_date_1
print_date >> templated >> print_date_1
This does not mean that the print_date_1 task ran twice. To see the actual DAG check out the Graph View (just to the right of the Tree View button). You should see that each task is present only once.
You may find this guide helpful to understand the Airflow UI.

Reactor groupBy: What happens with remaining items after GroupedFlux is canceled?

I need to group an infinite Flux by a key with high cardinality.
For example:
the group key is a domain URL
calls to one domain should be strictly sequential (the next call happens only after the previous one has completed)
calls to different domains should be concurrent
the time interval between items with the same key (URL) is unknown, but expected to be bursty: several items are emitted in a short period of time, then there is a long pause until the next burst.
queue
    .groupBy(keyMapper, groupPrefetch)
    .flatMap(
        { group ->
            group.concatMap(
                { task -> makeSlowRemoteCall(task) },
                0
            )
            .takeUntil { remoteCallResult -> remoteCallResult == DONE }
            .timeout(groupTimeout, Mono.empty())
            .then()
        },
        concurrency
    )
I cancel the group in two cases:
The makeSlowRemoteCall() result indicates that, with high probability, there will be no new items in this group in the near future.
The next item is not emitted within groupTimeout. I use the timeout(timeout, fallback) variant to suppress the TimeoutException and allow flatMap's inner publisher to complete successfully.
I want possible future items with the same key to form a new GroupedFlux and be processed by the same flatMap inner pipeline.
But what happens if the GroupedFlux has remaining unrequested items when I cancel it?
Does the groupBy operator re-queue them into a new group with the same key, or are they lost forever? If the latter, what is the proper way to solve my problem? I am also not sure whether I need to set the concatMap() prefetch to 0 in this case.
I think the groupBy() operator is not a good fit for my task with an infinite source and a lot of groups. It creates infinite groups, so it is necessary to somehow cancel idle groups downstream, but it is not possible to cancel a GroupedFlux with a guarantee that it has no unconsumed elements.
I think it would be great to have a groupBy variant that emits finite groups.
Something like groupBy(keyMapper, boundaryPredicate): when boundaryPredicate returns true, the current group completes and the next element with the same key starts a new group.

Using the result of concurrent Rails & Sidekiq jobs

Sidekiq will run 25 concurrent jobs in our scenario. We need to get a single integer as the result of each job and tally all of the results together. In this case we are querying an external API and returning counts. We want the total count from all of the API requests.
The Report object stores the final total. PostgreSQL is our database.
At the end of each job, we increment the report with the additional records found.
Report.find(report_id).increment(:total, api_response_total)
Is this a good approach to track the running total? Will there be PostgreSQL concurrency issues? Is there a better approach?
increment shouldn't lead to concurrency issues; at the SQL level it updates atomically with COALESCE(total, 0) + api_response_total. Race conditions can arise only if you do the addition manually and then save the object:
report = Report.find(report_id)
report.total += api_response_total
report.save # NOT SAFE
Note: even with increment!, the value at the Rails level can be stale, but it will be correct at the database level:
# suppose initial `total` is 0
report = Report.find(report_id) # Thread 1 at time t0
report2 = Report.find(report_id) # Thread 2 at time t0
report.increment!(:total) # Thread 1 at time t1
report2.increment!(:total) # Thread 2 at time t1
report.total #=> 1 # Thread 1 at time t2
report2.total #=> 1 # Thread 2 at time t2
report.reload.total #=> 2 # Thread 1 at time t3, value was stale in object, but correct in db
Is this a good approach to track the running total? Will there be PostgreSQL concurrency issues? Is there a better approach?
I would prefer to do this with Sidekiq Batches. They allow you to run a batch of jobs and assign a callback to the batch, which executes once all the jobs are processed. Example:
batch = Sidekiq::Batch.new
batch.description = "Batch description (this is optional)"
batch.on(:success, MyCallback, :to => user.email)
batch.jobs do
  rows.each { |row| RowWorker.perform_async(row) }
end
puts "Just started Batch #{batch.bid}"
We need to get a single integer as the result of each job and tally all of the results together.
Note that a Sidekiq job doesn't do anything with its return value; the value is GC'ed and ignored. So, with the batch strategy above, you will not have the jobs' results in the callback. You can tailor the solution yourself: for example, keep a LIST in Redis keyed by the batch id and push each job's value onto it when the job completes (in perform); in the callback, simply read the list and sum it.

Sidekiq worker processes not updating database records properly

I have a Sidekiq worker that processes a certain series of tasks in batches. Once it completes a job, it updates a tracker table with the success/failure of the task. Each batch has a unique identifier that is passed to the worker, and the worker process queries the tracker table for this unique id and updates that particular row through an ActiveRecord query similar to:
cpr = MODEL.find(tracker_unique_id)
cpr.update_attributes(:attempted => cpr[:attempted] + 1, :success => cpr[:success] + 1)
What I have noticed is that the tracker only records one set of tasks running, even though I can see from the Sidekiq log and another results table that x tasks finished running.
Can anyone help me with this?
Your update_attributes call has a race condition, as you cannot increment safely like that: multiple threads will stomp on each other. You must use a proper UPDATE SQL statement:
update models set attempted = attempted + 1 where tracker_unique_id = ?

Why is Resque incrementing a column count by 4-5 and not by 10?

Foobar.find(1).votes_count returns 0.
In the Rails console, I am doing:
10.times { Resque.enqueue(AddCountToFoobar, 1) }
My resque worker:
class AddCountToFoobar
  @queue = :low

  def self.perform(id)
    foobar = Foobar.find(id)
    foobar.update_attributes(:votes_count => foobar.votes_count + 1)
  end
end
I would expect Foobar.find(1).votes_count to be 10, but instead it returns 4. If I run 10.times { Resque.enqueue(AddCountToFoobar, 1) } again, I see the same behaviour: it only increments votes_count by 4, and sometimes 5.
Can anyone explain this?
This is a classic race condition scenario. Imagine that only 2 workers exist and that they each run one of your vote incrementing jobs. Imagine the following sequence.
Worker 1: load foobar (vote count == 1)
Worker 2: load foobar (vote count == 1, in a separate Ruby object)
Worker 1: increment the vote count (now == 2) and save
Worker 2: increment its copy of foobar (vote count now == 2) and save, overwriting what Worker 1 did
Although the 2 workers ran 1 update job each, the count only increased by 1, because they were both operating on their own copy of foobar, which wasn't aware of the change the other worker was making.
To solve this, you could either do an in-place style update, i.e.
UPDATE foos SET count = count + 1
or use one of the two forms of locking ActiveRecord supports (pessimistic locking and optimistic locking).
The former works because the database ensures that you don't have concurrent updates on the same row at the same time.
It looks like ActiveRecord is not thread-safe in Resque (or rather Redis, I guess). Here's a nice explanation.
As Frederick says, you're observing a race condition. You need to serialize access to the critical section from the time you read the value until you update it.
I'd try to use pessimistic locking:
http://api.rubyonrails.org/classes/ActiveRecord/Transactions/ClassMethods.html
http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html
foobar = Foobar.find(id)
foobar.with_lock do
  foobar.update_attributes(:votes_count => foobar.votes_count + 1)
end
