Modify checksum and counter of a periodic CAN message using the Python can module - can-bus

I am trying to modify the counter and checksum of a periodic (cyclic) CAN bus message. The counter increments on every cycle, say from 0 to 4, and then rolls over. Can anyone help with updating a periodic message, i.e. recomputing the checksum from the counter after every cycle time (e.g. 20 ms)?
I found the can.ModifiableCyclicTaskABC class, but it applies the modification only once, not after each cycle.
Kindly assist.
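Roughly what I am trying to do, as a minimal sketch (assuming a socketcan interface; compute_checksum stands in for whatever checksum the protocol requires):

import can
import time

bus = can.interface.Bus(channel='can0', bustype='socketcan')
msg = can.Message(arbitration_id=0x123,
                  data=[0, 0, 0, 0, 0, 0, 0, 0],
                  is_extended_id=False)

# send_periodic returns a cyclic task; with a backend that supports
# modification it is a ModifiableCyclicTaskABC exposing modify_data()
task = bus.send_periodic(msg, 0.020)   # 20 ms cycle time

counter = 0
while True:
    time.sleep(0.020)
    counter = (counter + 1) % 5                    # counter rolls over 0..4
    msg.data[1] = counter
    msg.data[0] = compute_checksum(msg.data[1:])   # placeholder checksum function
    task.modify_data(msg)                          # push the updated data into the running task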

Related

Flink Checkpoint Failure - Checkpoints time out after 10 mins

We get one or two checkpoint failures per day while processing data. The data volume is low, under 10k records, and our checkpoint interval is set to 2 minutes. (Processing is slow because at the end of the Flink job we need to sink the data to another API endpoint, which takes some time, so the total time is streaming the data plus sinking to the external API endpoint.)
The root issue is:
Checkpoints time out after 10 minutes because the data processing takes longer than 10 minutes. We could increase the parallelism to speed up processing, but if the data volume grows we would have to increase the parallelism again, so we don't want to rely on that.
Suggested solution:
I saw a suggestion to set a pause between the old and new checkpoint, but my question is: if I set a pause time there, will the new checkpoint miss the state changes made during the pause?
Aim:
How can we avoid this issue and record the correct state without missing any data?
(Screenshots of a failed checkpoint, where one subtask didn't respond, and of a completed checkpoint were attached.)
Thanks
There are several related configuration variables you can set -- such as the checkpoint interval, the pause between checkpoints, and the number of concurrent checkpoints. No combination of these settings will result in data being skipped for checkpointing.
Setting an interval between checkpoints means that Flink won't initiate a new checkpoint until some time has passed since the completion (or failure) of the previous checkpoint -- but this has no effect on the timeout.
Sounds like you should extend the timeout, which you can do like this:
env.getCheckpointConfig().setCheckpointTimeout(n);
where n is measured in milliseconds. See the section of the Flink docs on enabling and configuring checkpointing for more details.
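If you are using the Python API, the same knobs are exposed through PyFlink's CheckpointConfig. A rough sketch (the snippet above is the Java API, so treat the exact Python method names as an assumption to verify against your Flink version):

from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.enable_checkpointing(2 * 60 * 1000)   # checkpoint interval: 2 minutes, as in the question

checkpoint_config = env.get_checkpoint_config()
checkpoint_config.set_checkpoint_timeout(30 * 60 * 1000)        # extend the timeout, here to 30 minutes
checkpoint_config.set_min_pause_between_checkpoints(60 * 1000)  # pause after the previous checkpoint completes or fails
checkpoint_config.set_max_concurrent_checkpoints(1)             # at most one checkpoint in flight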

Calculating periodic checkpoints for an unbounded stream in Apache Beam/DataFlow

I am using a global unbounded stream in combination with Stateful processing and timers in order to totally order a stream per key by event timestamp. The solution is described with the answer to this question:
Processing Total Ordering of Events By Key using Apache Beam
In order to restart the pipeline after a failure or stopping for some other reason, I need to determine the lowest event timestamp at which we are guaranteed that all other events have been processed downstream. This timestamp can be calculated periodically and persisted to a datastore and used as the input to the source IO (Kinesis) so that the stream can be re-read without having to go back to the beginning. (It is ok for us to have events replayed)
I considered having the stateful transformation emit the lowest processed timestamp as its output when the timer triggers and then combining all the outputs globally to find the minimum value. However, it is not possible to use a global combine operation because either a Window or a Trigger must be applied first.
Assuming that my stateful transform emits a Long when the timer fires which represents the smallest timestamp, I am defining the pipeline like this:
p.apply(events)
    .apply("StatefulTransform", ParDo.of(new StatefulTransform()))
    .apply(Window.<Long>configure().triggering(Repeatedly.forever(AfterFirst.of(
        AfterPane.elementCountAtLeast(100),
        AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardMinutes(1))))))
    .apply(Combine.globally(new MinLongFn()))
    .apply("WriteCheckpoint", ParDo.of(new WriteCheckpoint()));
Will this ensure that the checkpoints are only written when all of the parallel workers have emitted at least one of their panes? I am concerned that the combine operation may operate on panes from only some of the workers, e.g. there may be a worker that has either failed or is still waiting for another event to trigger its timer.
I'm a newbie to Beam, but according to this blog post https://beam.apache.org/blog/2017/08/16/splittable-do-fn.html, a Splittable DoFn might be what you are looking for!
You could create an SDF to fetch the stream and accept the input element as the starting point.

How to structure a Dask application that processes a fixed number of inputs from a queue?

We have a requirement to implement the following. Given a Redis channel that will provide a known number of messages:
For each message consumed from the channel:
1. Get a JSON document from Redis
2. Parse the JSON document, extracting a list of result objects
Then, aggregate across all result objects to produce a single result.
We would like to distribute both steps 1 and 2 across many workers, and avoid collecting all results into memory. We would also like to display progress bars for both steps.
However, we can't see a nice way to structure the application so that we can show progress and keep work moving through the system without blocking at inopportune times.
For example, in step 1 if we read from the Redis channel into a queue then we can pass the queue to Dask, in which case we start processing each message as it comes in without waiting for all messages. However, we can't see a way to show progress if we use a queue (presumably because a queue typically has an unknown size?)
If we collect from the Redis channel into a list and pass this to Dask then we can see progress, but we have to wait for all messages from Redis before we can start processing the first one.
Is there a recommended way to approach this kind of problem?
If your Redis channels are concurrent-access-safe then you might submit many futures to pull an element from the channel. These would run on different machines.
from dask.distributed import Client, progress
client = Client(...)
futures = [client.submit(pull_from_redis_channel, ..., pure=False) for _ in range(n_items)]
futures2 = client.map(process, futures)
progress(futures2)
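To produce the single aggregated result without pulling everything back into client memory, one option is to submit one more task that takes the processed futures as its input, since Dask resolves futures passed inside arguments on the workers. A sketch, where aggregate_results is a hypothetical reducer you would supply:

final = client.submit(aggregate_results, futures2)  # futures2 is resolved on a worker, not the client
result = final.result()                             # only the single aggregated value comes back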

Questions about the nextTuple method in the Spout of Storm stream processing

I am developing some data analysis algorithms on top of Storm and have some questions about Storm's internal design. I want to simulate sensor data generation and processing in Storm, so I use a Spout to push sensor data to the downstream bolts at a constant time interval by sleeping inside the Spout's nextTuple method. But from the experiment results, it appears that the spout did not push data at the specified rate. In the experiment, there was no bottleneck bolt in the system.
Then I read some material about Storm's ack and nextTuple methods. My question now is: is nextTuple called only after the previous tuples have been fully processed and acknowledged in the ack method?
If this is true, does it mean that I cannot emit data at a fixed time interval?
Thanks a lot!
My experience has been that you should not expect Storm to make any real-time guarantees, including in your case the rate of tuple processing. You can certainly write a spout that only emits tuples on some time schedule, but Storm can't really guarantee that it will always call on the spout as often as you would like.
Note that nextTuple should be called whenever there is room available for more pending tuples in the topology. If the topology has free capacity, I would expect Storm to try to fill it up if it can with whatever it can get.
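If you do want a spout that emits on a schedule without blocking its thread, one pattern is to gate the emit on elapsed time inside nextTuple instead of sleeping. A sketch using the streamparse Python library (an assumption on my part; your topology may well be in Java, and read_sensor is a placeholder for your data source):

import time
from streamparse import Spout

class SensorSpout(Spout):
    outputs = ['reading']

    def initialize(self, storm_conf, context):
        self.interval = 0.1    # desired emit interval in seconds
        self.last_emit = 0.0

    def next_tuple(self):
        now = time.time()
        if now - self.last_emit >= self.interval:
            self.last_emit = now
            self.emit([read_sensor()])   # read_sensor() is a hypothetical sensor read
        # otherwise return without emitting; Storm will call next_tuple again

Storm still decides when to call next_tuple, so this caps the emit rate rather than guaranteeing it.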
I had a similar use case, and the way I accomplished it was by using a tick tuple (TICK_TUPLE):
Config tickConfig = new Config();
tickConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 15);
...
...
builder.setBolt("storage_bolt", new S3Bolt(), 4).fieldsGrouping("shuffle_bolt", new Fields("hash")).addConfigurations(tickConfig);
Then in my storage_bolt (note that it's written in Python, but you will get the idea) I check whether the message is a tick tuple and, if it is, execute my code:
def process(self, tup):
    if tup.stream == '__tick':
        # Your logic that needs to be executed every 15 seconds,
        # or whatever you specified in tickConfig.
        # NOTE: the maximum time is 600 s.
        storm.ack(tup)
        return

Getting the time of the received message

How does one get the time of the received message in Erlang?
I want to calculate something based on the frequency of the messages received by the gen_server,
e.g. message 1 at some time, message 2 at some other time,
and get the time between the messages.
Thanks
You can use statistics(wall_clock) each time you receive a message.
The second member of the tuple it returns will be the time between the two receives (in milliseconds).
Edit:
As rvirding mentions in his comment, you can also use now() and then calculate the time difference accordingly. Take a look at supervisor.erl found in the $ERL_TOP/lib/stdlib/src/ directory of your Erlang/OTP distribution. The last lines of that module (functions addRestart, inPeriod and difference) calculate the frequency of restarts using now().
