Task to measure clock frequency in SystemVerilog (pass clock signal by reference)

I'm new to SV for verification, and as a first attempt at an object-oriented testbench I'm trying to verify a simple clock generator design.
I would like to constantly monitor the multiple clock outputs of the macro, given a certain configuration and clock input. To do so I need to be able to measure the frequency of multiple clock signals.
Ideally, I thought of something like this for my checker class:
- get a trigger and retrieve the configuration details
- start a freq_meter task for each output clock I need to measure
Each freq_meter task would do the following:
- receive a clk signal from the virtual interface
- start a time measurement
- count N clock posedges
- return the evaluated frequency
class checker_monitor;
  virtual clkgen_if clkgen_vif;
  int N = 10;  // number of edges to count per measurement
  logic freq_1, freq_2, freq_3;

  task run();
    // launch one frequency measurement per output clock
    fork
      freq_meter(clkgen_vif.clk_out_1, freq_1);
      freq_meter(clkgen_vif.clk_out_2, freq_2);
      freq_meter(clkgen_vif.clk_out_3, freq_3);
    join
    // check that the returned frequencies are correct
  endtask

  task freq_meter(input logic clk, output logic freq);
    time time_start, time_end, period;
    time_start = $time;
    repeat (N+1) @(posedge clk);  // N full periods between N+1 edges
    time_end = $time;
    period = (time_end - time_start) / N;
    freq = 1 / period;
  endtask : freq_meter
endclass
Any suggestions on how to approach this? I'm currently stuck on passing a clock signal from the virtual interface to the task by reference.
Best,
abet

Change the task's input argument to a ref argument, as in the following snippet.
task freq_meter(ref logic clk, output int freq);
Note: as was pointed out in the comments, this will work only when clkgen_vif.clk_out_N are variables, not nets. My assumption is that they are declared as logic in the clkgen_if interface.
EDIT: I've created a working example on EDAPlayground(link). I also changed the output parameter to int instead of logic, since a 1-bit logic output could only ever hold a frequency of 0 or 1.

Related

Initialize `ManipulationStationHardwareInterface` with an LCM message

I am trying to use Drake to control a KUKA iiwa robot, using the ManipulationStationHardwareInterface for LCM communication. Before deployment on the real robot, I use mock_station_simulation to test. One thing I find is that after initializing the simulator (which I think should trigger the initialization event?), evaluating the HardwareInterface's output ports gives the default values instead of the values from the current LCM message. For example,
drake::systems::DiagramBuilder<double> builder;
auto *interface = builder.AddSystem<ManipulationStationHardwareInterface>();
interface->Connect();
auto diagram = builder.Build();
drake::systems::Simulator<double> simulator(*diagram);
auto &simulator_context = simulator.get_mutable_context();
auto &interface_context = interface->GetMyMutableContextFromRoot(&simulator_context);
interface->GetInputPort("iiwa_position").FixValue(&interface_context, current_position);
simulator.set_publish_every_time_step(false);
simulator.set_target_realtime_rate(1.0);
simulator.Initialize();
auto q = interface->GetOutputPort("iiwa_position_measured").Eval(interface_context);
std::cout << "after initialization, the interface thinks the robot position is " << q << std::endl;
q will be a zero vector.
This behavior bothers me when I try to use the robot_state input port of DifferentialInverseKinematicsIntegrator: the integrator uses this q to initialize its internal state rather than the robot's real position, and the robot moves violently. As a workaround, I have to read the robot's position first, use the SetPositions method of DifferentialInverseKinematicsIntegrator, and leave the robot_state input port unconnected. Another issue is that LogVectorOutput will always have the default value as its first entry, which is of little use.
I think this problem is related to the LcmSubscriberSystem. My question is: is it possible to use the LCM message to initialize the system, rather than the default value?
Thank you.
This is an interesting (and very reasonable) question. I could imagine having an initialization event for the LcmSubscriber that blocks until the first message arrives. But currently I don't believe we guarantee the order of initialization events in a Diagram (the order is likely determined by something like the order in which the systems were added to the Diagram, and we don't have a nice mechanism for setting it). It's possible that the diff IK block could initialize before the LcmSubscriber.
In this case, I think it might be better to capture the first LcmSubscriber message yourself outside the simulation loop, and manually set the diff IK integrator initial state. Then start the simulation.
I'll see if I can get some of the other Drake developers to weigh in.

Apache Beam Streaming pipeline with sequential batches

What I am trying to do:
1. Consume JSON messages from a Pub/Sub subscription using an Apache Beam streaming pipeline with the Dataflow runner.
2. Unmarshal the payload strings into objects. Assume 'messageId' is the unique id of an incoming message, e.g. msgid1, msgid2, etc.
3. Retrieve child records from a database for each object resulting from step 2. The same child can be applicable to multiple messages. Assume 'childId' is the unique id of a child record, e.g. cid1234, cid1235, etc.
4. Group child records by their unique id, as in this example:
   KV.of(cid1234, Map.of(msgid1, msgid2)) and KV.of(cid1235, Map.of(msgid1, msgid2))
5. Write the grouped result, keyed at the childId level, to the database.
Questions:
1. Where should the windowing be introduced? We currently have 30-minute fixed windows after step 1.
2. How does Beam define the start and end time of a 30-minute window? Is it right after we start the pipeline, or after the first message of the batch?
3. What if steps 2 to 5 take more than one hour for a window and the next window's batch is ready? Would both windows' batches get processed in parallel?
4. How can we make the next window's messages wait until the previous window's batch is completed? If we don't, the results at the childId level will be overwritten by the next batches.
Code snippet:
PCollection<PubsubMessage> messages = pipeline.apply("ReadPubSubSubscription",
    PubsubIO.readMessagesWithAttributes()
        .fromSubscription("projects/project1/subscriptions/subscription1"));

PCollection<PubsubMessage> windowedMessages = messages.apply(
    Window.into(FixedWindows.of(Duration.standardMinutes(30))));

PCollectionTuple unmarshalResultTuple = windowedMessages.apply("UnmarshalJsonStrings",
    ParDo.of(new JsonUnmarshallFn())
        .withOutputTags(JsonUnmarshallFn.mainOutputTag,
            TupleTagList.of(JsonUnmarshallFn.deadLetterTag)));

PCollectionTuple childRecordsTuple = unmarshalResultTuple
    .get(JsonUnmarshallFn.mainOutputTag)
    .apply("FetchChildsFromDBAndProcess",
        ParDo.of(new ChildsReadFn())
            .withOutputTags(ChildsReadFn.mainOutputTag,
                TupleTagList.of(ChildsReadFn.deadLetterTag)));

// input is KV of (childId, msgIds); output is mutations to write to Bigtable
PCollectionTuple postProcessTuple = childRecordsTuple
    .get(ChildsReadFn.mainOutputTag)
    .apply(GroupByKey.create())
    .apply("UpdateChildAssociations",
        ParDo.of(new ChildsProcessorFn())
            .withOutputTags(ChildsProcessorFn.mutations,
                TupleTagList.of(ChildsProcessorFn.deadLetterTag)));

postProcessTuple.get(ChildsProcessorFn.mutations)
    .apply(CloudBigtableIO.writeToTable(...));
Addressing each of your questions:
Regarding questions 1 and 2: when you use windowing in Apache Beam, you need to understand that the windows conceptually "exist before the job". What I mean is that the windows start at the UNIX epoch (timestamp = 0). In other words, your data is allocated to fixed time ranges; for example, with fixed 60-second windows:
PCollection<String> items = ...;
PCollection<String> fixedWindowedItems = items.apply(
    Window.<String>into(FixedWindows.of(Duration.standardSeconds(60))));

First window: [0s, 60s) - second: [60s, 120s) ... and so on.
Please refer to the documentation 1, 2 and 3
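To make the epoch alignment concrete, here is a small illustrative sketch (plain Joda-Time arithmetic, not a Beam API call; the event timestamp is made up) that computes which 30-minute fixed window an element falls into, mirroring how FixedWindows aligns windows to the UNIX epoch rather than to pipeline start time:

import org.joda.time.Duration;
import org.joda.time.Instant;

public class WindowBoundaries {
    public static void main(String[] args) {
        long sizeMillis = Duration.standardMinutes(30).getMillis();
        // Hypothetical event timestamp carried by an element.
        Instant eventTime = Instant.parse("2023-01-01T10:47:00Z");
        // Window start = timestamp rounded down to a multiple of the window size.
        long start = eventTime.getMillis() - (eventTime.getMillis() % sizeMillis);
        System.out.println("window: [" + new Instant(start) + ", "
                + new Instant(start + sizeMillis) + ")");
        // Prints: window: [2023-01-01T10:30:00.000Z, 2023-01-01T11:00:00.000Z)
    }
}

So the window a message lands in is determined purely by its timestamp, not by when the pipeline started.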
About question 3: by default, windowing and triggering in Apache Beam ignore late data, although it is possible to configure the handling of late data using withAllowedLateness. To do so, it is necessary to understand the concept of watermarks first. The watermark is a measure of how far behind the data is. For example, with 3 seconds of allowed lateness, data that is 3 seconds late is still assigned to the right window. On the other hand, if data arrives past the watermark plus the allowed lateness, you define what happens to it: you can reprocess or ignore it using triggers.
withAllowedLateness
PCollection<String> items = ...;
PCollection<String> fixedWindowedItems = items.apply(
    Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
        .withAllowedLateness(Duration.standardDays(2)));

Note that this sets how long after the end of the window late data is still accepted.
Triggering
PCollection<String> pc = ...;
pc.apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
    .triggering(AfterProcessingTime.pastFirstElementInPane()
        .plusDelayOf(Duration.standardMinutes(1)))
    .withAllowedLateness(Duration.standardMinutes(30))
    .accumulatingFiredPanes());
Notice that the window is re-processed and re-computed every time late data arrives; this trigger gives you the opportunity to react to the late data.
Finally, about question 4, which is partially explained by the concepts described above: the computations occur within each fixed window, and are recomputed/processed every time a trigger fires. This logic guarantees your data is in the right window.

Beam/Dataflow: maxTimestamp for topology with no Window defined

What is the expected behavior of maxTimestamp for the global window?
I have a topology with an unbounded source which does not specify a windowing strategy. When I access the maxTimestamp field of BoundedWindow, I get a timestamp which is in the future. Is this expected behavior?
Yes, this is intended behaviour. The end of the global window must be somewhat smaller than the maximum timestamp value that is possible in Beam, often referred to as +infinity in practice.
From the source code of GlobalWindow.java:
// Triggers use maxTimestamp to set timers' timestamp. Timers fires when
// the watermark passes their timestamps. So, the maxTimestamp needs to be
// smaller than the TIMESTAMP_MAX_VALUE.
// One standard day is subtracted from TIMESTAMP_MAX_VALUE to make sure
// the maxTimestamp is smaller than TIMESTAMP_MAX_VALUE even after rounding up
// to seconds or minutes.
private static final Instant END_OF_GLOBAL_WINDOW = extractMaxTimestampFromProto();
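As a quick sanity check (a minimal sketch, assuming the Beam Java SDK is on the classpath), you can print both the end of the global window and Beam's absolute maximum timestamp and observe that the former is slightly earlier:

import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.GlobalWindow;

public class GlobalWindowEnd {
    public static void main(String[] args) {
        // The "future" timestamp the question observes.
        System.out.println(GlobalWindow.INSTANCE.maxTimestamp());
        // Beam's absolute maximum timestamp (+infinity in practice).
        System.out.println(BoundedWindow.TIMESTAMP_MAX_VALUE);
    }
}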

Predicting possible inputs leading to an output satisfying a certain condition

Suppose there is a data set of statistical data with a number of input columns and one output column. The predictors characterize some particular process that is repeated, so one data row corresponds to one occurrence of that process. For these process characteristics, the order and duration are important. Some of them might be absent altogether; some are repeated, but with a different speed or other parameter.
Let's say our process is named P and it can have many child parts that together form the process. Say one occurrence of the process had N sub-processes:
Sub-process A, with: speed = SpdA, duration = DurA, depth = DepA
Right after sub-process A, the next sub-process B happened:
Sub-process B, with: speed = SpdB, duration = DurB, depth = DepB
...
... and so on, up to sub-process N.
So there might be from 1 to N child processes in each process, that is, in each data row, and the number of child processes may vary from one row to another. That covers the input data.
As for the output: in the simplest case it is binary - either success or failure - but in reality it will be a non-negative number, from 0 to positive infinity. This number represents the time by which the process finished successfully. If the output value is positive infinity, it means the process failed.
A very important note: if we go with the simplest case, where the output is binary, the statistical data set will mostly contain rows with failure in the output. The goal is to find the hypothetical parameter values that the test predictors should take to make the process succeed.
For example, after learning we should be able to tell what concrete input parameter values will most likely lead to process success. That was the simplest, binary-output case.
However, in real life we will have an output that represents the time by which the process finished successfully, with +infinity meaning failure. The goal here is the same - make the process succeed, or get as close to success as possible - by generating test inputs we might use in the future to prevent an output of +infinity.
The stretch goal is, given a target time, to find the exact input values that will make the process finish successfully as close to the given time as possible. Here we should expect the enumeration of child processes, their order, and the values for each child process to be predicted.
In this problem, I guess, the output will play the role of the input and the input will play the role of the output.
What is the approach to solving these problems? How do I handle the variable number of characteristics, and the order that might vary in each data row?
I am a novice in machine learning and would appreciate concrete suggestions or examples of similar problems solved.
Any help and advice welcome!

Questions about the nextTuple method in the Spout of Storm stream processing

I am developing some data analysis algorithms on top of Storm and have some questions about the internal design of Storm. I want to simulate sensor data generation and processing in Storm, so I use a spout to push sensor data into the downstream bolts at a constant time interval by sleeping in the spout's nextTuple method. But from the experimental results, it appears the spout didn't push data at the specified rate. In the experiment, there was no bottleneck bolt in the system.
Then I checked some material about the ack and nextTuple methods of Storm. Now my question is: is the nextTuple method called only after the previous tuples are fully processed and acked in the ack method?
If this is true, does it mean that I cannot set a fixed time interval to emit data?
Thanks a lot!
My experience has been that you should not expect Storm to make any real-time guarantees, including, in your case, the rate of tuple processing. You can certainly write a spout that only emits tuples on some time schedule, but Storm can't really guarantee that it will always call on the spout as often as you would like.
Note that nextTuple should be called whenever there is room available for more pending tuples in the topology. If the topology has free capacity, I would expect Storm to try to fill it with whatever it can get.
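For illustration, here is a minimal sketch of such a self-throttling spout (assuming the Storm 2.x Java API; readSensor() is a made-up stand-in for real sensor input). Even so, the interval is only a lower bound on the emission period, for the reasons above:

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class TimedSensorSpout extends BaseRichSpout {
    private static final long INTERVAL_MS = 1000; // desired emission period
    private SpoutOutputCollector collector;
    private long lastEmitMillis = 0;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        long now = System.currentTimeMillis();
        if (now - lastEmitMillis < INTERVAL_MS) {
            // Yield briefly instead of blocking for the whole interval,
            // so the spout thread stays responsive to acks and fails.
            Utils.sleep(1);
            return;
        }
        lastEmitMillis = now;
        collector.emit(new Values(readSensor()));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("reading"));
    }

    // Hypothetical stand-in for a real sensor read.
    private double readSensor() {
        return Math.random();
    }
}

Sleeping briefly and returning, rather than blocking for the whole interval, matters because nextTuple, ack, and fail are called on the same thread.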
I had a similar use case, and the way I accomplished it is by using tick tuples:
Config tickConfig = new Config();
tickConfig.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 15);
...
...
builder.setBolt("storage_bolt", new S3Bolt(), 4).fieldsGrouping("shuffle_bolt", new Fields("hash")).addConfigurations(tickConfig);
Then in my storage_bolt (note that it's written in Python, but you'll get the idea) I check whether the message is a tick tuple, and if so, execute my code:
def process(self, tup):
    if tup.stream == '__tick':
        # Your logic that needs to be executed every 15 seconds,
        # or whatever you specified in tickConfig.
        # NOTE: the maximum time is 600 s.
        storm.ack(tup)
        return
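If the bolt is written in Java, the same check can be done with TupleUtils.isTick, which ships with Storm. A minimal sketch (the flush() body is a hypothetical placeholder for the periodic work):

import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.utils.TupleUtils;

public class S3Bolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        if (TupleUtils.isTick(tuple)) {
            flush(); // periodic work, e.g. write buffered records to S3
            collector.ack(tuple);
            return;
        }
        // ... buffer or process normal tuples here ...
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream
    }

    // Hypothetical: flush accumulated state on each tick.
    private void flush() {
    }
}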
