First few words ignored while writing to Xilinx CoreGen FIFO

I am using a Xilinx CoreGen FIFO. I am writing some data (it's just counter values), but it seems to be skipping the first two words. I have no idea what causes this behavior. All the signals are generated using simple combinational logic as part of the testbench.
Any help is highly appreciated. Thanks


Parse batch of SequenceExample

There is a function to parse a SequenceExample: tf.parse_single_sequence_example().
But it parses only a single SequenceExample at a time, which is not efficient.
Is there any way to parse a batch of SequenceExamples?
tf.parse_example can parse many Examples.
The documentation for tf.parse_example contains a little info about SequenceExample:
Each FixedLenSequenceFeature df maps to a Tensor of the specified type (or tf.float32 if not specified) and shape (serialized.size(), None) + df.shape. All examples in serialized will be padded with default_value along the second dimension.
But it is not clear how to do that; I have not found any examples on Google.
Is it possible to parse many SequenceExamples using parse_example(), or does some other function exist?
Edit:
Where can I ask the TensorFlow developers whether they plan to implement a parse function for multiple SequenceExamples?
Any help will be appreciated.
If you have many small sequences where batching at this stage is important, I would recommend VarLenFeatures or FixedLenSequenceFeatures with regular Example protos (which, as you note, can be parsed in batches with parse_example). For examples of this, see the unit tests associated with example parsing (testSerializedContainingSparse parses Examples with FixedLenSequenceFeatures).
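For instance, a minimal sketch of that approach (the feature names "tokens" and "values" are made up for illustration; this uses the TF 1.x API the question refers to):

import tensorflow as tf

# A batch of serialized Example protos, e.g. dequeued from a reader.
serialized = tf.placeholder(tf.string, shape=[None])

parsed = tf.parse_example(
    serialized,
    features={
        # Variable-length sequence, returned as a SparseTensor.
        "tokens": tf.VarLenFeature(tf.int64),
        # Fixed-shape-per-step sequence, padded across the batch;
        # allow_missing=True is required when used with parse_example.
        "values": tf.FixedLenSequenceFeature([], tf.float32,
                                             allow_missing=True),
    })

tokens = parsed["tokens"]  # SparseTensor
values = parsed["values"]  # dense Tensor, shape [batch_size, longest_sequence]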
SequenceExamples are more geared toward cases where there is a significant amount of preprocessing work to be done for each SequenceExample (which can be done in parallel with queues). parse_example does not support SequenceExamples.

Detecting cycles in Syntax Directed Definitions... Exponential?

I was studying Syntax Directed Definitions from "Compilers: Principles, Techniques and Tools" by Aho, Ullman, Sethi and Lam when I came across the following line in the context of circular dependencies of attributes in a parse tree:
It is computationally difficult to determine whether or not there exist any circularities in any of the parse trees that a given SDD could have to translate.
(section: 5.1.2)
Also in http://cs.nyu.edu/courses/fall12/CSCI-GA.2130-001/lecture8.pdf it is mentioned that
Detecting cycles has exponential time complexity
But Tarjan's strongly connected components algorithm can find cycles in O(E+V). So why do the above sources contradict this?
Can anyone tell me what I am missing?
It's O(E+V) to find a cycle in the dependency graph of one particular parse tree. But that's not the problem. The problem is to
determine whether or not there exist any circularities in *any* of the parse trees that a given SDD could have to translate. (Emphasis added.)
A grammar has infinitely many parse trees, so you cannot simply run Tarjan's algorithm on each of them; the test has to account for every combination of attribute-dependency patterns the productions can give rise to. That is a rather more difficult problem: the circularity problem for attribute grammars was in fact shown by Jazayeri, Ogden and Rounds (1975) to be intrinsically exponential.

Dynamic FORMS for printing AFP

I am trying to print AFP to sysout but the FORMS parameter is not known (and cannot be known) by the JCL. My current solution is to create dynamic JCL and spin it to INTRDR, but this is a weak solution because the job will not be under the control of our scheduler... and thus, an abend or other issue will go unnoticed by night-time operators.
I started concocting a way to print the AFP via a COBOL program. I use BPXWDYN to create the SYSOUT DD dynamically, which allows me to set the FORMS parameter however I want. But the next step is dumping the AFP to that DD.
I thought I could call IEBGENER dynamically from my COBOL program, but that pulls an S0C4.
I can move the AFP records from one DD to the other in the COBOL program, but that limits me to one LRECL... and I have many different LRECL definitions for AFP throughout my system, and COBOL MUST know the LRECL at compile time.
Any thoughts? Is it possible to call IEBGENER dynamically and not get the S0C4? Any other ideas I haven't thought of?
Thanks in advance...
Have you thought of writing a small assembler program? You can specify the LRECL in your BPXWDYN call, and the DCB does not need to specify an LRECL; it will get it from the DCB parameters at OPEN time. A program to simulate IEBGENER is quite trivial.
Alternatively, look at calling SORT with a FIELDS=COPY parameter. SORT doesn't need the LRECL either. Or write a REXX script.
There are many ways of doing this; you just need to look outside the COBOL box.
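For the SORT route, a minimal sketch of the copy step (dataset and form names are placeholders; in your situation you would allocate SORTOUT with BPXWDYN, setting the FORMS value at allocation time, and invoke SORT dynamically rather than through JCL):

//COPYAFP  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR.AFP.DATA,DISP=SHR
//SORTOUT  DD SYSOUT=(A,,MYFORM)
//SYSIN    DD *
  SORT FIELDS=COPY
/*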
Your question is not extremely clear, but I'm wondering if you should consider using the ACIF utility program, APKACIF, instead of IEBGENER. The utility will merge your data and resolve the AFP FORMDEF and PAGEDEF objects to a dataset or print stream.

Erlang: Compute data structure literal (constant) at compile time?

This may be a naive question, and I suspect the answer is "yes," but I had no luck searching here and elsewhere on terms like "erlang compiler optimization constants" etc.
At any rate, can (will) the Erlang compiler create a data structure that is constant or literal at compile time, and use that instead of emitting code that builds the data structure over and over again? I will provide a simple toy example.
test() -> sets:from_list([usd, eur, yen, nzd, peso]).
Can (will) the compiler simply stick the set there at the output of the function instead of computing it every time?
The reason I ask is, I want to have a lookup table in a program I'm developing. The table is just constants that can be calculated (at least theoretically) at compile time. I'd like to just compute the table once, and not have to compute it every time. I know I could do this in other ways, such as compute the thing and store it in the process dictionary for instance (or perhaps an ets or mnesia table). But I always start simple, and to me the simplest solution is to do it like the toy example above, if the compiler optimizes it.
If that doesn't work, is there some other way to achieve what I want? (I guess I could look into parse transforms if they would work for this, but that's getting more complicated than I would like?)
THIS JUST IN. I used compile:file/2 with the 'S' option to produce the following. I'm no Erlang assembly expert, but it looks like the optimization isn't performed:
{function, test, 0, 5}.
{label,4}.
{func_info,{atom,exchange},{atom,test},0}.
{label,5}.
{move,{literal,[usd,eur,yen,nzd,peso]},{x,0}}.
{call_ext_only,1,{extfunc,sets,from_list,1}}.
No, the Erlang compiler doesn't perform partial evaluation of calls to external modules, which is what sets is. You can use the ct_expand module of the well-known parse_trans library to achieve this effect.
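A minimal sketch of the ct_expand route (the module name exchange is taken from your assembly listing; parse_trans must be available at compile time):

-module(exchange).
-export([test/0]).

%% Expanded at compile time by the ct_expand parse transform.
-compile({parse_transform, ct_expand}).

%% ct_expand:term/1 evaluates its argument during compilation and
%% embeds the resulting set in the beam file as a literal.
test() ->
    ct_expand:term(sets:from_list([usd, eur, yen, nzd, peso])).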
Given that a set is not a native datatype in Erlang and is, as a matter of fact, just a library written in Erlang, I don't think it's feasible for the compiler to create sets at compile time.
As you saw, calls to sets are not optimized away (nor are calls to any other library written in Erlang).
One way of solving your problem is to compute the set once and pass it as a parameter to your functions, or to use ETS/Mnesia.

How to detect tabular data from a variety of sources

In an experimental project I am playing with I want to be able to look at textual data and detect whether it contains data in a tabular format. Of course there are a lot of cases that could look like tabular data, so I was wondering what sort of algorithm I'd need to research to look for common features.
My first thought was to write a long switch/case statement that checked for data separated by tabs, then another case for data separated by pipe symbols, then yet another for data separated in some other way, and so on. Of course I realize that I would have to come up with a list of different things to detect - but I wondered if there was a more intelligent way of detecting these features than doing a relatively slow search for each type.
I realize this question isn't especially eloquently put so I hope it makes some sense!
Any ideas?
(no idea how to tag this either - so help there is welcomed!)
The only reliable scheme would be to use machine learning. You could, for example, train a perceptron classifier on a stack of examples of tabular and non-tabular material.
A mixed solution might be appropriate, i.e. one where you handle the most common/obvious cases with simple heuristics (in the "switch-like" manner you suggested) and leave the harder cases to automated learning and other kinds of classifier logic.
This assumes that you do not already have defined types stored in the TSV.
A TSV file is typically
[Value1]\t[Value..N]\n
My suggestion would be to:
Count up all the tabs
Count up all of the newlines
Count the total tabs in the first row
Divide the total number of tabs by the tabs in the first row
If the division in the last step leaves a remainder of 0, then you have a TSV candidate. From there you may want to do one of the following:
You can continue reading the data, ignoring lines with fewer or more tabs than predicted
You can scan each line before reading to make sure all are consistent
You can read up to the first line that does not fit the format and then throw an error
Once you have a good prediction of the number of tab-separated values per line, you can use a regular expression to parse out the values (as a group).
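A sketch of this heuristic in Python (purely illustrative; the function names are made up):

import re

def looks_like_tsv(text):
    # Count tabs and lines, then check that the total number of tabs
    # divides evenly by the number of tabs in the first row.
    lines = [ln for ln in text.splitlines() if ln]
    if not lines:
        return False
    tabs_in_first_row = lines[0].count("\t")
    if tabs_in_first_row == 0:
        return False  # no tabs at all; also guards the division below
    return text.count("\t") % tabs_in_first_row == 0

def parse_tsv_line(line, expected_fields):
    # Pull the values out as regex groups once the field count is known.
    pattern = r"([^\t]*)\t" * (expected_fields - 1) + r"([^\t]*)"
    match = re.fullmatch(pattern, line)
    return list(match.groups()) if match else None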
