The Head and Skip Head blocks do not appear to be writing to all file sinks - gnuradio-companion

I have the following flowgraph to capture a small specified band. It seems that the final file sink never gets written to because it always has a 0 byte size.
This is the first time I have tried to use the Head and Skip Head blocks, so I am not sure if it has something to do with them or not. Initially only the first file was being written to, but as I increased the vec_size variable the second, third, and fourth files were being written to as well. No matter how large I make the vec_size variable though, I can't seem to get any data written to the fifth file sink. Does anyone know if it's an issue with the Head/Skip Head blocks or possibly something else I'm not seeing?
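For illustration, a single branch equivalent to what I describe would look roughly like this as a Python top block (the source, the counts, and the file name below are just placeholders, not my actual flowgraph values); running one branch in isolation like this is one way to check whether the Skip Head/Head pair itself ever delivers data to the sink:

from gnuradio import gr, blocks, analog

class one_branch(gr.top_block):
    def __init__(self, vec_size=1000000):
        gr.top_block.__init__(self)
        src = analog.noise_source_c(analog.GR_GAUSSIAN, 1.0, 0)      # stand-in source
        skip = blocks.skiphead(gr.sizeof_gr_complex, 4 * vec_size)   # drop the first four chunks
        head = blocks.head(gr.sizeof_gr_complex, vec_size)           # then pass one chunk and stop
        sink = blocks.file_sink(gr.sizeof_gr_complex, "branch5.dat")
        self.connect(src, skip, head, sink)

if __name__ == '__main__':
    tb = one_branch()
    tb.run()   # run() returns once the Head block has passed its vec_size items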

Related

How to create a Save/Load function on Scratch?

I'm trying to make a game on Scratch that will use a feature to generate a special code, and when that code is input into a certain area it will load the stats that were there when the code was generated. I've run into a problem, however: I don't know how to make it, and I couldn't find a clear-cut answer for how to do it.
I would prefer that the solution be:
Able to save information for as long as needed (from 1 second to however long until it's input again.)
Doesn't take too many blocks to make, so that the project won't take forever to load it.
Of course, I'm willing to take any solution in order to get my game up and running; those are just preferences.
You can put all of the programs in a custom block with "Run without screen refresh" on so that the program runs instantly.
If you save the stats using variables, you could combine those variable values into one string divided by "/"s, i.e. join ([highscore]) (join ("/") (join ([kills]) ("/"))).
NOTE: Don't add any "/" in your stats, you can probably guess why.
Now "bear" (pun) with me, this is going to take a while to read
Then you need the variables:
[read] for reading the inputted code
[input] for storing the numbers
Then you could make another function that reads the code like so: letter ([read]) of (code), and stores that information in the [input] variable like this: set [input] to (letter ([read]) of (code)). Then change [read] by (1) so the function can read the next character of the code. Once letter ([read]) of (code) equals "/", this tells the program to set [*stat variable*] to (input) (in our example this would be [highscore], since it was the first variable we saved) and to set [input] to (0), then repeat until all of the stat variables are filled (in this case it repeats 2 times, because we saved two variables: [highscore] and [kills]).
This is the least amount of code that it takes. Jumbling it up takes more code. I will later edit this answer with a screenshot showcasing whatever I just said before, hopefully clearing up the mess of words above.
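In the meantime, here is the same encode/decode idea written out in Python rather than Scratch blocks, just to make the string format concrete (the variable names are examples, and [input] here accumulates characters so multi-digit stats survive):

def encode(highscore, kills):
    # join the stats into one code, separated by "/" (so the stats themselves
    # must not contain "/")
    return str(highscore) + "/" + str(kills) + "/"

def decode(code):
    stats = []          # plays the role of the saved stat variables
    current = ""        # plays the role of [input]
    for ch in code:     # [read] walks the code one character at a time
        if ch == "/":   # a separator means the current stat is complete
            stats.append(int(current))
            current = ""
        else:
            current = current + ch
    return stats        # [highscore, kills]

print(decode(encode(1200, 37)))   # prints [1200, 37]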
The technique you mentioned is used in many Scratch games, but there are two options for you when making the save/load system. You can do it the simpler way, which makes the code SUPER long (not joking). The other way, which most Scratchers use, is encoding the data into a string that is as short as possible, so it's easy to transfer.
If you want to go the second way, have a look at griffpatch's video on the Mario platformer remake, where he used an encoding system to save levels: https://www.youtube.com/watch?v=IRtlrBnX-dY The tip is to encode your data (maybe score / item names / progress) into numbers and letters, for example converting repeated letters into a shorter string which the game can still decode and read without errors.
If you are worried it will take too long to load, I am pretty sure it won't be a problem unless you save a really big load of data. The common compression method used by everyone works pretty well. If you want more data stored you may have to think of some other method; there is no single best way, since different data works best with different methods. Good luck.

Dask - Understanding diagnostics - memory:list

I am working on a fairly complex application that makes use of the Dask framework, and I am trying to increase its performance. To that end I am looking at the diagnostics dashboard. I have two use cases. In the first I have a 1 GB Parquet file split into 50 parts; in the second I have just the first part of that file, split into 5 parts, which is what was used for the following charts:
The red node is called "memory:list" and I do not understand what it is.
When running the bigger input this seems to block the whole operation.
Finally this is what I see when I go inside those nodes:
I am not sure where I should start looking to understand what is generating this memory:list node, especially given that there is no Stack button inside the task, as there often is. Any suggestions?
Red nodes are in memory. So this computation has occurred, and the result is sitting in memory on some machine.
It looks like the type of the piece of data is a Python list object. Also, the name of the task is list-159..., so probably this is the result of calling the list Python function.
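For example, a node named list-... shows up if a delayed call wraps the built-in list somewhere, along these lines (a toy illustration, not your code):

from dask import delayed

parts = [delayed(i * i) for i in range(5)]   # a few small delayed values
combined = delayed(list)(parts)              # this task is named list-<hash>
print(combined.key)                          # e.g. 'list-7d0c...'
print(combined.compute())                    # [0, 1, 4, 9, 16]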

GNURADIO 3.7.8: identify a part of a byte stream

I feel that Stream Tags, Message Passing, and Packet Data Transmission are a bit of overkill, and I have a hard time understanding them.
I have a simple wish: starting from a stream of bytes, I would like to "extract" only a fixed number of bytes starting from a known pattern. For example, from a stream like this: ...01h 55h XXh YYh ZZh..., it should extract XXh YYh ZZh.
I used Correlate Access Code Tag block -- Tagged Stream Align -- Pack K Bits to convert a bit stream into a byte stream and sync to the desired Access Code (01h 55h), but how do I tell GNU Radio to process only the 3 bytes after each time the code is found? An OOT block would likely solve it, but is there some combination of standard GRC blocks to do this?
I think with correlate_access_code_tag_bb you can actually build this, with a bit of brain-twisting, from existing blocks alone. (Note: this does rely on stream tags, because those are the right tool to mark special points in a sample flow.)
However, your simple case might really not be worth it. Simply follow the guided tutorials up to the point where you can write your own python block.
Use self.set_history(len(preamble) + len_payload) in the constructor of your new block to make sure you always see the last samples of the previous iteration in your current call to work, then simply search for the preamble in your sample stream, outputting only the len_payload bytes that follow it when you find it, and not producing anything if you don't.
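A rough, untested sketch of such a block is below. Instead of set_history() it keeps its own small overlap buffer between calls to general_work(), which is easier to get right in a Python general block; the search logic is the same idea: find the access code and emit only the payload bytes that follow it. All names here are made up.

import numpy as np
from gnuradio import gr

class extract_payload(gr.basic_block):
    def __init__(self, preamble=(0x01, 0x55), len_payload=3):
        gr.basic_block.__init__(self, name="extract_payload",
                                in_sig=[np.uint8], out_sig=[np.uint8])
        self.preamble = bytes(bytearray(preamble))
        self.len_payload = len_payload
        self.residue = b""   # unsearched tail kept from the previous call

    def general_work(self, input_items, output_items):
        out = output_items[0]
        data = self.residue + input_items[0].tobytes()
        frame = len(self.preamble) + self.len_payload
        produced, pos = 0, 0
        while produced + self.len_payload <= len(out):
            hit = data.find(self.preamble, pos)
            if hit < 0 or hit + frame > len(data):
                break                                   # no complete frame left
            payload = data[hit + len(self.preamble):hit + frame]
            out[produced:produced + self.len_payload] = \
                np.frombuffer(payload, dtype=np.uint8)
            produced += self.len_payload
            pos = hit + frame
        if produced + self.len_payload > len(out):
            # ran out of output space; keep everything unprocessed for next time
            self.residue = data[pos:]
        else:
            # keep just enough tail so a frame split across calls is still found
            self.residue = data[max(pos, len(data) - frame + 1):]
        self.consume(0, len(input_items[0]))
        return produced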

Parsing file in parallel

I am thinking about a way to parse a FASTA file in parallel. For those of you who don't know the FASTA format, an example:
>SEQUENCE_1
MTEITAAMVKELRESTGAGMMDCKNALSETNGDFDKAVQLLREKGLGKAAKKADRLAAEG
LVSVKVSDDFTIAAMRPSYLSYEDLDMTFVENEYKALVAELEKENEERRRLKDPNKPEHK
IPQFASRKQLSDAILKEAEEKIKEELKAQGKPEKIWDNIIPGKMNSFIADNSQLDSKLTL
MGQFYVMDDKKTVEQVIAEKEKEFGGKIKIVEFICFEVGEGLEKKTEDFAAEVAAQL
>SEQUENCE_2
SATVSEINSETDFVAKNDQFIALTKDTTAHIQSNSLQSVEELHSSTINGVKFEEYLKSQI
ATIGENLVVRRFATLKAGANGVVNGYIHTNGRVGVVIAAACDSAEVASKSRDLLRQICMH
So lines starting with '>' are header lines containing an identifier; the sequence itself follows the identifier on the next lines.
I suppose you load the entire file into memory, but after this I am having trouble finding a way to process the data.
The problem is: threads cannot start at an arbitrary position, because they could cut sequences that way.
Does someone have any experience in parsing files in parallel when the lines depend on each other? Any idea is appreciated.
Should be easy enough, since the dependence of lines on each other is very simple in this case: just make the threads start in an arbitrary position and then just skip the lines until they get to one that starts with a '>' (i.e. starts a new sequence).
To make sure no sequence gets processed twice, keep a set of all sequence IDs that have been processed (or you could do it by line number if the sequence IDs aren't unique, but they really should be!).
Do a preprocessing step, walk through the data once, and determine all valid start points. Let's call these tasks. Then you can simply use a worker-crew model, where each worker repeatedly asks for a task (a starting point), and parses it.
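A minimal sketch of that worker-crew idea in Python, assuming the whole file fits in memory as described above (the function and file names are just examples):

from multiprocessing import Pool

def parse_record(chunk):
    # chunk is "IDENTIFIER\nline1\nline2..." with the leading '>' removed
    header, _, body = chunk.partition("\n")
    return header, body.replace("\n", "")

def parse_fasta_parallel(path, workers=4):
    with open(path) as fh:
        data = fh.read()                 # load the entire file, as discussed
    # preprocessing pass: every record starts with '>' at the start of a line
    chunks = [c for c in data.lstrip(">").split("\n>") if c.strip()]
    with Pool(workers) as pool:          # the worker crew
        return dict(pool.map(parse_record, chunks))

# parse_fasta_parallel("sequences.fasta")
# -> {"SEQUENCE_1": "MTEITAAMVK...", "SEQUENCE_2": "SATVSEINSE..."}
# (on Windows, call this from under an if __name__ == '__main__' guard)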

Using PARSE on a PORT! value

I tried using PARSE on a PORT! and it does not work:
>> parse open %test-data.r [to end]
** Script error: parse does not allow port! for its input argument
Of course, it works if you read the data in:
>> parse read open %test-data.r [to end]
== true
...but it seems it would be useful to be able to use PARSE on large files without first loading them into memory.
Is there a reason why PARSE couldn't work on a PORT! ... or is it merely not implemented yet?
The easy answer is: no, we can't...
The way PARSE works, it may need to roll back to a prior part of the input string, which might in fact be the head of the complete input, when it meets the last character of the stream.
Ports copy their data to a string buffer as they get their input from the port, so in fact there is never any "prior" string for PARSE to roll back to. It's like quantum physics... just by looking at it, it's not there anymore.
But as you know, in Rebol... no isn't an answer. ;-)
This being said, there is a way to parse data from a port as it's being grabbed, but it's a bit more work.
What you do is use a buffer, and
APPEND buffer COPY/part connection amount
Depending on your data, amount could be 1 byte or 1kb, use what makes sense.
Once the new input is added to your buffer, parse it and add logic to know if you matched part of that buffer.
If something positively matched, you remove/part what matched from the buffer, and continue parsing until nothing parses.
You then repeat the above until you reach the end of the input.
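Spelled out (in Python rather than Rebol, purely to illustrate the pattern; try_parse is a placeholder for "parse the buffer and report how much matched"):

def try_parse(buf):
    # placeholder rule: a "message" is everything up to a newline
    end = buf.find(b"\n")
    if end < 0:
        return None, 0               # nothing complete at the head of the buffer yet
    return buf[:end], end + 1

def read_messages(conn):             # conn is a connected socket-like object
    buffer = b""
    while True:
        chunk = conn.recv(1024)      # like COPY/part connection amount
        if not chunk:                # like getting none back when the port closes
            break
        buffer += chunk              # like APPEND buffer ...
        while True:
            msg, used = try_parse(buffer)
            if msg is None:
                break                # nothing parses: wait for more input
            buffer = buffer[used:]   # like REMOVE/part of what matched
            yield msg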
I've used this in a real-time EDI tcp server which has an "always on" tcp port in order to break up a (potentially) continuous stream of input data, which actually piggy-backs messages end to end.
Details
The best way to setup this system is to use /no-wait and loop until the port closes (you receive none instead of "").
Also make sure you have a way of checking for data integrity problems (like a skipped byte, or erroneous message) when you are parsing, otherwise, you will never reach the end.
In my system, when the buffer grew beyond a specific size, I tried an alternate rule which skipped bytes until a pattern might be found further down the stream. If one was found, an error was logged, the partial message stored, and an alert raised for the sysadmin to sort out the message.
HTH !
I think that Maxim's answer is good enough. At this moment PARSE on a port is not implemented. I don't think it's impossible to implement it later, but we must solve other issues first.
Also, as Maxim says, you can do it even now, but it depends very much on what exactly you want to do.
You can certainly parse large files without needing to read them completely into memory. It's always good to know what you expect to parse. For example, all large files, like files for music and video, are divided into chunks, so you can just use copy|seek to get these chunks and parse them.
Or if you want to get just the titles of multiple web pages, you can read, let's say, the first 1024 bytes and look for the title tag there; if that fails, read more bytes and try again...
That's exactly what must be done to allow parse on port natively anyway.
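As a rough illustration of that read-a-bit-and-retry idea (in Python; the chunk size, byte limit, and regex are arbitrary choices):

import re
import urllib.request

def page_title(url, chunk=1024, limit=64 * 1024):
    data = b""
    with urllib.request.urlopen(url) as resp:
        while len(data) < limit:
            more = resp.read(chunk)      # read a little more each time
            if not more:
                break
            data += more
            m = re.search(rb"<title>(.*?)</title>", data, re.I | re.S)
            if m:                        # found it without fetching the rest
                return m.group(1).decode(errors="replace").strip()
    return None                          # give up after `limit` bytes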
And feel free to add a WISH in the CureCode database: http://curecode.org/rebol3/
