I am a newbie to the Contiki system. I am trying to upload binary data (in .txt format; it is some recorded interference) to a TelosB node in order to regenerate
the interference (the data file is large, 5 MB for example). In other words, I am trying to use Contiki to read binary files and send them to the node. I googled
this problem but did not find much useful information.
Could anyone give me some ideas?
Thank you in advance.
The easiest way to send data from/to your TelosB is to just send it to the tty associated with the USB port it's connected to (e.g., /dev/ttyUSB0). Your TelosB will be able to simply read the data from stdin (and vice versa).
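As a sketch of the host side, assuming the node shows up as /dev/ttyUSB0 and your Contiki firmware reads from its UART (both assumptions about your setup), you could stream the recorded file in small chunks so the node's serial buffer is not overrun:

```python
import time

CHUNK_SIZE = 64          # bytes per write; a small chunk size is an
                         # assumption to avoid overrunning the node's buffer

def chunks(data, size=CHUNK_SIZE):
    """Split a byte string into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def stream_to_node(path, tty="/dev/ttyUSB0", delay=0.01):
    """Stream a binary file to the node over its USB serial port.
    The tty path and inter-chunk delay are assumptions; tune them
    for your hardware and baud rate."""
    with open(path, "rb") as f, open(tty, "wb", buffering=0) as port:
        for chunk in chunks(f.read()):
            port.write(chunk)
            time.sleep(delay)  # give the node time to drain its buffer
```

On the node side, your Contiki process would read the same stream from its serial input and replay it.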
I am building a Python cloud video pipeline that will read video from a bucket, perform some computer vision analysis, and return frames back to a bucket. As far as I can tell, there is no Beam read method for passing GCS paths to OpenCV, similar to TextIO.read(). My options moving forward seem to be: download the files locally (they are large), use GCS Fuse to mount the bucket on a local worker (is that possible?), or write a custom source. Does anyone have experience with which makes the most sense?
My main confusion was this question here
Can google cloud dataflow (apache beam) use ffmpeg to process video or image data
How would ffmpeg have access to the path? It's not just a question of uploading the binary; there needs to be a Beam method to pass the item, correct?
I think that you will need to download the files first and then pass them through.
However, instead of saving the files locally, is it possible to pass bytes directly to OpenCV? Does it accept any sort of byte stream or input stream?
You could have one ParDo which downloads the files using the GCS API, then passes them to OpenCV through a stream, byte channel, stdin pipe, etc.
If that is not available, you will need to save the files to disk locally. Then pass opencv the filename. This could be tricky because you may end up using too much disk space. So make sure to garbage collect the files properly and delete the files from local disk after opencv processes them.
I'm not sure but you may need to also select a certain VM machine type to ensure you have enough disk space, depending on the size of your files.
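The save-locally-then-delete pattern described above can be sketched as follows. This is an illustration, not a Beam-specific API: `read_bytes` stands for whatever you downloaded inside your DoFn (for example via the GCS client library, or `beam.io.filesystems.FileSystems.open(path).read()`), and the temp file is always removed so worker disk space is reclaimed:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def local_copy(read_bytes):
    """Write downloaded bytes to a temp file, yield its path (e.g. for
    cv2.VideoCapture(path)), and always delete the file afterwards so
    the worker's local disk does not fill up."""
    fd, path = tempfile.mkstemp(suffix=".mp4")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(read_bytes)
        yield path
    finally:
        os.remove(path)
```

Inside a DoFn's process() you would then do something like `with local_copy(data) as path: cap = cv2.VideoCapture(path)`; the context manager guarantees cleanup even if the OpenCV step raises.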
I am currently trying to read some PCAP files using WinPCAP-API.
With this example I managed to read the data, timestamp, and length, but I do not understand how to read the source and destination IP addresses and ports.
But I do not understand how to read the source and destination IP addresses and ports?
By dissecting the raw packet data that WinPcap gives you; libpcap/WinPcap provide no APIs for dissecting raw packet data (because different libpcap/WinPcap applications have different needs - an intrusion detection application such as Snort and a packet analyzer such as tcpdump or Wireshark do different things with the data).
See, for example, libtins as a C++ library for doing packet dissection, or the libpcap tutorial for an example of how to do the dissecting yourself.
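To make the offsets concrete, here is a minimal dissector sketch in Python (the same fixed byte offsets apply to the C code you would write in your WinPcap callback). It assumes plain Ethernet II framing with an IPv4 payload and no VLAN tags:

```python
import socket
import struct

def dissect(packet):
    """Extract src/dst IPv4 addresses and TCP/UDP ports from a raw
    Ethernet frame (the bytes WinPcap hands to your callback).
    Returns None for non-IPv4 frames."""
    if len(packet) < 34:
        return None
    ethertype = struct.unpack("!H", packet[12:14])[0]
    if ethertype != 0x0800:              # not IPv4
        return None
    ihl = (packet[14] & 0x0F) * 4        # IP header length in bytes
    proto = packet[23]                   # 6 = TCP, 17 = UDP
    src_ip = socket.inet_ntoa(packet[26:30])
    dst_ip = socket.inet_ntoa(packet[30:34])
    off = 14 + ihl                       # start of the transport header
    if proto not in (6, 17) or len(packet) < off + 4:
        return src_ip, dst_ip, None, None
    src_port, dst_port = struct.unpack("!HH", packet[off:off + 4])
    return src_ip, dst_ip, src_port, dst_port
```

A real dissector also has to cope with VLAN tags, IPv6, fragmentation, and truncated captures, which is exactly why a library like libtins is worth considering.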
I am using tcpreplay and I need to synchronize two or more machines replaying recorded network traffic.
Do you know how I can synchronize replaying a pcap file on multiple hosts?
Maybe there is a better way than using tcpreplay?
Thanks for any answers.
I am writing a distributed Erlang application where several nodes are connected via a network with limited bandwidth. Thus, I would like to be able to minimize the size of the packets sent across the network when processes on different nodes send each other messages.
From http://www.erlang.org/doc/apps/erts/erl_ext_dist.html, I understand that the Erlang distribution mechanism uses erlang:term_to_binary/1,2 internally to convert Erlang messages to the external binary format that is sent over the network. Now, term_to_binary/2 supports several options that are useful for reducing the size of the binaries (http://www.erlang.org/doc/man/erlang.html#term_to_binary-1), including a compression option as well as the ability to choose a minor version with more efficient encoding of floats.
I would like to be able to tell the distribution mechanism to use both of these options every time it sends a message over the network. In other words, I would like to be able to specify the Options list that the distribution mechanism calls term_to_binary with. However, I have not been able to find any documentation on this subject. Is this possible to do?
Thanks for your help! :)
If I understand the code correctly, message encoding is hardcoded around line 1565 of dist.c (dsig_send()), so you can't change the way messages are encoded without patching and recompiling the emulator.
However, you can change the carrier used for message distribution, as described here; there is an example of using SSL for Erlang distribution. You could therefore create a carrier which compresses all transmitted messages (it may even be possible with a tweaked version of the SSL example).
There are a few examples of standard distribution modules:
inet_tcp_dist.erl
inet6_tcp_dist.erl
inet_ssl_dist.erl
uds_dist example
Are you using rpc from node to node, or OTP behaviours? If so, try compressing the binary with zlib before it is sent.
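In Erlang itself that would be term_to_binary(Term, [compressed]) before the send and binary_to_term/1 on receipt. The same idea, sketched here in Python with its zlib binding (round-trip compression of the payload before it goes on the wire):

```python
import zlib

def pack(payload: bytes) -> bytes:
    """Compress a payload before putting it on the wire."""
    return zlib.compress(payload, 6)

def unpack(wire: bytes) -> bytes:
    """Restore the original payload on the receiving side."""
    return zlib.decompress(wire)
```

Redundant data, such as repeated numeric fields in recorded messages, typically shrinks substantially; already-compressed payloads will not.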
A client has a system which reads large files (up to 1 GB) containing multiple video images. Access is via an indexing file which "points" into the larger file. This works well on a LAN. Does anyone have any suggestions as to how I can access these files over the internet if they are held on a remote server? The key constraint is that we cannot afford the time needed to download the whole file before accessing individual images within it.
You could put your big file behind an HTTP server like Apache, then have your client side use HTTP Range headers to fetch the chunk it needs.
Another alternative would be to write a simple script in PHP, Perl or server-language-of-your-choice which takes the required offsets as input and returns the chunk of data you need, again over HTTP.
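A minimal client-side sketch in Python's standard library, assuming the server (e.g. Apache) honours Range requests and replies 206 Partial Content; note that HTTP byte ranges are inclusive on both ends:

```python
import urllib.request

def range_header(start, length):
    """Build the HTTP Range header for `length` bytes starting at
    `start` (byte ranges are inclusive on both ends)."""
    return {"Range": f"bytes={start}-{start + length - 1}"}

def fetch_chunk(url, start, length):
    """Fetch one chunk of the big file without downloading the rest.
    The server must support Range requests and reply 206."""
    req = urllib.request.Request(url, headers=range_header(start, length))
    with urllib.request.urlopen(req) as resp:
        if resp.status != 206:
            raise RuntimeError("server ignored the Range header")
        return resp.read()
```

Your client would first fetch the index file, look up an image's offset and size, and then call fetch_chunk(url, offset, size) to pull just that image.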
If I understand the question correctly, it depends entirely on the container format chosen for the video images. If the container has been designed so that the information about each image is stored just before or just after the image itself, rather than at the end of the container, you can extract images and their metadata from the portion downloaded so far and start working on it immediately. You will need to know the binary format used.
FTP does let you use 'paged files', where sections of the file can be transferred independently:
To transmit files that are discontinuous, FTP defines a page
structure. Files of this type are sometimes known as
"random access files" or even as "holey files".
In FTP, the sections of the file are called pages -- rfc959
I've never used it myself though.