I have buffered a zlib stream in a std::vector.
I have programmatically uncompressed this stream into a new std::vector named "UncompressedZlibStream".
I know that my stream contains multiple files.
I would like to know how to "cut" (separate) the individual files in the stream.
I think zlib uses a separator? But which character or sequence?
Does anyone have any information about this?
Thanks a lot,
best regards,
CrashOverHead
Zlib itself is only a compression library. It is typically only used to compress a single file. Putting multiple files into zlib requires that you use a format like tar and then compress the result. Zlib compressed tar files are pretty common in the Unix world. You may want to have a look at LibTar. If it's anything else it's likely proprietary and you're kind of on your own for how to dice the stream.
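If the decompressed data does turn out to be a tar archive, you can split it yourself: tar is just a sequence of 512-byte headers, each followed by the entry's data padded to a 512-byte boundary. A minimal sketch over a decompressed buffer like yours, assuming a plain ustar layout and ignoring edge cases such as long file names (the function name is mine):

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <vector>

// Walks a tar archive held in memory and prints each entry's name and size.
// Assumes a classic ustar layout: a 512-byte header per entry, followed by
// the entry's data padded up to the next 512-byte boundary.
void listTarEntries(const std::vector<unsigned char>& tar)
{
    std::size_t offset = 0;
    while (offset + 512 <= tar.size())
    {
        const unsigned char* header = &tar[offset];

        // All-zero header blocks mark the end of the archive.
        bool allZero = true;
        for (int i = 0; i < 512; ++i)
            if (header[i] != 0) { allZero = false; break; }
        if (allZero)
            break;

        // Bytes 0-99 hold the entry name (NUL-terminated),
        // bytes 124-135 hold the data size as an octal ASCII number.
        char name[101] = {0};
        std::memcpy(name, header, 100);
        std::size_t size = std::strtoull(
            reinterpret_cast<const char*>(header) + 124, nullptr, 8);

        std::cout << name << " (" << size << " bytes)\n";

        // Advance past the header and the padded data block.
        offset += 512 + ((size + 511) / 512) * 512;
    }
}

From each entry's offset and size you can then copy that file's bytes straight out of your decompressed buffer.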
I have uncompressed .avi video files. However, they come with a known 2048 kB header at the beginning of the file, so no video program (e.g. VLC player) recognises them as video files. Can I force VLC to ignore the first 2048 kB? How?
What would be the easiest way to remove this header?
Unfortunately, I know of no way to get VLC to use a larger "probesize" or to simply skip ahead to byte X.
I also know of no way to add some "camouflage" or even an empty chunk to the front of an AVI container. The next approach could be to generate a reference movie, but to the best of my knowledge there are no tools to do that automatically either. And since there is also no way to simply "set or change" the start offset of a file, we have to copy it.
My recommendation for copying parts of files is this SourceForge project: sfk (Swiss File Knife)
https://sourceforge.net/projects/swissfileknife/?source=typ_redirect
The project page has some usage examples.
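If you'd rather strip the header in your own code instead of a command-line tool, the copy itself is only a few lines. A minimal C++ sketch, assuming the 2048 kB header means exactly 2048 * 1024 bytes (file names are placeholders):

#include <fstream>
#include <iostream>

int main()
{
    // Assumption: the known header is exactly 2048 * 1024 bytes.
    const std::streamoff headerSize = 2048 * 1024;

    // Placeholder file names.
    std::ifstream in("input_with_header.avi", std::ios::binary);
    std::ofstream out("output.avi", std::ios::binary);
    if (!in || !out)
    {
        std::cerr << "could not open files\n";
        return 1;
    }

    // Skip the header, then copy the rest of the file verbatim.
    in.seekg(headerSize, std::ios::beg);
    out << in.rdbuf();
    return 0;
}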
Is it possible to run ffmpeg from the command line so that it either places the 'moov atom' metadata at the beginning of the MP4 file, or runs qt-faststart as a post-processing operation, so the generated file is streamable over the internet?
I can of course run it as a separate command, but would prefer it to be something like
an option within ffmpeg, or
part of a post-conversion, command-line option when converting the video files via ffmpeg
Edit 1
http://ffmpeg.org/ffmpeg.html#mov
MOV / MP4 muxer
The muxer options are:
‘-moov_size bytes’
Reserves space for the moov atom at the beginning of the file instead of
placing the moov atom at the end. If the space reserved is insufficient,
muxing will fail.
Seems like faststart support has been included in ffmpeg. FFmpeg Formats Documentation:
-movflags faststart
Run a second pass moving the moov atom on top of the file. This
operation can take a while, and will not work in various situations
such as fragmented output, thus it is not enabled by default.
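For example, remuxing an already encoded file without re-encoding and moving the moov atom to the front would look something like this (file names are placeholders):
ffmpeg -i input.mp4 -c copy -movflags faststart output.mp4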
I ended up setting up and running qt-faststart after the ffmpeg conversion process.
ffmpeg has an option for smooth streaming
-movflags isml+frag_keyframe
and it is also useful for avoiding corrupt videos when power is lost during recording
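In a full command that could look something like the following (the -f ismv output format and the file names are just an example):
ffmpeg -i input.mp4 -c copy -movflags isml+frag_keyframe -f ismv output.ismv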
Is there a way (or any kind of hack) to read input data from compressed files?
My input consists of a few hundred files, which are produced gzip-compressed, and decompressing them is somewhat tedious.
Reading from compressed text sources is now supported in Dataflow (as of this commit). Specifically, files compressed with gzip and bzip2 can be read from by specifying the compression type:
TextIO.Read.from(myFileName).withCompressionType(TextIO.CompressionType.GZIP)
However, if the file has a .gz or .bz2 extension, you don't have to do anything: the default compression type is AUTO, which examines file extensions to determine the correct compression type for a file. This even works with globs, where the files that result from the glob may be a mix of .gz, .bz2, and uncompressed.
The slower performance with my workaround was most likely because Dataflow was putting most of the files in the same split, so they weren't being processed in parallel. You can try the following to speed things up.
Create a PCollection for each file by applying the Create transform multiple times (each time to a single file).
Use the Flatten transform to create a single PCollection containing all the files from PCollections representing individual files.
Apply your pipeline to this PCollection.
I also found that for files residing in Google Cloud Storage, setting the content type and content encoding appears to "just work" without the need for a workaround.
Specifically - I run
gsutil -m setmeta -h "Content-Encoding:gzip" -h "Content-Type:text/plain" <path>
I just noticed that specifying the compression type is now available in the latest version of the SDK (v0.3.150210). I've tested it, and was able to load my GZ files directly from GCS to BQ without any problems.
In my app I want the user to be able to download offline map content.
So I moved all my tiles into a zip file (using compression level 0, i.e. store only).
The structure is {z}/{x}/{y}.jpg:
+0
+-0
+--0.jpg
+1
+-1
+--0.jpg
+2
+-2
+--1.jpg
So basically there are going to be very many files for zoom levels 0-15 (about 120,000 tiles for my test region).
I am using https://github.com/mattconnolly/ZipArchive now, but I also tried https://github.com/soffes/ssziparchive before, and both are pretty slow. It takes about 5(!) minutes on my iPhone 5S for the files to unzip.
Is there any way I can speed things up? What other possibilities would there be, rather than downloading the tiles in one big zip file?
Edit:
How can I quickly download the contents of the whole folder to my iPhone without having to unzip anything?
Any help is appreciated!
JPGs rarely compress at all with zip - they are by definition already compressed. What you should do is create your own binary file format, and put whatever metadata you need into it along with the images (which you should encode with a really low quality number, to get their size down).
When you download those files, you can open them, quickly read them into memory, and extract data or images as needed.
This will be really fast and have virtually no overhead if your extra data is binary (not text).
PS: I just stumbled upon a PHP Plist class
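To make that suggestion concrete, here is a minimal sketch of such a pack format in C++ (usable from an iOS app via Objective-C++). The record layout and all names are made up; it assumes the pack file was written with exactly the same struct layout and byte order:

#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include <tuple>
#include <vector>

// Made-up pack layout: a uint32 entry count, then fixed-size index records
// (z, x, y, offset, length), then all the raw JPEG bytes back to back.
// The writer must produce exactly this struct layout and byte order.
struct TileRecord
{
    uint32_t z, x, y;
    uint64_t offset;   // position of the JPEG data within the pack file
    uint64_t length;   // size of the JPEG data in bytes
};

class TilePack
{
public:
    explicit TilePack(const std::string& path) : file_(path, std::ios::binary)
    {
        // Load the whole index into memory once; a lookup is then just a map
        // find plus one seek/read, with no per-tile unpacking.
        uint32_t count = 0;
        file_.read(reinterpret_cast<char*>(&count), sizeof(count));
        for (uint32_t i = 0; i < count; ++i)
        {
            TileRecord r{};
            file_.read(reinterpret_cast<char*>(&r), sizeof(r));
            index_[std::make_tuple(r.z, r.x, r.y)] = r;
        }
    }

    // Returns the raw JPEG bytes for one tile, or an empty vector if missing.
    std::vector<char> tile(uint32_t z, uint32_t x, uint32_t y)
    {
        auto it = index_.find(std::make_tuple(z, x, y));
        if (it == index_.end())
            return {};
        std::vector<char> data(it->second.length);
        file_.seekg(static_cast<std::streamoff>(it->second.offset));
        file_.read(data.data(), static_cast<std::streamsize>(data.size()));
        return data;
    }

private:
    std::ifstream file_;
    std::map<std::tuple<uint32_t, uint32_t, uint32_t>, TileRecord> index_;
};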
If anyone is wondering how I ended up solving this:
For my use case (map tiles) I am now using MBTiles instead of zipped images. It's one big database file and super easy to read using FMDB. No unpacking whatsoever needed...
Even when I placed the images all in one binary file without any compression, the "extracting" still took forever!
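For reference, reading a tile out of an MBTiles file is a single SQL query against its tiles table; FMDB is just an Objective-C wrapper around SQLite. A minimal sketch using the raw SQLite C API (the function name is mine):

#include <vector>
#include <sqlite3.h>

// Reads one tile's image bytes from an MBTiles database.
// Note: MBTiles stores rows in the TMS scheme, so a typical XYZ row index
// has to be flipped first: tms_row = (1 << zoom) - 1 - y.
std::vector<char> readTile(sqlite3* db, int zoom, int column, int row)
{
    std::vector<char> blob;
    const char* sql =
        "SELECT tile_data FROM tiles "
        "WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?";

    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return blob;

    sqlite3_bind_int(stmt, 1, zoom);
    sqlite3_bind_int(stmt, 2, column);
    sqlite3_bind_int(stmt, 3, row);

    if (sqlite3_step(stmt) == SQLITE_ROW)
    {
        const void* data = sqlite3_column_blob(stmt, 0);
        int size = sqlite3_column_bytes(stmt, 0);
        if (data != nullptr && size > 0)
            blob.assign(static_cast<const char*>(data),
                        static_cast<const char*>(data) + size);
    }
    sqlite3_finalize(stmt);
    return blob;
}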
I was wondering how you inject metadata with cue points into an F4V file. I've read somewhere that it's done either during encoding or with a custom ActionScript that embeds it when the file runs.
An F4V file is merely a renamed MP4 file. By and large, any tools, tips and technologies that work on MP4 files will do so for F4V files.
Seeking into MP4 files is non-trivial, and much more difficult than with FLV files, which I assume you are thinking of. (But maybe I am wrong?)
That said, the metadata you are after is probably already in the MP4, in the MOOV atom. (MP4 files are composed of atoms; the MOOV atom is the metadata atom.) There is probably no need to inject it. But to get quick starts and have a player be able to seek throughout a file, you need to have the MOOV atom at the front of the file. There are tools to do this on existing files, and it can also be done when encoding the file.
I've never heard of AS doing any of this.
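If you want to see that structure for yourself, walking the top-level atoms only takes a few lines, since each atom starts with a 4-byte big-endian size and a 4-character type code. A minimal C++ sketch (the function name is mine; nested atoms are not descended into):

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

// Prints the type and size of each top-level atom in an MP4/F4V file,
// so you can see whether 'moov' sits before or after 'mdat'.
void listTopLevelAtoms(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    unsigned char head[8];

    while (in.read(reinterpret_cast<char*>(head), 8))
    {
        // 4-byte big-endian size, then a 4-character type code.
        uint64_t size = (uint64_t(head[0]) << 24) | (uint64_t(head[1]) << 16) |
                        (uint64_t(head[2]) << 8) | uint64_t(head[3]);
        std::string type(reinterpret_cast<char*>(head) + 4, 4);
        uint64_t headerBytes = 8;

        if (size == 1)
        {
            // A size of 1 means the real size follows as a 64-bit field.
            unsigned char ext[8];
            if (!in.read(reinterpret_cast<char*>(ext), 8))
                break;
            size = 0;
            for (int i = 0; i < 8; ++i)
                size = (size << 8) | ext[i];
            headerBytes = 16;
        }
        else if (size == 0)
        {
            break;  // a size of 0 means the atom extends to the end of the file
        }

        std::cout << type << " (" << size << " bytes)\n";

        // Skip the atom's payload to land on the next top-level atom.
        in.seekg(static_cast<std::streamoff>(size - headerBytes), std::ios::cur);
    }
}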