Let's say you have 3 SWF files in a directory:
/game/assets/
1.swf
2.swf
3.swf
What I need to do is package these up into a SWC file, and then move that SWC file to the libs/ directory.
I plan to use Ant, so this step must always occur before the compilation stage.
Today I use a VBS file to generate an XML file. Then I use that XML file to generate an AssetMap, which is a series of [Embed]s (1.swf, 2.swf, 3.swf) exposed as ByteArrays.
I then pass these byte arrays to Loader.loadBytes() to generate a MovieClip.
But this "real-time ByteArray conversion" is far too slow. I'd prefer to have direct access to instances, as I do with a SWC.
Can anyone offer me advice?
Given I have a list of files, e.g. foo/src/main.cpp, foo/src/bar.cpp, foo/README.md, is it possible to determine which of those files are part of a Bazel package?
In my example the output would be foo/src/main.cpp, foo/src/bar.cpp, since the README.md is not part of the build.
One way to do this would be to call bazel query on each file and see if it produces any output, but that is quite inefficient, so I was wondering if there is an easier way.
Background: I am trying to determine whether changes in a set of files have an impact on a target, and I want to use bazel query somepath(//some/target, set($FILES)) for that, but this will fail if any of the files in $FILES is not part of a BUILD file.
How about flipping it around and querying for all the source files of the target with:
bazel query 'kind("source file", deps(//some:target))'
and then checking whether the result contains any of the files in your set.
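A minimal sketch of that check in Python, assuming bazel is on the PATH and that the returned labels map one-to-one onto workspace-relative paths (the target and file names are just the ones from the question):

import subprocess

def source_files(target):
    # Query every source file the target depends on.
    out = subprocess.run(
        ["bazel", "query", f'kind("source file", deps({target}))'],
        capture_output=True, text=True, check=True,
    ).stdout
    paths = set()
    for label in out.splitlines():
        # Labels look like //foo/src:main.cpp; convert to foo/src/main.cpp.
        if label.startswith("//"):
            paths.add(label[2:].replace(":", "/"))
    return paths

changed = {"foo/src/main.cpp", "foo/src/bar.cpp", "foo/README.md"}
print(changed & source_files("//some:target"))  # README.md drops out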
I've got a Python program under active development, which uses gettext for translation.
I've got a .POT file with translations, but it is slightly out of date. I've got a script to generate an up-to-date .PO file. Is there a way to check how much of the new .PO file is covered by the .POT file?
I've got a .POT file with translations, but it is slightly out of date. I've got a script to generate an up-to-date .PO file
I think you mean the other way around. POT files are generated from your source code with PO files containing the translations.
Is there a way to check how much of the new .PO file is covered by the .POT file?
The Gettext command-line msgmerge program can be used for syncing your out-of-date PO files with your latest source strings. To create a new PO file from an updated POT, you would issue this command:
msgmerge old.po new.pot > updated.po
The new file will contain all the existing translations that are still valid and add any new source strings. Open it in your favourite PO editor and you should see how many strings now remain untranslated.
Update
As pointed out in the comments, you can see how many strings remain untranslated with the "statistics" option of the msgfmt program (normally used for compiling to .mo), e.g.
msgfmt --statistics updated.po
Or without bothering with the interim file:
msgmerge old.po new.pot | msgfmt --statistics -
This would produce a synopsis like:
123 translated messages, 77 untranslated messages.
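If you would rather get at the numbers from Python itself, a small sketch using the third-party polib package (an assumption; it is installed separately and is not part of gettext) would be:

import polib  # third-party: pip install polib

po = polib.pofile("updated.po")
print(po.percent_translated(), "% translated")
print(len(po.untranslated_entries()), "untranslated messages")
print(len(po.fuzzy_entries()), "fuzzy messages")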
I have folders that contain XML that I need to index in Sphinx. I explored the xmlpipe2 driver, and my understanding is that it only reads XML generated from a script, i.e. one source. Is there a way to index a folder of XML files if I don't have the option of putting them in a single XML file?
An xmlpipe2 script is just a script that outputs XML, which Sphinx then ingests.
It doesn't matter WHERE that script gets the data it outputs.
It could get it from other XML files: the script would just walk the folder structure, read all the files, and output XML.
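As a rough sketch, such a script in Python could look like the following. The folder path and the single content field are assumptions, so adapt the schema to your data, then point your source's xmlpipe_command at the script:

import os
from xml.sax.saxutils import escape

ROOT = "/path/to/xml/folder"  # assumption: the folder tree holding your XML files

print('<?xml version="1.0" encoding="utf-8"?>')
print("<sphinx:docset>")
print('<sphinx:schema><sphinx:field name="content"/></sphinx:schema>')

doc_id = 0
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in sorted(filenames):
        if not name.endswith(".xml"):
            continue
        doc_id += 1
        with open(os.path.join(dirpath, name), encoding="utf-8") as f:
            body = f.read()
        # Each source file becomes one xmlpipe2 document.
        print('<sphinx:document id="%d">' % doc_id)
        print("<content>%s</content>" % escape(body))
        print("</sphinx:document>")

print("</sphinx:docset>")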
I have some compression components (like KAZip, JVCL, zLib) and know exactly how to use them to compress files, but I want to compress multiple folders into one single archive and keep the folder structure after extraction. How can I do it?
In all those components I can only give a list of files to compress; I cannot give the folder structure for extraction. There is no way (or I couldn't find one) to specify where every file must be extracted to:
I have a file named myText.txt in folder FOLDER_A and a file with the same name myText.txt in folder FOLDER_B:
|
|__________ FOLDER_A
| |________ myText.txt
|
|__________ FOLDER_B
| |________ myText.txt
|
I can give a list of files to compress: myList(myText.txt, myText.txt), but I can't give the structure for uncompressing the files. What is the best way to record which file belongs to which folder?
The zip format just does not have folders. Well, it kind of does, but they are empty placeholders, only inserted if you need metadata storage like user access rights. Other than those rather rare advanced things, there is no need for folders at all. What really happens - and what you can observe by opening a zip file in Notepad and scrolling to the end - is that each file has its path stored with it, starting from the "archive root". In your example the zip file should have two entries (two files):
FOLDER_A/myText.txt
FOLDER_B/myText.txt
Note that the separators used are forward slashes, common in the UNIX world, not the backslashes used in the DOS/Windows world. Some libraries will fix backslashes for you, some will not - just do your tests.
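You can inspect those stored entry names directly; for instance, a quick check in Python (the archive name is an assumption):

import zipfile

with zipfile.ZipFile("project.zip") as zf:
    # Entry names carry the relative path, with forward slashes.
    print(zf.namelist())  # ['FOLDER_A/myText.txt', 'FOLDER_B/myText.txt']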
Now, let's assume that tree is contained in D:\TEMP\Project - just for example.
D:\TEMP\Project\FOLDER_A\myText.txt
D:\TEMP\Project\FOLDER_B\myText.txt
There are two more questions (other than path separators): are there more folders within D:\TEMP\Project\ that should be ignored rather than zipped (like maybe D:\TEMP\Project\FOLDER_C\*.*)? And does your zip library have a direct API to pack a folder with all its internal subfolders and files, or should you do it file by file?
You should ask yourself those three questions and check them while choosing the library; the code drafts would be somewhat different for each answer.
Now let's start drafting for the libraries themselves:
The default variant is just using Delphi itself.
Enumerate the files in the folder: http://docwiki.embarcadero.com/CodeExamples/XE3/en/DirectoriesAndFilesEnumeraion_(Delphi)
If that enumeration results in absolute paths, then strip the common D:\TEMP\Project from the beginning: something like If AnsiStartsText('D:\TEMP\Project\', filename) then Delete(filename, 1, Length('D:\TEMP\Project\'));. You should get paths relative to the chosen containing folder, especially if you do not compress the whole tree and leave some FOLDER_C out of the archive.
Maybe you should also call StringReplace to change '\' into '/' in the filenames.
Then you can zip them using http://docwiki.embarcadero.com/Libraries/XE2/en/System.Zip.TZipFile.Add - take care to specify a correct relative ArchiveFileName, like the aforementioned FOLDER_A/myText.txt. A sketch of the whole flow follows below.
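To make those steps concrete, here is the same recipe sketched with Python's zipfile module; it is an illustration of the flow, not Delphi code, and the root path is the example one from above:

import os
import zipfile

ROOT = r"D:\TEMP\Project"  # the common prefix to strip

with zipfile.ZipFile("project.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, ROOT)   # strip D:\TEMP\Project\
            arcname = rel.replace(os.sep, "/")  # forward slashes in the entry name
            zf.write(full, arcname)             # stored as e.g. FOLDER_A/myText.txt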
You can use the ZipMaster library. It is very VCL-bound and may cause trouble with threads or DLLs, but for simple applications it just works: http://www.delphizip.org/
The latest version's page has links to a "setup" package containing sources, help and demos. Among the demos there is a full-featured archive browser capable of storing folders, so you can read the code directly from it: http://www.delphizip.org/191/v191.html
You talked about JVCL; that means you already have the JEDI Code Library installed. JCL comes with a class and function that, judging by its name, can directly do what you want: function TJclSevenzipCompressArchive.AddDirectory(const PackedName: WideString; const DirName: string = ''; RecurseIntoDir: Boolean = False; AddFilesInDir: Boolean = False): Integer;
Actually, all those libraries are rather similar at a basic level. When I made an XLSX exporter, I wrote a uniform zipping API that works the same no matter which actual zipping engine is installed. It works with in-memory TStreams rather than on-disk files, so it would not help you directly, but I learned that, apart from a few quirks (like instant vs. postponed zipping), at ground level all those libraries work the same.
I am using tar tf to list the contents of a tar.gz file. It is pretty large, ~1 GB. There are around 1000 files organized in a year/month/day directory structure.
The listing operation takes quite a bit of time. It seems like a listing should be fast. Can anyone enlighten me on the internals?
Take a look at Wikipedia, for example, to verify that each file inside the tar is preceded by a header. To list all the files inside the tar, it is necessary to read the whole archive. There is no "index" at the beginning of the tar to indicate its contents. On top of that, a .tar.gz is gzip-compressed as a single stream, so listing also means decompressing the entire file.
Tar has a simple file structure: if you want to list the files, you must parse the whole archive.
If you only want to find one file, you can stop the process early, but you must be sure the archive contains only one version of that file. This is typical for compressed archives, because appending to them is unsupported.
For example, you can do it like this:
tar tvzf somefile.tar.gz | grep 'something' | \
while read file; do foundfile="$file"; break; done
This loop will break early and will not read everything, only from the start to the file's position.
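The same early stop can be sketched with Python's tarfile module in streaming mode; the archive name and search string are assumptions:

import tarfile

# "r|gz" opens the archive as a forward-only stream, so nothing
# past the point where we stop iterating is ever decompressed.
with tarfile.open("somefile.tar.gz", "r|gz") as tar:
    for member in tar:
        if "something" in member.name:
            print("found:", member.name, member.size)
            break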
If you must do something more with the list, save it to a temporary file. You can gzip this file to save space if needed:
tar tvzf somefile.tar.gz | gzip > temporary_filelist.gz