How do I format using standardjs in coc.nvim?

I need to use StandardJS for linting a TypeScript project's files, and I use coc.nvim. I've read multiple SO posts and articles about setting up coc.nvim with ESLint to lint on file save, but I cannot find anything about StandardJS.
How can I get coc.nvim to automatically format the JS and TS files I write after saving?

Related

Is there any way I can avoid reading old files from an old folder with Apache Beam's TextIO watchForNewFiles(Duration, condition)?

Use case: at Dataflow job startup we provide an initial file name to read data from, and later on the job should watch for new files in that directory, considering all remaining old files as already read.
Issues:
Approach 1:
PCollection<String> readfile = pipeline.apply(TextIO.read()
    .from("gs://folder-Name/*")
    .watchForNewFiles(
        Duration.standardSeconds(10),
        Watch.Growth.afterTimeSinceNewOutput(Duration.standardSeconds(30))));
Used like this, the job treats the old files as new and reads every file in the folder.
Approach 2:
PCollection<String> readfile = pipeline.apply(TextIO.read()
    .from("gs://folder-Name/file-name")
    .watchForNewFiles(
        Duration.standardSeconds(10),
        Watch.Growth.afterTimeSinceNewOutput(Duration.standardSeconds(30))));
This reads only that particular file and never picks up the new files that arrive later.
Can anyone suggest an approach that achieves this use case?
The watchForNewFiles() function always reads all files matching the filepattern, both existing and new. In your second approach the file pattern matches only one file, so that is all you get.
However, you can use the lower-level building-block transforms in FileIO to accomplish what you need. The following code reads only files written after the pipeline starts:
PCollection<String> lines = p
    .apply(FileIO.match()
        .filepattern("gs://folder-Name/*")
        .continuously(
            Duration.standardSeconds(30),
            Watch.Growth.afterTimeSinceNewOutput(Duration.standardHours(1))))
    .setCoder(MetadataCoderV2.of())
    // PIPELINE_START: epoch millis captured when the pipeline is constructed
    .apply(Filter.by((MatchResult.Metadata metadata) -> metadata.lastModifiedMillis() > PIPELINE_START))
    .apply(FileIO.readMatches())
    .apply(TextIO.readFiles());
You can change the details of the Filter transform to whatever precise condition you need. To also include specific older files, you can read those with a standard TextIO.read().from(...) and then use Flatten to combine that PCollection with the continuous set, like this:
PCollection<String> allLines =
    PCollectionList.of(lines)
        .and(p.apply(TextIO.read().from("gs://folder-Name/file-name")))
        .apply(Flatten.pCollections());
Maybe you need to clarify your use case: do you provide a file name to read, or a file pattern? How many files are expected? Do you really need a Dataflow streaming pipeline, or would a Cloud Function answer your need? And what exactly is your issue? That files get read again when you restart your pipeline?
You can, as suggested by danielm, use FileIO to fetch and filter on file metadata in order to know which files were added after the pipeline began.
If you provide a file pattern, every matching file will be read once by the pipeline. There is no way to keep state between pipelines unless you code it yourself, so when you restart the pipeline it will read all the files matching the pattern again.
If you want to avoid that, you can manually move the old files to another path between stopping the old pipeline and starting the new one.
You could also consider consuming GCS notifications on file creation with PubsubIO, and using those events to know which files to process in your pipeline.
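For illustration, here is a rough sketch of that notification-driven approach with Beam's Java SDK. It assumes you have already wired a GCS notification to a Pub/Sub topic (for example with gsutil notification create -t files-topic -f json gs://folder-Name) and created a subscription for it; the project, topic, and subscription names below are made up for the example:
PCollection<String> newLines = p
    // Each Pub/Sub message is a GCS notification whose attributes describe the object.
    .apply(PubsubIO.readMessagesWithAttributes()
        .fromSubscription("projects/my-project/subscriptions/files-sub"))
    // Keep only "new object created" events.
    .apply(Filter.by((PubsubMessage msg) ->
        "OBJECT_FINALIZE".equals(msg.getAttribute("eventType"))))
    // Turn each notification into the gs:// path of the new file.
    .apply(MapElements.into(TypeDescriptors.strings())
        .via((PubsubMessage msg) ->
            "gs://" + msg.getAttribute("bucketId") + "/" + msg.getAttribute("objectId")))
    // Read just those files.
    .apply(FileIO.matchAll())
    .apply(FileIO.readMatches())
    .apply(TextIO.readFiles());
Because the pipeline only ever sees files announced through notifications, a restart does not re-read the files that were already in the bucket.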
A good practice, though, is to have multiple folders that reflect the status of the files:
input
processing
failed
succeed
This way you know the state of each file. You put the files to process in the input folder, and inside your pipeline you move each file to the folder corresponding to its state, as sketched below.
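As a hedged sketch of that pattern (the bucket layout and the assumption that elements carry plain file names are mine, for the example), a DoFn at the end of the pipeline could move each fully processed file from input to succeed with Beam's FileSystems API:
import java.io.IOException;
import java.util.Collections;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.MoveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.transforms.DoFn;

class MarkFileSucceededFn extends DoFn<String, String> {
  @ProcessElement
  public void processElement(@Element String fileName, OutputReceiver<String> out)
      throws IOException {
    // Resolve the current and target locations of the processed file.
    ResourceId src = FileSystems.matchNewResource("gs://folder-Name/input/" + fileName, false);
    ResourceId dst = FileSystems.matchNewResource("gs://folder-Name/succeed/" + fileName, false);
    // Move the file into the folder that reflects its new state.
    FileSystems.rename(
        Collections.singletonList(src),
        Collections.singletonList(dst),
        MoveOptions.StandardMoveOptions.IGNORE_MISSING_FILES);
    out.output(fileName);
  }
}
A failure branch could do the same rename into the failed folder instead.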

Is there a way to determine the coverage of a .PO file?

I've got a Python program under active development, which uses gettext for translation.
I've got a .POT file with translations, but it is slightly out of date. I've got a script to generate an up-to-date .PO file. Is there a way to check how much of the new .PO file is covered by the .POT file?
"I've got a .POT file with translations, but it is slightly out of date. I've got a script to generate an up-to-date .PO file."
I think you mean the other way around: POT files are generated from your source code, while PO files contain the translations.
"Is there a way to check how much of the new .PO file is covered by the .POT file?"
The Gettext command-line msgmerge program can be used to sync your out-of-date PO files with your latest source strings. To create a new PO file from an updated POT, you would issue this command:
msgmerge old.po new.pot > updated.po
The new file will contain all the existing translations that are still valid, plus any new source strings. Open it in your favourite PO editor and you will see how many strings remain untranslated.
Update
As pointed out in the comments, you can see how many strings remain untranslated with the --statistics option of the msgfmt program (normally used for compiling to .mo), e.g.
msgfmt --statistics updated.po
Or without bothering with the interim file:
msgmerge old.po new.pot | msgfmt --statistics -
This would produce a synopsis like:
123 translated messages, 77 untranslated messages.

Use LDoc to generate documentation for a whole Lua project with an index page

I want to generate documentation for my Lua project, but with LDoc I generate the docs for each single Lua file, and the output overwrites the index.html file every time.
So my question is: how can I generate documentation for the whole project, with an index page that links to all the pages?
I tried to do that with the see tag, but I don't know whether it can reference another file rather than another part of the same document.
I used this:
ldoc.lua.bat pathtomyproject/filename.lua
The output goes to the default path myluainstallationpath/doc/index.html.
Try ldoc.lua.bat pathtomyproject instead. This will generate the docs for all the files in pathtomyproject, along with an index.html that links to each file in that folder.

Printing out Javadocs

Something I've had a good hard look for, and have not been able to find, is how to efficiently obtain a hard copy of Javadocs. Obviously, one solution is simply to navigate to each page and execute a browser print, but there's got to be a better way! Do you guys have any ideas?
You can use DocBook Doclet (dbdoclet) to create DocBook XML from your Javadoc comments. The DocBook XML can then be transformed to PDF or (single-page) HTML.
You can call the tool from the command line. Point it at your sources and it will generate the DocBook XML, similar to how the javadoc command generates the Javadoc HTML. Example:
./dbdoclet -sourcepath ~/my-java-program/src/main/java -subpackages org.example
The result is a DocBook XML file in a dbdoclet subdirectory, which can then be used to create a PDF or HTML file. This can also be done from the command line; I am using the docbkx-maven-plugin for this.
You can do mass conversions with it, but it will take some time to make it work the way you want.

How can I generate a SWC from asset files dynamically?

Let's say you have 3 SWF files in a directory:
/game/assets/
1.swf
2.swf
3.swf
What I need to do is package these up into a SWC file, and then move that SWC file to the libs/ directory.
I plan to use Ant, so this step must always occur before the compilation stage.
Today I use a VBS file to generate an XML file, then use that XML file to generate an AssetMap, which is a series of [Embed]s (1.swf, 2.swf, 3.swf) exposed as ByteArrays.
I then pass these byte arrays to Loader.loadBytes() to generate a MovieClip.
But this real-time ByteArray conversion is far too slow. I'd prefer to have direct access to instances, like I do with a SWC.
Can anyone offer me advice?
