TypeScript localization issues

If I declare a string in TypeScript and set it to some Russian text, the TypeScript compiler emits JavaScript with no regard to the encoding I use in the solution. So instead of normal letters I get question marks in diamond shapes (the Unicode replacement character).
Do you know how to fix this?

By default, TypeScript files added through Visual Studio are created with ANSI encoding (or rather, Visual Studio creates ANSI files by default).
To fix this, open the file in Notepad, choose "Save As", and change the encoding to UTF-8 or Unicode.

You need to change your file encoding to UNICODE.
Compiling:
var x = "привет мир";
class foo {
    public done() {
        return "привет мир";
    }
}
Gave the following JS for me:
var x = "привет мир";
var foo = (function () {
    function foo() { }
    foo.prototype.done = function () {
        return "привет мир";
    };
    return foo;
})();

I have the same issue when I use cscript to run the tsc.js compiler.
I've found that the input file for tsc must be UTF-8 with a signature (BOM) or UTF-16.
If I run tsc.js with Node.js, the input file can be UTF-8 even without a signature and everything works.
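If many files need converting, the re-encoding can be scripted rather than done one by one in Notepad. A minimal sketch in Java, assuming the files were saved as windows-1251 (the typical "ANSI" code page for Russian text; the class name is illustrative):

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ToUtf8Bom {
    public static void main(String[] args) throws IOException {
        Path src = Paths.get(args[0]);
        // Assumption: the file was saved as windows-1251 ("ANSI" on a Russian
        // locale). Adjust the charset to whatever your editor actually used.
        String text = new String(Files.readAllBytes(src), Charset.forName("windows-1251"));
        // Prepend the byte order mark (U+FEFF) so that tools which require a
        // signature, like tsc.js run under cscript, accept the file.
        Files.write(src, ("\uFEFF" + text).getBytes(StandardCharsets.UTF_8));
    }
}
```

Run it once per .ts file before compiling; the file is rewritten in place as UTF-8 with a BOM.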

Include file as a string in Dart

In Dart is there a way to include a file as a string, similar to Rust's include_str macro?
I do not want to load the string at runtime from a file or asset.
Not with a similar function.
However, there is a workaround I sometimes use that is a bit different from what you are looking for but might suit your needs:
File translated.dart (presumably generated):
const translatedHelloWorld = "Hola mundo\n";
In your main file:
import 'translated.dart';

void main() {
  assert(translatedHelloWorld == "Hola mundo\n");
}
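The "presumably generated" file can be produced by any small script as part of the build. A hypothetical generator sketch in Java (the class name and the escaping rules shown are illustrative; the constant name matches the answer above):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class GenerateTranslation {
    public static void main(String[] args) throws IOException {
        // args[0] = raw text file to embed, args[1] = Dart file to generate.
        String text = new String(Files.readAllBytes(Paths.get(args[0])), StandardCharsets.UTF_8);
        // Escape characters that would break a Dart string literal.
        String escaped = text.replace("\\", "\\\\")
                             .replace("\"", "\\\"")
                             .replace("\n", "\\n")
                             .replace("$", "\\$");
        Files.write(Paths.get(args[1]),
                ("const translatedHelloWorld = \"" + escaped + "\";\n")
                        .getBytes(StandardCharsets.UTF_8));
    }
}
```

Re-running the generator whenever the source text changes keeps the embedded string in sync without any runtime file loading.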

CoreNLP : provide pos tags

I have text that is already tokenized, sentence-split, and POS-tagged.
I would like to use CoreNLP to additionally annotate lemmas (lemma), named entities (ner), constituency and dependency parses (parse), and coreference (dcoref).
Is there a combination of commandline options and option file specifications that makes this possible from the command line?
According to this question, I can ask the parser to view whitespace as delimiting tokens, and newlines as delimiting sentences by adding this to my properties file:
tokenize.whitespace = true
ssplit.eolonly = true
This works well, so all that remains is to tell CoreNLP that I would like to provide the POS tags too.
When using the Stanford Parser on its own, it seems to be possible to have it use existing POS tags, but copying that syntax into the CoreNLP invocation doesn't seem to work. For example, this does not work:
java -cp *:./* -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props my-properties-file -outputFormat xml -outputDirectory my-output-dir -sentences newline -tokenized -tagSeparator / -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerMethod newCoreLabelTokenizerFactory -file my-annotated-text.txt
While this question covers programmatic invocation, I'm invoking CoreNLP from the command line as part of a larger system, so I'm really asking whether it's possible to achieve this with command-line options.
I don't think this is possible with command-line options.
If you are willing to write a custom annotator and include it in your pipeline, you could go that route.
Here is some sample code:
package edu.stanford.nlp.pipeline;

import edu.stanford.nlp.ling.*;

import java.util.*;

public class ProvidedPOSTaggerAnnotator {

    public String tagSeparator;

    public ProvidedPOSTaggerAnnotator(String annotatorName, Properties props) {
        tagSeparator = props.getProperty(annotatorName + ".tagSeparator", "_");
    }

    public void annotate(Annotation annotation) {
        for (CoreLabel token : annotation.get(CoreAnnotations.TokensAnnotation.class)) {
            int tagSeparatorSplitLength = token.word().split(tagSeparator).length;
            String posTag = token.word().split(tagSeparator)[tagSeparatorSplitLength - 1];
            String[] wordParts = Arrays.copyOfRange(token.word().split(tagSeparator), 0, tagSeparatorSplitLength - 1);
            String tokenString = String.join(tagSeparator, wordParts);
            // set the word with the POS tag removed
            token.set(CoreAnnotations.TextAnnotation.class, tokenString);
            // set the POS
            token.set(CoreAnnotations.PartOfSpeechAnnotation.class, posTag);
        }
    }
}
This should work if you provide your tokens with POS tags attached, separated by "_". You can change the separator with the forcedpos.tagSeparator property.
If you add
customAnnotator.forcedpos = edu.stanford.nlp.pipeline.ProvidedPOSTaggerAnnotator
to the properties file, include the above class in your CLASSPATH, and then include "forcedpos" in your list of annotators after "tokenize", you should be able to pass in your own POS tags.
I may clean this up some more and actually include it in a future release!
I have not had time to test this code, so if you try it out and find errors please let me know and I'll fix it!
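The word/tag splitting logic the annotator relies on can be checked in isolation, without any CoreNLP classes. A small standalone sketch (the class and method names are illustrative):

```java
import java.util.Arrays;

public class SplitWordTag {
    // Split "word_TAG" into its surface form and POS tag, keeping any
    // separators that occur inside the word itself (e.g. "New_York_NNP").
    static String[] split(String wordWithTag, String tagSeparator) {
        String[] parts = wordWithTag.split(tagSeparator);
        String posTag = parts[parts.length - 1];
        String word = String.join(tagSeparator,
                Arrays.copyOfRange(parts, 0, parts.length - 1));
        return new String[]{ word, posTag };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(split("dogs_NNS", "_")));
    }
}
```

Note that only the last separator counts as the word/tag boundary, which is why a multi-part token like "New_York_NNP" still yields the word "New_York".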

How to change macro escape char in Apache Velocity

I'm using Apache Velocity in front of LaTeX. The # and $ escape characters conflict with LaTeX. I want to replace # with %% and $ with ## to avoid the conflicts. Simply doing a string replace on the source file is not a good solution because I have to use directives like #parse and #include, and the parsed/included files should also be able to use the modified escape characters. Is there a configuration option for this?
You can use a custom resource loader to modify files loaded by #parse. A sketch (the rewriting of %% and ## back to Velocity's # and $ inside getResourceStream is one possible transformation; adapt it to your needs):
VelocityEngine engine = new VelocityEngine();
Properties props = new Properties();
props.put("resource.loader", "customloader");
props.put("customloader.resource.loader.class", CustomLoader.class.getName());
engine.init(props);

public static class CustomLoader extends FileResourceLoader {
    @Override
    public InputStream getResourceStream(String name) throws ResourceNotFoundException {
        try {
            InputStream original = super.getResourceStream(name);
            // Read the whole template, then translate the custom escape
            // characters back to the ones Velocity expects before parsing.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            int c;
            while ((c = original.read()) != -1) buf.write(c);
            original.close();
            String content = buf.toString("UTF-8").replace("%%", "#").replace("##", "$");
            return new ByteArrayInputStream(content.getBytes("UTF-8"));
        } catch (IOException e) {
            throw new ResourceNotFoundException(e.getMessage());
        }
    }
}

Is there a way to use the latest IE9 MSHTML interfaces from C++ Builder (or Delphi) with TCppWebBrowser?

I have a situation where I would like to use some methods available via the IHTMLDocument7 interface shipped with IE9. In particular the getElementsByTagNameNS() method because I want to work with specific tag types (a lot easier than parsing the whole document).
My current code looks like this:
IHTMLDocument2* doc = NULL;
if (browser->ControlInterface->Document) // make sure TCppWebBrowser is OK
{
    if (SUCCEEDED(browser->ControlInterface->Document->QueryInterface(IID_IHTMLDocument2, (void**)&doc)))
    {
        IHTMLElement* body;
        HRESULT hr = doc->get_body(&body);
        if (SUCCEEDED(hr))
        {
            WideString innerHtml;
            body->get_innerHTML(&innerHtml);
            txtInfo->Text = innerHtml;
            body->Release();
        }
        doc->Release();
    }
}
This works, and may have issues, but I'm most interested in getting the functionality I want right now.
If I change this code to use the new interface available with IE9:
browser->ControlInterface->Document->QueryInterface(IID_IHTMLDocument7, (void**)&doc)
I get the following compiler error:
[BCC32 Error] Unit2.cpp(134): E2451 Undefined symbol 'IID_IHTMLDocument7'
  Full parser context
    Unit2.cpp(129): parsing: void __fastcall TForm2::Button4Click(TObject *)
[BCC32 Error] Unit2.cpp(134): E2285 Could not find a match for 'IUnknown::QueryInterface(undefined,void * *)'
  Full parser context
    Unit2.cpp(129): parsing: void __fastcall TForm2::Button4Click(TObject *)
It appears that the compiler cannot find a match for this interface.
What should I do to make this interface available? I'm guessing the Windows SDK version shipped with BCB may be out of date, or doesn't know about a type library for the IE9 version of MSHTML.
Is there a way to make the appropriate headers available for this interface (IID_IHTMLDocument7) while keeping the TCppWebBrowser control? Or do I need to import a separate ActiveX control?
I am using C++ Builder Starter XE (15.0.3953.35171) on Windows 7 (x64) with IE9.
Use the IHTMLDocument3 interface instead of IHTMLDocument7, or execute JavaScript to return what you need, like:
IHTMLDocument2 *doc = NULL;
IHTMLWindow2 *win;
if (SUCCEEDED(CppWebBrowser->Document->QueryInterface(IID_IHTMLDocument2, (LPVOID*)&doc)))
{
    HRESULT hr = doc->get_parentWindow(&win);
    if (SUCCEEDED(hr))
    {
        BSTR cmd = L"function deltag() { \
            var all = this.document.getElementsByTagName('IMG'); \
            var images = []; \
            for (var a = 0; a < all.length; ++a) { \
                if (all[a].tagName == 'NAME') \
                    images.push(all[a]); \
            } \
            for (var i = 0; i < images.length; ++i) { \
                images[i].parentNode.removeChild(images[i]); \
            } \
        } \
        deltag();";
        VARIANT v;
        VariantInit(&v);
        win->execScript(cmd, NULL, &v);
        VariantClear(&v);
        win->Release();
    }
    doc->Release();
}
IE9 headers are available for download at http://msdn.microsoft.com/en-us/ie/aa740471
I'm not sure how old your BCB linker is; VC 2005's linker requires the KB949009 hotfix to link against the IE9 libs in a debug configuration.
At least in C++Builder 2010, IHTMLDocument7 is not defined in mshtml.h, but IHTMLDocument6 is. If you download an up-to-date SDK from Microsoft, you can copy the IHTMLDocument7 definition directly into your existing code.
Alternatively, try seeing if BCCSDK has been updated to support IHTMLDocument7 yet.

How to generate an HTML report from PartCover results .xml

There is a tool you can use to generate an HTML report:
https://github.com/danielpalme/ReportGenerator
Here you can find an article how to integrate the tool into MSBuild:
http://www.palmmedia.de/Blog/2009/10/30/msbuild-code-coverage-analysis-with-partcover-and-reportgenerator
To my knowledge, there is no convenient tool like NCoverExplorer that can transform a PartCover results .xml file into a .html report, but there are some .xsl files that can be used to transform PartCover's results to .html in CruiseControl.NET: Using CruiseControl.NET with PartCover.
You could take those .xsl files from CruiseControl.NET and convert your PartCover results.xml using something like Sandcastle's XslTransform.exe.
By the way, if this happens to be related to TeamCity, the upcoming 5.0 release will include support for .NET coverage using PartCover or NCover. See the documentation for more information. Otherwise, ignore this paragraph ;-)
The easiest solution is probably to use msxsl, a simple command-line XSLT transformer. I use it for exactly this purpose, and it's easy to integrate into your build system.
http://www.microsoft.com/downloads/details.aspx?FamilyID=2FB55371-C94E-4373-B0E9-DB4816552E41&displaylang=en
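If pulling in msxsl is not an option, the same one-off transform can also be done with the XSLT processor built into the JDK. A minimal sketch (the class name is illustrative):

```java
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslToHtml {
    public static void main(String[] args) throws TransformerException {
        // args[0] = stylesheet (.xsl), args[1] = input (.xml), args[2] = output (.html)
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File(args[0])));
        transformer.transform(new StreamSource(new File(args[1])),
                new StreamResult(new File(args[2])));
    }
}
```

This applies the stylesheet to the results file and writes the rendered HTML, with no dependencies beyond the JDK.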
Maybe a complicated way of doing it, but I did this with the Simian XML report. I created an XSL file for the formatting, then wrote a dumb little console application:
private const string MissingExtension = "Please enter a valid {0} file, this is missing the extension.";
private const string InvalidExtension = "Please enter a valid {0} file, the file provided has an invalid extension.";

public static void Main(string[] args)
{
    if (args.Length < 2)
    {
        System.Console.WriteLine("Please enter a xsl file and xml file full path.");
        return;
    }

    var xslFile = args[0];
    var xmlFile = args[1];

    if (!CheckFileNameFormat(xslFile, false))
        return;

    if (!CheckFileNameFormat(xmlFile, true))
        return;

    var transform = new XslCompiledTransform();

    // Load the XSL stylesheet.
    transform.Load(xslFile);

    // Transform the xml file into an html file using the xsl file provided.
    transform.Transform(xmlFile, xmlFile.Replace("xml", "html"));
}

private static bool CheckFileNameFormat(string fileName, bool isXmlFile)
{
    var extension = isXmlFile ? "xml" : "xsl";

    // validate that the argument has a period
    if (fileName.IndexOf(".") < 1)
    {
        System.Console.WriteLine(string.Format(MissingExtension, extension));
        return false;
    }

    var filePieces = fileName.Split('.');

    // split on the period and make sure the extension is valid
    if (filePieces[filePieces.GetUpperBound(0)] != extension)
    {
        System.Console.WriteLine(string.Format(InvalidExtension, extension));
        return false;
    }

    return true;
}
Then I can call it from an MSBuild file like so:
<Target Name="RunSimian" DependsOnTargets="RebuildSolution">
  <Exec IgnoreExitCode="true" Command="&quot;$(MSBuildProjectDirectory)\..\Build\Packages\Simian\simian-2.2.24.exe&quot; -formatter=xml:$(MSBuildProjectDirectory)\..\Build\Artifacts\simian.xml -language=cs -excludes=$(MSBuildProjectDirectory)\..\Product\Production\**\*.Designer.cs $(MSBuildProjectDirectory)\Production\**\*.cs" />
  <Exec IgnoreExitCode="true" Command="&quot;$(MSBuildProjectDirectory)\..\Build\Packages\XmlToHtmlConverter.exe&quot; $(MSBuildProjectDirectory)\..\Build\Packages\Simian\simian.xsl $(MSBuildProjectDirectory)\..\Build\Artifacts\simian.xml" />
</Target>
