I am wondering if there is an easy way of keeping different i18n files in sync, so that adding a key/value in message_aa.properties would result in the same line being added to message_bb.properties, but of course with a different value.
I would really like to have the same key on the same line number in each of my message_xx.properties files.
Any suggestion?
This feature depends on your IDE.
As an example, I use the ResourceBundle plugin in Eclipse, which partially does what you are asking.
"Partially", because it keeps the same order but not necessarily the same line numbers (i.e. it doesn't skip lines to keep one file aligned with another).
I have the following situation: some of my i18n property files contain properties with a special word:
prop.example=specialword just for example
prop.test=just for test specialword
I want to be able to define a property somewhere in my Config.groovy that contains a specific value for this specialword, so that if I specify:
specialword=Value of special word
in a Config.groovy then I want my i18n properties to be resolved like:
prop.example=Value of special word just for example
prop.test=just for test Value of special word
For that purpose, when building the project, I want to process the property files, look for occurrences of specialword, and replace them with the value of specialword from Config.groovy.
Is that possible somehow? Perhaps someone has faced a similar situation? I would really appreciate any help.
Thanks, Cheers
Instead of trying to change the way the properties are compiled, you would be better off passing the special value as an argument to your message code (as discussed in the comments to your question).
For instance:
<g:message code="my.key.code" args="[someVariableWithAValueFromConfig]" />
If your message code doesn't use the argument it will simply be ignored. This seems like the best approach to the problem you are trying to solve.
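For example, reusing the keys from the question, the bundle entries would reference the passed-in argument through a {0} placeholder (standard MessageFormat syntax), so each locale keeps its own surrounding text:
prop.example={0} just for example
prop.test=just for test {0}
The value handed to args can be read from Config.groovy (e.g. grailsApplication.config.specialword) in the controller or GSP, so Config.groovy remains the single place where the special value is defined.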
I have a script which saves some files at a given location. It works fine, but when I send this code to someone else, they have to change the paths in the code. That is inconvenient for someone who does not know what is in the code, and it forces me to explain every time where and how the code should be changed.
I want to read this path into a variable from a configuration file, so it will be easier for everyone to change just this config file and nothing in my code. But I have never done this before and could not find any information on the internet about how to do it.
PS: I do not have any code yet and I am asking for a general solution, but it is really difficult to find anything good on the internet about DXL, especially since I'm new to it. Maybe one of you already does this or has an idea how it could be done?
DXL has a perm to read the complete contents of a file into a variable: string readFile(string) (or Buffer readFile(string)).
You can split the output on \n and then use a regular expression to find all lines that match the pattern
^\s*([^;#].*)\s*=\s*(.*)\s*$
(i.e. key = value - where comment lines start with ; or #)
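A minimal sketch of that approach (hedged: the file path and key names are placeholders, and the exact perms used here — readFile, findPlainText, regexp2, match — should be checked against your DOORS version):

// Read the whole config file, then peel it apart line by line.
string contents = readFile("C:\\myTool\\config.ini")          // placeholder path
Regexp kv = regexp2("^[ \t]*([^;#][^=]*)=[ \t]*(.*)$")        // key = value, skipping ; and # comments
string rest = contents
while (!null rest) {
    string line = rest
    int off = 0
    int len = 0
    if (findPlainText(rest, "\n", off, len, true)) {
        // take everything before the newline as the current line
        if (off > 0) { line = rest[0:off-1] } else { line = "" }
        if (off + 1 <= length(rest) - 1) { rest = rest[off+1:length(rest)-1] } else { rest = "" }
    } else {
        rest = ""
    }
    if (kv line) {
        string key   = line[match 1]
        string value = line[match 2]
        print key " -> " value "\n"   // store in variables or a Skip list as needed
    }
}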
But in DOORS I prefer using DOORS modules as configuration modules. Object Heading can be the key, Object Text can be the value.
Hardcode the full name of the configuration module into your DXL file and the user can modify the behaviour of the application.
The advantage over a file is that you don't need to make assumptions about where the config file is stored on the file system.
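A minimal sketch of that approach, assuming a configuration module at the (placeholder) path /MyProject/Config with a key named SavePath:

// Open the configuration module read-only, without displaying it.
Module conf = read("/MyProject/Config", false)
if (null conf) {
    print "Configuration module not found\n"
    halt
}
string savePath = ""
Object o
for o in conf do {
    // Object Heading holds the key, Object Text holds the value.
    string key = o."Object Heading" ""
    if (key == "SavePath") savePath = o."Object Text" ""
}
print "SavePath = " savePath "\n"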
It really depends on your situation. You are going to need to be a little more specific about what you mean by "they need to change the paths in the code". What are these paths to? Are they DOORS module paths, are they paths to local/network files, or are they something else entirely?
Like user3329561 said, you COULD use a DOORS module as a configuration file. I wouldn't recommend it though, simply because that is not what DOORS modules were designed for. DOORS is fully capable of reading system files one line at a time as well as all at once, but I can't recommend that option either until I know what types of paths you want to load and why.
I suspect that there is a better solution for your problem that will present itself once more information is provided.
I had the same problem: I needed to specify the path of the configuration file used in my DXL script.
I solved this issue by passing the directory path as a parameter to doors.exe, as follows:
"...\DOORS\9.3\bin\doors.exe" -dxl "string myVar = \"Hello World\""
Then, in my DXL script, the variable myVar is available as a global variable.
I need to process a (GCS) bucket of files, where each file is compressed and contains a single multi-line JSON record. Also, the name of the file being processed is significant and I need to know it within my transform.
Starting with examples in the docs, TextIO looks pretty close, but it looks like it's designed to process each file line-by-line and does not allow me to read the entire file at once. Also, I don't see any way to get the filename that's being processed.
PCollectionTuple results = p.apply(TextIO.Read
    .from("gs://bucket/a/*.gz")
    .withCompressionType(TextIO.CompressionType.GZIP)
    .withCoder(MyJsonCoder.of()));
Looks like I need to write a custom IO reader, or some such? Any tips for best place to start?
You are correct that right now none of the existing classes do exactly what you want. There are 2 reasonable approaches:
Match the filepattern yourself (using IOChannelUtils and IOChannelFactory) and wrap the resulting files into a PCollection<String> where the String will be a filename, using Create.of(filenames). Then apply a ParDo with a function which reads the given filename.
Write your own subclass of Source (there's also FileBasedSource, but it's not quite right for your use case). It would be configured by the filepattern, and splitIntoBundles would match the filepattern and expand into individual sources each corresponding to one file.
I would recommend the first approach because it seems like less code and your use case does not require the full power of Source.
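A rough sketch of the first approach (hedged: this assumes the IOChannelUtils/IOChannelFactory APIs as they exist in the Dataflow 1.x SDK, error handling is omitted, and the filepattern is the one from your question):

// Expand the filepattern into concrete filenames before constructing the pipeline.
String pattern = "gs://bucket/a/*.gz";
List<String> filenames = new ArrayList<>(IOChannelUtils.getFactory(pattern).match(pattern));

p.apply(Create.of(filenames))
 .apply(ParDo.of(new DoFn<String, KV<String, String>>() {
     @Override
     public void processElement(ProcessContext c) throws Exception {
         String filename = c.element();
         // Open the file, decompress it, and read the whole thing as one string.
         try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                 new GZIPInputStream(Channels.newInputStream(
                     IOChannelUtils.getFactory(filename).open(filename))),
                 StandardCharsets.UTF_8))) {
             StringBuilder contents = new StringBuilder();
             String line;
             while ((line = reader.readLine()) != null) {
                 contents.append(line).append('\n');
             }
             // Emit the filename alongside the record so it stays available downstream;
             // parse the JSON here instead if that is more convenient.
             c.output(KV.of(filename, contents.toString()));
         }
     }
 }));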
I have an XML file in my app resources folder. I am trying to update that file with new dictionaries dynamically. In other words, I am trying to edit an existing XML file to add new keys and values to it.
First of all, can we edit a static XML file and add a new dictionary with keys and values to it? What is the best way to do this?
In general, you can read an XML file into a document object (choose your language), use methods to modify it (add your new dictionary), and (re-)write it back out to either the original XML file, or a new one.
That's straightforward ... just roll up the ol' sleeves and code it up.
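As a minimal sketch of that read-modify-write cycle, here is what it looks like with Java's standard DOM API (the file path and element names are placeholders; the same pattern applies in whatever language your app uses):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class AddDictionaryEntry {
    public static void main(String[] args) throws Exception {
        File xmlFile = new File("resources/data.xml");   // placeholder path

        // Read the existing XML into a document object.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xmlFile);

        // Add a new entry with a key and a value (names are illustrative).
        Element entry = doc.createElement("entry");
        entry.setAttribute("key", "newKey");
        entry.setTextContent("new value");
        doc.getDocumentElement().appendChild(entry);

        // Write the modified document back out; note the formatting caveats below.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(xmlFile));
    }
}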
The real problem comes in with formatting in the XML file before and after said additions.
If you are going to 'unix diff' the XML file before and after, then order is important. Some standard XML processors do better with order than others.
If the order changes behind the scenes and is gratuitously propagated into your output file, you lose the advantages of standard diffing tools, such as GUI differs and SCM diffs (svn, cvs, etc.).
For example, browse to:
Order of XML attributes after DOM processing
They discuss that DOM loses order where SAX does not.
You can also write a custom XML 'diff'er (there may be such tools off the shelf ... for example, check out http://diffxml.sourceforge.net/) that compares two XML documents tag-by-tag, attribute-by-attribute, etc.
Perhaps some standard XML-related tool such as XSLT will allow you to keep the formatting constant without changing tag or attribute order. You'd have to research that.
BTW, a related problem is the config (.ini) file problem ... many common processors flippantly announce that the write-order may not agree with the read-order.
I'm having a slight annoyance with the javascript_I18n plugin, which generates js-friendly versions of the I18n translation tables so that you can localise your javascript. It all works fine, but it works by calling to_json on each locale's translations hash and outputting the result into a file. When you call to_json on a hash, the resulting string is all on a single line, which means you end up with files that have one enormous line at the end.
This in turn is stopping git from being able to merge any changes, because the merge works on a line-by-line basis and can't deal with a single massive line with changes somewhere in the middle. It's not a huge problem, because I can always just re-generate the js-friendly translation file with a rake task that comes with the plugin (which replaces the mid-merge file with a brand new one that I can just commit), but it's a bit annoying. It occurred to me that if the JSON were output on separate lines, instead of all on the same line, then it wouldn't be a problem, and it wouldn't even make the file that much bigger, since it only inserts a newline per line.
Before I try to hack the resulting string with a gsub to split it onto separate lines, is there a nicer way to call to_json on a hash and output the results onto separate lines? Or a nicer way to solve this problem generally? (I can't find much of use in the documentation for javascript_I18n.)
Grateful for any advice - max
Not to answer your question, but to give a suggestion:
It would probably be easier to just ignore all the generated js translation files.
This is because even if you split the translations onto multiple lines, there is still a chance of merge conflicts, which you probably already resolved once in your YAML file.
I would set it up this way:
1). gitignore all the js translation files.
2). In ActionController, add a before filter to generate the js files automatically on each load, in development mode only (see the sketch after this list).
3). Tweak your deploy.rb file to generate the js files after code update.
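A rough sketch of step 2 (hedged: the regeneration call depends on what javascript_I18n actually exposes, so the rake task name below is a placeholder):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # Rebuild the js translation files on every request, but only in development.
  before_filter :regenerate_js_translations if Rails.env.development?

  private

  def regenerate_js_translations
    # Placeholder: call whichever task or helper the plugin provides.
    system("rake javascript_i18n:generate")
  end
end

Step 3 would then invoke the same regeneration task from a deploy.rb hook after the code is updated on the server.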
No more merge conflicts! (at least for js translation files) :D
Aaron Qian