Is there any way to automatically find I18N violations in a Grails project? For example,
<td valign="top" class="name"><label for="enabled">Enabled:</label></td>
should be flagged because it's not using <g:message> to get the label value.
It would be nice if codenarc had a rule for this, but I don't think it does.
I have also looked for such a code quality test and have yet to find one.
Implementing this should be fairly trivial - if all text content in a GSP is required to be applied via tags, your GSP should consist entirely of element nodes and no text nodes.
The crux of the problem is predominantly an XML issue: how do you check a set of XML documents and flag those that contain text nodes?
Assuming you can import org.codehaus.groovy.grails.commons.GrailsResourceUtils in a codenarc rule you can use the VIEWS_DIR_PATH property to determine where all the GSP files live.
From there, the high level process you would need is:
Build a collection of all the GSP files in the application
For each file, load the content into an XML parser (Java has plenty) and check the node type for every node, flagging those files that contain text nodes
I appreciate that this is a very high level solution but conceptually it should work.
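As a minimal sketch of that check in Java, assuming the GSPs happen to be well-formed XML (in practice many are not, so unparsable files are just skipped here) and with the views directory hard-coded rather than taken from GrailsResourceUtils.VIEWS_DIR_PATH:

import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilderFactory;
import java.nio.file.*;
import java.util.stream.Stream;

public class GspTextNodeCheck {
    public static void main(String[] args) throws Exception {
        // Walk the views directory (hard-coded here; a real CodeNarc rule would
        // resolve it via GrailsResourceUtils) and flag GSPs that contain text nodes.
        try (Stream<Path> paths = Files.walk(Paths.get("grails-app/views"))) {
            paths.filter(p -> p.toString().endsWith(".gsp"))
                 .forEach(GspTextNodeCheck::check);
        }
    }

    static void check(Path gsp) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(gsp.toFile());
            if (hasTextNode(doc.getDocumentElement())) {
                System.out.println("Possible I18N violation: " + gsp);
            }
        } catch (Exception e) {
            // Many GSPs are not well-formed XML; skip those rather than fail.
            System.err.println("Could not parse " + gsp + ": " + e.getMessage());
        }
    }

    static boolean hasTextNode(Node node) {
        NodeList children = node.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.TEXT_NODE
                    && !child.getTextContent().trim().isEmpty()) {
                return true;
            }
            if (child.getNodeType() == Node.ELEMENT_NODE && hasTextNode(child)) {
                return true;
            }
        }
        return false;
    }
}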
Is there a way to extend a mat-table that automatically includes the matSort directive (and other custom directives that interact with the columns, like filter) and still have the content inside hold the mat-sort-header directives?
<mat-table [matSortActive]="sortActive" [matSortDirection]="sortDirection" matSort>
<ng-content></ng-content>
</mat-table>
Here is an example: https://stackblitz.com/edit/angular-bxsavu.
I've tried creating a component on its own that just puts <ng-content> inside the <table> element, but that creates the error:
DwfTableComponent.html:1 ERROR Error: Missing definitions for header,
footer, and row; cannot determine which columns should be rendered.
at getTableMissingRowDefsError (table-errors.ts:48)
I've also tried leaving my own template empty and just using the original CDK_TABLE_TEMPLATE (seen in the StackBlitz link above), and this creates the error:
ERROR TypeError: Cannot read property 'viewContainer' of undefined
at DwfMatTableExtendedComponent.CdkTable._forceRenderHeaderRows (table.ts:854)
So it seems like I can't really get any traction on making this work.
The context of all this is that our site has many tables that need to sort, but we need developers to be able to specify which columns are sortable when writing the markup. If I can get this to work for MatSort, I can then apply the same approach to my own server-side filtering component that behaves very much like the MatSort feature (it has a customFilter directive on the <table> element, and within the <th mat-header-cell *matHeaderCellDef> spot there is a custom-filter-header directive). The next big piece will be another feature that lets the table change what cells display (links or text) when the table is "paused" -- another feature that is controlled by the wrapper but needs to affect the inner content.
There are many other features in our current "table-wrapper" (search windows, exports, paging), but this one part of it has been a constant source of confusion. Something feels a little broken when I can't make a component out of two well-known components and still leave the table structure flexible. I'm sure I'm missing some piece of it, but this would greatly reduce the repetition of code for each table we have to write.
I've managed to get a table up and running, by using the original CDK_TABLE_TEMPLATE and extending the CdkTable found in the source code. From there, I put the MatSort directive on my own component, and fill things out like normal.
There were several bumps along the way. For starters, you have to import the CdkTableModule in your module. Next, you have to implement OnInit and call super.ngOnInit() just to get it to render. Styling then requires using the source code's table.scss, and even then, you need to tweak things just to get it to look right.
At this point, it feels like I'm wandering into hack territory, but there is finally some traction, and I think after I figure out why the default sorting doesn't happen (as well as why the glyphs don't appear), I'll be on my way to expanding this to what I need. It's by no means ready for production, but the work has been a quick education on Angular's limitations and abilities.
If anyone is curious, the same link https://stackblitz.com/edit/angular-bxsavu shows where I'm at right now.
I have an XML file in my app resources folder. I am trying to update that file with new dictionaries dynamically. In other words I am trying to edit an existing XML file to add new keys and values to it.
First of all, can we edit a static XML file and add a new dictionary with keys and values to it? What is the best way to do this?
In general, you can read an XML file into a document object (choose your language), use methods to modify it (add your new dictionary), and (re-)write it back out to either the original XML file, or a new one.
That's straightforward ... just roll up the ol' sleeves and code it up.
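As a rough illustration in Java (the same pattern applies in most languages); the file name and element names here are made up for the example and would need to match your actual resource format:

import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.File;

public class AddDictionaryEntry {
    public static void main(String[] args) throws Exception {
        File file = new File("settings.xml");          // hypothetical resource file
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(file);

        // Append a new <dict> entry with a key/value pair (names are illustrative).
        Element dict = doc.createElement("dict");
        Element key = doc.createElement("key");
        key.setTextContent("newKey");
        Element value = doc.createElement("string");
        value.setTextContent("newValue");
        dict.appendChild(key);
        dict.appendChild(value);
        doc.getDocumentElement().appendChild(dict);

        // Write the document back out (re-serialization may change whitespace/formatting).
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(file));
    }
}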
The real problem comes in with formatting in the XML file before and after said additions.
If you are going to 'unix diff' the XML file before and after, then order is important. Some standard XML processors do better with order than others.
If the order changes behind the scenes and is gratuitously propagated into your output file, you lose the benefits of standard diffing, such as GUI diff tools and SCM diffs (svn, cvs, etc.).
For example, browse to:
Order of XML attributes after DOM processing
The answers there discuss how DOM loses attribute order where SAX does not.
You can also write a custom XML differ (there may be one off the shelf ... for example, check out 'http://diffxml.sourceforge.net/') that compares two XML documents tag-by-tag, attribute-by-attribute, etc.
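A very small sketch of that idea in Java is below. It only compares attributes by name (so attribute order doesn't matter) and child elements pairwise by position; the file names are placeholders and a production-grade differ would also align moved or inserted nodes and compare text content:

import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SimpleXmlDiff {
    public static void main(String[] args) throws Exception {
        Element a = parse("before.xml").getDocumentElement();
        Element b = parse("after.xml").getDocumentElement();
        compare(a, b, "/" + a.getNodeName());
    }

    static Document parse(String name) throws Exception {
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(name));
    }

    static void compare(Element x, Element y, String path) {
        // Attributes: compare by name so attribute order does not matter.
        NamedNodeMap attrs = x.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            Attr attr = (Attr) attrs.item(i);
            if (!attr.getValue().equals(y.getAttribute(attr.getName()))) {
                System.out.printf("%s/@%s: '%s' vs '%s'%n",
                        path, attr.getName(), attr.getValue(), y.getAttribute(attr.getName()));
            }
        }
        // Children: compare element children pairwise by position.
        List<Element> xs = childElements(x), ys = childElements(y);
        if (xs.size() != ys.size()) {
            System.out.printf("%s: %d vs %d child elements%n", path, xs.size(), ys.size());
        }
        for (int i = 0; i < Math.min(xs.size(), ys.size()); i++) {
            compare(xs.get(i), ys.get(i), path + "/" + xs.get(i).getNodeName() + "[" + i + "]");
        }
    }

    static List<Element> childElements(Node n) {
        List<Element> out = new ArrayList<>();
        NodeList kids = n.getChildNodes();
        for (int i = 0; i < kids.getLength(); i++) {
            if (kids.item(i) instanceof Element) out.add((Element) kids.item(i));
        }
        return out;
    }
}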
Perhaps some standard XML-related tool such as XSLT will allow you to keep the formatting constant without changing tag or attribute order. You'd have to research that.
BTW, a related problem is the config (.ini) file problem ... many common processors flippantly announce that the write-order may not agree with the read-order.
I use Java with saxonee-9.5.1.6.jar included in the build path. When I run, I get these errors at different times.
Error at xsl:import-schema on line 6 column 169 of stylesheet.xslt:
XTSE1650: net.sf.saxon.trans.LicenseException: Requested feature (xsl:import-schema)
requires Saxon-EE
Error on line 1 column 1
SXXP0003: Error reported by XML parser: Content is not allowed in prolog.
javax.xml.transform.TransformerConfigurationException: Failed to compile stylesheet. 1 error detected.
I opened the .xslt file in a hex editor and don't see any unusual characters at the beginning. Also, I use TransformerFactory in a different project and don't get any errors there.
Check what the implementation class of tFactory is. My guess is it is probably net.sf.saxon.TransformerFactoryImpl - which is basically the Saxon-HE version.
When you use JAXP like this, you're very exposed to configuration problems, because it loads whatever it finds sitting around on the classpath, or is affected by system property settings which could be set in parts of the application you know nothing about.
If your application depends on particular features, it's best to load a specific TransformerFactory, e.g. tFactory = new com.saxonica.config.EnterpriseTransformerFactory().
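For example, something along these lines (assuming Saxon-EE is on the classpath and licensed):

import javax.xml.transform.TransformerFactory;

public class FactoryCheck {
    public static void main(String[] args) {
        // Print the implementation class that JAXP picked up from the classpath.
        TransformerFactory tFactory = TransformerFactory.newInstance();
        System.out.println(tFactory.getClass().getName());
        // e.g. net.sf.saxon.TransformerFactoryImpl -> Saxon-HE, which rejects xsl:import-schema

        // Pin the schema-aware Saxon-EE factory explicitly instead of relying on classpath lookup.
        tFactory = new com.saxonica.config.EnterpriseTransformerFactory();
        System.out.println(tFactory.getClass().getName());
    }
}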
I don't know whether your stylesheet expects the source document to be validated against the schema, but if it does, note that this isn't automatic: you can set properties on the factory to make it happen.
I would recommend using Saxon's s9api interface rather than JAXP for this kind of thing. The JAXP interface was designed for XSLT 1.0, and it's a real stretch to use it for some of the new 2.0 features like schema-awareness: it can be done, but you keep running into limitations.
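A minimal s9api sketch follows; the file names are placeholders, and new Processor(true) requests the licensed EE features that xsl:import-schema needs (pass false for Saxon-HE):

import net.sf.saxon.s9api.*;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class S9apiTransform {
    public static void main(String[] args) throws SaxonApiException {
        // true = schema-aware (Saxon-EE, license required); false = Saxon-HE.
        Processor processor = new Processor(true);
        XsltCompiler compiler = processor.newXsltCompiler();
        XsltExecutable executable =
                compiler.compile(new StreamSource(new File("stylesheet.xslt")));

        XsltTransformer transformer = executable.load();
        transformer.setSource(new StreamSource(new File("input.xml")));

        Serializer out = processor.newSerializer(new File("output.xml"));
        transformer.setDestination(out);
        transformer.transform();
    }
}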
I defined aliases for the fields to provide friendly names when editing templates. The problem is that these friendly names are localized, and FastReport saves the template with the aliases, not the field names! That doesn't seem very clever.
If I take a template that was created in language A and try to use it with language B, it raises a lot of errors because the fields are not found anymore. Or worse, if someone decides that one particular translation isn't good and changes it, that field won't be found anymore.
Is there a way to have friendly names for the fields without substituting the field names of the template that will be saved?
Since FastReport saves all its report templates as XML files, the easiest way to accomplish what you want may be to write a routine that reads the FastReport XML file and iterates through all of the TfrxMemoView nodes, changing the Text attribute to the friendly local name.
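A hedged sketch of that routine in Java using XPath is below; the template file name, the attribute name, and the alias-to-field mapping are assumptions for illustration, so check an actual template file to confirm the structure before relying on it:

import org.w3c.dom.*;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.*;
import java.io.File;
import java.util.Map;

public class LocalizeFastReportTemplate {
    public static void main(String[] args) throws Exception {
        // Hypothetical mapping from a localized alias back to the real field name.
        Map<String, String> aliasToField = Map.of("Ativado", "Enabled");

        File template = new File("report.fr3");   // FastReport templates are XML
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(template);

        // Find every TfrxMemoView node and rewrite its Text attribute.
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList memos = (NodeList) xpath.evaluate("//TfrxMemoView",
                doc, XPathConstants.NODESET);
        for (int i = 0; i < memos.getLength(); i++) {
            Element memo = (Element) memos.item(i);
            String text = memo.getAttribute("Text");
            for (Map.Entry<String, String> e : aliasToField.entrySet()) {
                text = text.replace(e.getKey(), e.getValue());
            }
            memo.setAttribute("Text", text);
        }

        // Write the modified template back out.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(template));
    }
}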
I want to include a spark view in another spark view.
I've tried to use the include tag.
But it doesn't seem to support variables as part of the href attribute.
E.g.
<include href="_group_${groupData.Type}.spark" />
Does anyone know of any workaround to do this?
The <include> tag is part of the Spark language that gets parsed on the first pass and cannot include variables of its own because the view class file has not yet been generated for the variables to be evaluated. Using <include> is a means of including a static resource of some kind.
I think what you may be looking for is the <use import="myFile.spark"/> tag for including other Spark files, or you could just use Spark's built-in partials. The problem, however, is that you're trying to have the included Spark files dynamically determined at runtime, which I don't think will be possible.
Is there any way you can pre-generate the views for each groupData.Type value using the pre-compilation ability in Spark?
The other option, potentially (if you really do need these to be dynamic at runtime), is to create and maintain an InMemoryViewFolder instance; you can add "virtual" files to it as you pull them out of the database, but you still won't get away with using variables inside any Spark language elements, because variables "don't exist" at that point in the parsing/rendering pipeline.
Hope that helps,
Rob