I'm developing a plugin for Redmine and ran into the question of how to implement plugin-specific settings in the cleanest way.
Is it possible to keep plugin-specific settings in {redmine_home}/plugin/{my_plugin}/config/settings.yml while sharing the core's model logic (in MVC terms) that reads the YAML file, sets attributes on the model class, provides easy access to them, etc. ({redmine_home}/app/models/setting.rb)?
I think copy-pasting or require-ing the core model in the plugin model is definitely poor design, so right now I'm leaning towards keeping the plugin-specific settings in the core config ({redmine_home}/config/settings.yml); when the plugin controller needs to read a setting, it relies on the core model ({redmine_home}/app/models/setting.rb) to do so.
Is this a proper design? Are there better ways to do this?
Thanks.
I just checked 3 different plugins in our project; all of them used something like:
options = YAML::load( File.open(File.join(Rails.root, 'plugins/fancy_plugin/config', 'settings.yml')))
So just copy pasting.
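If you want to avoid copy-pasting that snippet into every controller, one option is to wrap the YAML loading in a small module inside the plugin. A minimal sketch, assuming the plugin lives under plugins/my_plugin (the module and key names are made up for illustration):

require 'yaml'

module MyPlugin
  # Hypothetical plugin-local settings loader for plugins/my_plugin/config/settings.yml
  module Settings
    def self.all
      @all ||= YAML.load_file(
        File.join(Rails.root, 'plugins', 'my_plugin', 'config', 'settings.yml'))
    end

    # Convenience accessor, e.g. MyPlugin::Settings['api_url']
    def self.[](key)
      all[key.to_s]
    end
  end
end

Note that Redmine also ships a database-backed per-plugin settings mechanism (declare settings in the plugin's init.rb and read them via Setting.plugin_my_plugin); if the values are meant to be edited by administrators rather than deployed with the code, that is usually the cleaner route.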
I am working on a NopCommerce website with quite a bit of site-wide customization, so I have created a plugin to handle it all, but I am not sure how to handle the localization. I see there are a couple of ways of updating the localization strings; one way I have found is in the plugin's Install() method:
this.AddOrUpdatePluginLocaleResource("Plugins.Payments.PayPalStandard.Fields.AdditionalFee", "Additional fee");
This looks like it only adds new resource strings for the plugin. Is there a similar way to update the other resources via the Install() method, for example:
Admin.Catalog.Products.List.DownloadPDF
I found that there is a way to export the entire language to a language_pack.xml file; would it be better to just create an entire language pack instead? Is there a way to add a new language pack from the plugin's Install() method?
I guess I could simply open the language_pack.xml file and add each resource found using AddOrUpdatePluginLocaleResource, but I was hoping there was a built-in way of doing this using NopCommerce functionality.
Thanks!
As @Raphael suggested in a comment, provide a language pack along with the plugin file to end users, and give them an option to upload the required resource file from your plugin's configuration page.
As far as I know, there is no built-in way to add a language pack on plugin installation, but you can write some code in the plugin's Install() method to find the language pack file(s) in the plugin folder and import them; I'm not quite sure of the details, but you can take the built-in import methods as a reference.
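For example, something along these lines inside Install() could work. Note this is only a rough sketch: the file name, the injected IWebHelper/MapPath path lookup, and the assumption that the exported language pack uses LocaleResource elements with a Name attribute and a Value child should all be checked against your nopCommerce version:

public override void Install()
{
    // Assumed location of a language pack shipped inside the plugin folder
    var path = System.IO.Path.Combine(
        _webHelper.MapPath("~/Plugins/Misc.MyCustomizations"), "language_pack.xml");

    if (System.IO.File.Exists(path))
    {
        var doc = System.Xml.Linq.XDocument.Load(path);
        foreach (var res in doc.Descendants("LocaleResource"))
        {
            var name = (string)res.Attribute("Name");
            var value = (string)res.Element("Value");
            if (!string.IsNullOrEmpty(name))
                this.AddOrUpdatePluginLocaleResource(name, value);
        }
    }

    base.Install();
}

Keep in mind that AddOrUpdatePluginLocaleResource typically writes the same value for every installed language, so a per-language upload from the configuration page, as suggested above, remains the more flexible option.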
I was hoping to find an easy way to customize the display behavior of the Grails fields plugin after reading its docs, but I just realized that it demands an enormous effort, as there are no available templates to start from.
I can see the display functionality is hard-coded in FormFieldsTagLib (from methods like renderDefaultInput() ) but I think it is imperative to have the templates themselves (or a way to generate them, somewhat like generating static scaffolding in Grails).
I can see no consistent (and reasonable) way to customize display behaviors for the Grails fields plugin without that. Am I missing something?
Imagine the use case where someone wants to change the default boolean rendering just to display the field label after (and not before) the checkbox, and keep it available to all the boolean fields within their application. Which concerns will they need to handle regarding whether the field is required, has errors, has a prefix, and so on, when all they needed was to move two divs around?
Grails version: 2.5.4, fields-plugin version: 1.5.1
You aren't missing anything. You'd have to re-create the existing implementation of each field type's rendering in a template for use with the plugin. There isn't a way to generate a file to start with (like scaffolding).
I won't bore you with the historical reason as to why this is the case, but if you do create a set of base templates it would be a good idea to contribute back to the plugin.
I had an issue with the <f:table> tag, and found this post, which led me to find the base or default template inside the plugin repo.
Take a look at
https://github.com/grails3-plugins/fields/tree/master/grails-app/views
That may help you find some default templates, along with the official docs and this answer on where to put the override.
Hope it helps you.
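To make the boolean use case concrete: once you have copied the plugin's default template out as a starting point, an override might look roughly like the sketch below. This assumes the fields 1.5.x convention of overrides living under grails-app/views/_fields/<type>/ and the template variables (property, label, value, required, invalid) that version exposes; treat it as an illustration, not the exact default markup.

<%-- grails-app/views/_fields/boolean/_field.gsp (hypothetical override) --%>
<div class="fieldcontain ${invalid ? 'error' : ''} ${required ? 'required' : ''}">
    <%-- checkbox first, label after it --%>
    <g:checkBox name="${property}" value="${value}"/>
    <label for="${property}">${label}<g:if test="${required}"> *</g:if></label>
</div>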
I am developing an automation testing framework for web application testing. For this framework I want to implement logging with log4j2.
On the web I found that there are 4 different ways to write the log4j2 configuration:
1) .xml
2) .yml
3) .properties
4) .json
I am confused about which configuration format is better for which purpose. Can anyone explain which format is suitable for what kind of application/situation?
I also want to know how I can implement log4j2 from start to end (any link would help).
You're completely free to use any! It's a matter of personal preference.
You can see that they have the same capabilities in the configuration documentation.
A properties file is definitely the simplest, but there's probably more documentation using XML. YAML is much richer; I would start with either properties or XML.
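To get going, a minimal log4j2.xml on the classpath (for example under src/test/resources in a test framework) is enough; the pattern and levels below are just common defaults:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

With log4j-api and log4j-core on the classpath, you then obtain loggers in your test code via LogManager.getLogger(), and the configuration file is picked up automatically. The configuration manual (https://logging.apache.org/log4j/2.x/manual/configuration.html) walks through the same setup for all four formats.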
Is there a way to generate entities from an existing database model or do I have to create all entities with yeoman (yo) on my own?
I've heard about such a technique from the Spring Roo project.
Please check this helper
https://www.npmjs.com/package/generator-jhipster-db-helper
As per the description of this helper: "This JHipster module makes mapping on an existing database easier."
You can also use this tool:
https://github.com/Blackdread/sql-to-jdl
which will convert your SQL files into JDL files; then you can use JHipster's import-jdl functionality to import the database.
That will help you speed up the process of generating a JHipster application from an existing database.
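The import step itself is a one-liner; assuming sql-to-jdl wrote its output to entities.jdl, you would run it from the application folder:

jhipster import-jdl entities.jdl

JHipster then generates the entity classes, repositories, REST resources, front-end screens and Liquibase changelogs from that JDL file.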
No, you cannot, because the JHipster Yeoman generator "only" scaffolds the entities according to the templates plus the given parameters/choices. It does not query external sources like databases in this step.
The generator creates all the files for JPA, Angular and the Liquibase changelogs. Finally, Liquibase creates the tables from the changelogs during startup, if they don't exist yet.
So you could say that JHipster uses an "entity first" rather than a "table first" approach.
Although it would be a nice feature, I don't think that it will be integrated into jhipster, because existing databases are so different that it would be too difficult to handle each possibility. There are different choices of primary keys, different datatypes, different realizations of many-to-many relations or generalizations and so on.
Or you can request a new feature on Github and maybe it will be implemented...
But, to give some directions:
I had a similar situation where I tried to migrate an existing database with about 50 tables and lots of data to JHipster (this was JHipster 1.6 or so), and I also thought of a "database refactoring" [1]. However, my "solution" was to create a new database using JHipster and then migrate the data from the old database to the new one with some SQL statements.
The main reasons:
I had another database model that differed from the model JHipster expects (e.g. I used other primary keys and references)
No repositories or Angular stuff (which was my main reason to use JHipster) would be generated
The Liquibase changelogs would be missing [2]
Such a special refactoring causes a lot of additional changes afterwards when you try to generate a JHipster entity against an existing table; these changes could be more time-consuming than creating new entities with JHipster
These changes could also cause problems in future upgrades
And yes, Roo has such a technique for reverse engineering or refactoring a database (http://docs.spring.io/spring-roo/reference/html/base-dbre.html). AFAIK, it only creates Roo-conformant entities that are based on JPA. So it also differs from the Spring Data JPA used by JHipster (the same problem as other JPA refactoring tools like [1]).
[1] I had previously used an Eclipse JPA plugin that can create JPA entity classes from an existing database in another Dropwizard-based project, but I haven't tried it in combination with Spring/JHipster.
[2] It is possible to create liquibase changelogs from an existing database: http://www.liquibase.org/documentation/generating_changelogs.html
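For reference, generating such a changelog from a live database is a single Liquibase CLI call; the driver, URL and credentials below are placeholders:

liquibase --driver=com.mysql.jdbc.Driver \
          --classpath=mysql-connector-java.jar \
          --url=jdbc:mysql://localhost:3306/legacy_db \
          --username=dbuser \
          --password=dbpass \
          --changeLogFile=db.changelog-legacy.xml \
          generateChangeLog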
Spring Roo includes the DBRE addon, a great tool for database reverse engineering that generates your domain entities automatically.
@eplog you are wrong, the DBRE lets you use the --repository option to create Spring Data JPA repositories for each entity. Take a look at http://docs.spring.io/spring-roo/docs/1.3.1.RELEASE/reference/html/base-dbre.html#d4e1765
Imho, the benefits that the DBRE provides you are:
You don't need to search, test and learn 3rd party tools, Roo does it for you.
Reverse engineering is a powerful tool for migrating legacy applications in environments where you cannot migrate the database.
Hope it helps. Enjoy Roo!
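For reference, once your JPA persistence and database connection properties are configured, the reverse engineering is kicked off from the Roo shell with something like the following (the schema and package are placeholders; check the linked reference for the exact options available in your Roo version):

database reverse engineer --schema PUBLIC --package ~.domain --repository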
Yes you can!
Check out this stack overflow answer:
How to modify existing entity generated with jhipster?
AND, check this video out for auto-generating JPA annotated domain objects from an existing schema using Eclipse with JBoss Tools.
After you make the Hibernate config file, you can open up the "Code Generation Tools"; on the "Exporters" tab, make sure you select the checkboxes for "Use Java 5 syntax" and "EJB3" annotations.
https://www.youtube.com/watch?v=KO_IdJbSJkI
Also, make sure your Hibernate jar(s) have the same version number as your config; in my case I was using Hibernate Spatial, and the versions were mismatched with Hibernate Core, so it wouldn't work for a while.
Out of the two options suggested in previous answers
https://www.npmjs.com/package/generator-jhipster-db-helper
https://github.com/Blackdread/sql-to-jdl
the second option worked in a straightforward manner. Below are the steps and prerequisites:
Install java and maven (you can use sdkman)
Clone the repo
Modify ./src/main/resources/application.yml for your database's configuration, specifically: spring -> datasource -> username, spring -> datasource -> password, and application -> database-to-export
Run 'mvn'
If you want to generate the JDL with a specific file name, modify application -> export -> path as needed; a sketch of the relevant configuration follows these steps.
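As a rough illustration, after those edits the relevant part of application.yml looks something like this (the exact property names should be double-checked against the version of sql-to-jdl you cloned; the values are placeholders):

spring:
  datasource:
    url: jdbc:mysql://localhost:3306/legacy_db
    username: dbuser
    password: dbpass
application:
  database-to-export: legacy_db
  export:
    path: ./entities.jdl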
What is the best way to load seed (initial or test) data into a Grails application? I'm considering 3 options:
Putting everything in *BootStrap.groovy files. This is tedious if there are many domain classes and a lot of test data.
Writing custom functionality to load it through XML. This may not be too difficult with Groovy's excellent XML support, but it means a lot of switch statements for the different domain classes.
Using the Liquibase loadData API. I see you can load the data fairly easily from CSV files.
Choice 3 seems the easiest, but I'm not familiar with Liquibase. Is it a good fit in this scenario, or is it only meant for migrations, DB changes, etc.? If anyone could provide a better solution, or point to an example with Liquibase, it would be a great help.
Another answer would be to leverage grails run-script. This would allow you to move what you might put in bootstrap and keep it where you want on your file system (possibly outside of the codebase). Similarly, you could install the console plugin and load code through that on a running application.
Depending on your data needs, check out the great build-test-data plugin as well.
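A minimal sketch of such a script, runnable with grails run-script scripts/SeedData.groovy (the path and the Country domain class are invented for the example):

// scripts/SeedData.groovy -- executed with the full application context,
// so domain classes and GORM are available
import myapp.Country

if (!Country.count()) {
    new Country(code: 'US', name: 'United States').save(failOnError: true)
    new Country(code: 'DE', name: 'Germany').save(failOnError: true)
}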
I'm using the Fixtures plugin to load test/initial data; it works for me.
http://www.grails.org/plugin/fixtures
look at http://www.dbunit.org/ and http://www.grails.org/DBUnit+Plugin
look into SeedMe plugin:
https://github.com/bertramdev/seed-me
seed = {
    author(meta:[key:'name'], name: 'David', description: 'Author Bio Here')
}
One way I have generated seed data is using a service. I created a class, let's call it SeederService. I can inject this service in BootStrap.groovy and call whatever methods I want.
The beauty of SeederService is that you can also use the same service in your unit tests: simply inject the service class in your unit test and generate your seed data.
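A bare-bones version of that idea, just to make it concrete (the package, class and domain names are invented):

// grails-app/services/myapp/SeederService.groovy (hypothetical)
package myapp

class SeederService {
    // Create well-known records only if they are not present yet,
    // so the method is safe to call on every startup
    void seedUsers() {
        if (!User.countByUsername('admin')) {
            new User(username: 'admin', email: 'admin@example.com').save(failOnError: true)
        }
    }
}

// grails-app/conf/BootStrap.groovy
class BootStrap {
    def seederService
    def init = { servletContext ->
        seederService.seedUsers()
    }
}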