I am trying to migrate from log4j1 to log4j2.
I have a WLS server with two EAR files, Ear1.ear and Ear2.ear. Both contain similar code and, for logging, they use the same logger name. In log4j1 there were two different config files, loggingconfigEar1.xml and loggingconfigEar2.xml, writing to ear1.log and ear2.log respectively.
I am trying to implement the same in log4j2, but I am not able to find an easy way to do it. Is it possible for two different EARs that use the same logger name to each have their own individual log file? Right now I am initialising the config file through the system property log4j.configurationFile, and it does not work.
The only other option I can think of is to use separate logger names for the two EARs. But that would involve code changes in quite a few places, so I want to keep it as a last-resort option.
FYI, there was a lot of customisation done for log4j1, which I have avoided in the migration either by scrapping the functionality or by rewriting the code. I am not sure exactly how this separate logging was achieved in log4j1.
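For reference, here is roughly what I am trying to end up with for Ear1 in log4j2 (a minimal sketch; the appender, file path, and logger name are placeholders, not the actual contents of loggingconfigEar1.xml):
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical log4j2 equivalent of loggingconfigEar1.xml -->
<Configuration status="warn">
  <Appenders>
    <File name="Ear1File" fileName="logs/ear1.log">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <!-- same logger name as in Ear2, but bound to ear1.log -->
    <Logger name="com.example.shared" level="info" additivity="false">
      <AppenderRef ref="Ear1File"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="Ear1File"/>
    </Root>
  </Loggers>
</Configuration>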
Is there a way to generate entities from an existing database model or do I have to create all entities with yeoman (yo) on my own?
I've heard about such a technique from the Spring Roo project.
Please check this helper:
https://www.npmjs.com/package/generator-jhipster-db-helper
As per its description, "This JHipster module makes mapping on an existing database easier."
You can also use this tool:
https://github.com/Blackdread/sql-to-jdl
which will convert your SQL files into JDL files; you can then use JHipster's import-jdl functionality to import the database.
That will help you speed up the process of generating a JHipster application from an existing database.
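For example, once sql-to-jdl has produced a JDL file (the file name below is a placeholder), you can import it with:
jhipster import-jdl ./my-database.jdl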
No, you cannot, because the JHipster Yeoman generator "only" scaffolds the entities according to the templates plus the given parameters/choices. It does not consult external sources like databases in this step.
The generator creates all the files for JPA, Angular, and the Liquibase changelogs. Finally, Liquibase creates the tables from the changelogs during startup, if they don't exist yet.
So you can say that JHipster uses an "entity first" instead of a "table first" approach.
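For illustration, with the entity-first approach you describe your tables as JDL entities and let the generator create everything; a minimal sketch (entity and field names are made up):
// hypothetical JDL; JHipster generates the JPA, Angular and Liquibase
// changelog files from this, then Liquibase creates the tables at startup
entity Author {
  name String required
}
entity Book {
  title String required
  publishedOn LocalDate
}
relationship ManyToOne {
  Book{author} to Author
}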
Although it would be a nice feature, I don't think it will be integrated into JHipster, because existing databases are so different that it would be too difficult to handle every possibility. There are different choices of primary keys, different datatypes, different realizations of many-to-many relations or generalizations, and so on.
Or you can request a new feature on Github and maybe it will be implemented...
But, to give some directions:
I was in the same situation: I tried to migrate an existing database with about 50 tables and lots of data to JHipster (this was JHipster 1.6 or so), and I also thought about a "database refactoring" [1]. However, my "solution" was to create a new database using JHipster and then migrate the data from the old database to the new one with some SQL statements.
The main reasons:
I had another database model that differed from the model JHipster expects (e.g. I used other primary keys and references)
Neither repositories nor the Angular stuff (which was my main reason to use JHipster) would be generated
The Liquibase changelogs would be missing [2]
Such a special refactoring causes a lot of additional changes afterwards when you try to generate a JHipster entity against an existing table. These changes could be more time-consuming than creating new entities with JHipster.
These changes could also cause problems in future upgrades
And yes, Roo has such a technique for reverse engineering or refactoring a database (http://docs.spring.io/spring-roo/reference/html/base-dbre.html). AFAIK, it only creates Roo-conformant entities that are based on JPA, so it also differs from the Spring Data JPA used by JHipster (the same problem as other JPA reverse-engineering tools like [1]).
[1] I used an Eclipse JPA plugin that can create JPA entity classes from an existing database in another Dropwizard-based project before. But I haven't tried it in combination with Spring/JHipster.
[2] It is possible to create liquibase changelogs from an existing database: http://www.liquibase.org/documentation/generating_changelogs.html
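For example, with the Liquibase CLI (old-style flags; driver, URL, and credentials are placeholders):
liquibase --driver=com.mysql.jdbc.Driver --url=jdbc:mysql://localhost/mydb --username=user --password=pass --changeLogFile=db.changelog.xml generateChangeLog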
Spring Roo includes the DBRE addon, a great tool for database reverse engineering that generates your domain entities automatically.
#eplog you are wrong, the DBRE lets you use the --repository option to create a Spring Data JPA repository for each entity. Take a look at http://docs.spring.io/spring-roo/docs/1.3.1.RELEASE/reference/html/base-dbre.html#d4e1765
IMHO, the benefits that the DBRE provides are:
You don't need to search for, test, and learn 3rd-party tools; Roo does it for you.
Reverse engineering is a powerful tool for migrating legacy applications in environments where you cannot migrate the database.
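For example, from the Roo shell, something like this kicks off the reverse engineering (the schema and package names are placeholders; see the linked docs for the exact options):
database reverse engineer --schema PUBLIC --package ~.domain --repository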
Hope it helps. Enjoy with Roo!
Yes you can!
Check out this stack overflow answer:
How to modify existing entity generated with jhipster?
AND, check this video out for auto-generating JPA annotated domain objects from an existing schema using Eclipse with JBoss Tools.
After you make the Hibernate config file, you can open up the "Code Generation Tools"; on the "Exporters" tab, make sure you select the checkboxes for "Use Java 5 syntax" and "EJB3" annotations.
https://www.youtube.com/watch?v=KO_IdJbSJkI
Also, make sure your Hibernate jar(s) have the same version number as your config; in my case I was using Hibernate Spatial, and its version was mismatched with Hibernate Core, so it wouldn't work for a while.
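For example, in a Maven build you can pin the Hibernate artifacts to one version with a shared property (the version number here is illustrative; any matching pair works):
<!-- in pom.xml -->
<properties>
  <hibernate.version>5.2.10.Final</hibernate.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>${hibernate.version}</version>
  </dependency>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
    <version>${hibernate.version}</version>
  </dependency>
</dependencies>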
Out of the two options suggested in previous answers
https://www.npmjs.com/package/generator-jhipster-db-helper
https://github.com/Blackdread/sql-to-jdl
the second option worked in a straightforward manner. Below are the steps and prerequisites:
Install Java and Maven (you can use SDKMAN)
Clone the repo
Modify ./src/main/resources/application.yml with your database's configuration, specifically spring -> datasource -> username, spring -> datasource -> password, and application -> database-to-export
Run 'mvn'
If you want to generate the JDL with a specific file name, modify application -> export -> path as needed.
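In shell form the whole flow looks roughly like this (paths, credentials, and the output file name are placeholders):
# clone and build sql-to-jdl
git clone https://github.com/Blackdread/sql-to-jdl.git
cd sql-to-jdl
# edit src/main/resources/application.yml: datasource username/password,
# database-to-export, and optionally the export path
mvn
# import the generated JDL into your JHipster app
jhipster import-jdl ./my-database.jdl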
I'm developing a plugin for Redmine and ran into the question of how to implement plugin-specific settings in Redmine in the neatest way.
Is it possible to have plugin-specific settings in {redmine_home}/plugin/{my_plugin}/config/settings.yml while sharing with the core the model logic (in MVC terms) that reads the YAML file, sets attributes of the model class, provides easy access to them, etc. ({redmine_home}/app/models/setting.rb)?
I think copy-pasting or require-ing the core model in the plugin model is definitely poor design, so right now I'm leaning toward keeping the plugin-specific settings in the core config {redmine_home}/config/settings.yml; when the plugin controller needs to read a setting, it relies on the core model ({redmine_home}/app/models/setting.rb) to do that.
Is this a proper design? Is there any better ways to do this?
Thanks.
I just checked 3 different plugins in our project; all used something like:
options = YAML.load_file(File.join(Rails.root, 'plugins', 'fancy_plugin', 'config', 'settings.yml'))
So just copy pasting.
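If you want to avoid scattering that line around, wrapping it in a tiny helper is straightforward; a sketch (the module, class, and path names are made up):
# plugins/fancy_plugin/lib/fancy_plugin/settings.rb -- hypothetical helper
# that loads and memoizes the plugin's settings.yml
module FancyPlugin
  class Settings
    FILE = File.join(Rails.root, 'plugins', 'fancy_plugin', 'config', 'settings.yml')

    def self.[](key)
      @settings ||= YAML.load_file(FILE)
      @settings[key.to_s]
    end
  end
end

# usage: FancyPlugin::Settings[:some_option]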
I have been trying to create schema.yml content in memory from a behavior I have written. I want to test whether any changes a developer makes to schema.yml comply with the current database fields and foreign-key references. I see the task class that builds the schema, but I haven't been able to find a straightforward way to do this. Am I missing something here? Can this be done by leveraging the symfony API that's already available, rather than writing my own solution?
Thanks in advance.
P.S. I am using Propel as ORM
Why don't you tweak the migration task to detect the difference between your current schema (in memory) and the one potentially modified by the developer?
php symfony doctrine:generate-migrations-diff
This task generates a diff between the generated classes and the current schema.yml.
What you can do:
generate the new model (forms and filters) based on the new schema.yml
put these changes into a new folder (not the default one)
run the doctrine:generate-migrations-diff task and give it the path to the new models (forms and filters)
if it generates migration classes, the developer made some changes; if not, everything is OK.
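In its simplest form, that boils down to something like this (a rough sketch of those steps; adjust the paths to wherever you put the new models):
# rebuild model, forms and filters from the developer's schema.yml
php symfony doctrine:build --model --forms --filters
# diff the generated classes against the current schema
php symfony doctrine:generate-migrations-diff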
Edit: (since the OP uses Propel)
You have almost the same task in Propel (and the doc).
I have a symfony project with two different applications (frontend, backend), but there is one action in common. Right now I have duplicated its code in both apps, but I don't like that at all.
Is there a way to reuse an action in multiple symfony applications?
The easiest way would be to make an actions base class in lib with the shared methods/actions. Then the modules that need to use this functionality can just extend that base class instead of sfActions.
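Something like this (the class, module, and action names are made up):
<?php
// lib/mySharedActions.class.php -- hypothetical base class holding the
// shared action; classes in lib/ are autoloaded by symfony
class mySharedActions extends sfActions
{
  public function executeShared(sfWebRequest $request)
  {
    // common logic currently duplicated in both apps goes here
  }
}

// apps/frontend/modules/foo/actions/actions.class.php
class fooActions extends mySharedActions
{
  // inherits executeShared(); frontend-only actions go here
}
Remember to clear the cache (php symfony cc) so the autoloader picks up the new class.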
You could also probably just use an event listener on method_not_found of sfComponent. But that may not work as expected if the method is an actual action (and it would also be available in all modules and all components without some special detection logic).
The most complicated way would be to make a plugin. Of course, that would require either making the logic that works with the models dynamic so it can be configured, or isolating the relevant parts of the schema into the plugin's schema.
Two more options:
1) if you are on Linux, make a symlink to your actions.class.php, or even to the whole module if you share the same templates.
cd apps/backend/modules
ln -s ../../frontend/modules/name name
2) if you have not gone too far in development, re-factor your project to have only ONE application (my favourite).
If you want to share a module (and hence also its actions), the proper way is to create a plugin.