Does anyone know what the library load order is based on, within a single folder, on Tomcat 8?
Here is my situation:
A customer's Java application deployed on Tomcat has, for some reason, the same class in multiple libraries in the same webapp shared folder. I know it's wrong, everyone knows that, but the customer refuses to fix it.
We have to deploy this application on kubernetes, so we created a Dockerfile and everything else, and it's working correctly most of the time.
When a pod is deployed on some nodes, it seems that the load order of the libraries is different, and that causes the application to not work properly.
So, basically, what I'm asking is: does anyone actually know what the load order is based on? Is it the filesystem used? Is it the Docker overlay? Could it be LC_COLLATE? Is it actually pseudo-random?
These nodes are basically identical and I'm really having a hard time trying to find out what the difference could be.
Thank you.
The only ordering specified by the Servlet Specification is that classes in /WEB-INF/classes have precedence over /WEB-INF/lib/*.jar.
In Tomcat 7 and earlier, the order of the libraries in /WEB-INF/lib depended on the underlying filesystem, so it would break when you moved your web application to a different OS or to a different Java EE server. In Tomcat 8 and later it is random by design: as random as iterating a HashSet.
See discussion in Bug 57129. Regarding Resources and PreResources see configuration reference.
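To see why "as random as a HashSet" means there is no order you can rely on, here is a minimal sketch (the jar names are made up for illustration): a HashSet keeps the elements but its iteration order depends on hash codes and table size, not on insertion order or the names themselves, while sorting the names yourself gives an order that is stable everywhere.

```java
import java.util.*;

public class JarOrder {
    public static void main(String[] args) {
        // Pretend these are the jars found in WEB-INF/lib (placeholder names).
        List<String> found = List.of("util-2.jar", "core-1.jar", "legacy.jar");

        // A HashSet promises nothing about iteration order.
        Set<String> unordered = new HashSet<>(found);

        // Sorting explicitly is the only way to get an order that is
        // stable across JVMs, filesystems and container layers.
        List<String> deterministic = new ArrayList<>(unordered);
        Collections.sort(deterministic);

        System.out.println(deterministic); // [core-1.jar, legacy.jar, util-2.jar]
    }
}
```

On the Tomcat side, the practical way to make one jar win deterministically is the Resources/PreResources configuration mentioned above, which lets you mount the jar that must take precedence explicitly.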
Creating nice project stubs using jhipster has worked fine until recently.
Now on Windows 7 the node_modules look like deeply recursive folders, deep enough to exceed the filesystem's limits. As a consequence, Maven is no longer capable of building the project.
Can anybody out there help a jhipster newbie?
It's because the path length is limited on Windows (MAX_PATH is 260 characters). You could try generating your project at the root of your hard drive; that should give you a better chance of making it work. Other people have it working on Windows, so it should be OK.
When building Docker images, I find myself in a strange place: I feel like I'm doing something that somebody has already done many times before, and did a vastly better job at it. In most cases, this gut feeling is absolutely right: I'm taking a piece of software and re-describing, in a Dockerfile, everything that's already described in the OS's packaging system.
More often than not, I even find myself installing software into the image using a package manager and then looking inside that package to get some clues about writable paths, configuration files, open ports etc. for my Dockerfile. The duplication of effort between OS packager and Docker packager is most evident in such a case, which I assume is one of the more common ones.
So basically, every Docker user building an image on top of pre-packaged software is re-packaging almost from scratch, but without the time and often the domain knowledge the OS packagers had for trial, error and polish. If we consider the low reusability of community-maintained images (re-basing from Debian to RHEL hurts), we're stuck with copying or re-implementing functionality that already exists and works on OS level, wasting a lot of time and putting a maintenance burden on the poor souls who'd inherit whatever we might leave behind.
Is there any way to resolve this duplication of effort and re-use whatever package maintainers have already learned about a piece of software in Docker?
The main source for Docker image reuse is hub.docker.com.
Search there first to see if your system is already described in one of those images.
You can see their Dockerfiles, and base your own on one of those images instead of starting from a basic ubuntu or wheezy one.
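For example, instead of re-describing a full Tomcat installation on top of a bare base image, you can inherit all of that from the official image and describe only your own layer. A minimal sketch (the image tag and webapp name are placeholders):

```dockerfile
# Reuse the packaging work already done in the official image
# instead of installing Java and Tomcat on a bare ubuntu/wheezy base.
FROM tomcat:8

# Only your application is described here; paths, exposed ports and
# the startup command are inherited from the base image's Dockerfile.
COPY mywebapp.war /usr/local/tomcat/webapps/
```

Everything the upstream maintainers learned about writable paths, ports and startup commands stays in the base image, so your Dockerfile only has to describe the delta.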
I have an Erlang application which has a dependency in its deps directory on another application.
From what I understand, I can either:
a) start my dependent application from my including application by calling application:start(some_other_app) which starts the application and shows it running standalone within Observer.
b) include my dependent application in my .app file with {included_applications, [some_other_app]}, so that the application is loaded but not started, and then start the included application from my own top-level supervisor. This again starts the included application and shows it running below my own supervision hierarchy in Observer.
My question is when should I use either approach? If I use option “a” and my dependent application exits will it be restarted or should I be using approach “b” so that any dependencies I have are monitored accordingly?
On a side note I use Rebar to package and manage my dependencies.
Thanks,
Andy.
You probably shouldn't do either a) or b).
From Learn You Some Erlang, in the chapter Included Applications:
It is more and more recommended not to use included applications for a
simple reason: they seriously limit code reuse. Think of it this way.
We've spent a lot of time working on ppool's architecture to make it
so anybody can use it, get their own pool and be free to do whatever
they want with it. If we were to push it into an included application,
then it can no longer be included in any other application on this VM,
and if erlcount dies, then ppool will be taken down with it, ruining
the work of any third party application that wanted to use ppool.
For these reasons, included applications are usually excluded from
many Erlang programmers' toolbox. As we will see in the following
chapter, releases can basically help us do the same (and much more) in
a more generic manner.
In the chapter Release is the Word you can read about how several applications are bundled into a release and how they are started.
Having your dependencies declared in your application descriptor is the way to go, so you should use option B in most of the scenarios.
The application controller will ensure that all your dependencies are present and started (in order) before starting your application and will also make your app fail if those terminate with errors. Also, the application controller will shutdown everything when needed.
Other than that, if you choose option A, when starting an application with application:start/1, you will get a temporary application by default, so you should use application:start/2, passing the permanent atom as the second argument.
EDIT: Having your dependencies in the application descriptor also helps visibility, it's easy to know your deps without scanning the source code.
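A sketch of what the descriptor-based approach looks like, using regular {applications, ...} dependencies rather than included_applications (the application and module names here are placeholders, not from the question):

```erlang
%% ebin/my_app.app (or src/my_app.app.src when using rebar)
{application, my_app,
 [{description, "My application"},
  {vsn, "1.0.0"},
  {registered, []},
  %% Regular dependencies: the application controller ensures these
  %% are started, in order, before my_app, and each keeps its own
  %% top-level supervisor (shown as a separate application in Observer).
  {applications, [kernel, stdlib, some_other_app]},
  {mod, {my_app_app, []}}]}.
```

With this in place, starting my_app with the permanent start type refuses to proceed until some_other_app is running, and an abnormal termination of a permanent application brings the node down rather than leaving it half-alive.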
I had a similar question: how do you include dependencies in an Erlang project, and then how do you release them?
I had some help from various friends and the Erlang mailing list, and after re-reading some docs and more trial and error, I figured some stuff out. It's long, so check out the gist:
https://gist.github.com/3728780
-Todd
I've been developing a web application, and a lot of customers are asking if they can host the application in their network (for security reasons). I have been looking for a way to package up a Rails app into a single executable (with server and all), but haven't been able to find anything. My other requirement is that we distribute it without the source. Because of that I was looking at JRuby and Warbler. The end product should run on Linux or Windows. Has anyone done anything like this before, or can anyone point me in the right direction?
Thanks
My best guess would be to use JRuby and the JRubyCompiler, although I have no idea whether you could compile a whole Rails project (including all the required gems). I got it to compile a small Ruby script, though. Anyway, if you succeed, you could package the result in a jar or war and deploy that as a contained application.
It doesn't sound like you necessarily need to package it as an executable, as long as the code is obfuscated. I personally haven't needed to protect any of my code, but a quick Google search returned this product: http://rubyencoder.com/. I'm sure there are others out there, but the basic idea is that your code is unreadable and cannot be reverse engineered. This would allow you to run a standard Rails environment without giving access to your source code.
If you have the budget and really want to outsource this, the GitHub guys partnered with BitRock to build their cross-platform installable product (GitHub Firewall Install). BitRock has a case study on their website.
Help! I'm porting a large ruby app to Grails - but the Grails startup of my application takes more than 2 minutes.
I've already set dbCreate to "read", and I've ensured my high-end dual-processor Windows desktop gives Grails the RAM it needs (1 GB). I have no plugins installed. I have 170 domain classes that used to be Ruby classes.
When it starts up it prints out the line "Running Grails App.." and then hangs for a long time before it then prints out the "Server running" line.
I just did something where I migrated all my ids to bigints. That seems to have worsened the problem: now it takes about 10 minutes to start up.
I am new to Grails; could you please give me a few more details on what events to log at startup, and where? As for profiling the VM, it's been a few years since I did much Java. What do you recommend as the best profiling tool to use now?
What else can I do to speed up Grails startup?
Unfortunately, I am not sure much can be done beyond what you already did. As you know, there is a lot going on at startup: resolving and loading plugins, adding dynamic methods to your domain objects, and the overall dynamic nature of Groovy.
I am not sure which version you are using, but I've asked for the ability to turn off dependency checking at startup in 1.2, since that adds a bunch of time to startup as well.
I realize above isn't too helpful, so perhaps this will be: I split up my application into several plugins. One for domain objects, one for graphing capability, one for excel import, another for some UI constructs I needed. I didn't do it just because of slow startup times, but the advantage is that I can test parts of the system separately from each other before integrating everything together.
I am about to add a piece of new functionality that involves at least 10 new domain objects, and I am first developing them in a separate plugin by having stubs for the few objects they have to interact with from the core app. That allows me to both reduce startup times, and also have my code better isolated.
So if it's an option for you, try to separate out things so you can work on them separately, which will alleviate your issue somewhat. There may also be other benefits in terms of having your team work on smaller components separately, better modularization, etc.
Hope this is helpful.
170 domain classes is fairly large, but 2 minutes still seems really long to me. Do you have a ton of plugins installed? Perhaps overly verbose debug settings?
I'd be curious how long it would take if you created a fresh Grails app, copied in all of your domain objects (and the subset of plugins the domain objects actually need to operate), and saw how long that takes to start.
Jean's suggestion about separating things out if possible is a good one. I've done something similar on previous projects where we have a domain plugin, and our other apps all rely on that domain plugin.
You could also use the grails events to log some timing information on start up to see where your bottlenecks are. Timing the "PluginInstalled" event should be good as I think that the hibernate plugin would be caught by this in addition to the other plugins.
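If you want to try that, the usual place for build event hooks is a scripts/_Events.groovy file in your project. The exact event names available depend on your Grails version, so treat this as a sketch rather than a verified hook list:

```groovy
// scripts/_Events.groovy -- build event hooks picked up by Grails.
// Assumption: a PluginInstalled event fires once per plugin; adjust
// the handler name to whatever events your Grails version emits.
def lastTimestamp = System.currentTimeMillis()

eventPluginInstalled = { pluginName ->
    long now = System.currentTimeMillis()
    println "Plugin ${pluginName} loaded after ${now - lastTimestamp} ms"
    lastTimestamp = now
}
```

Even rough per-plugin timings like this are usually enough to tell whether the time is going into one heavyweight plugin (such as Hibernate mapping 170 domain classes) or is spread evenly.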
You may have a dependency problem. If a plugin you use relies on a library in Maven that has 'open ended' dependencies, Grails will go and look each time to see if there are newer versions in the range to download. I have no idea why anyone would specify it like this; it seems it would lead to unreliable behaviour. For me, the culprit is Amazon's Java AWS library, naturally used by a plugin that talks to Amazon's cloud.
http://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk/1.2.10
Note how some of its dependencies are declared like this:
org.apache.httpcomponents httpclient [4.1, 5.0)
It appears that every time, Grails looks for a newer version (and downloads it if one exists; I just noticed 4.2-alpha1 of httpclient come down when I ran it this time).
By removing that dependency from the plugin and manually adding the required libraries to my lib folder, I reduced my startup time from over 30 seconds to under 1 second.
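An alternative to patching the plugin is to exclude the open-ended transitive dependency and pin it in your build configuration. A rough sketch of what that looks like in a Grails BuildConfig.groovy (the coordinates and versions are examples, not necessarily the exact ones the AWS plugin needs):

```groovy
// grails-app/conf/BuildConfig.groovy
grails.project.dependency.resolution = {
    dependencies {
        // Exclude the transitive dependency declared with the
        // open-ended [4.1, 5.0) range, then pin it to a fixed
        // version so the resolver stops re-checking for newer
        // releases on every startup.
        runtime('com.amazonaws:aws-java-sdk:1.2.10') {
            excludes 'httpclient'
        }
        runtime 'org.apache.httpcomponents:httpclient:4.1.2'
    }
}
```

Pinning keeps the fix in version control instead of in hand-copied jars, so teammates and CI get the same resolution behaviour.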
You might want to see if there are other knobs you can turn other than Grails in order to fix this.
Have you tried approaching this as a performance issue? You can take a look at the box's performance and try to find out what the bottleneck is. Is it CPU? Is it disk reads? Can you attach a profiler to the VM and find out what's using up most of your startup time?
Have you tried basics like these, for further deployment to a servlet container of your choice or for in-place .war bootstrapping?
grails -Ddisable.auto.recompile=true run-app
grails run-war
grails war