I am working on a multi-threaded batch program where I have to use embedded connection pooling. I cannot use a server-managed connection pool because I cannot deploy the program as an app client, as there are a few restrictions on the client side.
I thought of using Apache Commons DBCP 2.x, but while researching it I found many Stack Overflow posts and blogs where people reported instability in Apache Commons DBCP. That said, during my own testing I never faced any issues in the DEV environment.
Maybe this question has been asked in many forums, but I am confused and really need expert advice on this:
1) Is Apache Commons DBCP 2.x Stable for Production?
2) Should I choose another connection pool such as c3p0, BoneCP, etc.?
Thanks in advance!
1. DBCP2 is used in production. It requires a good understanding of its parameters to get it right for your purpose (a configuration sketch follows below).
2. c3p0 is big (as in number of classes); BoneCP is deprecated in favor of HikariCP.
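For illustration, a minimal sketch of a DBCP2 pool suited to a multi-threaded batch job; the driver, URL, credentials, and pool sizes are placeholders to tune for your workload:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.apache.commons.dbcp2.BasicDataSource;

public class BatchPool {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("org.postgresql.Driver");    // placeholder driver
        ds.setUrl("jdbc:postgresql://localhost:5432/db");  // placeholder URL
        ds.setUsername("batch");
        ds.setPassword("secret");
        // The sizing parameters are what "getting it right" is about:
        // maxTotal bounds concurrent connections, maxWaitMillis bounds how
        // long a worker thread blocks when the pool is exhausted.
        ds.setMaxTotal(16);
        ds.setMinIdle(4);
        ds.setMaxWaitMillis(5000);
        ds.setValidationQuery("SELECT 1");
        ds.setTestOnBorrow(true);

        // Each worker thread borrows a connection and returns it on close()
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1")) {
            ps.execute();
        }
        ds.close();
    }
}
```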
Hope this helps.
I am working on a Spring Boot project which uses Hazelcast as a cache. I am using the community edition. I have a couple of questions:
I wanted to know whether the community edition provides any minimal security features. I know that we can set a unique group name so that other nodes cannot join the cluster, but is there any other way?
I also tried the hazelcast.application.validation.token property, but it is not working. What is the correct way to use this property?
Also, Hazelcast's TCP communication is not blocked by Spring Boot. Is there any way to add some security to Hazelcast through Spring Security?
I suppose you're using Hazelcast 4.0 or later. The property hazelcast.application.validation.token was removed in version 4.
Maybe you've already looked into this answer - it's related to Hazelcast 3.y versions. Some info is still valid though.
The basic protection approach in Hazelcast 4 (open source) is to set different cluster names (the equivalent of the group name in Hazelcast 3).
You can use the advanced network feature, which allows you to have separate port numbers for different protocols (member protocol, client protocol, REST, ...). Then you can use OS-level protection, such as a firewall, to protect these endpoints.
You can also disable binding server sockets to all network interfaces (the default behavior) and control which interface is used, as sketched below.
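A minimal programmatic sketch of those two measures (the cluster name and interface address below are placeholders):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ProtectedMember {
    public static void main(String[] args) {
        Config config = new Config();
        // Hazelcast 4 equivalent of the 3.x group name: members with a
        // different cluster name cannot join this cluster
        config.setClusterName("my-protected-cluster");
        // Bind the server socket to one interface only, instead of the
        // default behavior of binding to all of them
        config.getNetworkConfig().getInterfaces()
              .setEnabled(true)
              .addInterface("10.0.0.1");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```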
I don't think Spring Security provides a feature which would help you protect Hazelcast endpoints, but I'm not a Spring expert, so maybe I'm wrong.
Our future architecture is to move towards Docker/microservices. Currently we are using JBoss EAP 6.4 (with the potential to upgrade to EAP 7) and Tomcat.
In my opinion, a Java EE container is too heavy (slow, more memory, higher maintenance, etc.) for a microservices environment. However, I was told that EAP 7 is quite fast and lightweight and can be used for developing microservices. What is your input in deciding between EAP 7 and Tomcat 8 for Docker/microservices? Cost and speed would be considerations.
EAP 7 vs Tomcat 8 is an age-old question, answered multiple times here, here and here.
Tomcat is only a web container, whereas EAP 7 is an application server that provides all Java EE 7 features such as persistence, messaging, web services, security, management, etc. EAP 7 comes in two profiles: Web Profile and Full Profile. The Web Profile is a much trimmer version and includes only the implementations typically required for building a web application. The Full Profile, as you'd expect, contains the full glory of the platform. So using the EAP 7 Web Profile will help you cut down the bloat quite a bit.
With Tomcat, you'll have to use something like Spring to bring in the equivalent functionality and package all the relevant JARs with your application.
These discussions are typically helpful when you are starting a brand-new project and have both Java EE and Spring resources at hand. Here are the reasons you may consider using EAP 7:
You are already using EAP 6.4. Migrating to EAP 7 would be seamless. Using Docker would be just a different style of packaging your applications. All your existing monitoring, clustering, and logging would continue to work. If you were to go with Tomcat, then you'd have to learn the Spring way of doing things. If you have the time and resources and are willing to experiment, you can go that route too. But think about what you want to gain out of it.
EAP 7 is optimized for container and cloud deployments. In particular, it is available as a service with OpenShift, so you know it works out of the box.
EAP 7 will give a decent performance boost in terms of latency and throughput over EAP 6.4. Read https://access.redhat.com/articles/2607521 for more details.
You may also consider TomEE. It provides a Java EE stack integrated with Tomcat.
Another option, as @Federico recommended, is WildFly Swarm. Then you can really start customizing which parts of the Java EE platform you want, and your deployment model is a JAR file; see the sketch below.
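As a sketch only, this is WildFly Swarm's documented bootstrap pattern (the class name is made up; which fractions actually start depends on your project's dependencies):

```java
import org.wildfly.swarm.Swarm;

public class Main {
    public static void main(String[] args) throws Exception {
        // Boots only the fractions (subsystems) the application depends on,
        // then deploys the project's own archive into that container
        Swarm swarm = new Swarm(args);
        swarm.start();
        swarm.deploy();
    }
}
```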
As for packaging using Docker, they all provide a base image and you need to bundle your application in it. Here are a couple of important considerations for using a Docker image for microservices:
Size of the Docker image: A container may die unexpectedly, or the orchestration framework may decide to reschedule it on a different host. A bigger image will take that much longer to download, which means the perceived startup time of your service will be longer, and dynamic scaling of the app will take longer to become effective.
Boot-up time of the image: After the image is downloaded, the container may start up quickly, but how long does it take for the application to be "ready"?
As a personal note, I'm much more familiar with the Java EE stack than with Tomcat/Spring, and WildFly continues to be my favorite application server.
Besides using traditional application servers, which are not really that heavy, you can taste a different flavor of Java EE, called microcontainers.
Java EE is just a set of standards. A standard results in an API specification, and everyone is then free to implement that specification. An application server (AS) is mainly a fine-tuned collection of this functionality. Those APIs were not brought to life for no reason; they represent functionality commonly used in projects. An application server can be viewed as a "curated set" of those functionalities. This approach has many advantages: an AS has many users and is therefore well tested over time, whereas wiring the functionality together on your own may result in bugs.
Anyhow, a new age has come where, with Docker, the application carries its dependencies with it. The need for a full-blown application server with all the functionality ready to be served to applications is gone in many cases. In times past, the application server did not know exactly which services the deployed applications would need, so everything was bundled in. Some of the more innovative ASes, like WildFly, instantiate only the services required. Also, there are Java EE profiles which eased the monolithic application server a little bit.
Right now, we usually ship the application together with its dependencies (JDK, libraries, AS) inside a Docker image, or we're heading there. Therefore, an effort to bundle exactly the right amount of functionality is a logical choice. But, and it is a big "but", the need for the functionality of the AS is still relevant. It is still a good idea to develop common functionality based on standards and common effort. It just no longer seems to be an option to distribute it as one big package, potentially leaving most of the APIs inactive. This effort has many names, be it microcontainers, uberjar creators, and so on:
WildFly Swarm
Payara Micro
Spring Boot*
WebSphere Liberty
Apache TomEE
These are Java EE servers so light that it is doubtful you would want to use anything else.
* Spring Boot is not based on Java EE; in the default configuration presented in the Getting Started guide, Tomcat is used internally.
The key point is: your Java EE application should be developed as an independent Java EE application. Wrapping it with "just enough" functionality is delegated to these micro solutions. This is, at least in my humble opinion, the right way to go, because you retain compatibility with both full-blown ASes and micro solutions. The uber-jar containing all the dependencies can be created during or after the build process.
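As a minimal illustration of such an independent application, here is a plain JAX-RS endpoint (the class names are made up); the same WAR deploys unchanged to a full application server or gets wrapped by one of the micro solutions above:

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Application;

// Activates JAX-RS under /api; no web.xml needed
@ApplicationPath("/api")
public class RestActivator extends Application {
}

// Would live in its own file in a real project; discovered via @Path
@Path("/ping")
class PingResource {
    @GET
    public String ping() {
        return "pong";
    }
}
```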
WildFly Swarm and Payara Micro are able to "scan" the application and run only the services required. For a real-world application, the memory footprint in production can be as low as 100 MB. This is probably what you want. Spring Boot can do similar things if you require Spring; however, from my experience, Spring Boot is much more heavyweight and memory-hungry than modern Java EE, because it obviously has Spring inside. So if you are seeking lightness in terms of memory consumption, try Java EE, especially WildFly Swarm (or pure WildFly) and Payara Micro. Those are my favorite ASes, and they can be really, really small. I would say WildFly Swarm is much easier to start with; Payara Micro requires more reading but offers interesting functionality. Both can work as a wrapper: you can just wrap your current project with them after the build phase, no need to change anything.
Payara Micro even provides ready-to-use Docker images! As you can see, Java EE is mature and ready to enter Docker lands :)
One very good and reliable resource is Adam Bien, for example his Java EE micro/nanoservices video. Have a look.
I'll be developing a stand-alone Java application acting as a JMS client. I want to make sure that every time I send a message to a queue, I do not have to create a session, connection, etc.
I was thinking of using the CachedConnectionFactory which comes with Apache Camel, or the solution Spring provides. Still, as far as I know, the limitation of the former is that it is not suitable for transactions, and of the latter, that it may not behave correctly in case of failover.
In one post (http://stackoverflow.com/questions/8922339/how-to-pooling-the-jms-connection-in-a-standalone-java-applications) it was suggested to use the Apache Commons Pool component, but I don't think creating such a pool would be a trivial task anyway.
Any comments on that?
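For reference, the Spring solution referred to above is CachingConnectionFactory; a minimal sketch, with ActiveMQ's connection factory standing in for any JMS provider (the broker URL and queue name are placeholders):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class JmsSender {
    public static void main(String[] args) {
        // Placeholder broker URL; any provider's ConnectionFactory works here
        ActiveMQConnectionFactory target =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Caches the connection and a configurable number of sessions, so
        // repeated sends do not re-create connection/session/producer
        CachingConnectionFactory caching = new CachingConnectionFactory(target);
        caching.setSessionCacheSize(10);

        JmsTemplate jmsTemplate = new JmsTemplate(caching);
        jmsTemplate.convertAndSend("my.queue", "hello");

        caching.destroy(); // closes the cached connection on shutdown
    }
}
```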
I've not been doing bare metal TCP/IP for about 18 months, so I wonder what the current state of the art is.
I'm looking for both positive and negative aspects, with development of both server and client software.
I will be doing a project that needs a rock-solid TCP/IP layer, so for me that is an important aspect :)
For this to become a community wiki, I'm looking for broader answers than just "rock solid". So, for instance, information about the breadth of features is also appreciated.
I'll be updating the question with relevant aspects found in the answers in order to get a wiki entry that has a balanced overview of those libraries.
For example, see my answer below with my past experience with Indy.
Right now I develop in both Delphi 2007 (non Unicode) and XE (Unicode), so the libraries I'm considering should support at least those two Delphi versions.
Edit: Summary of my past experience with Indy, and the comments (thanks Eugene, Marjan)
(please update with the current Indy state of the art):
Pro:
ships with Delphi
mature
development community
open source so lots of eyes scrutinizing those sources
a truckload of valuable comment documentation in the source code
OpenSSL support
supports a broad set of Delphi versions (including 2007 and XE)
wide choice of protocols
Con:
the version shipping with Delphi was not always the most stable one; downloading the sources was usually required to get a stable build
(in the meantime) lots of duplication of code that is now in Delphi itself (but which Indy requires for compatibility with older Delphi versions)
not all TCP/IP components were up to date (for instance, back then the POP3 client component did not support some basic POP3 commands)
version interoperability was a pain: upgrading from one Indy version to another could be very time-consuming
I'm ambivalent about the exception handling and anti-freeze in Indy; though I got used to it, it still felt somewhat unnatural
breaking changes are made between build updates; ifdefs are required to accommodate those
unclear release status, if any at all; no RCs for a long while, and getting trunk can make your local copy unstable
ICS - The Internet Component Suite
ICS - see www.overbyte.be. Open source, by François Piette. To me this has always been the number 1 alternative to Indy. Its most interesting selling point: it makes asynchronous work easy, and being async seems to be closer to "bare metal" sockets programming.
I've used it to build a fairly complex VNC proxy where the proxy itself (the server) is built with ICS and the clients are a mixture of Indy and ICS. In periods of high demand the proxy handles about 100 simultaneous connections and about 10 simultaneous VNC screen sessions. It eats up an average of 5 Mbit/s and handles connections over two distinct Internet connections. I don't think the 100 + 10 is the limit, because the server handles that without any problems and CPU usage is too low to mention.
Pros:
Works asynchronously
Somewhat easier on beginners because it doesn't need threads
Supports a good number of protocols
Cons:
Relies on Windows messaging. I'm simply not comfortable with that.
The async behavior makes implementing most protocols slightly more difficult (because most protocols are in the form of send command / receive response). This shouldn't matter for most people, since ICS offers ready-made implementations of the most-used protocols.
All that being said, I haven't used ICS in a very long time, I'm not up-to-date with all the bells and whistles. This is CW, so please edit and expand!
I have used Indy since 2003 for my own TCP communications framework. It is rock-solid. I have a version used with Delphi 2007 and another with Delphi 2010. If you handle the threading correctly, there is no need to use the anti-freeze stuff, and I have consistent exception handling on the client and the server by implementing my own wrapper around it.
You can download it here (http://www.csinnovations.com/framework_delphi.htm) - look for the Tcp units, mainly AppTcpServerUnt and AppTcpClientUnt.
I would strongly recommend Clever Internet Suite; it's by far the best-designed and best-written set of communication components. It's not free, and so not that well known, but it's well worth investigating.
Pro:
well designed and written
contains many components and implements various protocols.
supports a broad set of Delphi versions (including 2007 and XE)
SSL support
mature product as the release history indicates
Con:
not open source
You could consider using a higher protocol level like HTTP, because:
It's more firewall and VPN friendly;
It's well documented and known as a good protocol;
It already has secured HTTPS version;
It has very low overhead over raw TCP/IP;
It's ready to use in an AJAX environment (if you need it in the future);
Microsoft has already done the low-level tuning for you in modern versions of Windows.
In this case, you could take a look at two Open Source classes working from Delphi 6 up to XE:
THttpApiServer, which implements an HTTP server using the fast http.sys kernel-mode server:
The HTTP Server API enables applications to communicate over HTTP without using Microsoft Internet Information Server (IIS). Applications can register to receive HTTP requests for particular URLs, receive HTTP requests, and send HTTP responses. The HTTP Server API includes SSL support so that applications can exchange data over secure HTTP connections without IIS. It is also designed to work with I/O completion ports.
The HTTP Server API is supported on Windows Server 2003 operating systems and on Windows XP with Service Pack 2 (SP2). Be aware that Microsoft IIS 5 running on Windows XP with SP2 is not able to share port 80 with other HTTP applications running simultaneously.
TWinHTTP, which handles client-side HTTP/1.1 requests using the WinHTTP API:
Microsoft Windows HTTP Services (WinHTTP) is targeted at middle-tier and back-end server applications that require access to an HTTP client stack. It is much faster than the older WinINet API.
The resulting speed is very good (especially for the server), and you will rely on Microsoft's implementation: the first is the core of IIS, and the second is used in the latest versions of Internet Explorer.
The answer really depends on many factors and your requirements, such as
what layers are needed (TCP, SSL/TLS, application-level protocols)
whether you need a client or a server as well (a server is a much more complicated task)
whether you count paid options.
In general, not much (positive) has happened in 18 months, or even in 3 years, as most developers look at .NET as the primary development platform.
The Clever Internet Suite mentioned in another answer and DevArt's SecureBridge have gained some new functionality.
Our SecureBlackbox offers support for the most advanced features (besides native SSL/TLS): IPv6, HTTPS Proxy with basic, digest and NTLM authentication (starting with SecureBlackbox 9), International Domain Names (starting with SecureBlackbox 9), DNSSEC, bandwidth control and more.
Application-level protocols supported by SecureBlackbox are HTTP (client and server), WebDAV (client and server), FTP (client and server), SSH and SFTP (client and server), SMTP and POP3 clients, DNS client, AS2 and AS3. All of the protocols (besides SSH and SFTP, of course) have complete support for SSL/TLS.
The list of supported protocols can be found on Packages page. Supported protocol features are listed on Technical Specification page for each package.
Worked with the NetMaster components way (way!) back in the old Delphi versions (2! 3! 4!).
Did some work with Indy, but had the unnatural feeling also (actually I'd describe it more as bulky).
Stumbled upon Synapse when I was searching for just a light wrapper around the Windows network API.
And then rediscovered plain old TTcpClient/TTcpServer. They are Delphi's own wrappers around WinSock! I use them blocking, with a dedicated TThread descendant for each TTcpClient, and let TTcpServer do the threads and do all the work in DoAccept; see here for an example.
This, for now, gives me the rock-solid feel we're looking for. If you want to support heavy load, I would try to build a thread manager that handles several sockets/connections per thread, or have two sets of threads: a few that listen to a larger number of "dormant" connections, and the others that handle fewer "active" connections, switching connections between the threads depending on whether a request or response is being handled (e.g. HTTP's Connection: keep-alive).
We're running a Rails site at http://hansard.millbanksystems.com, on a dedicated Accelerator. We currently have Apache setup with mod-proxy-balancer, proxying to four mongrels running the application.
Some requests are rather slow, and in order to prevent other requests from getting queued up behind them, we're considering proxying options that will direct requests to an idle Mongrel if there is one.
Options appear to include:
recompiling mod_proxy_balancer for Apache as described at http://labs.reevoo.com/
compiling nginx with the fair proxy balancer for Solaris
compiling haproxy for Open Solaris (although this may not work well with SMF)
Are these reasonable options? Have we missed anything obvious? We'd be very grateful for your advice.
Apache is a bit of a strange beast to use for your balancing. It's certainly capable but it's like using a tank to do the shopping.
Haproxy/Nginx are more specifically tailored for the job. You should get higher throughput and use fewer resources at the same time.
HAProxy offers a much richer set of features for load-balancing than mod_proxy_balancer, nginx, and pretty much any other software out there.
In particular for your situation, the log output is highly customisable so it should be much easier to identify when, where and why slow requests occur.
Also, there are a few different load distribution algorithms available, with nice automatic failover capabilities too.
37Signals have a post on Rails and HAProxy here (originally seen here).
If you want to avoid Apache, it is possible to deploy a Mongrel cluster with an alternative web server, such as nginx or lighttpd, and a load balancer of some variety, such as Pound or a hardware-based solution.
Pound (http://www.apsis.ch/pound/) worked well for me!
The only issue with haproxy and SMF is that you can't use its soft-restart feature to implement the 'refresh' action unless you write a wrapper script. I wrote about that in a bit more detail here.
However, in my experience haproxy has been absolutely bomb-proof on Solaris, and I would recommend it highly. We ship anything from a few hundred GB to a couple of TB a day through a single haproxy instance on Solaris 10, and so far (touch wood) in 2+ years of operation we've not had any problems with it.
Pound is an HTTP load balancer that I've used successfully in the past. It includes a dynamic scaling feature that may help with your specific problem:
DynScale (0|1): Enable or disable the dynamic rescaling code (default: 0). If enabled, Pound will periodically try to modify the back-end priorities in order to equalise the response times from the various back-ends. This value can be overridden for specific services.
Pound is small, well documented, and easy to configure.
I've used mod_proxy_balancer + mongrel_cluster successfully (for a small-traffic website).