.war on tomcat on docker, 404 on servlet - docker

I finally found the motivation to work with Docker: I tried to deploy a basic "hello-world" servlet on a Tomcat running in a Docker container.
The servlet works perfectly when I run it on the Tomcat started by IntelliJ.
But when I use it with Docker, using this Dockerfile:
FROM tomcat:latest
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
And I build/start the image/container:
docker build -t example .
docker run -p 8090:8080 example
The index.jsp is displayed correctly at localhost:8090/example/, but I get a 404 when trying to access the servlet at localhost:8090/example/hello-servlet
At the same time, I can access localhost:8080/example/hello-servlet, when my non dockerized tomcat runs, and it works well.
Here is the servlet code:
package io.bananahammock.bananahammock_backend;

import java.io.*;
import javax.servlet.http.*;
import javax.servlet.annotation.*;

@WebServlet(name = "helloServlet", value = "/hello-servlet")
public class HelloServlet extends HttpServlet {
    private String message;

    public void init() {
        message = "Hello World!";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>" + message + "</h1>");
        out.println("</body></html>");
    }

    public void destroy() {
    }
}
What am I missing?

Since August 31, 2021 (this commit) the Docker image tomcat:latest uses Tomcat 10 (see the list of available tags).
As you are probably aware, software which uses the javax.* namespace does not work on Jakarta EE 9 servers such as Tomcat 10 (see e.g. this question). Therefore:
if it is a new project, migrate to the jakarta.* namespace and test everything on Tomcat 10 or higher,
if it is a legacy project, use another Docker image, e.g. the tomcat:9 tag.
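For the legacy route, the only change needed is pinning the base image in the Dockerfile above; everything else stays the same:

```dockerfile
# Tomcat 9 still serves the javax.* servlet API, so the unchanged .war deploys as-is
FROM tomcat:9
ADD example.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
```

Rebuild and rerun with the same docker build / docker run commands as before, and /example/hello-servlet should resolve.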

Related

Spring Boot Admin with Eureka and Docker Swarm

EDIT/SOLUTION:
I've got it, partly thanks to @anemyte's comment. Although the eureka.hostname property was not the issue at play (though it did warrant correction), looking more closely led me to the true cause of the problem: the network interface in use, port forwarding and (bad) luck.
The services that I chose for this prototypical implementation were those that have port-forwardings in a production setting (I must have unfortunately forgotten to add a port-forwarding to the example service below - dumb, though I do not know if this would have helped).
When a Docker Swarm service has a port forwarding, the container has an additional bridge interface in addition to the overlay interface which is used for internal container-to-container communication.
Unfortunately, the client services were registering with Eureka using their bridge interface IP as the advertised IP instead of the internal swarm IP - possibly because that is what InetAddress.getLocalHost() (which is used internally by Spring Cloud) returns in that case.
This led me to erroneously believe that Spring Boot Admin could reach these services - as I externally could, when it in fact could not as the wrong IP was being advertised. Using cURL to verify this only compounded the confusion as I was using the overlay IP to check whether the services can communicate, which is not the one that was being registered with Eureka.
The (provisional) solution to the issue was setting the spring.cloud.inetutils.preferred-networks setting to 10.0, which is the default IP address pool (more specifically: 10.0.0.0/8) for the internal swarm networks (documentation here). There is also a blacklist approach using spring.cloud.inetutils.ignored-networks, however I prefer not to use it.
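Concretely, the fix boils down to this one property (shown here in application.yaml form; the 10.0 prefix is the Swarm default pool mentioned above and may differ in your setup):

```yaml
spring:
  cloud:
    inetutils:
      # Only advertise addresses from the swarm overlay pool (10.0.0.0/8 by default)
      preferred-networks: 10.0
```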
In this case, client applications advertised their actual swarm overlay IP to Eureka, and SBA was able to reach them.
I do find it a bit odd that I did not get any error messages from SBA, and will be opening an issue on their tracker. Perhaps I was simply doing something wrong.
(original question follows)
I have the following setup:
Service discovery using Eureka (with eureka.client.fetch-registry=true and eureka.instance.preferIpAddress=true)
Spring Boot Admin running in the same application as Eureka, with spring.boot.admin.context-path=/admin
Keycloak integration, such that:
SBA itself uses a service account to poll the various /actuator endpoints of my client applications.
The SBA UI itself is protected via a login page which expects an administrative login.
Locally, this setup works. When I start my eureka-server application together with client applications, I see the following correct behaviour:
Eureka running on e.g. localhost:8761
Client applications successfully registering with Eureka via IP registration (eureka.instance.preferIpAddress=true)
SBA running at e.g. localhost:8761/admin and discovering my services
localhost:8761/admin correctly redirects to my Keycloak login page, and login correctly provides a session for the SBA UI
SBA itself successfully polling the /actuator endpoints of any registered applications.
However, I have issues replicating this setup inside a Docker Swarm.
I have two Docker Services, let's say eureka-server and client-api - both are created using the same network and the containers can reach each other via this network (via e.g. curl). eureka-server correctly starts and client-api registers with Eureka right away.
Attempting to navigate to eureka_url/admin correctly shows the Keycloak login page and redirects back to the Spring Boot Admin UI after a successful login. However, no applications are registered and I cannot figure out why.
I've attempted to enable more debug/trace logging, but I see absolutely no logs; it's as if SBA is simply not fetching the Eureka registry.
Does anyone know of a way to troubleshoot this behaviour? Has anyone had this issue?
EDIT:
I'm not quite sure which settings may be pertinent to the issue, but here are some of my configuration files (as code snippets since they're not that small, I hope that's OK):
application.yaml
(Includes base eureka properties, SBA properties, and Keycloak properties for SBA)
---
eureka:
  hostname: localhost
  port: 8761
  client:
    register-with-eureka: false
    # Registry must be fetched so that Spring Boot Admin knows that there are registered applications
    fetch-registry: true
    serviceUrl:
      defaultZone: http://${eureka.hostname}:${eureka.port}/eureka/
  instance:
    lease-renewal-interval-in-seconds: 10
    lease-expiration-duration-in-seconds: 30
  environment: eureka-test-${user.name}
  server:
    enable-self-preservation: false # Intentionally disabled for non-production
spring:
  application:
    name: eureka-server
  boot:
    admin:
      client:
        prefer-ip: true
      # Since we are running in Eureka, "/" is already the root path for Eureka itself
      # Register SBA under the "/admin" path
      context-path: /admin
  cloud:
    config:
      enabled: false
  main:
    allow-bean-definition-overriding: true
keycloak:
  realm: ${realm}
  auth-server-url: ${auth_url}
  # Client ID
  resource: spring-boot-admin-automated
  # Client secret used for service account grant
  credentials:
    secret: ${client_secret}
  ssl-required: external
  autodetect-bearer-only: true
  use-resource-role-mappings: false
  token-minimum-time-to-live: 90
  principal-attribute: preferred_username
build.gradle
// Versioning / Spring parent POMs
apply from: new File(project(':buildscripts').projectDir, '/dm-versions.gradle')

configurations {
    all*.exclude module: 'spring-boot-starter-tomcat'
}

ext {
    springBootAdminVersion = '2.3.1'
    keycloakVersion = '11.0.2'
}

dependencies {
    compileOnly 'org.projectlombok:lombok'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-server'
    implementation "de.codecentric:spring-boot-admin-starter-server:${springBootAdminVersion}"
    implementation 'org.keycloak:keycloak-spring-boot-starter'
    implementation 'org.springframework.boot:spring-boot-starter-security'
    compile "org.keycloak:keycloak-admin-client:${keycloakVersion}"
    testCompileOnly 'org.projectlombok:lombok'
}

dependencyManagement {
    imports {
        mavenBom "org.keycloak.bom:keycloak-adapter-bom:${keycloakVersion}"
    }
}
The actual application code:
package com.app.eureka;
import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
@EnableAdminServer
@EnableEurekaServer
@SpringBootApplication
public class EurekaServer {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServer.class, args);
    }
}
Keycloak configuration:
package com.app.eureka.keycloak.config;
import de.codecentric.boot.admin.server.web.client.HttpHeadersProvider;
import org.keycloak.KeycloakPrincipal;
import org.keycloak.KeycloakSecurityContext;
import org.keycloak.OAuth2Constants;
import org.keycloak.adapters.springboot.KeycloakSpringBootProperties;
import org.keycloak.adapters.springsecurity.KeycloakConfiguration;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.keycloak.adapters.springsecurity.token.KeycloakAuthenticationToken;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.http.HttpHeaders;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistry;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import java.security.Principal;
import java.util.Objects;
@KeycloakConfiguration
@EnableConfigurationProperties(KeycloakSpringBootProperties.class)
class KeycloakConfig extends KeycloakWebSecurityConfigurerAdapter {

    private static final String X_API_KEY = System.getProperty("sba_api_key");

    @Value("${keycloak.token-minimum-time-to-live:60}")
    private int tokenMinimumTimeToLive;

    /**
     * {@link HttpHeadersProvider} used to populate the {@link HttpHeaders} for
     * accessing the state of the discovered clients.
     *
     * @param keycloak
     * @return
     */
    @Bean
    public HttpHeadersProvider keycloakBearerAuthHeaderProvider(final Keycloak keycloak) {
        return provider -> {
            String accessToken = keycloak.tokenManager().getAccessTokenString();
            HttpHeaders headers = new HttpHeaders();
            headers.add("X-Api-Key", X_API_KEY);
            headers.add("X-Authorization-Token", "keycloak-bearer " + accessToken);
            return headers;
        };
    }

    /**
     * The Keycloak admin client that provides the service-account access token.
     *
     * @param props
     * @return the prepared admin client
     */
    @Bean
    public Keycloak keycloak(KeycloakSpringBootProperties props) {
        final String secretString = "secret";
        Keycloak keycloakAdminClient = KeycloakBuilder.builder()
                .serverUrl(props.getAuthServerUrl())
                .realm(props.getRealm())
                .grantType(OAuth2Constants.CLIENT_CREDENTIALS)
                .clientId(props.getResource())
                .clientSecret((String) props.getCredentials().get(secretString))
                .build();
        keycloakAdminClient.tokenManager().setMinTokenValidity(tokenMinimumTimeToLive);
        return keycloakAdminClient;
    }

    /**
     * Put the SBA UI behind a Keycloak-secured login page.
     *
     * @param http
     */
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
            .csrf().disable()
            .authorizeRequests()
            .antMatchers("/**/*.css", "/admin/img/**", "/admin/third-party/**").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .anyRequest().permitAll();
    }

    @Autowired
    public void configureGlobal(final AuthenticationManagerBuilder auth) {
        SimpleAuthorityMapper grantedAuthorityMapper = new SimpleAuthorityMapper();
        grantedAuthorityMapper.setPrefix("ROLE_");
        grantedAuthorityMapper.setConvertToUpperCase(true);
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(grantedAuthorityMapper);
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(buildSessionRegistry());
    }

    @Bean
    protected SessionRegistry buildSessionRegistry() {
        return new SessionRegistryImpl();
    }

    /**
     * Allows injecting a request-scoped wrapper for {@link KeycloakSecurityContext}.
     * <p>
     * Returns the {@link KeycloakSecurityContext} from the Spring
     * {@link ServletRequestAttributes}' {@link Principal}.
     * <p>
     * The principal must support retrieval of the KeycloakSecurityContext, so at
     * this point only {@link KeycloakPrincipal} and
     * {@link KeycloakAuthenticationToken} values are supported.
     *
     * @return the current <code>KeycloakSecurityContext</code>
     */
    @Bean
    @Scope(scopeName = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    public KeycloakSecurityContext provideKeycloakSecurityContext() {
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        Principal principal = Objects.requireNonNull(attributes).getRequest().getUserPrincipal();
        if (principal == null) {
            return null;
        }
        if (principal instanceof KeycloakAuthenticationToken) {
            principal = (Principal) ((KeycloakAuthenticationToken) principal).getPrincipal();
        }
        if (principal instanceof KeycloakPrincipal<?>) {
            return ((KeycloakPrincipal<?>) principal).getKeycloakSecurityContext();
        }
        return null;
    }
}
KeycloakConfigurationResolver
(separate class to prevent circular bean dependency that happens for some reason)
package com.app.eureka.keycloak.config;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class KeycloakConfigurationResolver {
    /**
     * Load Keycloak configuration from application.properties or application.yml
     *
     * @return
     */
    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }
}
Logout controller
package com.app.eureka.keycloak.config;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PostMapping;
import javax.servlet.http.HttpServletRequest;
@Controller
class LogoutController {
    /**
     * Logs the current user out, preventing access to the SBA UI
     * @param request
     * @return
     * @throws Exception
     */
    @PostMapping("/admin/logout")
    public String logout(final HttpServletRequest request) throws Exception {
        request.logout();
        return "redirect:/admin";
    }
}
I unfortunately do not have a docker-compose.yaml as our deployment is done mostly through Ansible, and anonymizing those scripts is rather difficult.
The services are ultimately created as follows (using docker service create):
(some of these networks may not be relevant as this is a local swarm running on my personal node, of note are the swarm networks)
dev@ws:~$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
3ba4a65c319f   bridge            bridge    local
21065811cbff   docker_gwbridge   bridge    local
ti1ksbdxlouo   services          overlay   swarm
c59778b105b5   host              host      local
379lzdi0ljp4   ingress           overlay   swarm
dd92d2f75a31   none              null      local
eureka-server Dockerfile:
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY eureka-server.jar /home/myuser/eureka-server.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom eureka-server.jar \
--server.port=8761; sh
Eureka/SBA app Docker swarm service:
dev@ws:~$ docker service create --name eureka-server -p 8080:8761 --replicas 1 --network services --hostname eureka-server --limit-cpu 1 --limit-memory 768m eureka-server
Client applications are then started as follows:
Dockerfile
FROM registry/image:latest
MAINTAINER "dev@com.app"
COPY client-api.jar /home/myuser/client-api.jar
USER myuser
WORKDIR /home/myuser
CMD /usr/bin/java -jar \
-Xmx523351K -Xss1M -XX:ReservedCodeCacheSize=240M \
-XX:MaxMetaspaceSize=115625K \
-Djava.security.egd=file:/dev/urandom -Deureka.instance.hostname=client-api client-api.jar \
--eureka.zone=http://eureka-server:8761/eureka --server.port=0; sh
And then created as Swarm services as follows:
dev@ws:~$ docker service create --name client-api --replicas 1 --network services --hostname client-api --limit-cpu 1 --limit-memory 768m client-api
On the client side, of note are the following eureka.client settings:
eureka:
  name: ${spring.application.name}
  instance:
    leaseRenewalIntervalInSeconds: 10
    instanceId: ${spring.cloud.client.hostname}:${spring.application.name}:${spring.application.instanceId:${random.int}}
    preferIpAddress: true
  client:
    registryFetchIntervalSeconds: 5
That's all I can think of right now. The created Docker services are running in the same network and can ping each other by IP as well as by hostname (I cannot show the output right now as I am not actively working on this at the moment, unfortunately).
In the Eureka UI I can, in fact, see my client applications registered and running - it is only SBA which does not seem to notice that there are any.
I found nothing wrong with the configuration you have presented. The only weak lead I see is eureka.hostname=localhost in application.yml. localhost and loopback IPs are two things that are better avoided with Swarm. I think you should check whether it isn't something network-related.

gRPC streaming procedure returning "Method is unimplemented." when running in Azure Container Instance

I have a gRPC service defined and implemented in .NET Core 3.1 using C#. I have a streaming call defined like so:
service MyService {
rpc MyStreamingProcedure(Point) returns (stream ResponseValue);
}
In the generated service base class it appears as:
public virtual global::System.Threading.Tasks.Task MyStreamingProcedure(global::MyService.gRPC.Point request, grpc::IServerStreamWriter<global::MyService.gRPC.ResponseValue> responseStream, grpc::ServerCallContext context)
{
    throw new grpc::RpcException(new grpc::Status(grpc::StatusCode.Unimplemented, ""));
}
In my service it is implemented by overriding this:
public override async Task MyStreamingProcedure(Point request, IServerStreamWriter<ResponseValue> responseStream, ServerCallContext context)
{
    /* magic here */
}
I have this building in a docker container, and when I run it on localhost it runs perfectly:
docker run -it -p 8001:8001 mycontainerregistry.azurecr.io/myservice.grpc:latest
Now here is the question. When I run this in an Azure Container Instance and call the client using a public IP address, the call fails with
Unhandled exception. Grpc.Core.RpcException: Status(StatusCode=Unimplemented, Detail="Method is unimplemented.")
at Grpc.Net.Client.Internal.HttpContentClientStreamReader`2.MoveNextCore(CancellationToken cancellationToken)
It appears that it is not seeing the override and is running the procedure in the base class. The unary call on the same gRPC service works fine using the container running in public ACI. Why would the streaming call behave differently on localhost and running over a public IP address?
I got the same error because I had not registered the service:
app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<MyService>();
});

Selenium Grid - How to shutdown the node after execution?

I am trying to implement a solution that would shut down the node running inside a Docker (Swarm) container after a test run.
I looked at the docker remove command, but cannot use docker container rm as the containers are managed at the service-task level.
I looked at the /lifecycle-manager API, but cannot reach the node from the client; the Docker stack runs behind an nginx server and only one port (4444) gets exposed.
Finally, I looked at extending the grid node (DefaultRemoteProxy). Excuse my bad Java code, this is my first stab at writing Java. With this, it looks like I can stop the node, but it re-registers with the hub.
How can I stop this re-registration process, or start the node without it?
My goal is to have a new container for every test and let the Docker orchestration bring up a new container when the node is shut down and its container gets removed (docker api https://docs.docker.com/engine/api/v1.24/)
public class ExtendedProxy extends DefaultRemoteProxy implements TestSessionListener {

    public ExtendedProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterCommand(TestSession session, HttpServletRequest request, HttpServletResponse response) {
        RequestType type = SeleniumBasedRequest.createFromRequest(request, getRegistry()).extractRequestType();
        if (type == STOP_SESSION) {
            System.out.println("Going to Shutdown the Node");
            GridRegistry registry = getRegistry();
            registry.stop();
            registry.removeIfPresent(this);
        }
    }
}
Hub
[DefaultGridRegistry.assignRequestToProxy] - Shutting down registry.
[DefaultGridRegistry.removeIfPresent] - Cleaning up stale test sessions on the unregistered node
[DefaultGridRegistry.add] - Registered a node
Node
[ActiveSessions$1.onStop] - Removing session de04928d-7056-4b39-8137-27e9a0413024 (org.openqa.selenium.firefox.GeckoDriverService)
[SelfRegisteringRemote.registerToHub] - Registering the node to the hub: http://localhost:4444/grid/register
[SelfRegisteringRemote.registerToHub] - The node is registered to the hub and ready to use
I figured out the solution. I am answering my own question, hoping it'd benefit the community.
Start the node with the registerCycle command-line flag set to 0. This stops the auto-registration thread from ever getting created.
-registerCycle 0
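For reference, a typical Selenium 3 standalone-node start command with the flag applied might look like this (the jar version and hub URL are placeholders for your own setup):

```shell
# -registerCycle 0 disables the periodic re-registration thread entirely
java -jar selenium-server-standalone-3.141.59.jar \
  -role node \
  -hub http://localhost:4444/grid/register \
  -registerCycle 0
```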
And in your class that extends DefaultRemoteProxy, override afterSession:
@Override
public void afterSession(TestSession session) {
    totalSessionsCompleted++;
    GridRegistry gridRegistry = getRegistry();
    for (TestSlot slot : getTestSlots()) {
        gridRegistry.forceRelease(slot, SessionTerminationReason.PROXY_REREGISTRATION);
    }
    teardown();
    gridRegistry.removeIfPresent(this);
}
When the client executes the driver.quit() method, the node de-registers from the hub.

scdf 1.7.3 docker k8s #Bean no running, no logs

As a user writing a processor as a cloud function (SCDF 1.7.3, Spring Boot 1.5.9, spring-cloud-function-dependencies 1.0.2):
public class MyFunctionBootApp {

    public static void main(String[] args) {
        SpringApplication.run(MyFunctionBootApp.class,
                "--spring.cloud.stream.function.definition=toUpperCase");
    }

    @Bean
    public Function<String, String> toUpperCase() {
        return s -> {
            log.info("received:=" + s);
            return ((s + "jsa").toUpperCase());
        };
    }
}
I've created a simple stream: time | function-runner | log
function-runner-0.0.6.jar at Nexus is OK
The Docker image is created OK:
Container entrypoint set to [java, -cp, /app/resources:/app/classes:/app/libs/*, function.runner.MyFunctionBootApp]
No time message from the time pod arrives at the function-runner processor executing the toUpperCase function
No logs
I am checking the deployment using app.function-runner.spring.cloud.stream.function.definition=toUpperCase, @FunctionalScan
Any clues?
We discussed function-runner being deprecated in favor of native support for Spring Cloud Function in Spring Cloud Stream. See: scdf-1-7-3-docker-k8s-function-runner-not-start. Please don't duplicate-post it.
Also, you're on a very old Spring Boot version (v1.5.9 - at least 1.5 years old). More importantly, Spring Boot 1.x is in maintenance-only mode and will be EOL by August 2019. See: spring-boot-1-x-eol-aug-1st-2019. It'd be good to upgrade to the latest 2.1.x.

get heap dump from command line using jmx

Is it possible to get a server heap dump of a running process on Linux (CentOS) using JMX from the command line?
I can't open VisualVM,
and can't install jmap.
It can be done with this simple code:
import com.sun.management.HotSpotDiagnosticMXBean;
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
@SuppressWarnings("restriction")
public class CreateHeapDump
{
    public static void main(String[] args) throws Exception
    {
        String host = args[0];
        String port = args[1];
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
        MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
        ObjectName mbeanName = new ObjectName("com.sun.management:type=HotSpotDiagnostic");
        HotSpotDiagnosticMXBean bean = JMX.newMBeanProxy(mbsc, mbeanName, HotSpotDiagnosticMXBean.class, true);
        String fileName = "heap_dump_" + new SimpleDateFormat("dd.MM.yyyy HH.mm").format(new Date()) + ".hprof";
        boolean onlyLiveObjects = true;
        bean.dumpHeap(fileName, onlyLiveObjects);
    }
}
Compile it:
javac CreateHeapDump.java
Call it from command line:
java CreateHeapDump localhost 9010
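Note that this assumes the target JVM exposes remote JMX on port 9010. If it doesn't, one way to enable it (insecure, suitable only for trusted networks) is to start the target process with flags along these lines; the jar name here is a placeholder:

```shell
# Expose an unauthenticated JMX connector on port 9010 for CreateHeapDump to attach to
java \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -jar your-app.jar
```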
This won't be pretty, but it works. Having said that, you might want to consider scripting this in Groovy or Jython, or even JavaScript....
I added a quickie add-on to jmxlocal, a project which implements standard JMX remoting for local JVMs. It now supports a command line invocation of one command against the connected MBeanServer, and the command must be specified in Java code.
Clone the repo and build with mvn clean install.
Copy the jar (jmxlocal-1.0-SNAPSHOT.jar) to your target server.
Execute the dump JMX command using the PID of the target java process as follows:
java -jar target/jmxlocal-1.0-SNAPSHOT.jar -j service:jmx:attach:///<PID> -c "conn.invoke(on(\"com.sun.management:type=HotSpotDiagnostic\"), \"dumpHeap\", new Object[]{\"/tmp/heap.dump\", true}, new String[]{String.class.getName(), boolean.class.getName()})"
The output will be
Command Executed. Result [null]
and you should find your dump file at /tmp/heap.dump.
If you need to, you can supply credentials using the -u [username] and -p [password] arguments.

Resources