How to get application port programmatically in Dropwizard

I am using Dropwizard version 0.7.1. It is configured to use a "random" (ephemeral) port (server.applicationConnectors.port=0). I want to find out which port is actually in use after startup, but I can't find any information on how to do that.
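In config.yml terms, that setting looks roughly like this (a minimal sketch; the http connector type and list layout are the standard Dropwizard 0.7.x format):

server:
  applicationConnectors:
    - type: http
      port: 0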

You can get a serverStarted callback from a lifecycle listener to figure this out.
// Requires org.eclipse.jetty.server.{Connector, Server, ServerConnector}
// and io.dropwizard.lifecycle.ServerLifecycleListener
@Override
public void run(ExampleConfiguration configuration, Environment environment) throws Exception {
    environment.lifecycle().addServerLifecycleListener(new ServerLifecycleListener() {
        @Override
        public void serverStarted(Server server) {
            for (Connector connector : server.getConnectors()) {
                if (connector instanceof ServerConnector) {
                    ServerConnector serverConnector = (ServerConnector) connector;
                    System.out.println(serverConnector.getName() + " " + serverConnector.getLocalPort());
                    // Do something useful with serverConnector.getLocalPort()
                }
            }
        }
    });
}

I found this approach worked well for me with both the Simple and Default server configurations in Dropwizard.
// Requires io.dropwizard.server.{DefaultServerFactory, SimpleServerFactory},
// io.dropwizard.jetty.{ConnectorFactory, HttpConnectorFactory} and java.util.stream.Stream
public void run(ExampleConfiguration configuration, Environment environment) throws Exception {
    Stream<ConnectorFactory> connectors = configuration.getServerFactory() instanceof DefaultServerFactory
            ? ((DefaultServerFactory) configuration.getServerFactory()).getApplicationConnectors().stream()
            : Stream.of((SimpleServerFactory) configuration.getServerFactory()).map(SimpleServerFactory::getConnector);
    int port = connectors.filter(connector -> connector instanceof HttpConnectorFactory)
            .map(connector -> (HttpConnectorFactory) connector)
            .mapToInt(HttpConnectorFactory::getPort)
            .findFirst()
            .orElseThrow(IllegalStateException::new);
}

Related

Wrong Kafka topic names for Spring-Cloud-Function app deployed as part of Spring-Cloud-Data-Flow stream

I have a simple SCDF stream that looks like this:
http --port=12346 | mvmn-transform | file --name=tmp.txt --directory=/tmp
The mvmn-transform is a simple custom transformer that looks like this:
@SpringBootApplication
@EnableBinding(Processor.class)
@EnableConfigurationProperties(ScdfTestTransformerProperties.class)
@Configuration
public class ScdfTestTransformer {

    public static void main(String[] args) {
        SpringApplication.run(ScdfTestTransformer.class, args);
    }

    @Autowired
    protected ScdfTestTransformerProperties config;

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    public Object transform(Message<?> message) {
        Object payload = message.getPayload();
        Map<String, Object> result = new HashMap<>();
        Map<String, String> headersStr = new HashMap<>();
        message.getHeaders().forEach((k, v) -> headersStr.put(k, v != null ? v.toString() : null));
        result.put("headers", headersStr);
        result.put("payload", payload);
        result.put("configProp", config.getSomeConfigProp());
        return result;
    }

    // See https://stackoverflow.com/questions/59155689/could-not-decode-json-type-for-key-file-name-in-a-spring-cloud-data-flow-stream
    @Bean("kafkaBinderHeaderMapper")
    public KafkaHeaderMapper kafkaBinderHeaderMapper() {
        BinderHeaderMapper mapper = new BinderHeaderMapper();
        mapper.setEncodeStrings(true);
        return mapper;
    }
}
This works fine.
But I've read that Spring Cloud Function should allow me to implement such apps without needing to specify the binding and transformer annotations, so I changed it to this:
@SpringBootApplication
// @EnableBinding(Processor.class)
@EnableConfigurationProperties(ScdfTestTransformerProperties.class)
@Configuration
public class ScdfTestTransformer {

    public static void main(String[] args) {
        SpringApplication.run(ScdfTestTransformer.class, args);
    }

    @Autowired
    protected ScdfTestTransformerProperties config;

    // @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    @Bean
    public Function<Message<?>, Map<String, Object>> transform(
            // Message<?> message
    ) {
        return message -> {
            Object payload = message.getPayload();
            Map<String, Object> result = new HashMap<>();
            Map<String, String> headersStr = new HashMap<>();
            message.getHeaders().forEach((k, v) -> headersStr.put(k, v != null ? v.toString() : null));
            result.put("headers", headersStr);
            result.put("payload", payload);
            result.put("configProp", "Config prop val: " + config.getSomeConfigProp());
            return result;
        };
    }

    // See https://stackoverflow.com/questions/59155689/could-not-decode-json-type-for-key-file-name-in-a-spring-cloud-data-flow-stream
    @Bean("kafkaBinderHeaderMapper")
    public KafkaHeaderMapper kafkaBinderHeaderMapper() {
        BinderHeaderMapper mapper = new BinderHeaderMapper();
        mapper.setEncodeStrings(true);
        return mapper;
    }
}
And now I have a problem: the SCDF source and target topic names are apparently ignored by Spring Cloud Function, and topics transform-in-0 and transform-out-0 are created instead.
SCDF creates topics named <stream-name>.<app-name>, e.g. TestStream123.http and TestStream123.mvmn-transform.
Previously these were used by the transformer, as they should be, since it is part of the SCDF stream. But now they are ignored by Spring Cloud Function, and transform-in-0 and transform-out-0 are created instead.
Thus my transformer no longer receives any input, since it expects it on the wrong Kafka topic, and it probably produces no output to the stream either, since it publishes to the wrong Kafka topic as well.
P.S. Just in case, the full project code is on GitHub: https://github.com/mvmn/scdftest-transformer/tree/scfunc
To run locally, start up Kafka, Skipper, SCDF and the SCDF console, do mvn clean install in the app folder, and then do app register --name mvmn-transform-1 --type processor --uri maven://x.mvmn.study.scdf.scdftest:scdftest-transformer:0.1.1-SNAPSHOT --metadata-uri maven://x.mvmn.study.scdf.scdftest:scdftest-transformer:0.1.1-SNAPSHOT in the console. Then you can deploy the stream using the definition http --port=12346 | mvmn-transform | file --name=tmp.txt --directory=/tmp
Since you are using the functional model of writing Spring Cloud Stream applications, when you deploy this app, you need to pass two properties on the custom processor to restore the Spring Cloud Data Flow behavior.
spring.cloud.stream.function.bindings.transform-in-0=input
spring.cloud.stream.function.bindings.transform-out-0=output
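For example, if you deploy from the SCDF shell, they can be passed as deployment properties scoped to the processor (the stream and app names below are taken from the question; the app.<app-name> prefix is the usual SCDF convention for per-app properties):

stream deploy TestStream123 --properties "app.mvmn-transform.spring.cloud.stream.function.bindings.transform-in-0=input,app.mvmn-transform.spring.cloud.stream.function.bindings.transform-out-0=output"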
Can you try that and see if that makes a difference?

How to configure Micronaut and Micrometer to write ILP directly to InfluxDB?

I have a Micronaut application that uses Micrometer to report metrics to InfluxDB with the micronaut-micrometer project. Currently it is using the Statsd Registry provided via the io.micronaut.configuration:micronaut-micrometer-registry-statsd dependency.
I would like to instead output metrics in Influx Line Protocol (ILP), but the micronaut-micrometer project does not offer an Influx Registry currently. I tried to work around this by importing the io.micrometer:micrometer-registry-influx dependency and configuring an InfluxMeterRegistry manually like this:
@Factory
public class MyMetricRegistryConfigurer implements MeterRegistryConfigurer {

    @Bean
    @Primary
    @Singleton
    public MeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }

    @Override
    public boolean supports(MeterRegistry meterRegistry) {
        return meterRegistry instanceof InfluxMeterRegistry;
    }
}
When the application runs, the metrics are exposed on my /metrics endpoint as I would expect, but nothing gets written to InfluxDB. I confirmed that my local InfluxDB accepts metrics at the expected localhost:8086/write?db=metrics endpoint using curl. Can anyone give me some pointers to get this working? I'm wondering if I need to manually define a reporter somewhere...
After playing around for a bit, I got this working with the following code:
@Factory
public class InfluxMeterRegistryFactory {

    @Bean
    @Singleton
    @Requires(property = MeterRegistryFactory.MICRONAUT_METRICS_ENABLED,
            value = StringUtils.TRUE, defaultValue = StringUtils.TRUE)
    @Requires(beans = CompositeMeterRegistry.class)
    public InfluxMeterRegistry getMeterRegistry() {
        InfluxConfig config = new InfluxConfig() {
            @Override
            public Duration step() {
                return Duration.ofSeconds(10);
            }

            @Override
            public String db() {
                return "metrics";
            }

            @Override
            public String get(String k) {
                return null; // accept the rest of the defaults
            }
        };
        return new InfluxMeterRegistry(config, Clock.SYSTEM);
    }
}
I also noticed that an InfluxMeterRegistry will be available out of the box in micronaut-micrometer as of v1.2.0.
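Once that registry ships, the manual factory should become unnecessary and it should be configurable like the other registries. A sketch of what that configuration would presumably look like (the exact keys are an assumption based on the existing statsd registry's micronaut.metrics.export pattern, not confirmed against the v1.2.0 release):

# application.yml (assumed keys, following the micronaut-micrometer export pattern)
micronaut:
  metrics:
    enabled: true
    export:
      influx:
        enabled: true
        step: PT10S            # matches the 10-second step above
        db: metrics
        uri: http://localhost:8086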

SpringAMQP errorHandler and returnExceptions problem

I am not sure whether my understanding of errorHandler and returnExceptions is right or not.
But here is my goal: I send a message from App_A and use @RabbitListener to receive the message in App_B.
According to the doc
https://docs.spring.io/spring-amqp/docs/2.1.3.BUILD-SNAPSHOT/reference/html/_reference.html#annotation-error-handling
I assume that if App_B hits a business exception while processing the message, setting errorHandler and returnExceptions the right way on @RabbitListener can propagate the exception back to App_A.
Did I understand that correctly?
If I am right, how do I use it the right way?
With my code, I get nothing in App_A.
Here is my code in App_B.
errorHandler:
@Component(value = "errorHandler")
public class ErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message arg0, org.springframework.messaging.Message<?> arg1,
            ListenerExecutionFailedException arg2) throws ListenerExecutionFailedException {
        throw new ListenerExecutionFailedException("msg", arg2, null);
    }
}
RabbitListener:
@RabbitListener(
    bindings = @QueueBinding(
        value = @Queue(value = "MRO.updateBaseInfo.queue", durable = "true"),
        exchange = @Exchange(name = "MRO_Exchange", type = ExchangeTypes.DIRECT, durable = "true"),
        key = "baseInfoUpdate"
    ),
    // errorHandler = "errorHandler",
    returnExceptions = "true"
)
public void receiveLocationChangeMessage(String message) {
    BaseUpdateMessage newBaseInfo = JSON.parseObject(message, BaseUpdateMessage.class);
    dao.upDateBaseInfo(newBaseInfo);
}
And the code in App_A:
@Component
public class MessageSender {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    public void editBaseInfo(BaseUpdateMessage message) throws Exception {
        // and I am not sure whether setting RemoteInvocationAwareMessageConverterAdapter this way is right
        rabbitTemplate.setMessageConverter(new RemoteInvocationAwareMessageConverterAdapter());
        rabbitTemplate.convertAndSend("MRO_Exchange", "baseInfoUpdate", JSON.toJSONString(message));
    }
}
I am very confused about three points:
1) Do I have to use errorHandler and returnExceptions at the same time? I thought errorHandler is something like a postprocessor that lets me customize the exception. If I don't need a custom exception, can I just set returnExceptions without an errorHandler?
2) Should the method annotated with @RabbitListener return something, or is void just fine?
3) On the sender side (in my case App_A), is any specific config needed to catch the exception?
My workspace environment:
Spring Boot 2.1.0
RabbitMQ server 3.7.8 on Docker
1) No, you don't need an error handler, unless you want to enhance the exception.
2) If the method returns void, the sender will end up waiting for the timeout for a reply that will never arrive, just in case an exception might be thrown; that is probably not a good use of resources. It's better to always send a reply, to free up the publisher side.
3) Just the RemoteInvocationAwareMessageConverterAdapter.
Here's an example:
@SpringBootApplication
public class So53846303Application {

    public static void main(String[] args) {
        SpringApplication.run(So53846303Application.class, args);
    }

    @RabbitListener(queues = "foo", returnExceptions = "true")
    public String listen(String in) {
        throw new RuntimeException("foo");
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        template.setMessageConverter(new RemoteInvocationAwareMessageConverterAdapter());
        return args -> {
            try {
                template.convertSendAndReceive("foo", "bar");
            }
            catch (Exception e) {
                e.printStackTrace();
            }
        };
    }
}
and the resulting output:
org.springframework.amqp.AmqpRemoteException: java.lang.RuntimeException: foo
at org.springframework.amqp.support.converter.RemoteInvocationAwareMessageConverterAdapter.fromMessage(RemoteInvocationAwareMessageConverterAdapter.java:74)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertSendAndReceive(RabbitTemplate.java:1500)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertSendAndReceive(RabbitTemplate.java:1433)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertSendAndReceive(RabbitTemplate.java:1425)
at com.example.So53846303Application.lambda$0(So53846303Application.java:28)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:804)
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:794)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:324)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
at com.example.So53846303Application.main(So53846303Application.java:15)
Caused by: java.lang.RuntimeException: foo
at com.example.So53846303Application.listen(So53846303Application.java:20)
As you can see, there is a local org.springframework.amqp.AmqpRemoteException with the cause being the actual exception thrown on the remote server.
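If App_A needs the original business exception rather than the wrapper, it can unwrap the cause. A minimal sketch (the exchange and routing key are the ones from the question; the catch block stands in for whatever error handling App_A actually does; note the sender must use convertSendAndReceive(), as in the example above, since the exception comes back as the reply):

try {
    rabbitTemplate.convertSendAndReceive("MRO_Exchange", "baseInfoUpdate", JSON.toJSONString(message));
}
catch (AmqpRemoteException e) {
    // getCause() is the exception thrown by the remote @RabbitListener
    Throwable remoteCause = e.getCause();
    // handle or log the business failure here
}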

Spring SockJs RequestHandler doesn't upgrade connection to 101

Even though this is not described in the Spring documentation, a websocket connect should lead to a connection upgrade response (101 status).
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig<S extends ExpiringSession> extends AbstractSessionWebSocketMessageBrokerConfigurer<S> {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic", "/queue");
        config.setApplicationDestinationPrefixes("/mobile-server");
        config.setUserDestinationPrefix("/mobile-user");
    }

    @Override
    public void configureStompEndpoints(StompEndpointRegistry registry) {
        registry
            .addEndpoint("/ws")
            .setHandshakeHandler(new DefaultHandshakeHandler(new TomcatRequestUpgradeStrategy()))
            .setAllowedOrigins("*")
            .withSockJS()
            .setSessionCookieNeeded(false);
    }
}
However, I get a 200 status with a "Welcome to SockJS" message, which is generated by TransportHandlingSockJsService instead of the WebSocketHttpRequestHandler, which would generate the upgrade AFAIK.
@Configuration
public class WebSocketSecurity extends AbstractSecurityWebSocketMessageBrokerConfigurer {

    @Override
    protected boolean sameOriginDisabled() {
        return true;
    }

    @Override
    protected void configureInbound(MessageSecurityMetadataSourceRegistry messages) {
        messages
            .nullDestMatcher().permitAll()
            .simpSubscribeDestMatchers("/user/queue/errors").permitAll()
            .simpDestMatchers("/mobile-server/**").hasRole("ENDUSER")
            .simpSubscribeDestMatchers("/user/**", "/topic/**").hasRole("ENDUSER")
            .anyMessage().denyAll();
    }
}
When I change the config to
@Override
public void configureStompEndpoints(StompEndpointRegistry registry) {
    registry
        .addEndpoint("/ws")
        .setHandshakeHandler(new DefaultHandshakeHandler(new TomcatRequestUpgradeStrategy()))
        .setAllowedOrigins("*");
}
to my surprise, a call to /ws does lead to a connection upgrade (101). I'm surprised, since the documentation and all examples uniformly use withSockJS(), and the start of any websocket connection AFAIK is a request upgrade.
I can choose to force the upgrade by connecting to /ws/websocket (also not documented). So I'm not sure what is best.
Any suggestions?
This is expected behavior. It's how the SockJS protocol works:
http://sockjs.github.io/sockjs-protocol/sockjs-protocol-0.3.3.html. There is an initial "greeting" request and then the client starts trying transports one at a time.
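A SockJS-aware client performs that negotiation automatically, which is why you normally never see the greeting yourself. A minimal sketch using Spring's own client classes (the URL and the /topic/greetings destination are placeholders, and error handling is omitted):

import java.util.Collections;
import java.util.List;
import org.springframework.messaging.converter.StringMessageConverter;
import org.springframework.messaging.simp.stomp.StompHeaders;
import org.springframework.messaging.simp.stomp.StompSession;
import org.springframework.messaging.simp.stomp.StompSessionHandlerAdapter;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.messaging.WebSocketStompClient;
import org.springframework.web.socket.sockjs.client.SockJsClient;
import org.springframework.web.socket.sockjs.client.Transport;
import org.springframework.web.socket.sockjs.client.WebSocketTransport;

public class SockJsClientExample {
    public static void main(String[] args) {
        // SockJsClient issues the initial greeting/info request, then picks a transport
        List<Transport> transports = Collections.singletonList(
                new WebSocketTransport(new StandardWebSocketClient()));
        WebSocketStompClient stompClient = new WebSocketStompClient(new SockJsClient(transports));
        stompClient.setMessageConverter(new StringMessageConverter());
        stompClient.connect("ws://localhost:8080/ws", new StompSessionHandlerAdapter() {
            @Override
            public void afterConnected(StompSession session, StompHeaders connectedHeaders) {
                session.subscribe("/topic/greetings", this); // placeholder destination
            }
        });
    }
}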

Mbean registered but not found in mbean Server

I have a problem with MBeans. I have created a simple MBean and registered it on the default MBeanServer of the running process (via Eclipse or java -jar mbean.jar), and in the same process, if I try to find the registered MBean with a simple query:
for (ObjectInstance instance : mbs.queryMBeans(ObjectNameMbean, null)) {
    System.out.println(instance.toString());
}
the query returns my MBean. But if I start another process and try to look up this registered MBean, the MBean is not found! Why?
The approach is (in the running process):
public static void main(String[] args) throws Exception {
    MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
    ObjectName objectName = new ObjectName(ObjectNameMbean);
    Simple simple = new Simple(1, 0);
    mbeanServer.registerMBean(simple, objectName);
    while (true) {
        Thread.sleep(Long.MAX_VALUE); // wait (is this necessary?)
    }
}
So this is the first process that is running (its only purpose is to register the MBean, because another process wants to read that information).
So I start another process to search for this MBean, but find nothing.
I'm not using JBoss but the local Java Virtual Machine, and my goal is to deploy this simple application in one EJB (autostart) and have another EJB read all the information.
All suggestions are really appreciated.
This example should be more useful :
Object Hello:
public class Hello implements HelloMBean {

    public void sayHello() {
        System.out.println("hello, world");
    }

    public int add(int x, int y) {
        return x + y;
    }

    public String getName() {
        return this.name;
    }

    public int getCacheSize() {
        return this.cacheSize;
    }

    public synchronized void setCacheSize(int size) {
        this.cacheSize = size;
        System.out.println("Cache size now " + this.cacheSize);
    }

    private final String name = "Reginald";
    private int cacheSize = DEFAULT_CACHE_SIZE;
    private static final int DEFAULT_CACHE_SIZE = 200;
}
Interface HelloMBean (implemented by Hello):
public interface HelloMBean {
    public void sayHello();
    public int add(int x, int y);
    public String getName();
    public int getCacheSize();
    public void setCacheSize(int size);
}
Simple Main
import java.lang.management.ManagementFactory;
import java.util.logging.Logger;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class Main {

    static Logger aLog = Logger.getLogger("MBeanTest");

    public static void main(String[] args) {
        try {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
            Hello mbean = new Hello();
            mbs.registerMBean(mbean, name);
            // System.out.println(mbs.getAttribute(name, "Name"));
            aLog.info("Waiting forever...");
            Thread.sleep(Long.MAX_VALUE);
        }
        catch (Exception x) {
            x.printStackTrace();
            aLog.info("exception");
        }
    }
}
So now I have exported this project as a jar file and run it with "java -jar helloBean.jar", and from Eclipse I modified the main class to read information from this bean (for example the "Name" attribute) using the same ObjectName that was used to register it.
The main class to read it:
public static void main(String[] args) {
    try {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
        System.out.println(mbs.getAttribute(name, "Name"));
    }
    catch (Exception x) {
        x.printStackTrace();
        aLog.info("exception");
    }
}
But nothing: the bean is not found.
Project link: here!
Any idea?
I suspect the issue here is that you have multiple MBeanServer instances. You did not mention how you acquired the MBeanServer in each case, but in your second code sample, you are creating a new MBeanServer instance which may not be the same instance that other threads are reading from. (I assume this is all in one JVM...)
If you are using the platform agent, I recommend you acquire the MBeanServer using the ManagementFactory as follows:
MBeanServer mbs = java.lang.management.ManagementFactory.getPlatformMBeanServer() ;
That way, you will always get the same MBeanServer instance.
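If the reader really is a separate JVM process (as the java -jar run suggests), note that the platform MBeanServer is per-process, so the second process must connect over JMX remoting instead. A sketch, assuming the registering JVM was started with the standard com.sun.management.jmxremote.port=9999 flag (authentication and SSL disabled for the test):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteReader {
    public static void main(String[] args) throws Exception {
        // Connect to the other JVM's platform MBeanServer over RMI
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("ApplicationDomain:type=Hello");
            System.out.println(mbsc.getAttribute(name, "Name"));
        }
    }
}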
