REST call not working with Camel running in Docker

I have this Camel Rest Route:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;
import org.apache.camel.model.rest.RestBindingMode;
import spark.Spark;
public class MainCamel {
public static void main(final String[] args) throws Exception {
final Main camelMain = new Main();
camelMain.configure().addRoutesBuilder(new RouteBuilder() {
@Override
public void configure() throws Exception {
this.getContext().getRegistry().bind("healthcheck", CheckUtil.class);
this.restConfiguration()
.bindingMode(RestBindingMode.auto)
.component("netty-http")
.host("localhost")
.port(11010);
this.rest("/healthcheck")
.get()
.description("Healthcheck for docker")
.outType(Integer.class)
.to("bean:healthcheck?method=healthCheck");
}
});
// spark
Spark.port(11011);
Spark.get("/hello", (req, res) -> "Hello World");
System.out.println("ready");
camelMain.run(args);
}
public static class CheckUtil {
public Integer healthCheck() {
return 0;
}
}
}
I also created a second REST server with Spark.
The Camel route does NOT work if the code is executed in a Docker container.
Exception: org.apache.http.NoHttpResponseException: localhost:11010 failed to respond
The Spark server works fine.
However, when executing the code directly in IntelliJ, both REST servers work. Of course, both ports are exposed in the container.

You are binding the Netty HTTP server to localhost, which means it will not be able to serve requests that originate from outside the container.
Change .host("localhost") to .host("0.0.0.0") so that the server listens on all available network interfaces.
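A minimal sketch of the adjusted rest configuration, assuming the rest of the route stays exactly as in the question:
this.restConfiguration()
    .bindingMode(RestBindingMode.auto)
    .component("netty-http")
    .host("0.0.0.0")   // bind to all interfaces so the published Docker port can reach the server
    .port(11010);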

Related

Can't inject the guice dependency in the jersey filter

While setting up a bridge between Guice and Jersey, I ran into one problem.
When trying to create a Jersey filter, I was unable to inject Guice dependencies into it.
I found a duplicate question, but there is no solution to the problem there.
Everything is exactly the same.
The only difference is that I don't get a startup error. The filter works, but my dependencies are null.
Interestingly, a plain Filter and HttpFilter work fine, but that approach doesn't really work for me.
Another interesting point: in the resource, which as I understand is an HK2-managed dependency, I can inject a Guice bean.
@ApplicationPath("/test")
private static class TestApplicationConfig extends ResourceConfig
{
public TestApplicationConfig()
{
register(JacksonFeature.class);
register(AuthFilter.class);
register(new ContainerLifecycleListener()
{
public void onStartup(Container container)
{
ServletContainer servletContainer = (ServletContainer) container;
ServiceLocator serviceLocator = container.getApplicationHandler().getServiceLocator();
GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
Injector injector = (Injector) servletContainer
.getServletContext()
.getAttribute(Injector.class.getName());
guiceBridge.bridgeGuiceInjector(injector);
}
public void onReload(Container container)
{
}
public void onShutdown(Container container)
{
}
});
}
}
In ServletModule child.
serve(path).with(ServletContainer.class, ImmutableMap.of(
"javax.ws.rs.Application", TestApplicationConfig.class.getName(),
"jersey.config.server.provider.packages", sb.toString()));
I tried with register(AuthFilter.class) and @Provider:
@Singleton
@Provider
public class AuthFilter implements ContainerRequestFilter
{
@Inject
private SomeInjectedService someInjectedService; // null here
@Context
private ResourceInfo resourceInfo;
@Override
public void filter(ContainerRequestContext requestContext) throws IOException
{
// some code
}
}
SomeInjectedService is registered with Guice:
bind(SomeInjectedService.class).asEagerSingleton();
Where can I start diagnosing and what can I do?
UPD:
I noticed different behavior when using different annotations.
If I use javax.inject.Inject, I get the following error message.
org.glassfish.hk2.api.MultiException: A MultiException has 3 exceptions. They are:
1. org.glassfish.hk2.api.UnsatisfiedDependencyException: There was no object available for injection at SystemInjecteeImpl(requiredType=SomeInjectedService,parent=AuthFilter,qualifiers={},position=-1,optional=false,self=false,unqualified=null,1496814489)
2. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of some.package.AuthFilter errors were found
3. java.lang.IllegalStateException: Unable to perform operation: resolve on some.package.AuthFilter
If I use com.google.inject.Inject, the field is just null. As I understand it, that approach is not correct.
Considering that the javax @Inject tries to inject the service but can't find it, can we conclude that the bridge is not working correctly? But if it's not working correctly, why can I inject this service into my resource?
#Path("/test")
#Produces(MediaType.APPLICATION_JSON)
#Consumes(MediaType.APPLICATION_JSON)
public class SomeResource
{
private final SomeInjectedService someInjectedResource;
@Inject // here I use the javax annotation and this code works correctly
public SomeResource(SomeInjectedService someInjectedResource)
{
this.someInjectedResource = someInjectedResource;
}
@GET
@Path("/{user}")
public Response returnSomeResponse(@PathParam("user") String user) throws Exception
{
// some code
}
}

how to run testcontainer with dynamic port for spring data elasticsearch

My test case uses the @SpringBootTest annotation to bring up the context and autowires a repository. The Testcontainer is started in a @BeforeAll method. The problem is that RestClientConfig is initialized/injected before @BeforeAll runs in the test case. When the Testcontainer starts, it exposes a dynamic port.
I have to set a fixed port (34343) in the Testcontainer and use the same port in the properties file for RestClientConfig.
container = new ElasticsearchContainer(ELASTICSEARCH_IMAGE)
.withEnv("discovery.type", "single-node")
.withExposedPorts(9200)
.withCreateContainerCmdModifier(cmd -> cmd.withHostConfig(
new HostConfig().withPortBindings(new PortBinding(Ports.Binding.bindPort(34343), new ExposedPort(9200)))));
Is there a way to start the container, get its dynamic port, and then use it to initialize RestClientConfig?
I didn't use the @Testcontainers annotation though. Is it needed?
Newer versions of Spring provide @DynamicPropertySource for exactly this use case:
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/test/context/DynamicPropertySource.html
Your code should look roughly like this:
@SpringJUnitConfig(...)
@Testcontainers
class ExampleIntegrationTests {
@Container
static ElasticsearchContainer elastic = new ElasticsearchContainer(ELASTICSEARCH_IMAGE)
.withEnv("discovery.type", "single-node");
// ...
@DynamicPropertySource
static void elasticProperties(DynamicPropertyRegistry registry) {
registry.add("spring.elasticsearch.uris", elastic::getHttpHostAddress);
}
}
You can use a context configuration initializer to set properties at runtime, which you can later use in your RestClientConfig.
Let me show you with an example of a PostgreSQL container setup:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT, classes = Application.class)
@ContextConfiguration(initializers = AbstractTestcontainersTest.DockerPostgreDataSourceInitializer.class)
public abstract class AbstractTestcontainersTest {
protected static final String DB_CONTAINER_NAME = "postgres-auth-test";
protected static PostgreSQLContainer<?> postgreDBContainer =
new PostgreSQLContainer<>(DockerImageName.parse("public.ecr.aws/docker/library/postgres:12.10-alpine")
.asCompatibleSubstituteFor("postgres"))
.withUsername("postgres")
.withPassword("change_me")
.withInitScript("db.sql")
.withCreateContainerCmdModifier(cmd -> cmd.withName(DB_CONTAINER_NAME))
.withDatabaseName("zpot_main");
@BeforeAll
public static void beforeAll() throws ShellExecutionException {
postgreDBContainer.start();
}
@AfterAll
public static void afterAll() {
postgreDBContainer.stop();
}
public static class DockerPostgreDataSourceInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
@Override
public void initialize(ConfigurableApplicationContext applicationContext) {
TestPropertySourceUtils.addInlinedPropertiesToEnvironment(
applicationContext,
"spring.datasource.url=" + postgreDBContainer.getJdbcUrl(),
"spring.datasource.username=" + postgreDBContainer.getUsername(),
"spring.datasource.password=" + postgreDBContainer.getPassword()
);
}
}
}
All the configuration is done in DockerPostgreDataSourceInitializer, where I set all the properties I need. You also need to annotate your test class with the @ContextConfiguration annotation. You can do something similar with your ElasticsearchContainer. As I just checked, ElasticsearchContainer has a method getHttpHostAddress() which returns the host + dynamic port combination for your container. You can get that host-port pair and set it in the properties to be used later in your client configuration. If you need just the port, you can call container.getMappedPort(9200) and again set that port in the properties.
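For illustration, a rough sketch of an analogous initializer for Elasticsearch, assuming a static ElasticsearchContainer field named elasticsearchContainer that has already been started, and reusing the spring.elasticsearch.uris property from the first answer:
public static class DockerElasticsearchInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        // getHttpHostAddress() returns "host:mapped_port" of the running container
        TestPropertySourceUtils.addInlinedPropertiesToEnvironment(
                applicationContext,
                "spring.elasticsearch.uris=" + elasticsearchContainer.getHttpHostAddress()
        );
    }
}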
Regarding the @Testcontainers annotation: you need it if you want Testcontainers to manage your container lifecycle. In that case you also need to annotate the container with the @Container annotation. Your container will be started either once before all test methods in a class if your container is a static field, or before each test method if it's a regular field. You can read more about that here: https://www.testcontainers.org/test_framework_integration/junit_5/#extension.
Or you can start your container manually either in @BeforeAll or @BeforeEach annotated setup methods. In other words, no, you don't have to use the @Testcontainers annotation.

Deploying a transaction event listener in a Neo4jDesktop installation

I have created a project that contains an ExtensionFactory subclass annotated with @ServiceProvider that returns a LifecycleAdapter subclass which registers a transaction event listener in its start() method, as shown in this example. The code is below:
@ServiceProvider
public class EventListenerExtensionFactory extends ExtensionFactory<EventListenerExtensionFactory.Dependencies> {
private final List<TransactionEventListener<?>> listeners;
public EventListenerExtensionFactory() {
this(List.of(new MyListener()));
}
public EventListenerExtensionFactory(List<TransactionEventListener<?>> listeners) {
super(ExtensionType.DATABASE, "EVENT_LISTENER_EXT_FACTORY");
this.listeners = listeners;
}
@Override
public Lifecycle newInstance(ExtensionContext context, Dependencies dependencies) {
return new EventListenerLifecycleAdapter(dependencies, listeners);
}
@RequiredArgsConstructor
private static class EventListenerLifecycleAdapter extends LifecycleAdapter {
private final Dependencies dependencies;
private final List<TransactionEventListener<?>> listeners;
@Override
public void start() {
DatabaseManagementService managementService = dependencies.databaseManagementService();
listeners.forEach(listener -> managementService.registerTransactionEventListener(
DEFAULT_DATABASE_NAME, listener));
dependencies.log()
.getUserLog(EventListenerExtensionFactory.class)
.info("Registering transaction event listener for database " + DEFAULT_DATABASE_NAME);
}
}
interface Dependencies {
DatabaseManagementService databaseManagementService();
LogService log();
}
}
It works fine in an integration test:
public AbstractDatabaseTest(TransactionEventListener<?>... listeners) {
URI uri = Neo4jBuilders.newInProcessBuilder()
.withExtensionFactories(List.of(new EventListenerExtensionFactory(List.of(listeners))))
.withDisabledServer()
.build()
.boltURI();
driver = GraphDatabase.driver(uri);
session = driver.session();
}
Then I copy the jar file in the plugins directory of my desktop database:
$ cp build/libs/<myproject>.jar /mnt/c/Users/albert.gevorgyan/.Neo4jDesktop/relate-data/dbmss/dbms-7fe3cbdb-11b2-4ca2-81eb-474edbbb3dda/plugins/
I restart the database and even the whole desktop Neo4j program but it doesn't seem to identify the plugin or to initialize the factory: no log messages are found in neo4j.log after the start event, and the transaction events that should be captured by my listener are ignored. Interestingly, a custom function that I have defined in the same jar file actually works - I can call it in the browser. So something must be missing in the extension factory as it doesn't get instantiated.
Is it possible at all to deploy an ExtensionFactory in a Desktop installation and if yes, what am I doing wrong?
It works after I added a provider configuration file to META-INF/services, as explained in https://www.baeldung.com/java-spi. Neo4j then finds the extension factory.
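For reference, such a file is just a text file named after the fully qualified SPI interface and containing the fully qualified name of the implementation, roughly like this (the package name below is an assumption, and the exact interface name should be checked against the Neo4j version in use):
# src/main/resources/META-INF/services/org.neo4j.kernel.extension.ExtensionFactory
com.example.neo4j.EventListenerExtensionFactory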

How to write an integration test for #RabbitListener annotation?

My question is really a follow-up question to
RabbitMQ Integration Test and Threading
There it states to wrap "your listeners" and pass in a CountDownLatch, and eventually all the threads will merge. That answer works if we were manually creating and injecting the message listener, but for @RabbitListener annotations... I'm not sure how to pass in a CountDownLatch. The framework automagically creates the message listener behind the scenes.
Are there any other approaches?
With the help of @Gary Russell I was able to get an answer and used the following solution.
Conclusion: I must admit I'm indifferent about this solution (it feels like a hack), but it is the only thing I could get to work, and once you get over the initial one-time setup and actually understand the workflow it is not so painful. It basically comes down to defining two @Beans and adding them to your Integration Test config.
Example solution posted below with explanations. Please feel free to suggest improvements to this solution.
1. Define a ProxyListenerBPP that, during Spring initialization, will listen for a specified clazz (i.e. our test class that contains @RabbitListener) and inject our custom CountDownLatchListenerInterceptor advice defined in the next step.
import org.aopalliance.aop.Advice;
import org.springframework.aop.framework.ProxyFactoryBean;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.BeanFactoryAware;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.core.Ordered;
import org.springframework.core.PriorityOrdered;
/**
* Implements BeanPostProcessor bean... during spring initialization we will
* listen for a specified clazz
* (i.e. our @RabbitListener annotated class) and
* inject our custom CountDownLatchListenerInterceptor advice
* @author sjacobs
*
*/
public class ProxyListenerBPP implements BeanPostProcessor, BeanFactoryAware, Ordered, PriorityOrdered{
private BeanFactory beanFactory;
private Class<?> clazz;
public static final String ADVICE_BEAN_NAME = "wasCalled";
public ProxyListenerBPP(Class<?> clazz) {
this.clazz = clazz;
}
@Override
public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
this.beanFactory = beanFactory;
}
@Override
public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
return bean;
}
@Override
public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
if (clazz.isAssignableFrom(bean.getClass())) {
ProxyFactoryBean pfb = new ProxyFactoryBean();
pfb.setProxyTargetClass(true); // CGLIB, false for JDK proxy (interface needed)
pfb.setTarget(bean);
pfb.addAdvice(this.beanFactory.getBean(ADVICE_BEAN_NAME, Advice.class));
return pfb.getObject();
}
else {
return bean;
}
}
@Override
public int getOrder() {
return Ordered.LOWEST_PRECEDENCE - 1000; // Just before @RabbitListener post processor
}
}
2. Create the MethodInterceptor advice impl that will hold the reference to the CountDownLatch. The CountDownLatch needs to be referenced both in the Integration Test thread and inside the async worker thread in the @RabbitListener, so we can release back to the Integration Test thread as soon as the @RabbitListener async thread has completed execution. No need for polling.
import java.util.concurrent.CountDownLatch;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
/**
* AOP MethodInterceptor that maps a <b>Single</b> CountDownLatch to one method and invokes
* CountDownLatch.countDown() after the method has completed execution. The motivation behind this
* is for integration testing purposes of Spring RabbitMq Async Worker threads to be able to merge
* the Integration Test thread after an Async 'worker' thread completed its task.
* @author sjacobs
*
*/
public class CountDownLatchListenerInterceptor implements MethodInterceptor {
private CountDownLatch countDownLatch = new CountDownLatch(1);
private final String methodNameToInvokeCDL ;
public CountDownLatchListenerInterceptor(String methodName) {
this.methodNameToInvokeCDL = methodName;
}
@Override
public Object invoke(MethodInvocation invocation) throws Throwable {
String methodName = invocation.getMethod().getName();
if (this.methodNameToInvokeCDL.equals(methodName) ) {
//invoke async work
Object result = invocation.proceed();
//returns us back to the 'awaiting' thread inside the integration test
this.countDownLatch.countDown();
//"reset" CountDownLatch for next #Test (if testing for more async worker)
this.countDownLatch = new CountDownLatch(1);
return result;
} else
return invocation.proceed();
}
public CountDownLatch getCountDownLatch() {
return countDownLatch;
}
}
3. Next, add the following @Bean(s) to your Integration Test config:
public class SomeClassThatHasRabbitListenerAnnotationsITConfig extends BaseIntegrationTestConfig {
// pass into the constructor the test clazz that contains the @RabbitListener annotation
@Bean
public static ProxyListenerBPP listenerProxier() { // note static
return new ProxyListenerBPP(SomeClassThatHasRabbitListenerAnnotations.class);
}
// pass the method name that will be invoked by the async thread in SomeClassThatHasRabbitListenerAnnotations.Class
// i.e. the method name annotated with @RabbitListener or @RabbitHandler
// in our example 'listen' is the method name inside SomeClassThatHasRabbitListenerAnnotations.Class
@Bean(name=ProxyListenerBPP.ADVICE_BEAN_NAME)
public static Advice wasCalled() {
String methodName = "listen";
return new CountDownLatchListenerInterceptor( methodName );
}
// this is the @RabbitListener bean we are testing
@Bean
public SomeClassThatHasRabbitListenerAnnotations rabbitListener() {
return new SomeClassThatHasRabbitListenerAnnotations();
}
}
4. Finally, in the integration @Test, after sending a message via rabbitTemplate to trigger the async thread, call the CountDownLatch#await(...) method obtained from the interceptor, and make sure to pass TimeUnit args so it can time out in case of a long-running process or if something goes wrong. Once the async work completes, the Integration Test thread is notified (awakened) and we can finally begin to actually test/validate/verify the results of the async work.
@ContextConfiguration(classes={ SomeClassThatHasRabbitListenerAnnotationsITConfig.class } )
public class SomeClassThatHasRabbitListenerAnnotationsIT extends BaseIntegrationTest{
@Inject
private CountDownLatchListenerInterceptor interceptor;
@Inject
private RabbitTemplate rabbitTemplate;
@Test
public void shouldReturnBackAfterAsyncThreadIsFinished() throws Exception {
MyObject payload = new MyObject();
rabbitTemplate.convertAndSend("some.defined.work.queue", payload);
CountDownLatch cdl = interceptor.getCountDownLatch();
// wait for async thread to finish
cdl.await(10, TimeUnit.SECONDS); // IMPORTANT: set timeout args.
//Begin the actual testing of the results of the async work
// check the database?
// download a msg from another queue?
// verify email was sent...
// etc...
}
}
It's a bit more tricky with @RabbitListener but the simplest way is to advise the listener.
With the custom listener container factory just have your test case add the advice to the factory.
The advice would be a MethodInterceptor; the invocation will have two arguments: the channel and the (unconverted) Message. The advice has to be injected before the container(s) are created.
Alternatively, get a reference to the container using the registry and add the advice later (but you'll have to call initialize() to force the new advice to be applied).
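As a rough illustration of that registry-based approach (the listener id and method name here are assumptions, not from the original answer):
import org.aopalliance.aop.Advice;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;

// somewhere in the test, with the registry autowired:
void addAdviceLater(RabbitListenerEndpointRegistry registry, Advice countDownLatchAdvice) {
    // "myListener" is whatever id you set on @RabbitListener(id = "...")
    AbstractMessageListenerContainer container =
            (AbstractMessageListenerContainer) registry.getListenerContainer("myListener");
    container.setAdviceChain(countDownLatchAdvice);
    container.initialize(); // re-apply the container configuration so the new advice takes effect
}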
An alternative would be a simple BeanPostProcessor to proxy your listener class before it is injected into the container. That way, you will see the method argument(s) after any conversion; you will also be able to verify any result returned by the listener (for request/reply scenarios).
If you are not familiar with these techniques, I can try to find some time to spin up a quick example for you.
EDIT
I issued a pull request to add an example to EnableRabbitIntegrationTests. This adds a listener bean with 2 annotated listener methods, a BeanPostProcessor that proxies the listener bean before it is injected into a listener container. An Advice is added to the proxy which counts latches down when the expected messages are received.

Jetty 9 RC2 websocket timeout

The support for Session.setIdleTimeout(long ms) was added recently to support the JSR-356 (javax.websocket) work we are currently doing.
However, with 9.0.0.RC2 you can do the following to set the idle timeout early, before the Session is created (this is being fixed, and will hopefully make it into RC3).
Server Side option A: WebSocketServlet init-param
In your WEB-INF/web.xml for your websocket servlet, specify the following init-param
<init-param>
<param-name>maxIdleTime</param-name>
<param-value>10000</param-value>
</init-param>
Server Side option B: As policy change on WebSocketFactory
In your WebSocketServlet.configure(WebSocketServletFactory factory) call
@Override
public void configure(WebSocketServletFactory factory)
{
factory.getPolicy().setIdleTimeout(10000);
}
Client Side option A: As WebSocketClient setting
WebSocketClient client = new WebSocketClient();
client.getPolicy().setIdleTimeout(10000);
client.start();
Annotated @WebSocket option
This will work for server or client websockets.
Note: you cannot mix WebSocketListener and @WebSocket annotations together
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketError;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
@WebSocket(maxIdleTime=10000)
public class MySocket
{
@OnWebSocketClose
public void onClose(int statusCode, String reason)
{
}
@OnWebSocketConnect
public void onConnect(Session sess)
{
}
@OnWebSocketError
public void onError(Throwable cause)
{
}
@OnWebSocketMessage
public void onText(String message)
{
}
}
