Azure Service Bus connection aborted - SDK

My application is deployed in the Azure cloud and is configured to use Azure Service Bus.
When I check the logs, I see many entries related to "connection aborted" at info, warning, and error levels:
com.azure.core.amqp.exception.AmqpException: org.apache.qpid.proton.engine.TransportException: connection aborted
I have tried to track down this error but have not found any specific solution. Why does this log keep appearing?

Using the following setup, I wasn't getting any exceptions while using Azure Service Bus with Spring Boot JMS.
I configure the Service Bus in the application.properties file as below:
spring.jms.servicebus.connection-string=Endpoint=<Connection String>
spring.jms.servicebus.pricing-tier=<Price Tier>
Now I have a simple REST API which just sends a message to the Azure Service Bus:
@RestController
public class PostController {

    private static final String DESTINATION_NAME = "<queueName>";

    @Autowired
    private JmsTemplate jmsTemplate;

    @PostMapping("/messages")
    public String postMessage(@RequestParam String message) {
        jmsTemplate.convertAndSend(DESTINATION_NAME, new Test(message));
        return message;
    }
}
Here, Test is just a class with a single name field, and we send an instance of Test as the message payload.
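For completeness, a minimal sketch of such a Test class (the name field matches the description above; the no-arg constructor and accessors are assumptions, added because message serialization typically needs them):

import java.io.Serializable;

// Hypothetical payload class: a single "name" field plus the no-arg
// constructor and accessors that serialization typically requires.
public class Test implements Serializable {

    private String name;

    public Test() {
    }

    public Test(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}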

Related

Spring Cloud Data Flow: Docker URI error when running the sample app 'partitioned-batch-job' in Kubernetes cluster

I modified the dataflow sample app partitioned-batch-job to deploy it in a Kubernetes cluster via the SCDF server running in the cluster. I use the dashboard to launch this app as a task. The app launches fine, but I see the following URI error for the Docker resource I provided to the DeployerPartitionHandler. I've shown the modified code snippet below the error stack for reference. I'd appreciate any input on whether I am using the right URI syntax, and if so, why the master step is unable to launch workers with the provided Docker image reference.
java.lang.IllegalArgumentException: Unable to get URI for class path resource [docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT]
at org.springframework.cloud.deployer.spi.kubernetes.DefaultContainerFactory.create(DefaultContainerFactory.java:80) ~[spring-cloud-deployer-kubernetes-2.6.2.jar:2.6.2]
at org.springframework.cloud.deployer.spi.kubernetes.AbstractKubernetesDeployer.createPodSpec(AbstractKubernetesDeployer.java:210) ~[spring-cloud-deployer-kubernetes-2.6.2.jar:2.6.2]
at org.springframework.cloud.deployer.spi.kubernetes.KubernetesTaskLauncher.launch(KubernetesTaskLauncher.java:237) ~[spring-cloud-deployer-kubernetes-2.6.2.jar:2.6.2]
at org.springframework.cloud.deployer.spi.kubernetes.KubernetesTaskLauncher.launch(KubernetesTaskLauncher.java:119) ~[spring-cloud-deployer-kubernetes-2.6.2.jar:2.6.2]
at org.springframework.cloud.task.batch.partition.DeployerPartitionHandler.launchWorker(DeployerPartitionHandler.java:394) [spring-cloud-task-batch-2.3.1-SNAPSHOT.jar:2.3.1-SNAPSHOT]
at org.springframework.cloud.task.batch.partition.DeployerPartitionHandler.launchWorkers(DeployerPartitionHandler.java:313) [spring-cloud-task-batch-2.3.1-SNAPSHOT.jar:2.3.1-SNAPSHOT]
at org.springframework.cloud.task.batch.partition.DeployerPartitionHandler.handle(DeployerPartitionHandler.java:302) [spring-cloud-task-batch-2.3.1-SNAPSHOT.jar:2.3.1-SNAPSHOT]
at org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:106) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:208) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:152) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.job.AbstractJob.handleStep(AbstractJob.java:413) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.job.SimpleJob.doExecute(SimpleJob.java:136) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:320) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:149) [spring-batch-core-4.3.3.jar:4.3.3]
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) [spring-core-5.3.9.jar:5.3.9]
at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:140) [spring-batch-core-4.3.3.jar:4.3.3]
Here is the modified code from JobConfiguration.java. I just replaced the Maven resource with the Docker image reference. I also modified the pom.xml dependency to use the Kubernetes deployer (instead of the local one).
@Configuration
public class JobConfiguration {

    private static final int GRID_SIZE = 4;

    // @checkstyle:off
    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public DataSource dataSource;

    @Autowired
    public JobRepository jobRepository;
    // @checkstyle:on

    @Autowired
    private ConfigurableApplicationContext context;

    @Autowired
    private DelegatingResourceLoader resourceLoader;

    @Autowired
    private Environment environment;

    @Bean
    public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer,
            TaskRepository taskRepository, DockerResourceLoader dockerResourceLoader) throws Exception {
        Resource resource = this.resourceLoader
                .getResource("docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT");
        DeployerPartitionHandler partitionHandler =
                new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);
The referenced repository:tag exists in Docker Hub, and it is the same URI I provided when registering the app in the dashboard. The k8s setup is a local bare-metal cluster running on VMs, with the SCDF server deployed via the provided Helm chart. I was able to run the other sample apps (billsetuptask & billrun) without any problems in the cluster.
Thank you.
It looks like the DelegatingResourceLoader (this.resourceLoader) is supposed to delegate to a DockerResourceLoader based on the scheme in the URI string, but I'm not sure it is doing that. For now I've just changed the code to use a DockerResourceLoader bean explicitly, and it works:
@Bean
public DockerResourceLoader getDockerResourceLoader() {
    return new DockerResourceLoader();
}

@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer,
        TaskRepository taskRepository, DockerResourceLoader dockerResourceLoader) throws Exception {
    Resource resource = dockerResourceLoader.getResource("docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT");
    /*
    Resource resource = this.resourceLoader
            .getResource("docker:vrajkuma/partitioned-batch-job:2.3.1-SNAPSHOT");
    */
    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);
    ...
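Alternatively, if you want to keep injecting the DelegatingResourceLoader, a possible fix is to register the Docker loader against its URI scheme explicitly. This is a sketch assuming the Map-based constructor in spring-cloud-deployer's resource-support module:

import java.util.HashMap;
import java.util.Map;

import org.springframework.cloud.deployer.resource.docker.DockerResourceLoader;
import org.springframework.cloud.deployer.resource.support.DelegatingResourceLoader;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.ResourceLoader;

// Sketch: map the "docker" URI scheme to DockerResourceLoader so that
// delegatingResourceLoader.getResource("docker:...") resolves through it.
@Bean
public DelegatingResourceLoader delegatingResourceLoader() {
    Map<String, ResourceLoader> loaders = new HashMap<>();
    loaders.put("docker", new DockerResourceLoader());
    return new DelegatingResourceLoader(loaders);
}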

How to get Neo4j cluster status using the Java API

I am trying to determine the Neo4j cluster health using the Java API. I see the Cypher procedure CALL dbms.cluster.overview(); is there an equivalent Java API for this?
1. Variant "Spring Boot"
If Spring Boot with Spring Data Neo4j is an option for you, you could define a DAO which executes your Cypher statement and receives the result in its own QueryResult class.
1.1 GeneralQueriesDAO
@Repository
public interface GeneralQueriesDAO extends Neo4jRepository<String, Long> {

    @Query("CALL dbms.cluster.overview() YIELD id, addresses, role, groups, database;")
    ClusterOverviewResult[] getClusterOverview();
}
1.2 ClusterOverviewResult
@QueryResult
public class ClusterOverviewResult {

    private String id;              // The ID of the instance.
    private List<String> addresses; // All addresses for the instance.
    private String role;            // The role of the instance: LEADER, FOLLOWER, or READ_REPLICA.
    private List<String> groups;    // All server groups the instance is part of.
    private String database;        // The name of the database the instance is hosting.

    // Default constructor as well as getters and setters omitted for clarity.
}
1.3 Program flow
@Autowired
private GeneralQueriesDAO generalQueriesDAO;

[...]

ClusterOverviewResult[] clusterOverviewResult = generalQueriesDAO.getClusterOverview();
2. Variant "Without Spring"
Without Spring Boot, the rough procedure could be:
Session session = driver.session();
StatementResult result = session.run("CALL dbms.cluster.overview()");
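Fleshed out, a minimal standalone sketch (using the 1.x Java driver to match the StatementResult type above; the bolt URI and credentials are placeholders for your own setup) could look like this:

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Record;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;

public class ClusterOverview {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            StatementResult result = session.run("CALL dbms.cluster.overview()");
            while (result.hasNext()) {
                Record record = result.next();
                // Each record describes one cluster member: id, addresses, role, etc.
                System.out.println(record.get("id").asString()
                        + " -> " + record.get("role").asString());
            }
        }
    }
}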
3. Variant "HTTP endpoints"
Another option could be to use the HTTP endpoints for monitoring the health of a Neo4j Causal Cluster.
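As a sketch of that approach (the endpoint path below is the 3.x-era causal clustering availability endpoint and is an assumption here; verify host, port, and path against the docs for your Neo4j version):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Polls a cluster status endpoint over plain HTTP and prints the response.
public class ClusterHealthCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:7474/db/manage/server/causalclustering/available");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println("HTTP " + conn.getResponseCode() + ": " + in.readLine());
        }
    }
}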

@Inject into JAX-RS resource: CWWAM0002E: An exception occurred while merging an annotation into deployment descriptor

I am getting a weird exception when I deploy a REST service locally to my WAS 8.5 environment in Eclipse:
CWWAM0002E: An exception occurred while merging an annotation into deployment descriptor: com.ibm.wsspi.amm.merge.MergeException: Unable to find EnterpriseBean for class WackyDoodleResource
com.ibm.wsspi.amm.merge.MergeException: Unable to find EnterpriseBean for class WackyDoodleResource
This is not my real code but here is how I set it up with names changed to protect the innocent:
#Path("/wacky-doodle-resource")
public class WackyDoodleResource{
#Context UriInfo uriInfo
#Context SecurityContext securityContext;
#Inject WackyDoodleEJB wackyDoodleEJB;
#POST
#Consumes(MediaType.APPLICATIONI_JSON)
#RolesAllowed("WACKYDOODLES")
#Produces(MediaType.APPLICATIONI_JSON)
public Response createWackyDoodle(WackyDoodleRequestRO requestRO{
String response = null;
response = wackyDoodleEJB.createWackyDoodle(requestRO)
}
return Response.ok(response)).build();
}
#Default
#Singleton
public class WackyDoodleEJB implements IWackyDoodleEJB{
public String createWackyDoodle(WackyDoodleRequestRO req){
System.out.println("Do Something Wacky!");
}
}
public interface IWackyDoodleEJB{
public String createWackyDoodle(WackyDoodleRequestRO request);
}
(Simple recreation of my more complex code for illustration purposes)
I see that exception when I deploy my EAR to my local WebSphere server. The application appears to start up and deploy just fine (if you don't pay attention to what you find in the logs). However, when I attempt to hit any of my HTTP resource endpoints, I get this oddly nondescript message:
E com.ibm.ws.webcontainer.internal.WebContainer handleRequest SRVE0255E: A WebGroup/Virtual Host to handle / has not been defined.
I suspect it has something to do with my EAR not publishing correctly (the exception in my question), but I honestly do not know. So, what could be happening here? This seems like it should be pretty boilerplate stuff.
The error suggests that I should turn my regular resource POJO into an EJB? If I add, for example, the @Stateless annotation to my class, the above exception in the logs goes away, but I am still not able to hit my service resources: I get the same nondescript exception. I am at a loss, and I have been looking at this for an hour. If you could point me in any direction, I would appreciate it.
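For reference, the change the error message pushes toward would look roughly like this (a sketch of the @Stateless variant I tried; it silences the merge exception but not the SRVE0255E error):

import javax.ejb.Stateless;

// Turning the POJO into a real EJB, as the CWWAM0002E merge error suggests.
@Stateless
public class WackyDoodleEJB implements IWackyDoodleEJB {
    public String createWackyDoodle(WackyDoodleRequestRO req) {
        System.out.println("Do Something Wacky!");
        return "OK";
    }
}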

SignalR with Redis: send message to specific ConnectionID

I am using SignalR for my chat application and wanted to try Redis with SignalR, but I cannot find a working example where I can send a message to a specific connectionId. The code below works for a single server instance, but when I make it a web garden with 3 processes it stops working, because the server instance that receives the message cannot find the connectionId for the destination userId.
private readonly static ConnectionMapping<string> _connections = new ConnectionMapping<string>();

public void Send(string sendTo, string message, string from)
{
    string fromclientid = Context.QueryString["clientid"];
    foreach (var connectionId in _connections.GetConnections(sendTo))
    {
        Clients.Client(connectionId).send(fromclientid, message);
    }
    Clients.Caller.send(sendTo, "me: " + message);
}

public override Task OnConnected()
{
    int clientid = Convert.ToInt32(Context.QueryString["clientid"]);
    _connections.Add(clientid.ToString(), Context.ConnectionId);
    return base.OnConnected(); // OnConnected must return a Task
}
I used the examples below to set up my box and code, but neither of them has an example of sending from one client to a specific client or a group of specific clients:
http://www.asp.net/signalr/overview/performance-and-scaling/scaleout-with-redis
https://github.com/mickdelaney/SignalR.Redis/tree/master/Redis.Sample
The ConnectionMapping instance in your Hub class will not be synced across different SignalR server instances. You need to use permanent external storage such as a database or a Windows Azure table. Refer to this link for more details:
http://www.asp.net/signalr/overview/hubs-api/mapping-users-to-connections

Access Neo4j in server mode with EmbeddedGraphDatabase?

If I run Neo4j in server mode so it is accessible via the REST API, can I access the same Neo4j instance with the EmbeddedGraphDatabase class?
I am thinking of a production setup where a Java app using EmbeddedGraphDatabase drives the logic, while other clients might navigate the data with REST in read-only mode.
What you are describing is a server plugin or extension. That way you expose your database via the REST API, but at the same time you can access the embedded graph database with high performance from your custom plugin/extension code.
In your custom code you can get a GraphDatabaseService injected, on which you operate.
You deploy your custom extensions as JARs alongside your neo4j-server and have client code operate over a domain-oriented RESTful API.
// extension sample
@Path("/helloworld")
public class HelloWorldResource {

    private final GraphDatabaseService database;

    public HelloWorldResource(@Context GraphDatabaseService database) {
        this.database = database;
    }

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    @Path("/{nodeId}")
    public Response hello(@PathParam("nodeId") long nodeId) {
        // Do stuff with the database
        return Response.status(Status.OK).entity(
                ("Hello World, nodeId=" + nodeId).getBytes()).build();
    }
}
Docs for writing plugins and extensions.
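To make the server pick up such an unmanaged extension, the classic registration (assuming a 1.x/2.x-era neo4j-server.properties; the package name and mount point below are placeholders) maps the extension's package to a URI:

# neo4j-server.properties: mount the extension's package as a JAX-RS root
org.neo4j.server.thirdparty_jaxrs_classes=com.example.extension=/examples/unmanaged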
