If I run Neo4j in server mode so that it is accessible via the REST API, can I access the same Neo4j instance with the EmbeddedGraphDatabase class?
I am thinking of a production setup where a Java app using EmbeddedGraphDatabase drives the logic, while other clients navigate the data via REST in read-only mode.
What you are describing is a server plugin or extension. That way you expose your database via the REST API, but at the same time you can access the embedded graph database with high performance from your custom plugin/extension code.
In your custom code you can get a GraphDatabaseService injected on which you operate.
You deploy your custom extensions as JARs with your Neo4j server and have client code operate against a domain-oriented RESTful API.
// extension sample
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;

import org.neo4j.graphdb.GraphDatabaseService;

@Path( "/helloworld" )
public class HelloWorldResource {

    private final GraphDatabaseService database;

    public HelloWorldResource( @Context GraphDatabaseService database ) {
        this.database = database;
    }

    @GET
    @Produces( MediaType.TEXT_PLAIN )
    @Path( "/{nodeId}" )
    public Response hello( @PathParam( "nodeId" ) long nodeId ) {
        // Do stuff with the database
        return Response.status( Status.OK ).entity(
                ( "Hello World, nodeId=" + nodeId ).getBytes() ).build();
    }
}
Docs for writing plugins and extensions.
https://github.com/spring-cloud-stream-app-starters/aggregator/tree/master/spring-cloud-starter-stream-processor-aggregator does not list a property for the GemFire message store
The GemfireMessageStore is configured like this:
@ConditionalOnClass(GemfireMessageStore.class)
@ConditionalOnProperty(prefix = AggregatorProperties.PREFIX,
        name = "message-store-type",
        havingValue = AggregatorProperties.MessageStoreType.GEMFIRE)
@Import(ClientCacheAutoConfiguration.class)
static class Gemfire {

    @Bean
    @ConditionalOnMissingBean
    public ClientRegionFactoryBean<?, ?> gemfireRegion(GemFireCache cache, AggregatorProperties properties) {
        ClientRegionFactoryBean<?, ?> clientRegionFactoryBean = new ClientRegionFactoryBean<>();
        clientRegionFactoryBean.setCache(cache);
        clientRegionFactoryBean.setName(properties.getMessageStoreEntity());
        return clientRegionFactoryBean;
    }

    @Bean
    public MessageGroupStore messageStore(Region<Object, Object> region) {
        return new GemfireMessageStore(region);
    }
}
The point is that you can always override that ClientRegionFactoryBean with your own, as sketched below.
Or you can take into account that ClientCacheAutoConfiguration is based on @ClientCacheApplication, which, in turn, allows you to have a ClientCacheConfigurer bean and provide whatever is sufficient for your client cache configuration, including config and pool settings. That said, this is not available at the app-starter configuration level: you have to write some custom code to be included as a dependency in the final uber jar for the target binder-specific application.
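For the first option, a minimal sketch of such an override in your own configuration class might look like this (the class name, the region name, and the PROXY shortcut are assumptions, and the GemFireCache/ClientRegionShortcut packages assume a recent Apache Geode based version):

import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;

@Configuration
public class CustomMessageStoreRegionConfiguration {

    // The starter's gemfireRegion bean is @ConditionalOnMissingBean,
    // so this bean takes precedence and the auto-configured one backs off.
    @Bean
    public ClientRegionFactoryBean<Object, Object> gemfireRegion(GemFireCache cache) {
        ClientRegionFactoryBean<Object, Object> region = new ClientRegionFactoryBean<>();
        region.setCache(cache);
        region.setName("messageStoreRegion"); // assumed region name on the servers
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}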
More info on how to build such patched applications is in the docs: https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.RC1/reference/htmlsingle/#_patching_pre_built_applications
A variety of backend storage options are available through Spring Integration. You can read more about them in spring-cloud-starter-stream-processor-aggregator/README.
The Spring Integration docs on this matter are included there as a link, and the GemFire section could be useful.
It would also be useful to review the MessageGroupStore implementation, since it is the foundation for the storage options in the aggregator.
I am trying to find the Neo4j cluster health using the Java API. I can run CALL dbms.cluster.overview() from the CLI; is there an equivalent Java API for this?
1. Variant "Spring Boot"
If Spring Boot with Spring Data Neo4j is an option for you, you could define a DAO that executes your Cypher statement and receives the result in your own QueryResult class.
1.1 GeneralQueriesDAO
@Repository
public interface GeneralQueriesDAO extends Neo4jRepository<String, Long> {

    @Query("CALL dbms.cluster.overview() YIELD id, addresses, role, groups, database;")
    ClusterOverviewResult[] getClusterOverview();
}
1.2 ClusterOverviewResult
@QueryResult
public class ClusterOverviewResult {

    private String id;              // The id of the instance.
    private List<String> addresses; // All addresses for the instance.
    private String role;            // The role of the instance: LEADER, FOLLOWER, or READ_REPLICA.
    private List<String> groups;    // All server groups the instance is part of.
    private String database;        // The name of the database the instance is hosting.

    // Default constructor as well as getters and setters omitted for clarity.
}
1.3 Program flow
@Autowired
private GeneralQueriesDAO generalQueriesDAO;

[...]

ClusterOverviewResult[] clusterOverviewResult = generalQueriesDAO.getClusterOverview();
2. Variant "Without Spring"
Without Spring Boot, the rough procedure using the Neo4j Java driver could be:
Session session = driver.session();
StatementResult result = session.run("Cypher statement");
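Fleshed out a bit, a sketch against the 1.x Java driver could look like this (the bolt URI and credentials are placeholders):

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Record;
import org.neo4j.driver.v1.Session;
import org.neo4j.driver.v1.StatementResult;

public class ClusterOverview {

    public static void main(String[] args) {
        // Placeholder URI and credentials; point this at one of your cluster members.
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "secret"));
             Session session = driver.session()) {

            StatementResult result = session.run("CALL dbms.cluster.overview()");
            while (result.hasNext()) {
                Record member = result.next();
                System.out.println(member.get("role").asString()
                        + " " + member.get("addresses").asList());
            }
        }
    }
}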
3. Variant "HTTP endpoints"
Another option could be to use the HTTP endpoints for monitoring the health of a Neo4j Causal Cluster.
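For completeness, here is a plain-Java sketch of such a check. The URL follows the causal clustering status endpoints documented for Neo4j 3.x (/available, /writable, /read-only, /status under /db/manage/server/causalclustering); treat the exact path and port as assumptions to verify against the docs for your version:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ClusterHealthCheck {

    public static void main(String[] args) throws Exception {
        // Assumed endpoint and port; returns 200 when the member can handle requests.
        URL url = new URL("http://localhost:7474/db/manage/server/causalclustering/available");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        int status = connection.getResponseCode();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            System.out.println(status + " " + reader.readLine());
        }
        connection.disconnect();
    }
}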
I am trying to build a multilayer application (service) in C#. To be precise, I am trying to build a REST web service with ASP.NET Web API which will be self-hosted (with OWIN). So far I have the following components (every one of them is in a separate .dll):
- RestHost (which in my case is an console application)
- RestService (here is my web service with all the controllers)
- InterfacesLayer
- ModelLayer (here are the objects I use, just with their get/set methods)
- DataLayer (every single class inside of ModelLayer has its own class in Datalayer, plus there is the Database connection class)
- BusinessLayer (here all the logic is done; again, every class from the model has its own class, and this layer communicates with the REST service and the data layer).
RestHost - as the name says, it is the host of my service. Besides that I am also doing my dependency injection here. Since it is not much code I will post it:
static void Main(string[] args)
{
    IUnityContainer container = new UnityContainer();

    // Dependency resolving
    container.RegisterType<IAktData, AktDataImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<IAktService, AktServiceImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<ILeistungData, LeistungDataImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<ILeistungService, LeistungServiceImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<IPersonData, PersonDataImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<IPersonService, PersonServiceImpl>(new HierarchicalLifetimeManager());
    container.RegisterType<IPersistent, FirebirdDB>(new HierarchicalLifetimeManager());

    string serverAddress = ConfigurationManager.AppSettings["serverAddress"];
    string connectionString = ConfigurationManager.ConnectionStrings["connectionStrings"].ConnectionString;

    using (RESTService.StartServer(container, serverAddress, connectionString))
    {
        Console.WriteLine("Server started @ " + DateTime.Now.ToString() + " on " + serverAddress + "/api");
        Console.ReadLine();
    }
}
Oh, and what I forgot to mention (but you can see it from the code): in my host application I am also reading the App.config, where my connection string is stored.
And here is my problem: I am not sure how to access the database connection from my service. I am implementing Firebird in my data access layer, but I am unsure how to use it in my application. Of course, the easiest way would be to just create an instance and pass it to my service, but that is the last thing I want to do. I have also been thinking of implementing Firebird as a static class or as a singleton, but then I cannot use my IPersistent interface (and besides that, I don't think that this is the right approach).
So my question is: is there any best practice for this kind of thing? I somehow need to pass the connection string to the implementation of IPersistent (Firebird), but without actually creating an instance of Firebird in my RESTService.
Thanks
The general pattern for a multi-layer application like the one you're building is to have a data layer that provides your services with access to a database, or some other method of persisting data, usually via a repository.
You can then configure your IoC container to inject your connection string into your repository and then inject your repository into your service. This way your service stays agnostic as to how data is persisted and can focus on defining the business logic.
I actually do a similar thing for a repository that, instead of persisting data in a database, stores it in a blob on Azure's CDN. The configuration within my IoC container (StructureMap in my case) looks like this:
string storageApiKey = ConfigurationManager.AppSettings["CloudStorageApiKey"];
string storageUsername = ConfigurationManager.AppSettings["CloudStorageUsername"];
this.For<IImageStorageRepository>().Use<ImageStorageRepository>()
    .Ctor<string>("storageApiKey").Is(storageApiKey)
    .Ctor<string>("storageUsername").Is(storageUsername);
With my repository looking like this:
public class ImageStorageRepository : IImageStorageRepository
{
    ....

    public ImageStorageRepository(string storageApiKey, string storageUsername)
    {
        this.cloudIdentity = new CloudIdentity() { APIKey = storageApiKey, Username = storageUsername };
        this.cloudFilesProvider = new CloudFilesProvider(cloudIdentity);
    }

    ....
}
I need to use a .NET client to connect to a SignalR-enabled application.
The client class needs to be a singleton and loaded for use globally.
I want to know what is the best technique for using singletons globally within an MVC application.
I have been looking into using the application start to get the singleton, but where to keep it is a mystery to me.
The hub can't be a singleton; by design, SignalR creates an instance for each incoming request.
On the client I would use an IoC framework and register the client as a singleton; this way each module that resolves it will get the same instance.
I have made a little lib that takes care of all this for you. Install the server package with
Install-Package SignalR.EventAggregatorProxy
Read here for the few steps to hook it up; it needs a backplane service bus or event aggregator to be able to pick up your events:
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Once configured, install the .NET client in your client project with
Install-Package SignalR.EventAggregatorProxy.Client.DotNet
See here for how to set it up:
https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki/.NET-Client
Once configured, any class can register itself as a listener like this:
public class MyViewModel : IHandle<MyEvent>
{
    public MyViewModel(IEventAggregator eventAggregator)
    {
        eventAggregator.Subscribe(this);
    }

    public void Handle(MyEvent message)
    {
        // Act on MyEvent
    }
}
On the server you can send a message from outside the hub to all connected clients using the GetClients() method like this:
public class MyHub : Hub
{
    // (Your hub methods)

    public static IHubConnectionContext GetClients()
    {
        return GlobalHost.ConnectionManager.GetHubContext<MyHub>().Clients;
    }
}
You can use it like this:
MyHub.GetClients().All.SomeMethod();
I am testing the code from the SDK to call Alfresco on a Bitnami Alfresco 4.0.e-0 server, with a webapp that is located on the same Tomcat server as Alfresco. The code hangs at the very first call to AuthenticationUtils to get a session. I am pretty sure I supplied the standard Bitnami Alfresco user and password for this. Did I miss any libraries? I put most of the available dependencies into my local Maven repository and the code compiles fine.
The following is the code from the SDK, with the Alfresco license header removed, as I could not format the code with it:
package org.alfresco.sample.webservice;

import org.alfresco.webservice.repository.RepositoryServiceSoapBindingStub;
import org.alfresco.webservice.types.Store;
import org.alfresco.webservice.util.AuthenticationUtils;
import org.alfresco.webservice.util.WebServiceFactory;

public class GetStores extends SamplesBase
{
    /**
     * Connect to the repository and print out the names of the available stores.
     *
     * @param args
     */
    public static void main(String[] args)
        throws Exception
    {
        // Start the session
        AuthenticationUtils.startSession(USERNAME, PASSWORD);

        try
        {
            // Get the repository service
            RepositoryServiceSoapBindingStub repositoryService = WebServiceFactory.getRepositoryService();

            // Get the array of stores available in the repository
            Store[] stores = repositoryService.getStores();
            if (stores == null)
            {
                // NOTE: empty arrays are returned as a null object; this is an issue with the generated web service code.
                System.out.println("There are no stores available in the repository.");
            }
            else
            {
                // Output the names of all the stores available in the repository
                System.out.println("The following stores are available in the repository:");
                for (Store store : stores)
                {
                    System.out.println(store.getScheme() + "://" + store.getAddress());
                }
            }
        }
        finally
        {
            // End the session
            AuthenticationUtils.endSession();
        }
    }
}
The WebServiceFactory uses
http://localhost:8080/alfresco/api
as the default endpoint. You can change the endpoint by providing a file called webserviceclient.properties on the classpath under alfresco (the resource path: alfresco/webserviceclient.properties).
The properties file must offer a property called repository.location, which specifies the endpoint URL. Since you are using a Bitnami Alfresco instance, it is probably running on port 80. The file should contain the following property entry:
repository.location=http://localhost:80/alfresco/api
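If the version of the web service client you use also exposes it, the endpoint can alternatively be set programmatically right before the startSession(...) call in the sample above; treat the method name below as an assumption to verify against your WebServiceFactory:

// Assumed API; equivalent to the repository.location property above.
WebServiceFactory.setEndpointAddress("http://localhost:80/alfresco/api");
AuthenticationUtils.startSession(USERNAME, PASSWORD);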