Can we connect to multiple databases in Directus simultaneously?
I want to connect Postgres, SQL, and AWS databases at the same time in Directus.
The closest the Directus team has come to discussing multiple database management is this thread on their GitHub:
https://github.com/directus/directus/discussions/12699
As per the last message by a maintainer:
This is not something that's officially supported [...] Directus wasn't designed to handle this use case out of the box 👍🏻
Within that same thread, a user manually modified the Knex usage to use separate connection pools for different databases, but it seems unreliable.
Currently, the best way to get the benefits of Directus with multiple databases is to set up a server running multiple Directus instances, each pointing to a different database, and use NGINX to expose each instance under a separate sub-path.
For example, if your app data lives in Postgres and your reports in a separate MySQL database, you could set up NGINX to proxy one Directus instance (connected to Postgres) under /api and another (connected to MySQL) under /reports, as sketched below.
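A minimal sketch of that NGINX setup, assuming the two Directus instances listen locally on ports 8055 and 8056 (ports, hostname, and sub-paths are assumptions; each instance's PUBLIC_URL environment variable should also be set to match its sub-path):

    server {
        listen 80;
        server_name example.com;

        # Directus instance connected to Postgres
        location /api/ {
            proxy_pass http://127.0.0.1:8055/;
            proxy_set_header Host $host;
        }

        # Directus instance connected to MySQL
        location /reports/ {
            proxy_pass http://127.0.0.1:8056/;
            proxy_set_header Host $host;
        }
    }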
Some useful links:
https://learndirectus.com/how-to-manage-multiple-projects-in-directus/
https://github.com/directus/directus/discussions/4480
https://github.com/directus/sdk/issues/51
Related
I'm developing an app which live-streams video and/or audio from different entities. Those entities' IDs and configurations are stored as records in my DB. My app's current architecture is something like the following:
a CRUD API endpoint for system-wide functionality, such as logging in or editing an entity's configuration.
N other endpoints (where N is the number of entities; each endpoint's route is defined by the specific entity's ID, like so: "/:id/api/") for each entity's specific functionality. Each entity is loaded by the app on initialization. Each of those endpoints is both a REST API handler and a WebSocket server for live-streaming media received from the backend configured for that entity.
On top of that, there's an NGINX instance which acts as a proxy and hosts our client files.
Obviously, this isn't very scalable at the moment (a single server instance handles an ever-growing number of entities), and it requires restarting my server's instance when adding/deleting an entity, which isn't ideal. I was thinking of splitting my app's server into micro-services: one for system-wide CRUD, and N others for each entity defined in my DB. Ultimately, I'd like those micro-services to run as Docker containers. The problems (or questions to which I don't know the answers) I'm facing at the moment are:
How does one run Docker containers dynamically, according to a DB (or programmatically)? Is it even possible? (See the sketch after this list.)
How does one update the running Docker container to be able to reconfigure that entity during run-time?
How would one even configure NGINX to proxy those dynamic micro-services? I'm guessing I'll have to use something like Consul?
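On the first question: Docker exposes an HTTP API, so containers can be created and started programmatically. A minimal Ruby sketch using the docker-api gem; the image name, env var, and container-naming scheme are hypothetical, and error handling and port mapping are omitted:

    require "docker"

    # Create and start one container per entity, driven by DB records
    # at runtime rather than fixed at deploy time.
    def start_entity_container(entity_id)
      container = Docker::Container.create(
        "name"  => "entity-#{entity_id}",           # hypothetical naming scheme
        "Image" => "myapp/entity-service:latest",   # hypothetical image
        "Env"   => ["ENTITY_ID=#{entity_id}"]
      )
      container.start
      container
    end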
I'm not very knowledgeable, so pardon me if I'm too naive to think I can achieve such architecture. Also, if you can think of a better architecture, I'd love to hear your suggestions.
Thanks!
I am using an Amazon student subscription for a data warehouse project. I have been able to set up a Redshift cluster, and I am able to query tables via SQL Workbench. I need to perform BI analysis tasks on the data, and the only open-source option I found is Redash.
I am not able to identify what to enter in the "host" field required in the Redash Redshift setup. Also, is there any straightforward way to do that?
When you're in the AWS console and go to Redshift -> Cluster -> Configuration, you should see the Endpoint, which looks like asd.asdasda.redshift.amazonaws.com:5439. When you remove the :5439 (which is the port), you'll have the host you need to connect to.
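For example (cluster name and region are made up):

    Endpoint: examplecluster.abc123xyz.eu-west-1.redshift.amazonaws.com:5439
    Host:     examplecluster.abc123xyz.eu-west-1.redshift.amazonaws.com
    Port:     5439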
Hope this helps.
While using the Neography gem to implement a Neo4j database in a Rails app, it appears that the database is local to that particular Rails app's embedded Neo4j server. I wish to share a common Neo4j DB between two Rails apps, just the way you can share a MySQL database through entries in database.yml. Is this impossible while using Neography? If so, what could be my alternatives that don't involve JRuby (i.e., neo4j.rb)?
Neography is a wrapper that queries a Neo4j server through its REST protocol, so once you have started the server you should be able to query it from anywhere you want, even with a simple curl command.
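A minimal sketch of pointing two Rails apps at the same standalone Neo4j server (the hostname below is hypothetical); put something like this in each app, e.g. in config/initializers/neography.rb:

    require "neography"

    Neography.configure do |config|
      config.protocol = "http"
      config.server   = "neo4j.internal.example"  # the shared Neo4j host (assumed)
      config.port     = 7474
    end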
In particular, if your apps are on two different servers, you want to uncomment this line in conf/neo4j-server.properties:
org.neo4j.server.webserver.address=0.0.0.0
Make sure, however, to protect your database; you can read how to here: http://docs.neo4j.org/chunked/stable/security-server.html
I have a small Rails app deployed on Heroku's free tier, with only an API and no views, to create Books, return a Book's genre, and remove all Books from the database. It looks a little like this (see the routes sketch after the list):
A POST request to site.herokuapp.com/add_book?name=harrypotter&genre=fantasy adds a book.
A GET request to site.herokuapp.com/find_book?name=harrypotter returns the genre.
A POST request to site.herokuapp.com/reset clears the database of all books
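In routing terms, that is roughly the following (a hypothetical config/routes.rb; the controller and action names are guesses):

    Rails.application.routes.draw do
      post "add_book",  to: "books#create"
      get  "find_book", to: "books#show"
      post "reset",     to: "books#reset"
    end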
Now that I have this working on a single server, I want to replicate it across three servers, each with a unique URL, so that I can send my calls to any of the three servers and have all their databases contain the same Book entries.
For example if I send a
POST request to site1.herokuapp.com/add_book?name=harrypotter&genre=fantasy
then send a
POST request to site2.herokuapp.com/add_book?name=littlewomen&genre=fiction
I can send a
GET request to site3.herokuapp.com/find_book?name=littlewomen.
But if I send a reset call to one server, it does not reset the other servers.
I found two gems, Octopus and Data Fabric, but it looks like they replicate across databases on the same server, not across different servers. Also, since I am going to be making calls to three different sites, will these gems work?
What's the best way to go about this type of database/server replication in Rails?
Not sure if I quite understand your complete use case. Is it that you simply want to distribute reads/writes across the different databases for different API calls?
If it is a small application, why don't you have all three applications on site 1, site 2, and site 3 connect to the same database instance?
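On Heroku, that can be done by attaching one app's Postgres add-on to the other apps. A sketch with the Heroku CLI (the add-on and app names are made up):

    heroku addons:attach site1-postgres --app site2
    heroku addons:attach site1-postgres --app site3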
If not, here are some options:
Master-Slave Replication
In your case you would have one master DB and two slave DBs. Data is written to the master and replicated to the slaves; you can then configure your Rails application to distribute reads across selected databases.
You can use Octopus to distribute writes/reads between master and slaves.
When using replication, all write queries are sent to the master and all read queries to the slaves, as in the sketch below.
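A minimal sketch with the ar-octopus gem; the slave name below is hypothetical and must match an entry in your config/shards.yml:

    class Book < ActiveRecord::Base
      replicated_model   # writes go to the master, reads are spread across slaves
    end

    # You can also pin a query to one connection explicitly:
    Book.using(:slave1).where(name: "harrypotter").first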
MySQL Cluster
I have not used MySQL Cluster yet but should be using it soon. If your requirement is to distribute reads/writes across all the databases while keeping the data consistent across all nodes, MySQL Cluster looks like a good candidate. The data is distributed across a pool of data nodes. Every MySQL server has access to all the data nodes, so when a server within the cluster changes data, for example by inserting rows, the new data is immediately accessible and visible to the other MySQL servers in the cluster.
See more here: MySQL Cluster
So I have a Rails application. It currently runs as a separate front-end and back-end, plus a database.
I need to scale it to have several back-end servers.
The backend server has Resque background workers running (spawned by user front-end requests). It also relies heavily on callbacks.
I am planning the following setup:
|front-end| --- |load-balancer (haproxy or AWS ELB)| --- Server 1 ---\
                                                     \-- Server 2 ---+--- PostgreSQL database
                                                                          (+ other DBs added via replication later if needed)
(+ other servers added in the same fashion later)
I have concerns about how to deal with putting the database on a separate machine in this case.
1) I intend to create a new, empty Rails app with a schema identical to the initial back-end's, have it running and accepting updates/posts via HTTP, and keep it connected via remote SSH (to trigger :after_commit callbacks in the back-end). Is this a good idea?
2) I am using PostgreSQL and intend to switch to an enterprise DB once the need arises. Currently the need is to scale the part of the back-end that does processing, not the database.
3) Does this approach seem scalable?
I'm not sure I really understand your question. Generally, in production applications the database layer is separate from the application layer. I can't quite tell if this pertains to you, but this talk is definitely worth a watch: http://vimeo.com/33263672. It discusses using a Redis layer between the Rails and DB layers to facilitate queuing and create a zero-downtime environment. That seems like a better solution than using a second Rails stack. I think it should look something like this:
|ELB| Web Servers |ELB| Application Servers |RRDNS| Redis Servers | PostgreSQL Servers |
That is, if I am understanding your meaning. If not, that video link is still worth a watch :)
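Regarding the Resque workers specifically: with several back-end servers, they all need to point at the same Redis so that a job enqueued by one server can be picked up by a worker on any other. A minimal sketch (the Redis hostname is made up), e.g. in config/initializers/resque.rb:

    require "resque"

    # All servers share one Redis, so the job queue is global to the cluster.
    Resque.redis = Redis.new(host: "redis.internal.example", port: 6379)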