I have two neo4j databases running on two different hosts. I connected my Rails app to one of them while generating the app. Now I want to use the other database with the app as well. How can I configure the app to connect to both databases?
There’s not currently a good way to configure one Ruby process to use two sessions at the same time. If you are using Rails you can change the server by setting the NEO4J_URL environment variable. Otherwise you’d need to manage the session by setting Neo4j::ActiveBase.current_session or Neo4j::ActiveBase.on_establish_session (which sets the session for each new thread; that may be needed if you are running a multi-threaded process).
See: https://github.com/neo4jrb/neo4j/blob/master/lib/neo4j/active_base.rb
As Brian mentioned, we currently cannot configure one Ruby process to use two sessions at the same time. We have to manage the session by setting Neo4j::ActiveBase.current_session (see: https://github.com/neo4jrb/neo4j/blob/master/lib/neo4j/active_base.rb)
The neo4j.yml sets the Neo4j::ActiveBase.current_session for you in the railtie. If you set Neo4j::ActiveBase.current_session after the app has started up it will override what was in the neo4j.yml. The current_session needs to be a Neo4j::Core::CypherSession object from the neo4j-core gem. (See the readme: https://github.com/neo4jrb/neo4j-core)
Also keep in mind that neo4j does not currently support a different session for each model, so you might run into problems if you set the session inside a model. A better way is to set the session in the normal runtime of the app. You might also want to wrap the Neo4j::Core::CypherSession so that you get QueryProxy objects back instead of Neo4j::Core objects. To do this, specify wrap_level: :proc when declaring the adaptor. (Refer: https://github.com/neo4jrb/neo4j/blob/master/lib/neo4j/session_manager.rb#L14)
So, all in all, here is what you need to do:
http_adaptor = Neo4j::Core::CypherSession::Adaptors::HTTP.new('http://neo4j:7474', wrap_level: :proc)
Neo4j::ActiveBase.current_session = Neo4j::Core::CypherSession.new(http_adaptor)
This will establish a wrapped session with the desired database at 'http://neo4j:7474'.
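If you actually need to talk to both databases from the same process, one option (a rough sketch; the hostnames below are placeholders for your two servers) is to build a session for each and swap Neo4j::ActiveBase.current_session before querying the other one:

first_adaptor  = Neo4j::Core::CypherSession::Adaptors::HTTP.new('http://first-host:7474', wrap_level: :proc)
second_adaptor = Neo4j::Core::CypherSession::Adaptors::HTTP.new('http://second-host:7474', wrap_level: :proc)

first_session  = Neo4j::Core::CypherSession.new(first_adaptor)
second_session = Neo4j::Core::CypherSession.new(second_adaptor)

# Queries and model calls from here on hit the first database
Neo4j::ActiveBase.current_session = first_session
# ...

# Switch before working against the second database
Neo4j::ActiveBase.current_session = second_session

Remember that only one session is current at a time (and per thread), so this is switching between databases, not true simultaneous use.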
I'm looking for the best way to solve a problem.
At this moment I have a site for a customer, example.domain.com
My customer asked me to create another website with some changes in the design, but the content is the same as the first website. I don't want to duplicate the website, because every feature I add to site A must also be deployed to site B, so I'm looking for a smart way to handle the situation.
I need to keep two different domains and I need also custom mailers and other small tweaks in the controllers (and maybe in some models).
My idea is to put a before filter like this in the application controller:
before_action :detect_domain

private

def detect_domain
  case request.env['HTTP_HOST']
  when "example.domain.com"
    request.variant = :host1
  when "example1.domain.com"
    request.variant = :host2
  end
end
Then I use the variant with some conditionals to choose the mailer, customize the views, and apply some code changes.
Any other ideas?
Using a before filter and a per-request variable like your proposal will work, with a couple caveats that I'll mention below. I'd recommend a tool like the request_store gem to actually store the per-request value of which "skin" is selected.
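Something along these lines should work (a rough sketch; the domains and skin names are placeholders taken from the question):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :detect_skin

  private

  # Store the selected "skin" for the duration of this request only
  def detect_skin
    RequestStore.store[:skin] =
      case request.host
      when "example.domain.com"  then :host1
      when "example1.domain.com" then :host2
      end
  end
end

# Anywhere later in the same request:
# RequestStore.store[:skin]  # => :host1 or :host2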
Now, for the caveats. First, the main problem with per-request variables is that your Rails app does not always exist in the context of a request. Background jobs and console sessions operate outside of the usual request/response flow of your app. You will need to think about what happens when your models or other non-controller/view code is executed when that variable isn't set. I would suggest simply not having your models depend on RequestStore at all -- have the controllers pass any request-specific information down into the models, if needed.
Secondly, it's not clear from your description if you want any data or logical separation between the two domains, or if you just want different look-and-feels. If the former, you might consider the apartment gem, which aims to make database multi-tenancy easier.
EDIT: I also want to mention that, as an alternative to the multi-tenant solution above, you have the option of a multi-instance solution, wherein you use an environment variable to indicate which version of the site should be displayed and spin up multiple instances of your app (either on the same server behind a reverse proxy, or on separate servers with separate DNS entries or a reverse proxy). The downside is increased infrastructure cost, but the context problem I mentioned above no longer exists (everything always has access to environment variables).
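For the multi-instance variant, the sketch is even smaller (the variable name SITE_VARIANT is made up for illustration):

# config/initializers/site_variant.rb
# Each deployed instance sets SITE_VARIANT in its environment.
SITE_VARIANT = ENV.fetch("SITE_VARIANT", "host1").to_sym

# Jobs, console sessions, and requests all see the same value:
# use_second_mailer = (SITE_VARIANT == :host2)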
Say I have a running rails project, and now I need to add entries to its database from an outside source. This is to be done automatically once a day and can be reduced to loading data from a text file.
Now I'm wondering, what is the conventional way to do this in a Rails project? Do I create a controller method which runs once a day and how do I call it? Do I access the database from outside with something like the sequel gem?
I think it depends on your application's restrictions and business requirements.
My opinion is that both ways are workable.
But I'd prefer to connect directly to the database or use some message queue, just to avoid HTTP and cut down the number of HTTP calls.
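If you go the "no HTTP" route from within the Rails codebase, one common shape for this is a rake task run by cron (or the whenever gem). A minimal sketch, assuming a tab-separated file and an Entry model (both made up for illustration):

# lib/tasks/import.rake
namespace :import do
  desc "Load daily entries from a text file"
  task daily: :environment do
    # Read the feed line by line and insert rows through the app's own model
    File.foreach("/data/daily_feed.txt") do |line|
      name, value = line.chomp.split("\t")
      Entry.create!(name: name, value: value)
    end
  end
end

Then schedule something like bundle exec rake import:daily to run once a day.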
I'm just wondering when I can safely use Mongoid.override_database.
If I use it inside a Sidekiq worker, is the DB going to be changed only for the worker which called the override_database method?
How about using it in a standard Rails controller? Is there any situation where it shouldn't be used (where it could cause problems)?
At first I used .with(database: 'xyz') when I needed to change the DB, but then I found out that it doesn't work on relational fields...
I am a new Ruby on Rails user and have a question. I have an idea of what I want my Users DB to look like, but I was wondering whether or not I should add an additional value to it. Basically I need a variable to signal to all users that it is safe to proceed with a certain action. This variable would be persistent across all users and should be visible to all users, but I want the server to be able to change it as well. When programming in other languages I would use a global variable, so I wanted to check if that is also the case here. If so, would this be the best approach for going about it: Site-Wide Global Variables in Ruby on Rails? Also, how would I update the global variable? Thanks for any help!
A global variable doesn't fit your need. It doesn't spread across all the Ruby processes: if your web server spawns 5 Ruby processes to handle 5 requests at the same time, a variable defined in the first process won't be visible to the others.
There are other solutions available. You can use a database and store the flag/information on the database. Otherwise, you can use a file and store the value in the file.
The best solution would be an in-memory shared data source, such as memcached or Redis.
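A minimal sketch with the redis gem (the key name is an assumption):

require "redis"

redis = Redis.new  # defaults to localhost:6379

# The server flips the flag:
redis.set("action_allowed", "true")

# Every process serving every user reads the same value:
redis.get("action_allowed") == "true"

A single row in a settings table would work the same way if you'd rather not add Redis.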
It's an application that we use internally at the office that I would like to offer as a hosted service for anyone.
How can I do that without making major code changes?
The first thing that occurs to me is to have the app select which database to connect to based on the domain.
So each instance of the app would have its own database, but all instances would share the same code.
The only changes required to the code would be the database selection.
Is this approach maintainable? I've heard wordpress.com does this and that it offers a couple of advantages. I'm mainly looking to do it this way to avoid having to scope my entire set of database queries to a certain site within the same database.
Thanks!
The simplest way to do this is to clone the application and create another server instance to handle it. This is actually the way I handle multiple WordPress blogs on my server.
Pros:
- This process can be streamlined into a utility script.
- It can be easily maintained if symlinks are used for the common code, i.e. everything but branding and some of the things in the config directory.
Cons:
- If you're using Passenger it will require an Apache restart for each new instance.
- Same if you're using Apache to route subdomains on different virtual hosts to different mongrel clusters.
However, the better way comes from this question: Rails - Separate Database Per Subdomain
The method in the accepted answer is much more robust. It might require more changes than you're looking for, but it has all the benefits without the drawbacks of the other methods. Each new instance requires a new entry in the master database with the table name and other instance-specific information. You'll also want a custom rake task to build the database for each new instance.
I would suggest switching the database connection and adding a view_path based on the domain; I have posted code in this question.
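As a rough sketch of that idea (the database.yml keys and view directories below are assumptions, and re-establishing the connection on ActiveRecord::Base affects the whole process, so treat this only as an outline):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :switch_site

  private

  def switch_site
    subdomain = request.subdomains.first
    # Assumes database.yml defines e.g. "site_a_database" and "site_b_database"
    ActiveRecord::Base.establish_connection(:"#{subdomain}_database")
    # Look for templates in an assumed per-site views directory first
    prepend_view_path Rails.root.join("app", "views_#{subdomain}")
  end
end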
I hope this helps!
I wouldn't do this with multiple databases as you mentioned. Keeping all your schemas/migrations in sync across all the DBs could become painful.
I would look into simply making it a multi-tenant app where you have some sort of "Account" model and then all your existing models are scoped to it ... in other words, if this was a blog app, your Account has_many :posts, etc.
With this approach, you can identify accounts by subdomain ... have people choose their subdomain when they create an account and go from there.
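A minimal sketch of that scoping (the model and column names are assumptions):

class Account < ActiveRecord::Base
  has_many :posts
end

class Post < ActiveRecord::Base
  belongs_to :account
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :set_current_account

  private

  def set_current_account
    # People pick their subdomain at signup; look the account up on each request
    @current_account = Account.find_by(subdomain: request.subdomains.first)
  end
end

# Then everything is scoped through the account:
# @current_account.posts.create!(title: "Hello")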
It's pretty straightforward to do. If you need to add billing into the mix, you might look at the SaaS Railskit (which handles all the signup and subdomain stuff) or Chargify.
You can also identify accounts Twitter-style ... with http://myapp.com/someuser