My friend has set up a database for a Ragnarok Online server, and he wants me to code the accompanying website, which is going to use some of that data (and obviously, I'll have to add tables for the news system, website accounts, etc.). Since I'm learning RoR, I was going to do it that way.
I have a few "best practice" questions related to this:
Should I create a separate database for the website, since it's going to have its own data alongside the game data? (I already have a few leads on linking multiple databases with Rails, but that seems like too much hassle for what it is.)
If not, do I have to create a model/controller for each of the tables in the database, even though I'm not going to use 90% of them? Or just the ones that I need?
An example of the problem: the game database has its own "user" table, but I need another "user" table for the website, plus some joins between the two. So, what's the best practice here?
Uhm, best practice is not making your own user table. That will cause you much pain. Best practice? Use an API. Expose the game's database in some way to your website, and fetch that info with external requests from your web application.
The reasons why making a second user table is a hassle:
1) You'll constantly have to update it, pulling data from the original to keep it up to date. That means a CRON job or something similar pulling from the original table. Yuck. And what if that CRON job makes a mistake? (It will.)
2) It's almost inevitable that there will be inconsistencies if two separate tables are maintained. Are you sure your web application is really fail-proof?
Update:
What you're going to need is essentially a second Rails application that acts as a REST API for that database. For a good idea of what REST is, I'd read through this to get you started: http://tomayko.com/writings/rest-to-my-wife
Once you have a good understanding of that, start building your app, and test that it's working by using tools like cURL to send requests to your API.
Once you have that done, I'd take a look at the Ruby rest-client gem, as Nobita mentioned. This is what you're going to use from your web application to request information from your API application.
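For illustration, here's a minimal rest-client sketch; the endpoint URL and the fields are assumptions, not anything the game database actually exposes:

    require 'rest-client'
    require 'json'

    # Hypothetical endpoint exposed by the API application.
    response = RestClient.get('http://api.example.com/characters/1.json')
    character = JSON.parse(response.body)
    puts character['name']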
Just let me note: I think this would be a terrible first Rails project unless you're already really well versed in other web development tools, preferably MVC frameworks.
Background
I have a fairly typical Rails application, which uses Devise for authentication management. While building this app, I realized that realtime chat would be a great feature to have. Ideally, of course, this would make use of Websockets, in order to reduce the polling load on the server (as well as making it marginally easier to implement, as you don't have to manage polling).
I quickly realized that Ruby isn't really a great fit for keeping a large number of concurrent connections open at one time. Phoenix, however, is written in Elixir, so it can make use of the Erlang VM, which is quite good at long-lived connections. It also seems like it could be greatly beneficial to store all the chat data separately from the main application database, which should also reduce load in the future.
The Problem
I want to be able to make this separation completely invisible to the user. They visit www.example.com/chat, and it loads all the relevant data in from chat.example.com and starts up the websockets, without requiring them to login to a separate service. I think using an <iframe> is probably the way to go about doing this.
My problem is sharing authentication and data between the two applications. The Rails app needs to be able to create conversations on the Phoenix app in response to certain events. The Phoenix app needs to know what user is currently authenticated into Rails, as well as general data about the user.
An OAuth flow with the Rails app as the ID provider seemed like a good fit at first, but I can't figure out a way for the Phoenix app to automatically be granted access. I also have some concerns about user records existing inside the Phoenix app—it should be aware of all users on the main application, so you can start a chat with a user even if they haven't ever opened chat.
What would be the best way to go about doing this? My intuition says that this is going to involve window.postMessage and some kind of token system, but I wanted to ask what the generally accepted way of doing this was before I accidentally created an insecure mess.
Sharing the session isn't too hard, assuming you are running at least Rails 4.1 and using JSON serialization (the default for apps created with >= 4.1). A quick Google search finds PlugRailsCookieSessionStore, which accomplishes this.
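On the Rails side, the relevant pieces are roughly the following (a sketch; the cookie name and domain are assumptions, and the Phoenix app's PlugRailsCookieSessionStore has to be configured with the same secret_key_base and cookie settings):

    # config/initializers/cookies_serializer.rb
    # JSON is the default serializer on Rails >= 4.1; setting it explicitly
    # makes it clear that the Elixir side can decode the cookie.
    Rails.application.config.action_dispatch.cookies_serializer = :json

    # config/initializers/session_store.rb
    # Scope the cookie to the parent domain so chat.example.com can read it.
    Rails.application.config.session_store :cookie_store,
      key: '_shared_session', domain: '.example.com'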
For more information on what it takes to share a session between Rails and another language, Matt Aimonetti has an excellent blog post with detailed information.
Lastly, if you would prefer to stay entirely in Ruby, it's definitely doable. Ryan Stout discusses scalability around persistent connections in the FAQ for Volt, which uses a persistent connection for every user. The article he links is also a great read. Just mentioning it to help you weigh the trade off of building a separate app in another language.
I'm going to be collaborating with a Python developer on a web application. I'm going to be building a part of it in Ruby and he is going to build another part of it using Django. I don't know much about Django.
My plan for integrating the two parts is to simply map a certain URL path prefix (say, any request that begins with /services) to the Python code, while leaving Rails to process other requests.
The Python and Ruby parts of the app will share and make updates to the same MySQL datastore.
My questions:
What do people think generally of this sort of integration strategy?
Is there a better alternative (short of writing it all in one language)?
What's the best way to share sensitive session data (i.e. a logged-in user's id) across the two parts of the app?
As I see it, you can't use Django's auth, you can't use Django's ORM, you can't use Django's admin, you can't use Django's sessions - all you are left with is URL mapping to views and the template system. I'd not use Django but a simpler Python framework. It's time your Python programmer expanded his world...
One possible way that should be pretty clean is to decide which one of the apps is the "main" one and have the other one communicate with it over a well-defined API, rather than directly interacting with the underlying database.
If you're doing it right, you're already building your Rails application with a RESTful API. The Django app could act as a REST client to it.
I'm sure it could work the other way around too (with the rest-client gem, for instance).
That way, things like validations and other core business logic are enforced in one place, rather than two.
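As a sketch of what that could look like on the Rails side (the resource names here are made up):

    # app/controllers/api/orders_controller.rb -- hypothetical resource
    module Api
      class OrdersController < ApplicationController
        # GET /api/orders/:id.json -- the Django app consumes this over HTTP.
        def show
          render :json => Order.find(params[:id])
        end

        # POST /api/orders.json -- validations run here, in one place.
        def create
          order = Order.new(params[:order])
          if order.save
            render :json => order, :status => :created
          else
            render :json => order.errors, :status => :unprocessable_entity
          end
        end
      end
    end

The Django side would then talk to these endpoints with any HTTP client instead of reading the MySQL tables directly.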
A project, product, whatever you call it, needs a leader.
This is the first proof that you don't have one. Someone should decide whether you're doing Ruby or Python. I prefer Ruby myself, but I understand those who prefer Python.
I think starting a product by asking yourself those kinds of questions is a BAD start.
If your colleague only knows Prototype, and you only know jQuery, are you going to mix those technologies too? Same for the DB? And for testing frameworks?
This is a never-ending subject of argument. Someone should decide, IMHO, if you want something good to happen. I work with a lot of teams as a consultant - Agile teams, some of them very mature - and that's the kind of thing they avoid at all costs.
The exception is if one of you is going to work on some specific part of the project which REALLY needs one or the other technology, while still thinking the other one is best for the rest of the application.
Think, for example, of batch computing: you have ALL your web app in RoR or Django, and you have a script, called by CRON or whatever, crunching huge amounts of data outside the web app and filling a DB.
My 2 cts.
Firstly, let me mention that I'm new to web frameworks.
I have to write my first web app for a uni project. I spent two weeks learning Grails and Django, started working with Rails yesterday, and loved it. So I've decided to go with it and discard my work in the other frameworks.
About the app
It's supposed to be a Twitter app that utilizes Twitter's Streaming API to record tweets which match a set of specified filters. (I'm going to use the Tweetstream gem, which takes care of connecting to Twitter and capturing matching tweets.)
The app's web interface should have the following functionality -
Creating new requests: the user inputs a set of filter parameters (keywords to track) and the URL/username/password of an existing PostgreSQL or MySQL database.
When a request is created, the web app spawns a background Ruby process. This process connects to Twitter via the Tweetstream gem. It also connects to the database specified by the user to store the received tweets.
Viewing/terminating existing requests
The user should be able to see a list of requests that are running as background processes by visiting a URL such as /listRequests.
Seeing further details about a process / terminating it
The user should be able to go to a URL such as /requests/1/detail to view some details (e.g. how long the request has been running, the number of tweets captured, etc.). The user should also be able to terminate the process.
My inexperience is showing, as I'm unable to work out -
what my models should be (maybe Request should be a model; Tweet doesn't need to be one, as tweets aren't stored locally)
how I'm going to connect to remote databases.
how I can create background processes (backgroundrb??) and associate them with request objects so that I can terminate them when the user asks.
At the end of the day, I've got to build this myself, so I'm not asking for you to design this for me. But some pointers in the right direction would be extremely helpful and appreciated!
Thanks!
Hmm.
Since the web app is just a thin wrapper around the heavy-lifting processes, it might be more appropriate to just use something like Sinatra here. Rails is a big framework that pulls in lots of stuff that you won't need for this project, even though it will work.
Does the "background process" requirement here strictly mean a separate process, or does it just mean concurrency? TweetStream uses the EventMachine gem to handle updates as they come, which uses a separate thread for each connection. It would be quite possible to spawn the TweetStream clients from a simple Sinatra web app, keep them in a big array, have them all run concurrently with no trouble, and simply run stop on a given client when you want it to stop. No need for a database or anything.
I'm not sure exactly what your prof is looking for here, but MVC doesn't really fit. It's better to work with the requirements than to mash them into a design pattern that doesn't match :/
Even so, I <3 Rails! Definitely get on that when you're working primarily with objects being represented in a database :)
Quite a project. Most of what will be challenging is not related to Rails itself, but rather the integration with background processes. backgroundrb is a bit out of fashion: the last commit on the main GitHub project was over a year ago, so it's likely not up to snuff for Rails 3. Search around and evaluate your options. Resque is popular, but I'm not sure whether your real-time needs match its queue-based structure.
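To give a feel for the queue-based model, here is a hypothetical Resque worker - the class and model names are made up, and a long-running stream is admittedly a stretch for a job queue:

    # app/workers/capture_job.rb
    class CaptureJob
      @queue = :captures

      # Resque runs this in a worker process, with the arguments
      # that were passed to Resque.enqueue.
      def self.perform(search_id)
        search = Search.find(search_id)
        # ... connect to Twitter and store tweets for this search ...
      end
    end

    # From a controller, when the user creates a new search:
    Resque.enqueue(CaptureJob, @search.id)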
As for your app, I see only a single model, but don't call it Request - that's a reserved name in Rails. Perhaps a Search model, or something along that line.
Connecting to different databases is straightforward, but it will require configuring your ActiveRecord classes directly at runtime rather than through database.yml.
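A minimal sketch of that: give the models living in the remote database their own abstract base class, then point it at the connection details the user supplied (the attribute and table names here are hypothetical):

    class RemoteRecord < ActiveRecord::Base
      self.abstract_class = true # no table of its own, just a connection
    end

    class CapturedTweet < RemoteRecord
      self.table_name = 'tweets' # hypothetical table in the user's database
    end

    # At runtime, using the values the user entered for this search:
    RemoteRecord.establish_connection(
      :adapter  => 'mysql2',
      :host     => search.db_host,
      :username => search.db_username,
      :password => search.db_password,
      :database => search.db_name
    )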
I am trying to build a CMS I can use to host multiple sites. I know I'm going to end up reinventing the wheel a million times with this project, so I'm thinking about extending an existing open source Ruby on Rails CMS to meet my needs.
One of those needs is to be able to run multiple sites, while using only one code-base. That way, when there's an update I want to make, I can update it in one place, and the change is reflected on all of the sites. I think that this will be able to scale by running multiple instances of the application.
I think that I can use the domain/subdomain to determine which data to display. For example, someone goes to subdomain1.mysite.com and the application looks in the database for the content for subdomain1.
The problem I see is that most pre-built CMS solutions, including the one I want to use, are designed to host only one site, so the database is structured to work with a single site. However, I had the idea that I could overcome this by "creating a new database" for each site, then specifying which database to connect to based on the domain/subdomain, as I mentioned above.
I'm thinking of hosting this on Heroku, so I'm wondering what my options for this might be. I'm not very familiar with Amazon S3, or Amazon SimpleDB, but I feel like there's some sort of "cloud database" that would make this solution a lot more realistic, than creating a new MySQL database for each site.
What do you think? Am I thinking about this the wrong way? What advice do you have to offer in this area?
I've worked on a Rails app like this, and the way it was done there was name-based virtual hosts, with DB entries for each site running. Each record was scoped to a site where necessary (blog posts, etc.), while users had access to all sites running out of that DB. Administrator permissions could be global or scoped to one or more sites.
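A sketch of that scoping pattern, with made-up model names (era-appropriate Rails 2/3 style):

    class Site < ActiveRecord::Base
      has_many :posts
    end

    class Post < ActiveRecord::Base
      belongs_to :site
    end

    class ApplicationController < ActionController::Base
      before_filter :find_current_site

      private

      # Resolve the current site from the requested host name.
      def find_current_site
        @current_site = Site.find_by_subdomain(request.subdomains.first)
      end
    end

    # Queries then go through the site: @current_site.posts, etc.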
You're absolutely correct when you say you'll reinvent the wheel a million times during the project. Plugins will likely require hacking on top of the CMS itself.
In my situation, it ended up being a waste of almost a million dollars of company money to build that codebase to run multiple sites while still being able to cater to the whims of each client site. It worked, but was not very maintainable due to the number of site-specific hacks that subsequently entered the codebase. You may be able to make it work if you don't have to worry about catering to specific client sites running on your platform.
In the end, you're going to need a layer of indirection to handle the different sites regardless of methodology. We ended up putting it in the database itself. If you go with the different-db-for-each-site method you mentioned, you'll put that layer in your code instead. I'm not sure which one is the better method.
I hope you're able to pull this off. I failed.
Also, as I learned today, Heroku offers postgres instead of mysql for rails apps.
There's James Stewart's Theme Support Plugin for Rails 2.3, and lucasefe's themes_for_rails gem for Rails 3+.
I just started using the 2.3 version and it's working well so far.
I was wondering if somebody has some insight on this issue.
A little background:
We've been using Rails to migrate from an old dBase and Visual Basic based system to build an internal company intranet that does things like label printing, inventory control, shipping, etc. - basically an ERP.
The Dilemma
Right now we need to replace an old customer-facing website, done in Java, that connects to our internal system for our clients to use. We want to be able to pull information like inventory, order placement, and account statements from our internal system and expose it to the live site. The reason is that we take orders on the website, through fax & phone, and sometimes we have walk-ins. So sometimes (very rarely, though) even a short delay in an inventory update on our old Java site causes us to put an order on backorder, because we sell the same item to two customers within half an hour. It's usually fixed within one day, but we want to avoid this in the future.
Actual Question
Does anyone have any suggestions on how to accomplish this in a better way?
Here are three options that I see:
a) Build a separate Rails app on a web server that connects to the same DB as our internal app.
+++ Pluses: live data - the same thing our internal apps see, i.e. orders are created in real time and inventory is depleted right away.
--- Minuses: potential security risk; duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
b) Build a separate Rails app on a web server that connects to a different DB from our internal app.
+++ Pluses: less security exposure.
--- Minuses: extra effort to sync the web DB and the internal DB (or to use a web service like a REST API); extra code to handle inventory depletion and order number creation; duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
c) Expose the internal app to the web.
+++ Pluses: all the problems from above eliminated. This is a much "DRY"er method.
--- Minuses: a lot more security headaches. More complicated login systems - one for the web & one for internal users using LDAP.
So, any thoughts? Has anyone had a similar problem to solve? Please keep in mind that our company has limited resources - namely, one developer dedicated to this. So this has to be one of those "right" and "smart" solutions, not a "throw money/people/resources at it" solution.
Thank you.
I would probably create separate controllers for the public site and use ActiveResource to pull data from your internal application. Take a look at
http://blog.rubybestpractices.com/posts/gregory/rails_modularity_1.html
http://api.rubyonrails.org/classes/ActiveResource/Base.html
Edit - fixed link and added api link
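For a flavour of ActiveResource, a minimal sketch (the host and resource name are assumptions):

    # In the public-facing app; no database table needed.
    class Order < ActiveResource::Base
      self.site = 'https://internal.example.com'
    end

    # Behaves much like an ActiveRecord model, but talks HTTP/XML
    # to the internal application's RESTful controllers:
    order = Order.find(1)   # GET /orders/1.xml
    order.status = 'shipped'
    order.save              # PUT /orders/1.xml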
I would go for a). You should be able to create the controllers so that they are reusable.
Internal users are as likely to duplicate data as external users.
It's likely that a public UI and an internal, for-the-staff UI will need to be different. The data needs to be consistent, so I would put quite a bit of effort into ensuring that there is exactly one definitive database. So: one database, two UIs?
Have a "service" layer that both UIs can use. If this were Java I would be pretty confident of getting the services done quickly. I wonder how easy it is in Ruby/Rails.
The best outcome would be that your existing Customer Java UI can be adapted to use the Rails service layer.
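In Ruby the service layer can just be plain classes that both sets of controllers call, so the business rules live in one place. A rough sketch with hypothetical names, aimed at the overselling problem described in the question:

    class OrderService
      # Both the public and the internal UI place orders through here,
      # so stock checks behave identically for web, fax and walk-in orders.
      def self.place_order(customer, item, quantity)
        Order.transaction do
          item.lock! # row-level lock to avoid selling the same stock twice
          raise 'insufficient stock' if item.stock < quantity
          item.decrement!(:stock, quantity)
          Order.create!(:customer => customer, :item => item, :quantity => quantity)
        end
      end
    end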
Assuming you trust your programmers to not accidentally expose things in the wrong place, the 'right' solution seems to me to have a single application, but two different sets of controllers and views, one for internal use, and one for public-facing. This will give you djna's idea of one database, two UIs.
As you say having two separate databases is going to involve a lot of duplication, as well as the problem of replication.
It doesn't make sense to me to have two totally separate apps using the same database; the ActiveRecord part of a Rails app is an abstraction of the database in Ruby code, therefore having two abstractions for a single database seems a bit wrong.
You can also then have common business rules in your models, to avoid code duplication across the two versions of the site.
If you don't completely trust your programmers, then Mike's ActiveResource approach is pretty good - it would make it a lot harder to expose things by accident (although ActiveResource is a lot less flexible and feature-rich than ActiveRecord).
What version of Rails are you using? Since version 2.3, Rails Engines are included; they allow you to share common code (models/views/controllers) in a Rails plugin.
See the Railscast for a short introduction.
I use it too. I have developed three applications for different clients, but with all the shared code in a plugin.
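A minimal sketch of how such a shared plugin looks in Rails 3 (the module and file names are made up; in Rails 2.3 a plugin containing its own app/ directory works much the same way):

    # In the shared gem/plugin, lib/shared_core/engine.rb:
    module SharedCore
      class Engine < ::Rails::Engine
        # Models, views and controllers placed under the engine's app/
        # directory are picked up automatically by each host application.
      end
    end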