We are building an app where multiple websites are powered by a single site on IIS.
We have a web-based tool where webmasters can edit the "resx" files, like:
/App_GlobalResources/es/Backend.es.resx
However, there are two problems with this:
Changing these files affects all sites.
It also causes the entire IIS site to restart.
Is there another approach to this?
I think storing the strings in a DB may be a bad idea as it will cause hundreds of SQL lookups per page.
Use a database-driven resource provider that supports caching. And you are in luck, because someone else has already done it.
Do you provide an interface to edit the resx files? If so, store the strings in the database and cache them in the Application scope, expiring the cache when they are updated. That way, you'll have both speed and flexibility.
Just because the data is in a database doesn't mean it'll be slow. Caching is the solution: the first lookup will be slow, of course, but subsequent lookups will be as fast as you can get.
Because I am using OWIN authentication, I had to put the CMS on a different website. The CMS is at cms.domain.com, and in my VS project for domain.com I simply pointed the "piranha" connection string to the right database. So far everything was working perfectly; I only had to change the MediaProvider to deal with the domain name issues for the two retrieve methods of IMediaProvider.
Now I'm trying to integrate the menu using the @UI.Menu helper, but it looks like the helpers are not using the database configured in Web.config (I have no problem retrieving posts from C#); I only see the Start page listed. To confirm this, I also tried changing the site description from cms.domain.com/manager and displaying it with the @Site.Description helper, but it still shows the default site description, so it really looks like there is another database around.
Where does the data used by the HTML helpers come from, and how can I fix this?
UPDATE: It seems this is actually a caching issue; it has nothing to do with the connection string.
Piranha CMS caches a lot of meta-data for performance and to minimize round-trips to the database. The site information and the sitemap are two of these things, as they will most likely be used in every page rendering.
The default cache implementation uses the IIS cache, which is per application pool. The cache is invalidated when data is modified in the manager interface, but if you, for example, run the manager interface in another application pool (site/application), the caching will fail, causing the kind of errors you describe.
I'm not sure how your application is set up, but this is my primary guess. If you are in fact running the client web and the manager in different application pools, and you need to keep doing so, try one of the following approaches:
Implement a distributed cache provider
Set the system param CACHE_SERVER_EXPIRES to 1
Setting the param to 1 invalidates all server cache after one minute. This is of course not recommended if you are expecting a lot of traffic to your site, as it will more or less disable the caching mechanism.
I hope this helps you.
Best regards
Håkan
All of a sudden it's now working. The only thing I remember doing is deleting duplicate entries in the dbo.page table. It doesn't explain why the site description wasn't retrieved properly either, but never mind; I hope this will help someone else. I hope custom authentication will be built into the next release of Piranha CMS!
First, this may not be the best title, but it seems to make sense at this time.
What I'm looking at is loading a resource which should live for the life of the web app. There may be some provisioning at a later point for a manual refresh, but currently that is not the case.
We have a complex permission structure which resides in the database for multiple reasons. I do not want to incur the overhead of retrieving it on each page load, so I want it to reside in memory. My first instinct is to create a singleton that I load it into and use whenever I need to look up a permission. I understand the hesitance toward singletons and wonder if that is a poor approach.
I do not want to go down the route of yaml or another storage mechanism, the permissions must reside in the DB for other dependencies. That said, in Rails, what would be the most appropriate way to efficiently load and read the data?
This sounds like the perfect use case for the cache:
permissions = Rails.cache.fetch('permissions') do
  # Permissions aren't cached yet; perform the long operation and load from the DB
  load_permissions_from_db
end
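If you do add the manual refresh you mentioned later, you only need to delete the cache key; the next call to Rails.cache.fetch will repopulate it from the DB. A minimal sketch, reusing the 'permissions' key and the load helper assumed above:

# Call this from whatever admin action triggers a manual refresh;
# the next Rails.cache.fetch('permissions') call reloads from the DB.
def refresh_permissions
  Rails.cache.delete('permissions')
end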
More details here.
I'm not totally sure what you mean, but I think there are a few ways you could go:
caching (e.g. caches_page :page or caches_action :action in the controller)
or possibly storing something in a cookie or in session data. Of course, I don't totally understand the nature of this data, so I don't know what would work better, if at all.
I'm getting into MVC and it is wonderful. However, I still need to decide which database system to use. For many years my options have been:
1) MS SQL. For complex web applications. Example: A shopping cart or CMS.
2) MS Access. For smaller and simple ones. Example: a small product catalog, blog or news system.
I don't want to keep using Access; however, using SQL means using SQL Express if you don't want to pay more (my clients will not want to) for SQL database hosting. But with SQL Express (when the host supports it) you get connection problems when many connections are open (from your app and other apps hosted in the same pool).
I want to use LINQ; that's why I'm now forced to use MS SQL (Express) in order to use LINQ2SQL.
Any suggestions for a database other than MS Access or SQL Express that doesn't require more hosting expense? Otherwise I'll have to try Entities + MS Access. Thanks for your help.
Access is completely unacceptable for any website that expects to handle more than one user at a time.
http://www.15seconds.com/Issue/010514.htm
What are your requirements for the database? Does it need to be relational? How many simultaneous users are you expecting for your website?
I would either use MySQL or SQL Server Express.
Perhaps you can post the details of your SQL Server problem. We may be able to find a fix for it.
For most small to medium websites, I would definitely use SQL Express. It's free and, within its remit, works just as well as the full version of SQL Server. We run innumerable websites on SQL Express.
Regarding Access: Access is no joke. It just depends how you use it. If I put @mattimus on a horse and told him to lasso a 450 kg cow (something I did on a daily basis as a kid), he too would be a joke. It's horses for courses, isn't it?
To dispel the misconceptions about Access, which are based on ignorance and false snobbery, see:
Who Access really is (one very versatile cookie for a start)
Just don't use Access for a website. You don't need to. SQL Express is all you need.
You can use pretty much any database you want with MVC. As others have suggested, stay away from Access for web development.
SQL Server, including the Express edition, is a good candidate for small web sites. You can also look into MySQL and other cheap/free relational databases.
On an unrelated note, my family is from Mérida, Yucatán. :)
Like so many questions asked on SO, this one is like asking "how long is a piece of string?" You can't really say what's appropriate without considering the environment.
For a read-only website, Jet/ACE can perform quite splendidly if you know what you're doing. Michael Kaplan's http://trigeminal.com is backed by a Jet database, but it's read-only. Back in its heyday, MichKa used to say he was getting 100K pageviews per day (or some number that was outrageously high, so far as I was concerned!).
But with a read/write site, you wouldn't be able to support nearly as many users.
That doesn't mean you couldn't use Jet/ACE as the back end. If you have five users max and it's an intranet app, you can be just fine with it.
I wouldn't want to use it myself, but it's not going to be a problem with user populations that are appropriate for the Jet/ACE database engine to begin with, and particularly if you manage your connections properly. Jet/ACE works better with a single shared connection than it does with constant opening and closing of connections (because of the overhead of creating and recreating the LDB file), so you have to code against it differently than you would with your standard server-based database.
Also, Michael Kaplan used to point out that via ADO/OLEDB, Jet was threadsafe, but via DAO, it was not. Use of ADO/OLEDB made it marginally acceptable as a back end for a web site, though I would tend not to choose it for anything other than a site that was very small or disposable, or where there were no options that were not outrageously expensive.
But the key point is that there is no blanket statement that can be made here -- whether it will work reliably depends entirely on the specifics of the environment in which you're using it.
Access is a joke. You should go with MSSQL, MySQL, or Oracle. I like MSSQL personally. Access is not designed to handle large operations and is ridiculously slow.
I am trying to build a CMS I can use to host multiple sites. I know I'm going to end up reinventing the wheel a million times with this project, so I'm thinking about extending an existing open source Ruby on Rails CMS to meet my needs.
One of those needs is to be able to run multiple sites, while using only one code-base. That way, when there's an update I want to make, I can update it in one place, and the change is reflected on all of the sites. I think that this will be able to scale by running multiple instances of the application.
I think that I can use the domain/subdomain to determine which data to display. For example, someone goes to subdomain1.mysite.com and the application looks in the database for the content for subdomain1.
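A minimal sketch of what that lookup could look like, assuming a Site model with a subdomain column (all names here are illustrative, not from any particular CMS):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter :load_current_site  # before_action in later Rails versions

  private

  # Resolve the tenant for this request from the host name, e.g.
  # subdomain1.mysite.com => the Site row whose subdomain is 'subdomain1'.
  def load_current_site
    @current_site = Site.find_by_subdomain(request.subdomains.first)
    render :text => 'Site not found', :status => 404 if @current_site.nil?
  end
end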
The problem I see is that most pre-built CMS solutions, including the one I want to use, are designed to host only one site, so the database is structured to work with one site. However, I had the idea that I could overcome this by "creating a new database" for each site, then specifying which database to connect to based on the domain/subdomain, as mentioned above.
I'm thinking of hosting this on Heroku, so I'm wondering what my options might be. I'm not very familiar with Amazon S3 or Amazon SimpleDB, but I feel like there's some sort of "cloud database" that would make this solution a lot more realistic than creating a new MySQL database for each site.
What do you think? Am I thinking about this the wrong way? What advice do you have to offer in this area?
I've worked on a Rails app like this, and there it was done with name-based virtual hosts and DB entries for each site running. Each record was scoped to a site where necessary (blog posts, etc.), while users had access to all sites running out of that DB. Administrator permissions could be global or scoped to one or more sites.
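A rough sketch of that kind of per-site scoping (the model names are just examples):

class Site < ActiveRecord::Base
  has_many :posts  # each scoped record carries a site_id
end

class Post < ActiveRecord::Base
  belongs_to :site  # content is per-site; users deliberately are not
end

# In a controller, always fetch content through the current site so one
# site's records never leak into another site's pages:
@posts = @current_site.posts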
You're absolutely correct when you say you'll reinvent the wheel a million times during the project. Plugins will likely require hacking on top of the CMS itself.
In my situation, it ended up being a waste of almost a million dollars of company money to build that codebase to run multiple sites while still being able to cater to the whims of each client site. It worked, but was not very maintainable due to the number of site-specific hacks that subsequently entered the codebase. You may be able to make it work if you don't have to worry about catering to specific client sites running on your platform.
In the end, you're going to need a layer of indirection to handle the different sites regardless of methodology. We ended up putting it in the database itself. If you go with the different-db-for-each-site method you mentioned, you'll put that layer in your code instead. I'm not sure which one is the better method.
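For what it's worth, if you went with the different-db-for-each-site method, the indirection in code would be roughly the following. This is a hedged sketch: the per-site config file is an assumption, and swapping connections on ActiveRecord::Base re-points the whole process, so it only behaves safely with one request at a time per process.

# Early in request handling, before any ActiveRecord queries run.
# config/site_databases.yml is assumed to map subdomains to standard
# ActiveRecord connection settings (adapter, database, etc.).
db_config = YAML.load_file('config/site_databases.yml')[request.subdomains.first]
ActiveRecord::Base.establish_connection(db_config)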
I hope you're able to pull this off. I failed.
Also, as I learned today, Heroku offers Postgres rather than MySQL for Rails apps.
There's James Stewart's Theme Support Plugin for Rails 2.3, and lucasefe's themes_for_rails gem for Rails 3+.
I just started using the 2.3 version and it's working well so far.
I was wondering if somebody has some insight on this issue.
A little background:
We've been using Rails to migrate from an old dBase- and Visual Basic-based system and to build an internal company intranet that does things like label printing, inventory control, shipping, etc. - basically an ERP.
The Dilemma
Right now we need to replace an old customer-facing website, written in Java, that connects to our internal system for our clients to use. We want to be able to pull information like inventory, order placement, and account statements from our internal system and expose it to the site live. The reason is that we take orders on the website, through fax and phone, and sometimes we have walk-ins. So sometimes (very rarely, though) even a short delay in an inventory update on our old Java site causes us to put an order on backorder, because we sold the same item to two customers within half an hour. It's usually fixed within one day, but we want to avoid this in the future.
Actual Question
Does anyone have any suggestion on how to accomplish this in a better way?
Here are three options that I see:
a) Build a separate Rails app on a web server that connects to the same DB our internal app connects to.
+++ Pluses: Live data - the same thing our internal apps see, i.e. orders are created in real time and inventory is depleted right away.
--- Minuses: Potential security risk, and duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
b) Build a separate Rails app on a web server that connects to a different DB from our internal app.
+++ Pluses: Less security exposure.
--- Minuses: Extra effort to sync the web DB and the internal DB (or to use a web service like a REST API), extra code to handle inventory depletion and order # creation, and duplication of code - i.e. I need to duplicate all the controllers, models, views, etc. that deal with orders.
c) Expose internal app to the web
+++ Pluses: all the problems above are eliminated. This is a much "DRY"er method.
--- Minuses: A lot more security headaches, and more complicated login systems - one for the web and one for internal users using LDAP.
So, any thoughts? Has anyone had a similar problem to solve? Please keep in mind that our company has limited resources - namely, one developer dedicated to this. So this has to be one of those "right" and "smart" solutions, not a "throw money/people/resources at it" solution.
Thank you.
I would probably create separate controllers for the public site and use ActiveResource to pull data from your internal application. Take a look at:
http://blog.rubybestpractices.com/posts/gregory/rails_modularity_1.html
http://api.rubyonrails.org/classes/ActiveResource/Base.html
Edit - fixed link and added api link
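For concreteness, a minimal ActiveResource sketch, assuming the internal app exposes a RESTful orders resource (the URL and attributes are illustrative):

class Order < ActiveResource::Base
  # The public app talks HTTP to the internal app instead of sharing its DB.
  self.site = 'https://internal.example.com'
end

# GET https://internal.example.com/orders/1.xml, served by a RESTful
# OrdersController in the internal application.
order = Order.find(1)
order.status = 'shipped'
order.save  # issues a PUT back to the internal app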
I would go for (a). You should be able to create the controllers so that they are reusable.
Internal users are as likely to duplicate data as external users.
It's likely that a public UI and an internal, staff-facing UI will need to be different. The data needs to be consistent, so I would put quite a bit of effort into ensuring there is exactly one definitive database. So: one database, two UIs?
Have a "service" layer that both UIs can use. If this was Java I would be pretty confident of getting the services done quickly. I wonder how easy it is in Ruby/Rails.
The best outcome would be that your existing Customer Java UI can be adapted to use the Rails service layer.
Assuming you trust your programmers not to accidentally expose things in the wrong place, the 'right' solution seems to me to be a single application with two different sets of controllers and views: one for internal use and one public-facing. This gives you djna's idea of one database, two UIs.
As you say having two separate databases is going to involve a lot of duplication, as well as the problem of replication.
It doesn't make sense to me to have two totally separate apps using the same database; the ActiveRecord part of a Rails app is an abstraction of the database in Ruby code, so having two abstractions of a single database seems a bit wrong.
You can also then have common business rules in your models, to avoid code duplication across the two versions of the site.
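In routing terms (Rails 3 syntax; the controller names are illustrative), the split might look like this:

# config/routes.rb
namespace :admin do
  resources :orders  # Admin::OrdersController - the staff-facing UI
end
resources :orders    # OrdersController - the public-facing UI

Both controllers talk to the same Order model, so validations and inventory rules live in exactly one place.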
If you don't completely trust your programmers, then Mike's ActiveResource approach is pretty good - it would make it a lot harder to expose things by accident (although ActiveResource is a lot less flexible and feature-rich than ActiveRecord).
What version of Rails are you using? Since version 2.3, Rails Engines are included; they allow you to share common code (models/views/controllers) in a Rails plugin.
See the Railscast for a short introduction.
I use it too. I have developed three applications for different clients, but with all the shared code in a plugin.
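In Rails 3 terms, the heart of such a plugin is just an engine class; once the gem is loaded, its own app/models, app/views, and app/controllers directories are added to the host application's load paths (the names below are illustrative):

# my_shared_code/lib/my_shared_code.rb
require 'rails'

module MySharedCode
  class Engine < Rails::Engine
  end
end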