I am working on a multilingual site developed in ASP.NET MVC. Currently we manage translations using resource (resx) files and everything is working fine.
Now, as per the client's requirement, they want to integrate our resource files with a TMS ("Phrase") through a webhook. So in the future, if they create a new key or modify an existing one, the change should automatically be reflected in the application's resx files and then on the dev/test/prod environments.
When I tried updating a resource file on an API call, it did get modified and the changes were reflected in the application.
But when we modify a resx file under the App_GlobalResources folder, it restarts the whole application, which is one drawback of this approach. Also, when we deploy our changes, App_GlobalResources gets compiled into a DLL, so post-deployment we are unable to add new translations or change existing ones.
Can anyone suggest the best approach we should consider to fulfill the above requirement?
Edit:
Can we use JSON instead of resx files in the existing application?
A common way to handle translations is through a database instead of resource files. You save the same information in your database: language, key (the resource name) and value (the translated text).
With this approach, you need to build a way to manage the translations (the typical CRUD operations) and a layer to look up any key in each language.
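For illustration, here is a minimal sketch of what that lookup layer could look like, assuming a Translations table with Language, ResourceKey and Value columns and a connection string named "Default" (all names are illustrative, not part of your existing setup):

using System.Collections.Concurrent;
using System.Configuration;
using System.Data.SqlClient;

public static class DbTranslations
{
    // Cache per (language, key) so the database is only hit once per resource.
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string Get(string language, string key)
    {
        return Cache.GetOrAdd(language + "|" + key, _ =>
        {
            var cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
            using (var con = new SqlConnection(cs))
            using (var cmd = new SqlCommand(
                "SELECT Value FROM Translations WHERE Language = @lang AND ResourceKey = @key", con))
            {
                cmd.Parameters.AddWithValue("@lang", language);
                cmd.Parameters.AddWithValue("@key", key);
                con.Open();
                // Fall back to the key itself if no translation exists yet.
                return (cmd.ExecuteScalar() as string) ?? key;
            }
        });
    }
}

A view could then call something like @DbTranslations.Get("fr", "Home_Title"), and the webhook endpoint from the TMS only has to insert or update rows (and invalidate the cache), with no resx recompilation and no application restart.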
Talk with your client and check how important this feature really is. I worked on a project like this some time ago and, in the end, we never did translations that way. We added more functionality, made changes and translations, and when the iteration finished we moved everything to production. Maybe that's not your case, but it's a pity to work on something that never gets used.
I am working on a project where we are rewriting the interface of an existing application, porting everything to Swagger/OpenAPI.
Right now, each feature has its own yml file, which is a standalone spec. But there are some drawbacks:
duplicated content in the yml files (e.g. models which could be shared across files)
duplicated program code (which is generated from those yml files).
having to process each yml file individually when using tools.
Ideally we would like to have a separate folder for each service, with the models and service description for that specific service close together, but separated from the other services. Of course there are also shared models, which we then want in a different folder (e.g. "/shared-models"). And finally we want all those files to be included by one main yml root file.
So, we have been looking at splitting/importing files with a $ref attribute. But it is tricky to come up with a full-scale file and folder structure, because the spec seems to allow usage of $ref in some places, but not all. You can't just split and structure the files any way you like, so we will probably need some kind of trade-off.
I was especially wondering how other companies handle this setup (e.g. an example of an enterprise-level structure of Swagger files would be excellent). We like to keep things simple and, whenever possible, stick to standards or popular conventions.
(For clarity: my question is not: "how to use $ref")
I'm using ASP.NET MVC 4 for my application, and I'm using the web optimization features (bundling and minification of scripts and styles).
Now, what I understand is (please correct me if I'm wrong) that the optimization framework looks at the included files at compile time and configures them. It creates a version number (v=something) based on the contents. Every time the contents change, it recreates the version hash and the client gets the updated files.
Now, is there a way to get the following done
[1] Update something inside a js file on my server, and serve the updated one to the clients without re-building and re-starting the application (I'm not changing the bundle configuration here, just updating file content inside a script)?
[2] Update the script configuration itself (e.g. adding a new script to a bundle), and get that served to the clients without re-compiling and re-starting the application? Or, at least without re-compiling? (I know, generally we define the bundles inside cs files, but I'm wondering if there is a way out!)
[3] Is there a way to use my own version number (say from a config file, v=myCustomScriptVersion) rather than the auto-generated version hash?
It's a bit late, but I'm just sharing my experience with my own questions here.
As discussed in the comments on the question, bundles are defined in a cs file (generally BundleConfig.cs inside App_Start). So the bundles are defined at compile time, and at application start they get added to the bundle collection and become usable.
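For reference, a minimal bundle registration looks roughly like the sketch below (the bundle name and file paths are illustrative):

// App_Start/BundleConfig.cs
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Register one script bundle made up of two files.
        bundles.Add(new ScriptBundle("~/Bundles/MyBundledScripts")
            .Include("~/Scripts/site.js",
                     "~/Scripts/helpers.js"));
    }
}

RegisterBundles is called from Application_Start in Global.asax, and the bundle is rendered in a view with @Scripts.Render("~/Bundles/MyBundledScripts").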
Now, the interesting bit. At run time, the optimization framework looks into the included files, creates a hash of the contents, and appends that as a version query string to the bundle request. So, when the bundle is requested, the generated URI looks like the one below.
http://example.com/Bundles/MyBundledScripts?v=ILpm9GTTPShzteCf85dcR4x0msPpku-QRNlggE42QN81
This version number v=... is completely dynamic. If any file content within the bundle is changed, the version will be regenerated; otherwise it remains the same.
Now to answer the questions,
[1] This is done automatically by the framework; no need to do anything extra. Every time a file's content changes, a new version number is generated and the clients get the updated scripts.
[2] Not possible. If the files included in a bundle are changed, the application has to be recompiled.
[3] Yes, a custom version number can be used. It can be added as below.
@Scripts.Render("~/Bundles/MyBundledScripts?v=" + ConfigurationManager.AppSettings["ScriptVersion"])
But Caution! This will remove the automatic versioning based on file contents.
Additionally, if there are multiple versions of the same file available and we always want to include the latest one, that can be achieved easily by including a {version} wildcard in the bundle configuration, like below.
bundles.Add(new ScriptBundle("~/Bundles/MyBundledScripts")
    .Include("~/Scripts/Vendor/someScript-{version}.js"));
So, if there are 2 scripts in the /Scripts/Vendor folder
someScript-2.3.js
someScript-3.4.js
Then the file someScript-3.4.js (higher version) will get included automatically. And when a new file someScript-4.0.js is added to the folder, that will be served to clients without any need for recompile/restart.
I'm a first-time poster, long-time listener, and I would really be interested in reading about some of your localization architectures and, eventually, in getting feedback on our approach (as follows).
I would like some advice on an approach we're thinking of using with resource files. We are using MVC 3.0 and have a website project and a resource project. In the resource project we have a structure which mimics the structure of the website, e.g. controller -> view -> file.
We reference the resx files in the views by importing the resource namespace at the top of the view/control, e.g. <%@ Import Namespace="MyAppResources.Resources.Website.Home" %>, and then reference the resx value we need by using <%= Index.SomeText %>, where Index is the name of the resource file.
What we were thinking of doing instead, and would love some advice on, is to divide the resx structure into website areas and use a helper, e.g. LocalizationHelper.GetValue("Home", "SomeText"), where "Home" is the name of the resource file and "SomeText" is a value in that resx file. The reason we would do this is to avoid having to recompile the resource project for every small copy change we make (as we may need a quick fix in our deployed environment), and also it will probably be the most commonly used helper in the website project, so this would keep things short and consistent. The localization helper would also store the values in a cached dictionary, so if a value is used more than once it is retrieved from the cache.
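For what it's worth, a rough sketch of such a helper could look like the following (the base namespace convention and the assumption that the helper lives in the resource assembly are purely illustrative):

using System.Collections.Concurrent;
using System.Globalization;
using System.Resources;

public static class LocalizationHelper
{
    // Cache per culture/file/key so each resource is resolved only once.
    private static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string GetValue(string resourceFile, string key)
    {
        var cacheKey = CultureInfo.CurrentUICulture.Name + "|" + resourceFile + "|" + key;
        return Cache.GetOrAdd(cacheKey, _ =>
        {
            // Resolve the resx by convention, e.g. MyAppResources.Resources.Website.Home
            var rm = new ResourceManager(
                "MyAppResources.Resources.Website." + resourceFile,
                typeof(LocalizationHelper).Assembly); // assumes the helper is compiled into the resource assembly
            return rm.GetString(key, CultureInfo.CurrentUICulture) ?? key;
        });
    }
}

One nice property of hiding the lookup behind GetValue is that the backing store (compiled resx, loose files, or a database) can later be swapped without touching the call sites in the views.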
Does anyone know of a better approach or improvements we have not thought of?
I would recommend using a database to store the localized values instead of a RESX file.
Using a database would prevent you from needing to make any code/file deployments to update your application. Furthermore, you could build a GUI interface for modifying the localized values (which is a great feature for the site administrators/editors).
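If it helps, a minimal sketch of that editing side could look like the code below (the entity, context and action names are assumptions, using Entity Framework here; any data-access layer works the same way):

using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

// Illustrative entity and context for storing localized values.
public class LocalizedString
{
    public int Id { get; set; }
    public string Language { get; set; }
    public string Key { get; set; }
    public string Value { get; set; }
}

public class LocalizationDbContext : DbContext
{
    public DbSet<LocalizedString> LocalizedStrings { get; set; }
}

public class LocalizationAdminController : Controller
{
    // POST from a simple edit form; creates the entry if it does not exist yet.
    [HttpPost]
    public ActionResult Edit(string language, string key, string value)
    {
        using (var db = new LocalizationDbContext())
        {
            var entry = db.LocalizedStrings
                .SingleOrDefault(s => s.Language == language && s.Key == key);

            if (entry == null)
            {
                entry = new LocalizedString { Language = language, Key = key };
                db.LocalizedStrings.Add(entry);
            }

            entry.Value = value;
            db.SaveChanges();
        }
        return RedirectToAction("Index");
    }
}

Combined with a cached read path, editors can then change copy in production without a deployment.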
Basically, I need to send my designer an unfinished Rails website.
My designer doesn't have a Ruby/Rails environment installed and should be able to:
modify the CSS
add some html elements
I can manually check the diff after he has done his work.
Is there a way I can easily extract my app, or give him access to the deployed one with the ability to point the CSS at his files?
Based on your question, your designer, in addition to the design, also writes code: he creates CSS files and edits your view files. That makes him an integrator.
As such, he should learn the basics of source control management, such as git, svn or any system you prefer (my favorite is Bazaar, for its simplicity).
It is a best practice that will allow you to save some time and avoid a lot of headaches when merging your revisions. A nice side effect: he will be able to easily roll-back to a previous working version of the code, should anything bad have happened.
I would like to create a simple file repository in Ruby on Rails. Users have their accounts, and after one logs in they can upload a file or download files previously uploaded.
The issue here is security: files should be safe and not available to anyone but their owners.
Where, in which folder, should I store the files, to make them as safe as possible?
Does it make sense to rename the uploaded files, store the names in a database, and restore them when needed? This might help avoid name conflicts, though I'm not sure if it's a good idea.
Should the files be stored all in one folder, or should they be somewhat divided?
rename the files, if for no other reason than that you have no way of knowing whether today's file "test" is supposed to replace last week's "test" or not (perhaps the user had them in different directories)
give each user their own directory, this prevents performance problems and makes it easy to migrate, archive, or delete a single user
put metadata in the database and files in the file system
look out for code injection via file name
This is an interesting question. Depending on the level of security you want to apply I would recommend the following:
Choose a folder that is only accessible by your app server (if you choose to store the files in the filesystem).
I would always recommend renaming the files to a randomly generated hash (or an incrementally generated name like those used in URL shorteners; see the open-source implementation of rubyurl). However, I wouldn't store the files themselves in the database, because filesystems are built for handling files, so let the filesystem do that job. You should store the metadata in the database so you can set the right file name when the user downloads the file.
You should partition the files among multiple folders. This gives you several advantages. First, filesystems are not built to handle millions of files in a single folder, and operations that fetch all files from a folder take significantly more time. If you obfuscate the original file name, you could create one directory for each leading character of the name and get a fairly even distribution of files per directory.
One last thing to consider is possible file-name collisions. A user should not be able to guess another user's filename, so you might need some additional checks here.
Depending on the level of security you want to achieve you can apply more and more patterns.
Just don't save the files in the public folder, and create a controller that will send the files.
How you organise things from that point on is your choice. You could make a subfolder per user. There is no need to rename from a security point of view, but do try to clean up the filenames; spaces and non-ASCII characters make things harder.
For simple cases (where you don't want to distribute the file store):
Store the files in the tmp directory. DON'T store them in public. Then only expose these files via a route and controller where you do the authentication/authorisation checks.
I don't see any reason to rename the files; you can separate them out into sub directories based on the user ID. But if you want to allow the uploading of files with the same name then you may need to generate a unique hash or something for each file's name.
See above. You can partition them any way you see fit. But I would definitely recommend partitioning them and not lumping them in one directory.