Let's say we expose the following entity and its properties to a client application:
Employee {firstname, lastname, address, socialSecurityNumber}
In the client application, we display all or a subset of the properties depending on the user privileges.
However, since we query the Employee entity, all of the properties are sent back to the client application. So if we decide to hide the socialSecurityNumber from some users, they would still be able to see the value coming back from the server, simply by inspecting the content of the response.
What approach should we take to prevent this? Currently I'm thinking of using projections that would differ according to who is logged in...
But some insight would be appreciated.
Especially for sensitive data, only send what is absolutely required for the screen at hand, and only send data that the user is allowed to see. I would go down the path of Data Transfer Objects or projections.
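For example, a minimal sketch of such a projection in C# (assuming an Entity Framework style context; MyDbContext, EmployeeDto, and the canViewSsn flag are made-up names for illustration):

    using System.Linq;

    public class EmployeeDto
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Address { get; set; }
        public string SocialSecurityNumber { get; set; } // stays null unless permitted
    }

    public class EmployeeQueries
    {
        // Only the selected columns ever leave the server; for users without the
        // privilege the SSN is simply never part of the response.
        public IQueryable<EmployeeDto> GetEmployees(MyDbContext db, bool canViewSsn)
        {
            return db.Employees.Select(e => new EmployeeDto
            {
                FirstName = e.FirstName,
                LastName = e.LastName,
                Address = e.Address,
                SocialSecurityNumber = canViewSsn ? e.SocialSecurityNumber : null
            });
        }
    }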
What he (#NathanFisher) said. You can also control the properties sent at the JSON serialization level. For example, in JSON.NET you can conditionally serialize properties using a ContractResolver.
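A rough sketch of that JSON.NET approach (the resolver name and canViewSsn flag are made up for illustration):

    using System.Reflection;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Serialization;

    public class SensitiveDataContractResolver : DefaultContractResolver
    {
        private readonly bool _canViewSsn;

        public SensitiveDataContractResolver(bool canViewSsn)
        {
            _canViewSsn = canViewSsn;
        }

        protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
        {
            var property = base.CreateProperty(member, memberSerialization);

            // Skip the sensitive property entirely when the caller lacks the privilege.
            if (property.UnderlyingName == "SocialSecurityNumber")
                property.ShouldSerialize = _ => _canViewSsn;

            return property;
        }
    }

    // Usage:
    // var json = JsonConvert.SerializeObject(employee, new JsonSerializerSettings
    // {
    //     ContractResolver = new SensitiveDataContractResolver(canViewSsn: false)
    // });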
An ASP.NET MVC and Angular based enterprise web application is hosted for access by external users. We encountered a scenario where a user can manipulate the values shown in disabled fields and submit them using the browser developer tools. E.g. (1) the input fields for Vehicle Name, Description, etc. are disabled in edit mode, but the user can make the read-only fields editable using the dev tools and change the actual values to something else.
Similarly, e.g. (2) customer details are fetched by ID from the Cust db and shown on the screen. The customer details are expected to be saved in another db along with a few additional inputted details, but the user edits the read-only customer fields using the dev tools and submits.
As a solution, introducing server-side validation that compares the retrieved and submitted values on every submission does not seem to be the right approach.
So, how can we protect read-only or static values from being manipulated with the browser or other dev tools?
"As a solution, introducing server-side validation that compares the retrieved and submitted values on every submission does not seem to be the right approach."
Contrary to what you appear to believe, that is the solution.
You cannot prevent the user from crafting their own HTTP request. You cannot prevent the user from hitting F12 and sending you garbage. It is up to you to validate whether the user is allowed to update the data they send you, and whether they are allowed to read the data they request.
Client-side validation is being nice for your users; server-side validation is an absolute necessity.
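As an illustration, a sketch of what that can look like for the vehicle example above, inside an MVC controller (the model, db, and role names are hypothetical): re-load the entity on the server and apply only the fields the user is allowed to change, regardless of what the browser posted.

    [HttpPost]
    public ActionResult UpdateVehicle(VehicleEditModel posted)
    {
        var vehicle = db.Vehicles.Find(posted.Id);
        if (vehicle == null)
            return HttpNotFound();

        if (!User.IsInRole("VehicleEditor"))
            return new HttpStatusCodeResult(403);

        // Copy only the editable fields; Name and Description keep their stored
        // values even if the client re-enabled the inputs and changed them.
        vehicle.Mileage = posted.Mileage;
        vehicle.Notes = posted.Notes;

        db.SaveChanges();
        return RedirectToAction("Details", new { id = vehicle.Id });
    }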
I am new to Dynamics FnO, and recently followed the articles to access data through OData, which was successful.
What I find missing from the data objects, compared to what I normally receive in integrations outside the Microsoft world, are the created/updated timestamps.
I am trying to set up a synchronous data flow from FnO to my NodeJs application, so that my app keeps polling FnO and picks up data whenever there is a change. This could be achieved easily if there were timestamps on the data that flows in.
Is there a way to set up those timestamps somewhere?
You have to make sure that the underlying table you are querying has the fields added to it, and also that the data entity you are accessing through OData has the fields set up on it as well.
Make sure this is set up on the table:
And then you have to drag and drop the field(s) from the datasource field list to the exposed field list in the data entity:
After this, you will have these fields
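Once the fields are exposed, a change-polling query is just an OData $filter on the modified timestamp. A rough C# sketch of the idea (the entity and field names here are assumptions, and the same $filter works from any HTTP client, including a Node.js app):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class FnoDeltaPoller
    {
        private readonly HttpClient _client;              // assumed already authenticated (OAuth bearer token)
        private DateTime _lastSync = DateTime.UtcNow.AddHours(-1);

        public FnoDeltaPoller(HttpClient client) => _client = client;

        public async Task<string> PollAsync()
        {
            // Fetch only the rows modified since the last poll.
            var url = "https://yourorg.operations.dynamics.com/data/CustomersV3" +
                      $"?$filter=ModifiedDateTime gt {_lastSync:o}";
            var json = await _client.GetStringAsync(url);
            _lastSync = DateTime.UtcNow;
            return json;
        }
    }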
Currently we are using Breeze.js and Angular to develop our applications. Due to some persistent legacy issues, we have two databases ('Kenya' and 'Rwanda') that cannot be merged at this time, but have the same schema and metadata. Most of the time, the client knows which database to hit and passes the request through the .withParameters() function or the .saveOptions() function. Sometimes we want to request the same query from both databases (for example, if we are requesting a list of all available countries), and we use an EntityManager wrapper on the client to manage this and request the same query from each database. This is implemented through a custom EFContextProvider which uses the data returned to determine the appropriate database and creates the appropriate context in CreateContext().
To further complicate things, in some instances one or the other database won't exist (these are local deployments created through filtered replication), but the client won't know this. Therefore, when querying for a list of all countries, it issues two requests and one will cause failures because the context cannot be instantiated properly.
This is easy enough to detect on the Server. What I would like to do is to detect whether the requested context is available and, if not, return a 200 response and an empty set.
I can detect this in the Breeze DBContextProvider CreateContext() method, but cannot figure out how to make the request fall back gracefully to an empty-set response.
Thanks
Not exactly what I was looking for, but it probably makes more sense since most of the work is being done on the client-side:
Instead of trying to change the controller, I added a getAvailableDatabases action to the C# controller and use that to determine which of the databases I will query from the client.
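A rough sketch of that kind of action (the controller name, helper logic, and configuration check are assumptions): the client calls it once, then only fans its queries out to the databases that actually exist in the deployment.

    using System.Collections.Generic;
    using System.Configuration;
    using System.Linq;
    using System.Web.Http;

    public class LookupController : ApiController
    {
        [HttpGet]
        public IEnumerable<string> GetAvailableDatabases()
        {
            // Hypothetical check: a database counts as available if its
            // connection string is configured for this deployment.
            return new[] { "Kenya", "Rwanda" }
                .Where(name => ConfigurationManager.ConnectionStrings[name] != null);
        }
    }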
I have already read Rails - How do I temporarily store a rails model instance? and similar questions but I cannot find a successful answer.
Imagine I have the model Customer, which may contain a huge amount of information attached (simple attributes, data in other tables through has_many relation, etc...). I want the application's user to access all data in a single page with a single Save button on it. As the user makes changes in the data (i.e. he changes simple attributes, adds or deletes has_many items,...) I want the application to update the model, but without committing changes to the database. Only when the user clicks on Save, the model must be committed.
For achieving this I need the model to be kept by Rails between HTTP requests. Furthermore, two different users may be changing the model's data at the same time, so these temporary instances should be bound to the Rails session.
Is there any way to achieve this? Is it actually a good idea? And, if not, how can one design a web application in which changes in a model cannot be retained in the browser but in the server until the user wants to commit them?
EDIT
Based on user smallbutton.com's proposal, I wonder if serializing the model instance to a temporary file (whose path would be stored in the session hash), and then reloading it each time a new request arrives, would do the trick. Would it work in all cases? Is there any piece of information that would be lost during serialization/deserialization?
As HTTP requests are stateless, you need some kind of storage between requests. The session is the easiest way to store data between requests. In your case, though, the session will not be enough, because you need the data to be accessible by multiple users.
I see two ways to achieve your goal:
1) Use some fast external data store, such as a key-value server (Redis, or anything you prefer, http://nosql-database.org/), where you put your objects via serialization/deserialization (e.g. JSON).
This may be fast depending on your design choices and data model, but it is the harder approach.
2) Just store your objects in the DB as you regularly would and have them versioned (https://github.com/airblade/paper_trail). Then you can simply store a timestamp when people hit the Save button, and you can always go back to that state. This would be the easier approach, I guess, but it may be a bit slower depending on the size of your data model changes (though I think it will do).
EDIT: If you need real-time collaboration between users, you should probably have a look at something like Firebase.
EDIT 2: Answer to your second question, whether you can put the data into a file:
Sure, you can do that. But you would need some kind of locking to prevent data loss if more than one person is editing. You will need that as well if you go for 1), but tools like Redis already include locks to achieve your goal (e.g. redis-semaphore). Depending on your data, you may need to build some logic for merging the changes of different users.
3) Another approach that came to my mind would be doing all the editing with JavaScript and saving it in one DB transaction. This would go well with synchronization tools like Firebase (or your own synchronization via the Rails streaming API).
I have an object that I want to store for a moment. The object is in a controller for now; the controller will generate a view. An AJAX request is made from the view to the next controller, and at that moment I need the object previously stored. Previously, I used the session and it worked well, but I am not sure it is the right thing to do. Is the session the answer for this, or is there anything else?
I have also used the cache, but as per the cache concept it is shared across all users, so one user's data will be overridden by another's and the cached object data will change for the same user. I need to handle the data storage for a particular user (independently).
How is this possible? If there is any other approach, please share it.
In the controller I have used HttpContext.Cache["key"] = dataset;
but someone suggested it like this, and it is not showing up.
To explain:
In the controller, HttpContext.Current.Cache is not available.
Only the HttpContext.CurrentHandler and HttpContext.CurrentNotification properties are available. So how can we handle per-user temporary data storage in MVC?
Please help me.
You could use TempData if you want to store data for the next request only. If the data should be accessible across multiple requests, then use Session. Here is a short explanation of each one with examples.
As Alex said, you could use TempData, but if you want to use the data across multiple requests, you can call TempData.Keep("YourKey") after reading the value to retain the data for the next request too. For your information, TempData internally uses Session to store your data (temporarily).
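A small sketch of the difference, as controller actions (the action and key names are made up):

    public ActionResult Step1()
    {
        TempData["Draft"] = "some value";   // survives exactly one subsequent request by default
        Session["Draft"] = "some value";    // survives for the lifetime of the user's session

        return RedirectToAction("Step2");
    }

    public ActionResult Step2()
    {
        var draft = TempData["Draft"] as string;  // reading would normally remove it...
        TempData.Keep("Draft");                   // ...but Keep() retains it for the next request too

        ViewBag.Draft = draft;
        return View();
    }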
I would recommend URL parameters for an HTTP GET, or hidden form fields for an HTTP POST, if this is short-lived. This is largely about avoiding the session.
But if it should really persist, then a database might be a reasonable location. Imagine a shopping cart that you don't want to dump just because a session timed out, because you'd like to remind the user next time about items they still haven't purchased.
Why not use the session? I don't generally recommend using the session, as you could find yourself with a global variable that two different browser windows are manipulating. Imagine a glass. One window is trying to fill it with iced tea. Another window is trying to fill it with lemonade. But what do you have? Is it lemonade? Is it iced tea? Or is it an Arnold Palmer? If you try to put too much stuff in the session, and overly expect it to just be there, you might create an application that is non-deterministic if, heaven forbid, a user opens a second window or tab and switches back and forth between the windows.
I'm more OK with TempData, if you truly have no other options. But this is not for persisting data for more than a moment: TempData disappears after the first request that reads it; it is meant for very temporary usage.
I personally only use TempData if I have to do a redirect where I can't otherwise keep the data with me, or if I need that data for, say, generating a PDF or image that is going to be requested via an HTTP GET by a viewer on the actual page, and then only if the model data is too large for the GET URL (many browsers only support just over 2000 characters, which a long description or many fields could fill up).
But again, pushing items around in hidden form variables or in URL parameters can be safe, because you have no multiple-window conflicts (each window carries around its own data, for peace of mind).
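For example, a small sketch of carrying a value in a hidden form field instead of the session (the view markup is shown as comments; the cart names and ownership check are made up). Each window or tab carries its own copy, so there is no cross-window conflict, but the value must still be validated server-side, since hidden fields can be tampered with just like any other input.

    // In the Razor view:
    // @using (Html.BeginForm("Save", "Orders", FormMethod.Post))
    // {
    //     @Html.Hidden("cartId", Model.CartId)
    //     <input type="submit" value="Save" />
    // }

    [HttpPost]
    public ActionResult Save(string cartId)
    {
        // The value travels with the post itself rather than living in the session.
        if (!UserOwnsCart(User.Identity.Name, cartId))   // hypothetical ownership check
            return new HttpStatusCodeResult(403);

        // ... persist the order against cartId ...
        return RedirectToAction("Details", new { id = cartId });
    }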