I have a custom resource-tool (a ledger entry tool) that modifies values of a resource as well as inserts additional rows into related resources.
"Account" is the main resource.
"AccountTransaction" and "AccountLog" both get written to when a ledger entry is created, and through events the account.balance value is updated.
After a successful post of a ledger entry (using Nova.request) in the resource-tool, I would like the new balance value to be updated in the account detail panel, and the new entries in AccountTransaction and AccountLog to become visible.
The simple way would be to just reload the page, but I am looking for a more elegant solution.
Is it possible to ask these components to refresh themselves from within my resource-tool Vue.js component?
I recently had the same issue, until I came across this block of code.
Nova has Vuex store modules, in which a storeFilters mutation is defined.
Pushing an empty object onto the filters and then restoring the originals "reloads" the resources. I haven't done much more research on this matter, but if you are looking for what I think you are looking for, this should be it.
async reloadResources() {
    this.resourceName = this.$router.currentRoute.params.resourceName || this.$router.currentRoute.name;

    if (this.resourceName) {
        // back up the current filters, commit a dummy change, then restore the originals
        let filters_backup = _.cloneDeep(this.$store.getters[`${this.resourceName}/filters`]);
        let filters_to_change = _.cloneDeep(filters_backup);
        filters_to_change.push({});

        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_to_change);
        await this.$store.commit(`${this.resourceName}/storeFilters`, filters_backup);
    }
},
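To tie it together, in the resource tool you could call this right after the post succeeds; the endpoint path and payload name here are just placeholders for your own:

// inside the resource tool's Vue component, after a successful post
Nova.request()
    .post('/nova-vendor/ledger-entry-tool/ledger-entries', this.entry)
    .then(() => this.reloadResources());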
I've seen all around the documentation that Query-based Sync is deprecated, so I'm wondering how I should go about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change, right? How can I prevent that, if I only want the users to get a list of other users' information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server level functions. This feature would allow you to query the list of existing users (for example) for a specific user name and return true if found or false if not (there are other options as well).
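For illustration, a server-side function along those lines might look like this sketch (the function, database, and collection names are my assumptions, not from your schema):

// Hypothetical Realm function, e.g. named "isUsernameTaken"
exports = async function(username) {
    const users = context.services
        .get("mongodb-atlas") // the default Atlas service name
        .db("myApp")
        .collection("User");
    const existing = await users.findOne({ username: username });
    return existing !== null; // true if the username is taken
};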
Check out the Functions documentation; there are some examples of how to call a function from macOS/iOS in the Call a Function section.
I don't know the use case or what your objects look like, but an example function to calculate a sum would look something like this. It sums the first two elements in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { sum, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // the result arrives as an optional AnyBSON value
    if case let .double(x)? = sum {
        print(x) // 3.0
    }
}
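Applied to your username check, calling a hypothetical "isUsernameTaken" function from Swift would look similar (the function name and its boolean return type are assumptions):

your_realm_app.functions.isUsernameTaken([.string("someName")]) { result, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    // assuming the function returns a boolean
    if case let .bool(taken)? = result {
        print(taken ? "username is taken" : "username is available")
    }
}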
I'm building an autocomplete text field component. We will show a popup of items filtered based on what users type. It is going to be async: I will get the details from the server and do some filtering based on the text typed in the field.
So I have to run this filtering logic whenever I send new data to the component.
I come from Angular, where we used to have ngOnChanges(). Is there something similar available in Svelte 3?
Right now, I'm filtering by calling the method from outside via bind:this. I don't feel like this is the correct approach.
https://github.com/manojp1988/svelte3-autocomplete/blob/master/dev/App.svelte
Without stores, using a prop
Just using a prop:
export let search = '';

// ...

// make it react to changes (in the parent)
$: if (search !== '') {
    doSomeThing(search);
}
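For context, a minimal sketch of the parent side might look like this (component and variable names are made up):

<!-- Parent.svelte -->
<script>
    import AutoComplete from './AutoComplete.svelte';
    let searchText = '';
</script>

<input bind:value={searchText} placeholder="Type to search" />
<AutoComplete search={searchText} />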
Stores
Svelte also has stores. A store is an observable object which can be observed anywhere, even beyond your project, e.g. with RxJS.
Example:
import { writable } from 'svelte/store';
import { onDestroy } from 'svelte';

const search = writable('');
const unsubscribe = search.subscribe((s) => {
    doSomeThing(s);
});
onDestroy(unsubscribe);
In another component you can use search.set('Hi');
But I'm still looking for other solutions to handle these kinds of changes in parent <-> child components, or for calling child component methods.
From child to parent, we can fire events.
But from parent to child...? We can use a store, or component bind:this, or...?
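For what it's worth, here's a minimal sketch of the bind:this route (all names are illustrative). In Svelte 3, functions exported from a child component become methods on its instance:

<!-- Child.svelte -->
<script>
    export function refresh(term) {
        // run the filtering logic here
        console.log('filtering for', term);
    }
</script>

<!-- Parent.svelte -->
<script>
    import Child from './Child.svelte';
    let child;
</script>

<Child bind:this={child} />
<button on:click={() => child.refresh('Hi')}>Refresh</button>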
We're in the process of moving to a DTM implementation. We have several variables that are being defined on the page. I understand I can make these variables available in DTM through data elements. Can I simply set up a data element for each one?
So, set data elements:
%prop1% = s.prop1
%prop2% = s.prop2
etc
And then under global rules set
s.prop1 = %prop1%
s.prop2 = %prop2%
etc
for every single eVar, prop, event, and product, so they populate whenever they are set on a particular page. Good idea or terrible idea? It seems like a pretty bulky approach, which raises some alarm bells. Another option would be to write something that pushes everything to the data layer, but that seems like essentially the same approach with a redundant step, when they can be grabbed directly.
Basically I want DTM to access any and all variables that are currently being set with on-page code, and my understanding is that in order to do that they must be stored in a data element first. Does anyone have any insight into this?
I use this spec for setting up data layers: Data Layer Standard
We create data elements for each key that we use from the standard data layer. For example, the page name is stored here:
digitalData.page.pageInfo.pageName
We create a data element and standardize the names to this format: "page.pageInfo.pageName".
Within each variable field, you access it with the %page.pageInfo.pageName% notation. Also, within the JavaScript of rule tags, you can use this:
_satellite.getVar('page.pageInfo.pageName')
It's a bit unwieldy at times but it allows you to separate the development of the data layer and tag manager tags completely.
One thing to note: make sure your data layer is complete and loaded before you call the satellite library.
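In practice that ordering just means the data layer script comes before the DTM embed code in the page head, along these lines (the library URL is a placeholder):

<script type="text/javascript">
    // the data layer must exist before DTM loads
    var digitalData = {
        page: { pageInfo: { pageName: "home" } }
    };
</script>
<script src="//assets.adobedtm.com/your-dtm-library.js"></script>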
If you are moving from a legacy s_code implementation to DTM, it is a good best practice to remove all existing "on page" code (including the reference to the s_code file) and create a "data layer" that contains the data from the eVars and props on the page. Then DTM can reference the object on the page and you can create data elements that map to variables.
Here's an example of a data layer:
<script type="text/javascript">
var DDO = {}; // Data Layer Object
DDO.specVersion = "1.0";
DDO.pageData = {
    "pageName": "My Page Name",
    "pageSiteSection": "Home",
    "pageType": "Section Front",
    "pageHier": "DTM Test|Home|Section Front"
};
DDO.siteData = {
    "siteCountry": "us",
    "siteRegion": "unknown",
    "siteLanguage": "en",
    "siteFormat": "Desktop"
};
</script>
The next step would be to create data elements that directly reference the values in the object. For example, if I wanted to create a data element that mapped to the page name element in my data layer, I would do the following in DTM:
1. Create a new data element called "pageName".
2. Select the type as "JS Object".
3. In the path field, reference the path to the page name in the data layer example above: DDO.pageData.pageName.
4. Save the data element.
Now this data element can be referenced in any variable field within any rule by simply typing a '%'. DTM will find any existing data elements and you can select them.
I also wrote about a simple script you can add to your implementation to help with your data layer validation: Validate your DTM Data Layer with this simple script.
Hope this helps.
Basically this comes up as one of the related posts:
Isn't it dangerous to have query information in javascript using breezejs?
It was somewhat what my first question was about, but accepting the answers there, I would really appreciate it if someone had examples or tutorials on how to limit the scope of what's visible to the client.
I started out with the Knockout/Breeze template and changed it for what I am doing. I'm sitting with an almost finished project, with one concern: security.
I have authentication fixed and am working on authorization, trying to figure out how to make sure people can't get something that was not intended for them to see.
I got the first layer fixed on the root model, so that a member can only see stuff he created or that is public. But a user may hack together a query using expand to fetch Object.Member.Identities, meaning he gets all the identities for public objects.
Are there any tutorials out there that could help me with limiting what the user may query?
Should I wrap the returned objects in an ObjectDto and, when creating it, verify that it does not include sensitive information?
It's nice that it's up to me how I do it, but some tutorials with some pointers would be nice.
Code
controller
public IQueryable<Project> Projects()
{
    //var q = Request.GetQueryNameValuePairs().FirstOrDefault(k => k.Key.ToLower() == "$expand").Value;
    //if (!ClaimsAuthorization.CheckAccess("Projects", q))
    //    throw new WebException("HET"); // UnauthorizedAccessException("You requested something you do not have permission to"); // HttpResponseException(HttpStatusCode.MethodNotAllowed);
    return _repository.Projects;
}
_repository
public DbQuery<Project> Projects
{
    get
    {
        var memberid = User.FindFirst("MemberId");
        if (memberid == null)
            return (DbQuery<Project>)(Context.Projects.Where(p => p.IsPublic));
        var id = int.Parse(memberid.Value);
        return (DbQuery<Project>)(Context.Projects.Where(p => p.CreatedByMemberId == id || p.IsPublic));
    }
}
Look at applying the Web API's [Queryable(AllowedQueryOptions=...)] attribute to the method or doing some equivalent restrictive operation. If you do this a lot, you can subclass QueryableAttribute to suit your needs. See the Web API documentation covering these scenarios.
It's pretty easy to close down the options available on one or all of your controller's query methods.
Remember also that you have access to the request query string from inside your action method. You can check quickly for "$expand" and "$select" and throw your own exception. It's not that much more difficult to block an expand for known navigation paths (you can create white and black lists). Finally, as a last line of defense, you can filter for types, properties, and values with a Web API action filter or by customizing the JSON formatter.
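As a rough sketch of both ideas, assuming the Web API OData package (System.Web.Http.OData) is what you have wired up; the exact option set is up to you:

// Whitelist query options: $expand and $select are rejected outright.
// (needs System.Linq, System.Net, System.Web.Http, System.Web.Http.OData.Query)
[Queryable(AllowedQueryOptions = AllowedQueryOptions.Filter
                               | AllowedQueryOptions.OrderBy
                               | AllowedQueryOptions.Skip
                               | AllowedQueryOptions.Top)]
public IQueryable<Project> Projects()
{
    // or check by hand inside the action, as a last line of defense
    bool hasExpand = Request.GetQueryNameValuePairs()
        .Any(kv => kv.Key.Equals("$expand", StringComparison.OrdinalIgnoreCase));
    if (hasExpand)
        throw new HttpResponseException(HttpStatusCode.Forbidden);
    return _repository.Projects;
}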
The larger question of using authorization in data hiding/filtering is something we'll be talking about soon. The short of it is: "Where you're really worried, use DTOs".
I am developing a site using .NET MVC.
I have a data access layer which basically consists of static list objects that are created from data within my database.
The method that rebuilds this data first clears all the list objects; once they are empty, it then adds the data. Here is an example of one of the lists I'm using: it's a method which loads all the UK postcodes. There are about 50 methods similar to this in my application that return all sorts of information, such as towns, regions, members, emails etc.
public static List<PostCode> AllPostCodes = new List<PostCode>();
When the rebuild method is called, it first clears the list:
ListPostCodes.AllPostCodes.Clear();
Next it rebuilds the data by calling the GetAllPostCodes() method:
/// <summary>
/// Static method that loads all the UK postcodes.
/// </summary>
public static void GetAllPostCodes()
{
    using (fab_dataContextDataContext db = new fab_dataContextDataContext())
    {
        IQueryable AllPostcodeData = from data in db.PostCodeTables select data;
        IDbCommand cmd = db.GetCommand(AllPostcodeData);
        SqlDataAdapter adapter = new SqlDataAdapter();
        adapter.SelectCommand = (SqlCommand)cmd;
        DataSet dataSet = new DataSet();
        cmd.Connection.Open();
        adapter.FillSchema(dataSet, SchemaType.Source);
        adapter.Fill(dataSet);
        cmd.Connection.Close();

        // create the objects
        foreach (DataRow row in dataSet.Tables[0].Rows)
        {
            PostCode postcode = new PostCode();
            postcode.ID = Convert.ToInt32(row["PostcodeID"]);
            postcode.Outcode = row["OutCode"].ToString();
            postcode.Latitude = Convert.ToDouble(row["Latitude"]);
            postcode.Longitude = Convert.ToDouble(row["Longitude"]);
            postcode.TownID = Convert.ToInt32(row["TownID"]);
            AllPostCodes.Add(postcode);
        }
    }
}
The rebuild occurs every hour; this ensures that the site has a fresh set of cached data every hour.
The issue I've got is that occasionally, if the server is hit by a request during a rebuild, an exception is thrown: "Index was outside the bounds of the array." It happens while a list is being cleared.
ListPostCodes.AllPostCodes.Clear(); // throws the exception - although it's not always this list
Once this exception is thrown the application dies and all users are affected; I have to restart the server to fix it.
I have two questions:
If I utilise caching instead of static objects, would this help?
Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"?
Any help is most appreciated ;)
truegilly
1. If I utilise caching instead of static objects, would this help?
Yes, all the things you are doing are more easily done with the caching functionality that is built into ASP.NET.
2. Is there any way I can say "while the rebuild is taking place, wait for it to complete before accepting requests"?
The common pattern goes like this:
You request data from the data layer.
If the data layer sees that there is data in the cache, it serves the data from the cache.
If no data is in the cache, the data is requested from the DB and put into the cache. After that it is served to the client.
There are rules (CacheDependency and timeout) that control when the cache is cleared.
The easiest solution would be to stick to this pattern: this way the first request hits the database and the other requests get served from the cache. You can trigger the refresh by implementing a SqlCacheDependency. A minimal sketch of the pattern follows.
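Here is that cache-aside pattern with the built-in ASP.NET cache; the cache key and loader method name are made up:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static List<PostCode> GetPostCodes()
{
    // 1) try the cache first
    var cached = HttpRuntime.Cache["AllPostCodes"] as List<PostCode>;
    if (cached != null)
        return cached;

    // 2) cache miss: load from the DB and cache for one hour
    List<PostCode> fresh = LoadPostCodesFromDb(); // your existing query
    HttpRuntime.Cache.Insert(
        "AllPostCodes",
        fresh,
        null,                            // or a SqlCacheDependency
        DateTime.UtcNow.AddHours(1),     // absolute expiration
        Cache.NoSlidingExpiration);
    return fresh;
}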
You have to make sure that your list is not modified by one thread while other threads are trying to use it. This would be a problem even if you used the ASP.NET cache since collections are just not thread-safe. One way you can do this is by using a SynchronizedCollection instead of a List. Then make sure to use code like the following when you access the collection:
lock (synchronizedCollection.SyncRoot) {
    synchronizedCollection.Clear();
    // etc...
}
You will also have to use locking when you read the collection. If you are enumerating over it, you should probably make a copy before doing so as you don't want to lock for a long time. For example:
List<whatever> tempCollection;

lock (synchronizedCollection.SyncRoot) {
    tempCollection = new List<whatever>(synchronizedCollection);
}

// use tempCollection to access the cached data
The other option would be to create a ThreadSafeList class that uses locking internally to make the list object itself thread-safe, along the lines of the sketch below.
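A bare-bones version of that idea might look like this (only the operations used above are shown):

public class ThreadSafeList<T>
{
    private readonly List<T> _inner = new List<T>();
    private readonly object _sync = new object();

    public void Add(T item)
    {
        lock (_sync) { _inner.Add(item); }
    }

    public void Clear()
    {
        lock (_sync) { _inner.Clear(); }
    }

    // hand out a snapshot so callers can enumerate without holding the lock
    public List<T> Snapshot()
    {
        lock (_sync) { return new List<T>(_inner); }
    }
}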
I agree with Tom, you will have to do synchronization to make this work. One thing that would improve the performance is not clearing the list until you actually receive the new values from the database:
// Modify your function to return a new list instead of filling the existing one.
public static List<PostCode> GetAllPostCodes()
{
    List<PostCode> temp = new List<PostCode>();
    // ... fill temp from the database, as before ...
    return temp;
}
And when you rebuild the data:
List<PostCode> temp = GetAllPostCodes();
AllPostCodes = temp;
This makes sure that your cached list is still valid while GetAllPostCodes() is executing. It also has the advantage that you can use a read-only list which makes the synchronization a bit easier.
In your case you need to refresh the data every hour.
1) Use the Cache with absolute expiration set to 1 hour, so it expires every hour. Check the Cache before using it by doing a null check; if it is null, get the data from the DB and populate the Cache.
2) With the above approach the disadvantage is that the data can be stale by up to 1 hour. So if you want the most up-to-date data at all times, use SqlCacheDependency (push): whenever there is a change in the SELECT command you are using, the cache will be refreshed from the database with the updated data.
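A sketch of the push approach; the database entry and table names are assumptions, and SQL cache notifications must be enabled (e.g. with aspnet_regsql) and the database registered in web.config:

// "MyDatabase" must match a <sqlCacheDependency> database entry in web.config
var dependency = new SqlCacheDependency("MyDatabase", "PostCodeTables");
HttpRuntime.Cache.Insert(
    "AllPostCodes",
    LoadPostCodesFromDb(),   // your loader
    dependency,              // entry is invalidated when the table changes
    Cache.NoAbsoluteExpiration,
    Cache.NoSlidingExpiration);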