I have a .NET application written using version 3 of the Google Docs API; I haven't had time to look at updating to the Drive API yet.
Within the past few days I've gotten a report from one client that they sometimes get a new collection or sub-collection created when my application uploads a new document. I have code which uses "https://docs.google.com/feeds/default/private/full/folder%3Aroot/contents/-/folder" to get the list of root collections, and it then drills down from there as needed, using the ID of each collection or sub-collection, to find the proper place to insert the new item. For example: Level One\Level Two\Level Three --> new item.
The program usually finds Level One when it looks for it, but then it will suddenly create a new Level One and add everything under that (creating sub-levels as needed). Just as suddenly it will go back to using the original Level One (I can see that they have two items with the same name but different IDs) and new items will appear there. They've set permissions to share the original collection, but the duplicate collection doesn't get those permissions, and that causes a real problem.
Is this some sort of timing or availability problem?
Thanks!
I have a Rails app which collects mobile data and uses Elasticsearch as the search engine. Since my app is very simple, I use Elasticsearch as my database. Everything went fine until I ran into something surprising.
In my case, each user has two kinds of documents with different attributes. The first is the profile, which holds only the user's profile details; the second is events, which records the user's actions in the mobile app. A user can update his/her profile, and each user should have exactly one profile document in my system. Each time a user updates his profile, I delete the previous document and create a new one for him. But suppose he unintentionally sends the profile twice at the same time, for example by pressing the register button twice. At that point I get two documents that are completely identical, and Elasticsearch saves both, even though, as I said, I need to delete the old one and create a new one. I know I could handle this problem in the mobile layer, but I'm looking for some way to make sure that, in this situation, the documents have a clear priority.
Basically you need versioning for your documents; with that you can control whether to create a new document or update the existing one. The official Elasticsearch version control blog post should help you design and implement this use case.
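To make that concrete, here is a minimal sketch using external versioning through the Elasticsearch REST API; the "profiles" index name, the fetch-based client, and the use of a client-side timestamp as the version are all assumptions for illustration, not your actual setup:

```typescript
// Minimal sketch: external versioning on the profile document.
// The "profiles" index, the field layout, and the use of a client timestamp
// as the version number are assumptions for illustration.
const ES = "http://localhost:9200";

async function saveProfile(userId: string, profile: object, clientTimestamp: number) {
  // Elasticsearch only accepts the write if the supplied external version is
  // higher than the stored one, so an older or duplicate submission is
  // rejected with a 409 conflict instead of silently creating a second copy.
  const res = await fetch(
    `${ES}/profiles/_doc/${userId}?version=${clientTimestamp}&version_type=external`,
    {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(profile),
    }
  );

  if (res.status === 409) {
    // A newer (or identical) version already exists; this submission can be ignored.
    return null;
  }
  return res.json();
}
```

Because the user ID doubles as the document ID, there can never be more than one profile document per user, and the version check means the submission carrying the newest timestamp always wins.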
I really hope I've understood your point correctly; consider the following points:
- Prevent users from submitting twice in your front end.
- Update the document instead of deleting the previous one and then creating a new one, so you don't have to worry about document priorities for what is really a single record (see the sketch below).
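As a minimal sketch of that second point (assuming a "profiles" index keyed by the user's ID, which is an assumption on my part), the _update API with doc_as_upsert either patches the existing profile or creates it if it is missing, so sending the same profile twice can never produce a second document:

```typescript
// Minimal sketch: update in place instead of delete + create.
// The "profiles" index name and using the user ID as document ID are assumptions.
const ES = "http://localhost:9200";

async function upsertProfile(userId: string, changes: object) {
  // doc_as_upsert merges "changes" into the existing document, or creates the
  // document if it does not exist yet. Because the document ID is the user ID,
  // submitting the same profile twice just rewrites the same single document.
  const res = await fetch(`${ES}/profiles/_update/${userId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ doc: changes, doc_as_upsert: true }),
  });
  return res.json();
}
```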
I'm building an app that gets a lot of data from a web service. The app consists of different entries that have relationships to each other. Let's take an example and say I'm building a TV show tracking app: all the data comes from the web service, but I want to mark episodes as watched, which so far is the only custom property on one entry. All of this gets saved in Core Data. I have these entries:
Show ⇒ has many seasons and episodes
Season ⇒ has many episodes and one show
Episode ⇒ has one show and one season
The main part I'm currently struggling with is how I can best update all of these entries when the web service has an updated version of the data (maybe the show got a new season or some wrong data got fixed). At this point, the only custom property on these entries which differs from the data the web service provides is the watched attribute I created on the Episode entry.
So far I've tried different approaches, like removing the old data and just adding the new (the custom watched attribute is a problem there), and I also looked into merge policies like NSMergeByPropertyObjectTrumpMergePolicy, but that doesn't play nicely with relationships, so I hit a roadblock.
Is there a better way or best practice for solving this?
I'm using Firebase database for my application, and basically people post new chapters of different series into the app.
So, I have one parent called "series" which hosts information about the different series.
Then, I have one parent called "chapters" which contains many different children that are the different series keys, and under them are many chapters (so the chapters are under each series).
However, I also want to have a section of the app where the user can view all newly added chapters across all different series, so I made a "latestReleases" parent, which automatically gets added to whenever a new chapter is added to "chapters."
However, the way I am currently displaying latestReleases is to add the entirety of "latestReleases" to an array and then sort by date. Although this works fine with a small number of chapters, there are now thousands of chapters, so there are thousands of entries in latestReleases, and it takes forever to load. There must be a better way to do this, correct? I feel like a better way would be to only load part of latestReleases, and then the user can choose to load more incrementally. Is this possible? How else would I be able to achieve this? Would I need to create several "latestReleases" parents that get updated automatically? Thanks!
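For reference, this is roughly what the incremental load described above could look like with the Firebase JS SDK; the "date" child key, the page size, and the v9 modular API are assumptions about the setup:

```typescript
// Rough sketch: load latestReleases one page at a time (Firebase JS SDK, v9 modular).
// The "date" child key and the page size are assumptions about the data layout.
import {
  getDatabase,
  ref,
  query,
  orderByChild,
  limitToLast,
  endBefore,
  get,
} from "firebase/database";

const PAGE_SIZE = 25;

async function loadLatestReleases(beforeDate?: number) {
  const db = getDatabase();
  const base = ref(db, "latestReleases");

  // Only the newest PAGE_SIZE entries are downloaded, never the whole node.
  const q =
    beforeDate === undefined
      ? query(base, orderByChild("date"), limitToLast(PAGE_SIZE))
      : query(base, orderByChild("date"), endBefore(beforeDate), limitToLast(PAGE_SIZE));

  const snapshot = await get(q);
  const releases: any[] = [];
  snapshot.forEach((child) => {
    releases.push({ key: child.key, ...child.val() });
  });
  return releases.reverse(); // newest first
}
```

To load the next page, pass the date of the oldest item already on screen as beforeDate; adding an ".indexOn": "date" rule under latestReleases keeps the query efficient on the server side.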
Is it possible to programmatically move a WIT (PBI, Bug, etc.) from one Collection/Project to another?
I have a use case where a bug may be inadvertently opened under the wrong team project, and needs to be "moved" intact (history, attachments, etc.).
I've seen hacks that involve manipulating the underlying SQL tables, but I'd like a cleaner API-based solution.
You cannot move a work item from one project to another, or from one collection to another. But you can copy it to another project in the same collection using the copy option in Web Access (manually). This actually just creates a new work item and copies all the matching field values to the new item.
If you want to do this in code, or if you need to do it from one collection to another collection, you will have to create a new work item and copy all the fields you need over to the new instance. If you need the actual move experience, you can destroy the old work item after the save of the new item has completed.
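For what it's worth, here is a rough sketch of that copy-then-destroy approach against the Work Item Tracking REST API in TypeScript; the collection URL, credentials, api-version, and the list of copied fields are all assumptions, and the same thing can be done with the .NET client object model:

```typescript
// Rough sketch: "move" a work item by copying selected fields into a new item
// in the target project, then removing the original once the copy has saved.
// The collection URL, credentials, api-version and field list are assumptions.
const collection = "https://tfs.example.com/tfs/DefaultCollection";
const headers = {
  Authorization: "Basic " + Buffer.from(":" + (process.env.TFS_PAT ?? "")).toString("base64"),
};

async function moveWorkItem(id: number, targetProject: string, type = "Bug") {
  // 1. Read the source work item and its fields.
  const source = await (
    await fetch(`${collection}/_apis/wit/workitems/${id}?api-version=1.0`, { headers })
  ).json();

  // 2. Create a new work item in the target project from the fields we care about.
  //    (Older TFS releases document this call as PATCH; newer ones use POST.)
  const fieldsToCopy = ["System.Title", "System.Description", "System.Tags"];
  const patch = fieldsToCopy
    .filter((f) => source.fields[f] !== undefined)
    .map((f) => ({ op: "add", path: `/fields/${f}`, value: source.fields[f] }));

  const created = await (
    await fetch(`${collection}/${targetProject}/_apis/wit/workitems/$${type}?api-version=1.0`, {
      method: "PATCH",
      headers: { ...headers, "Content-Type": "application/json-patch+json" },
      body: JSON.stringify(patch),
    })
  ).json();

  // 3. Only once the copy has saved, destroy the original (irreversible), e.g. with
  //    "witadmin destroywi" or, where the REST API supports it,
  //    DELETE .../_apis/wit/workitems/{id}?destroy=true.
  return created;
}
```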
Note that in both cases you will always get a new ID, since IDs have to be unique within a team project collection. If moving items around is an ongoing concern for you, consider moving all related teams into a single team project and using Teams to keep them apart where necessary. You can move work items from one team to another within the same team project.
I'm building a web app for bookmark storage with a directory system.
I've already got these collections set up:
Path(s)
---> Directories (embedded documents)
---> Links (embedded documents)
User(s)
So performance wise, should I:
- add the user id to the created path
- embed the whole Paths collection into the specific user
I want to pick option 2, but yeah, I dunno...
EDIT:
I was also thinking about making the whole interface ajaxified. So, that means I'll load the directories and links from a specific path (from the logged in user) through ajax. That way, it's faster and I don't have to touch the user collection. Maybe that changes things?
Like I've said in the comments, 1 huge collection in the whole database seems kinda strange. Right?
Well, one of the main purposes of MongoDB is to support redundant (denormalized) data. I would recommend the second option, because in your scenario, if you embed the Paths collection into the specific user, then with only a single query you can get all the data about the user as well as everything related to the Paths collection.
And if you follow the first option, then you have to fire two separate queries to get all the data, which will increase your work somewhat.
Since MongoDB brings data into RAM, after getting data from one collection you can hold it in a cursor and use that cursor's data to fetch from the other collection, so performance-wise I don't think it will matter a lot.
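As a rough illustration of the difference with the Node.js driver (database, collection, and field names are just assumptions):

```typescript
// Rough sketch of the two designs with the Node.js MongoDB driver.
// Database, collection and field names are assumptions.
import { Db, MongoClient, ObjectId } from "mongodb";

// Option 1: paths live in their own collection and carry a user id -> two queries.
async function getUserAndPathsReferenced(db: Db, userId: ObjectId) {
  const user = await db.collection("users").findOne({ _id: userId });
  const paths = await db.collection("paths").find({ userId }).toArray();
  return { user, paths };
}

// Option 2: paths embedded in the user document -> one query returns everything.
async function getUserAndPathsEmbedded(db: Db, userId: ObjectId) {
  return db.collection("users").findOne({ _id: userId });
}

async function main() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  try {
    const db = client.db("bookmarks");
    const userId = new ObjectId("ffffffffffffffffffffffff");
    console.log(await getUserAndPathsEmbedded(db, userId));
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```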
RE: the edit. If you are going to store everything in a single doc and use embedded docs, then when you make your queries make sure you just select the data you need, otherwise you will load the whole doc including the embedded docs.
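A small sketch of that with the embedded design, assuming each embedded path has a "name" and a list of links (both assumptions): the projection returns only the matching path instead of the whole user document.

```typescript
// Rough sketch: fetch only the path being displayed instead of the whole user
// document. The "paths.name" and "paths.links" field names are assumptions.
import { Db, ObjectId } from "mongodb";

async function getLinksForPath(db: Db, userId: ObjectId, pathName: string) {
  return db.collection("users").findOne(
    { _id: userId, "paths.name": pathName },
    // The positional projection returns only the matching embedded path,
    // not every directory and link the user has ever stored.
    { projection: { "paths.$": 1 } }
  );
}
```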