Google Drive SDK and the handling of folders - iOS

I'm considering adding support for Google Drive in an iOS application I've written. That's not my issue - there is plenty of decent documentation on how to do that.
My issue is that the data source for this application is a folder. This folder could conceivably contain dozens of (nested) child folders and hundreds of files sprinkled throughout.
When I scour the documentation for Google Drive, there's plenty of information on the treatment of individual files; there is however very little information on the treatment of folders (other than folders appear to have a unique MIME type).
So assuming I can run a query to identify a GTLDriveFile that references a folder (data source) I'm interested in, I need to download the entire contents of that folder - hierarchical organization intact - to my application's sandbox. BTW, this application is primarily for tablets (iPad).
How do I do that?
Thanks,

The Google Drive model is internally a flat model with pointers to parents and children; any implementation that projects it onto a traditional tree-based model needs careful consideration.
Since I don't work in iOS, I'll try to keep it general, referring to Java code snippets; it should be easy to translate to any language (I believe iOS has some kind of 'C' flavor).
So, every attempt to retrieve folders/files has to start at the Drive root, enumerating its children. For instance, in the new Google Drive Android API (GDAA) you'll use either:
DriveFolder.listChildren(gac).await();
DriveFolder.queryChildren(gac, query).await();
where 'gac' is a GoogleApiClient instance and 'query' comes from a Query.Builder()... .
As you keep going, you get the metadata of each object, giving you full info about it (MIME type, status, title, type, ...), and you handle duplicates (yes, you can have multiple folders/files with the same name - but unique IDs). When you hit a folder, you start another iteration. In the process, you may cache the structure using the folders'/files' unique resource IDs (the string you see in the HTTP address of a file/folder).
There are two different APIs in the Java/Android world at the moment, the old RESTful API and the new GDAA (and I don't know how this applies to iOS). I have some code here showing recursion down the tree (buildTree()), and code that handles duplicate file/folder names (findFirst()). Unfortunately it is Java in the GDAA flavor, so it may not be very useful in your case.
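A minimal sketch of what that buildTree()-style recursion might look like under GDAA (Java/Android, not iOS): 'gac' is assumed to be a connected GoogleApiClient, and error handling plus the actual content download are left out.

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.drive.Drive;
import com.google.android.gms.drive.DriveFolder;
import com.google.android.gms.drive.Metadata;
import com.google.android.gms.drive.MetadataBuffer;

class DriveTreeWalker {
    // Call off the UI thread - await() blocks.
    void buildTree(GoogleApiClient gac, DriveFolder folder, String localPath) {
        MetadataBuffer children = folder.listChildren(gac).await().getMetadataBuffer();
        try {
            for (Metadata md : children) {
                String title = md.getTitle();                         // titles are not unique
                String resourceId = md.getDriveId().getResourceId();  // unique, may be null until synced
                if (md.isFolder()) {
                    // re-create the folder locally, then recurse into it
                    DriveFolder sub = Drive.DriveApi.getFolder(gac, md.getDriveId());
                    buildTree(gac, sub, localPath + "/" + title);
                } else {
                    // plain file: download its contents into localPath, keyed by resourceId
                }
            }
        } finally {
            children.release();
        }
    }
}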
One more thing worth mentioning is that you can 'list' or 'query' the children of only one folder level (not its subfolders), or you can query (but not list) all objects globally within your current scope (FILE scope only in GDAA, many scopes in RESTful).
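For instance, a global query for folders with a given title might look like this in GDAA (again Java, shown only as an illustration; the title value you pass in is a placeholder):

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.drive.Drive;
import com.google.android.gms.drive.DriveFolder;
import com.google.android.gms.drive.MetadataBuffer;
import com.google.android.gms.drive.query.Filters;
import com.google.android.gms.drive.query.Query;
import com.google.android.gms.drive.query.SearchableField;

class DriveQueries {
    // Queries the whole (FILE) scope, not a single folder's children.
    MetadataBuffer findFoldersByTitle(GoogleApiClient gac, String title) {
        Query query = new Query.Builder()
                .addFilter(Filters.eq(SearchableField.TITLE, title))
                .addFilter(Filters.eq(SearchableField.MIME_TYPE, DriveFolder.MIME_TYPE))
                .build();
        return Drive.DriveApi.query(gac, query).await().getMetadataBuffer();
    }
}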

Related

Revit Worksharing - Get active local users

It remains unclear to me how a Revit add-in would know at runtime whether there are other active local files (other active users).
The plugin under consideration needs to provide all scheduled elements with their UniqueID in a shared parameter ‘SPuniqueID’. The purpose is that this SPuniqueID can then be added to the schedule (it is a pity that it is not possible to add the UniqueID directly to the schedule via the Revit user interface).
Next, the schedules, with the SPuniqueID field added, can be exported to Excel. Because SPuniqueID, containing the UniqueID, is included in the Excel table, it is possible to write a ScheduleCompare program that compares two quantity surveys generated at different moments in the lifetime of the Revit project and finds the differences (quantities that have changed for certain articles).
I already built this ExportSchedules plugin to work flawlessly on a standalone Revit file, even with linked elements from Revit links. When I run it on a local copy of a central model, however, I of course get an exception that some elements are borrowed by other users and that the SPuniqueID can’t be set.
I want to check beforehand whether I have full rights on all the scheduled elements.
Is calling ‘WorksharingUtils.CheckoutElements()’ on the list of scheduled elements and catching exceptions the only way to accomplish this?
I thought there might be a log file somewhere that tracks the active local users. If and only if that list contained only my name, I would let the plugin proceed, because I would then know that all the elements are available for editing.
Kind regards
Paulus
Paulus,
Check out the WorksharingUtils.GetCheckoutStatus() method - it can tell you whether the element is checked out, and if so, by which user.
Beyond that, the only other place to go is to monitor the SLOG file in the Central File folder (but - yuck!).
Best Regards,
Matt

Deploying Neo4j database

So I developed a small Neo4j database with the aim of providing users with path-related information (the shortest path from A to B and the properties of individual sections of the path). My programming skills are very basic, but I want to make the database very user-friendly.
Basically, I would like to have a screen where users can choose a start location and an end location from dropdown lists, click a button, and the results (shortest path, distance of the path, properties of the path segments) will appear. For example, if this database had been made in MS Access, I would have made a form where users could choose the locations, then click a control button which would execute a query and produce the results in a nice report.
Please note that all the nodes, relationships and queries are already in place. All I am looking for are some tips regarding the most user-friendly way of making the information accessible to the users.
Currently, all I can do is make the users install Neo4j, run Neo4j every time they need it, open the browser, edit the Cypher script (writing the locations in as strings) and then execute the query. This is rather impractical for users, and I am also worried that some user might corrupt the data.
I'd suggest making a web application using a web framework like Rails, especially if you're new to programming. You can use the neo4j gem for that to connect to your database and create models to access the data in a friendly way:
https://github.com/neo4jrb/neo4j
I'm one of the maintainers of that gem, so feel free to contact us if you have any questions:
neo4jrb@googlegroups.com
http://twitter.com/neo4jrb
Also, you might be interested in looking at my newest project called meta_model:
https://github.com/neo4jrb/meta_model
It's a Rails app that lets you define your database model (or at least part of it) via the web app UI and then browse/edit the objects via the web app. It's still very much preliminary, but I'd like it to be able to do things like what you're talking about (letting users examine data and the relationships between them in a user-friendly way).
In general you would write a tiny (web/desktop/forms) application that contains the form, takes the form values, and issues the Cypher requests with the form values as parameters.
The results can then be rendered as a table or chart or whatever.
You could even run this from Excel or Access with a macro (using the Neo4j HTTP endpoint).
Depending on your programming skills (which programming language can you write in?), it can be anything. There is also a Neo4j .NET client (see http://neo4j.com/developer/dotnet).
And its author, Tatham Oddie, showed a while ago how to do that with Excel.
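To make the "tiny application issuing parameterized Cypher" idea concrete, here is a rough sketch in plain Java against the transactional HTTP endpoint of a local Neo4j 2.x server. The :Location label, the name property and the query itself are placeholders for your own model, and newer servers will also require an Authorization header.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ShortestPathQuery {
    public static void main(String[] args) throws Exception {
        // Values that would normally come from the two dropdowns in the UI.
        String from = "Station A";
        String to = "Station B";

        // Parameterized Cypher; never concatenate the user input into the query text.
        String cypher = "MATCH (a:Location {name:{from}}), (b:Location {name:{to}}), "
                + "p = shortestPath((a)-[*..15]-(b)) "
                + "RETURN p, length(p) AS hops";

        // A real application would build this JSON with a library instead of string concatenation.
        String payload = "{\"statements\":[{\"statement\":\"" + cypher + "\","
                + "\"parameters\":{\"from\":\"" + from + "\",\"to\":\"" + to + "\"}}]}";

        // Transactional HTTP endpoint of a local Neo4j server on the default port.
        URL url = new URL("http://localhost:7474/db/data/transaction/commit");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setRequestProperty("Accept", "application/json");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // Dump the JSON result; the real app would parse it and render the path,
        // its length and the segment properties in a table or report.
        try (Scanner in = new Scanner(con.getInputStream(), "UTF-8")) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }
}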

Documents as nodes and Security Mechanism

I'm very new to both the Neo4j database and the neo4jclient driver. I'm trying to create a proof of concept to understand whether it makes sense to use this technology, and I have the following doubts (I tried to search the web but found no answers...).
I have some entities that have documents associated with them (PDFs, DOCX, ...). Is it possible to have a node property pointing to those documents? Or can documents be added as graph nodes with a Lucene index, so that a search could return the document node and its related relationships?
How does security work? Is it possible for users to have access to nodes based on their profile? Imagining that the nodes represent documents, how can a security mechanism be implemented so that users only access their own nodes (documents)?
Q1: You can simply add a node property with a URI referencing the document of choice. That could point to blob storage, a local disk, wherever you store your documents. You could store binary objects in a node property (by using a byte array), but I wouldn't advise doing that, since it just adds bulk to the database footprint. For reference, here are all the supported node property types.
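Purely as an illustration of that first option (the question mentions the .NET neo4jclient, but the same idea is shown here with Neo4j's embedded Java API, 2.x signatures; the label, property names and URI are made up):

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class DocumentNodeExample {
    public static void main(String[] args) {
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("data/graph.db");   // Neo4j 2.x signature (String path)
        try (Transaction tx = db.beginTx()) {
            Node doc = db.createNode(DynamicLabel.label("Document"));
            doc.setProperty("title", "Contract.pdf");
            // Store only a reference to the file, not the binary content.
            doc.setProperty("uri", "https://storage.example.com/docs/contract.pdf");
            tx.success();
        } finally {
            db.shutdown();
        }
    }
}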
Q2: Security is going to be on the database itself, not on nodes. Node-level (or document-level in your case) security would need to be implemented in your application. To keep data secure, you should consider hiding your Neo4j server (and related endpoint) behind a firewall and not expose it to the web. For example, in Windows Azure, you'd deploy it to a Virtual Machine without any Input Endpoints, and just connect via an internal connection. For all the details around neo4j security, take a look at this page.
1) What David said.
2) For resource level security, you need to model this in to your graph. There's an example at http://docs.neo4j.org/chunked/milestone/examples-acl-structures-in-graphs.html

How many ways to share data among activities in MonoDroid?

I need to share some sensitive data among activities.
I have two EditTexts, which are basically the username and password.
I am consuming a web service which, based on the provided username and password, returns some user info as a string (user ID, user email, etc.), basically in CSV format.
I need these pieces of information throughout my application, but I can't figure out which is the better way.
-- One way I have found so far is to use SQLite with Mono for Android
-- Another way I found is using the Application class
I just started to learn Android today, but I want to know if there are some other ways to share data?
As you mentioned, a global Application class and the database are two good ways to share application-wide data. One thing to be careful with is that your Application class could be recycled when the app is in the background, so you would lose any data that hasn't been persisted to something more permanent.
In addition to the database, you can also persist data to the filesystem. This recipe from Xamarin has an example of writing directly to a file. Most of the classes you'll need for file access are found in the System.IO namespace. Mono for Android also supports isolated storage, which provides a higher-level API for reading and writing files.
If you simply need to pass data directly between activities, you can do so by adding it as an extra to the intent. This recipe explains how to do that.
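For illustration, the underlying Android pattern looks roughly like this (shown in Java; Mono for Android exposes the same Intent API in C#, e.g. Intent.PutExtra / Intent.GetStringExtra). The activity names and the extra key below are made up:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class LoginActivity extends Activity {
    // After the web service returns the CSV user info, pass it along.
    void openProfile(String userInfoCsv) {
        Intent intent = new Intent(this, ProfileActivity.class); // ProfileActivity is hypothetical
        intent.putExtra("user_info_csv", userInfoCsv);
        startActivity(intent);
    }
}

class ProfileActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Read the extra back out in the receiving activity.
        String userInfoCsv = getIntent().getStringExtra("user_info_csv");
    }
}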
If you want to wrap up access to a particular resource in a more managed fashion that can be accessed by either other parts of your application or even external applications, you can look into implementing a content provider. Android itself provides several built-in content providers for resources like contacts and media, if you need an example of what it's like to use one. This recipe explains how to read from the contacts provider.

Basic database (MongoDB) performance question

I'm building a web app for bookmark storage with a directory system.
I've already got these collections set up:
Path(s)
---> Directories (embedded documents)
---> Links (embedded documents)
User(s)
So performance wise, should I:
- add the user id to the created path
- embed the whole Paths collection into the specific user
I want to pick option 2, but yeah, I dunno...
EDIT:
I was also thinking about making the whole interface ajaxified. So, that means I'll load the directories and links from a specific path (from the logged in user) through ajax. That way, it's faster and I don't have to touch the user collection. Maybe that changes things?
Like I've said in the comments, 1 huge collection in the whole database seems kinda strange. Right?
Well, one of the main ideas of MongoDB is to embrace redundant (denormalized) data. I would recommend the second option, because in your scenario, if you embed the Paths collection into the specific user, then with only a single query you can get all the data about the user as well as the related paths.
If you follow the first option, you have to fire two separate queries to get all the data, which increases your work somewhat.
Since MongoDB keeps its working set in RAM, after getting data from one collection you can hold it in a cursor and use that data to fetch from another collection, so performance-wise I don't think it will matter a lot either way.
RE: the edit. If you are going to store everything in a single doc and use embedded docs, then when you make your queries be sure to select only the data you need (i.e. use a projection); otherwise you will load the whole document, including all the embedded docs.
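To illustrate that projection advice, a small sketch with the MongoDB Java driver (the database, collection and field names, and the example path, are assumptions based on the description above):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.elemMatch;
import static com.mongodb.client.model.Projections.fields;

public class PathLookup {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("bookmarks").getCollection("users");

            // Option 2: paths embedded in the user document. Project out only
            // the one path the Ajax call asked for, not the whole user doc.
            Document result = users.find(eq("username", "alice"))
                    .projection(fields(elemMatch("paths", eq("name", "/work/reading"))))
                    .first();

            System.out.println(result);
        }
    }
}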

Resources