Revit Worksharing - Get active local users

It remains unclear to me how a Revit add-in can know at runtime whether there are other active local files (other active users).
The plugin under consideration needs to provide all scheduled elements with their UniqueID in a shared parameter ‘SPuniqueID’. The purpose is that this SPuniqueID can then be added to the schedule (it is a pity that the UniqueID cannot be added directly to the schedule via the Revit user interface).
Next, the schedules, with the SPuniqueID field added, can be exported to Excel. Because SPuniqueID, containing the UniqueID, is included in the Excel table, it is possible to write a ScheduleCompare program that compares two quantity surveys generated at different moments in the lifetime of the Revit project and finds the differences (quantities that have changed for certain articles).
I already built this ExportSchedules plugin to work flawlessly on a standalone Revit file, even with linked elements from Revit links. When I run it on a local copy of a central model, however, I naturally get an exception that some elements are borrowed by other users and that SPuniqueID cannot be set.
I want to check beforehand whether I have full rights to all the scheduled elements.
Is calling ‘WorksharingUtils.CheckoutElements()’ on the list of scheduled elements and catching exceptions the only way to accomplish this?
I thought there might be a log file somewhere that tracks the active local users. If and only if that list contained nothing but my name, I would let the plugin proceed, because I would then automatically know that all the elements are available for editing.
Kind regards
Paulus

Paulus,
Check out the WorksharingUtils.GetCheckoutStatus() method - it can tell you whether the element is checked out, and if so, by which user.
Beyond that, the only other place to go is to monitor the SLOG file in the Central File folder (but - yuck!).
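Something along these lines might work as a pre-flight check (a rough, untested sketch; ‘scheduledIds’ stands for whatever collection of ElementIds your add-in gathers from the schedule, it is not part of the Revit API):

using System.Collections.Generic;
using Autodesk.Revit.DB;

static class EditRightsCheck
{
    // Returns a description of every scheduled element that is currently
    // owned by another user and therefore cannot receive SPuniqueID.
    public static IList<string> FindBlockedElements(
        Document doc, IEnumerable<ElementId> scheduledIds)
    {
        List<string> blocked = new List<string>();

        if (!doc.IsWorkshared) return blocked; // standalone file: nothing to check

        foreach (ElementId id in scheduledIds)
        {
            CheckoutStatus status = WorksharingUtils.GetCheckoutStatus(doc, id);
            if (status == CheckoutStatus.OwnedByOtherUser)
            {
                // The tooltip info also tells you who currently owns the element.
                WorksharingTooltipInfo info
                    = WorksharingUtils.GetWorksharingTooltipInfo(doc, id);
                blocked.Add("Element " + id.IntegerValue + " owned by " + info.Owner);
            }
        }
        return blocked;
    }
}

If the returned list is empty, every scheduled element is either owned by you or not owned by anyone, so you could proceed (and optionally call WorksharingUtils.CheckoutElements() to reserve them); otherwise you can report the owners up front instead of failing halfway through the transaction.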
Best Regards,
Matt

Related

AIP Sensitivity Labels added via Power Automate

I am interested to know if it is possible to apply a sensitivity label to a document received via email and then save the document to a specific directory in OneDrive.
For example, let's say company xyz sends a mail with files attached that we must process. I would like the files to be removed from the mail, marked with a custom sensitivity label like xyz_secret, and then stored in a OneDrive folder called xyz_company.
That way, all the files in that folder are eventually labelled according to the customer.
Does anyone know if this is possible? The idea is that we can then apply DLP to our customers' files and ensure we can track them within the business.
Does anyone have any ideas? Is there an API for doing this, or a Power Automate method?
As far as I know, the 'Send an email' action in Power Automate does not currently support applying a sensitivity label to the email. That being said, you may need to implement your requirements through the REST API; please check this article and see if it helps:
https://joannecklein.com/2019/05/06/setting-a-retention-label-in-sharepoint-from-microsoft-flow/

tfs configuration : include user phone number from active directory

I am new to TFS configuration/manipulation and looking to be pointed in the right direction, thanks.
Our bug reports are often posted with minimal information, and it is often necessary to call the creator to get clarification. It would be beneficial if we could display the phone number alongside the creator's name. Is it possible to pull this info out of the directory?
I cannot think of a simple way of doing it, apart from writing code. I can think of these techniques:
pulling data from Active Directory and updating work items (a rough sketch of this follows the list);
a custom control that queries AD (via a REST web service) just in time.
The latter can evolve to become a TFS 2015 extension.
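For the first option, something along these lines might work (a rough sketch, not production code, using the TFS client object model and System.DirectoryServices; the collection URL, the work item id and the custom field ‘Custom.Phone’ are placeholders - the field has to exist in your work item type definition):

using System;
using System.DirectoryServices;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class PhoneNumberUpdater
{
    static void Main()
    {
        var tpc = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        WorkItem wi = store.GetWorkItem(1234);
        string creator = (string)wi.Fields["Created By"].Value;

        // Look the creator up in Active Directory and read the phone number.
        using (var searcher = new DirectorySearcher())
        {
            searcher.Filter = "(&(objectClass=user)(displayName=" + creator + "))";
            searcher.PropertiesToLoad.Add("telephoneNumber");

            SearchResult result = searcher.FindOne();
            if (result != null && result.Properties["telephoneNumber"].Count > 0)
            {
                // Copy the number onto the work item so it shows up on the form.
                wi.Fields["Custom.Phone"].Value =
                    result.Properties["telephoneNumber"][0];
                wi.Save();
            }
        }
    }
}

Such a tool could run on a schedule, or be triggered by work item events, to keep the field up to date.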

Mvc azure storage, auto delete storage after certain time

I'm developing an Azure website where users can upload blobs and metadata. I want the uploaded content to be deleted after some time.
The only way I can think of is going for a cloud app instead of a website, with a worker role that checks, say, every hour whether an uploaded file has expired and, if so, deletes it. However, I'm going for a simple website here, without worker roles.
I have a function that checks if the uploaded item should be deleted, and if the user does something on the page I can easily call this function. BUT... if the user isn't doing anything and the time runs out, it won't be deleted, because the user never calls the function. The storage will never be cleaned up. How would you solve this?
Thanks
Too broad to give one right answer, as you can solve this in many ways. But... from an objective perspective, because you're using Web Sites, I do suggest you look at WebJobs and see if this might be the right tool for you (as this gives you the ability to run periodic jobs without the bulk of extra VMs in a web/worker configuration). You'll still need a way to manage your metadata to know what to delete.
Regarding other Azure-specific built-in mechanisms, you can also consider queuing delete messages, with an invisibility time equal to the time the content is to be available. After that time expires, the queue message becomes visible, and any queue consumer would then see the message and be able to act on it. This can be your Web Job (which has SDK support for queues) or really any other mechanism you build.
Again, a very broad question with no single right answer, so I'm just pointing out the Azure-specific mechanisms that could help solve this particular problem.
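The queue approach might look something like this (just a sketch using the classic storage client library; the queue and container names and the 24-hour lifetime are assumptions, and the visibility delay has to stay below the message time-to-live, which defaults to seven days):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

static class ExpiringUploads
{
    // Call this right after the blob has been uploaded.
    public static void ScheduleDelete(CloudStorageAccount account, string blobName)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("expired-uploads");
        queue.CreateIfNotExists();

        // The message stays invisible until the content has expired.
        queue.AddMessage(new CloudQueueMessage(blobName),
                         timeToLive: null,
                         initialVisibilityDelay: TimeSpan.FromHours(24));
    }

    // Call this from the WebJob loop; any visible message refers to expired content.
    public static void ProcessExpired(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("expired-uploads");
        CloudQueueMessage msg = queue.GetMessage();
        if (msg == null) return;

        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("uploads");
        container.GetBlockBlobReference(msg.AsString).DeleteIfExists();
        queue.DeleteMessage(msg);
    }
}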
Like David said in his answer, there can be many solutions to your problem. One solution could be to rely on the blob itself. In this approach you periodically fetch the list of blobs in the blob container and decide whether each blob should be removed or not. The periodic fetching could be done through an Azure WebJob (if the application is deployed as a website) or through an Azure worker role. The worker role approach is independent of how your main application is deployed; it could be deployed as a cloud service or as a website.
With that, there are two possible approaches you can take:
Rely on Blob's Last Modified Date: Whenever a blob is updated, its Last Modified property gets updated. You can use that to identify if the blob should be deleted or not. This approach would work best if the uploaded blob is never modified.
Rely on Blob's custom metadata: Whenever a blob is uploaded, you could set the upload date/time in the blob's metadata. When you fetch the list of blobs, you could compare the upload date/time metadata value with the current date/time and decide if the blob should be deleted or not. A rough sketch of such a clean-up pass is shown below.
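A minimal sketch of the clean-up pass, using the Last Modified date and the classic storage client library (the container name "uploads" and the 24-hour lifetime are assumptions):

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlobCleanup
{
    public static void DeleteExpired(string connectionString)
    {
        CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("uploads");

        // Flat listing so blobs in nested "directories" are returned as well.
        foreach (ICloudBlob blob in container.ListBlobs(null, true).OfType<ICloudBlob>())
        {
            DateTimeOffset? modified = blob.Properties.LastModified;
            if (modified.HasValue &&
                DateTimeOffset.UtcNow - modified.Value > TimeSpan.FromHours(24))
            {
                blob.Delete();
            }
        }
    }
}

The metadata variant is the same loop; you would just read the value you wrote at upload time (passing BlobListingDetails.Metadata to ListBlobs so the metadata is populated) instead of LastModified.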
Another approach might be to use the container name as the "expiry date".
This might make deletion easier, as you could then just remove expired containers.

google drive sdk and the handling of folders

I'm considering adding support for Google Drive in an iOS application I've written. That's not my issue - there is plenty of decent documentation on how to do that.
My issue is that the data source for this application is a folder. This folder could conceivably contain dozens of (nested) child folders and hundreds of files sprinkled throughout.
When I scour the documentation for Google Drive, there's plenty of information on the treatment of individual files; there is however very little information on the treatment of folders (other than folders appear to have a unique MIME type).
So assuming I can run a query to identify a GTLDriveFile that references a folder (data source) I'm interested in, I need to download the entire contents of that folder - hierarchical organization intact - to my application's sandbox. BTW, this application is primarily for tablets (iPad).
How do I do that?
Thanks,
The Google Drive model is internally a flat model with pointers to parents and children; any implementation that projects it into a traditional tree-based model needs careful consideration.
Since I don't work in iOS, I'll try to keep it general, referring to Java code snippets; it should be easy to translate to any language (I believe iOS has some kind of 'C' flavor).
So, every attempt to retrieve folders/files has to start with the Drive root, enumerating its children. For instance, in the new Google Drive Android API (GDAA) you'll use either:
DriveFolder.listChildren(gac).await()
DriveFolder.queryChildren(gac, query).await();
where 'gac' is a GoogleApiClient instance and 'query' comes from Query.Builder()...
As you keep going, you get the metadata of each object, giving you full info about it (MIME type, status, title, type, ...), and you handle duplicates (yes, you can have multiple folders/files with the same name - but unique IDs). When you hit a folder, start another iteration. In the process, you may cache the structure using the folders' / files' unique resource IDs (the string you see in the HTTP address of a file / folder).
There are two different APIs in the Java/Android world at the moment, the old RESTful API and the new GDAA (and I don't know how this applies to iOS). I have some code here, showing recursion down the tree (buildTree()) and code that handles duplicate file/folder names (findFirst()). Unfortunately it is Java with a GDAA flavor, so it may not be very useful for your case.
One more thing worth mentioning is that you can 'list' or 'query' the children of only one folder level (not its subfolders), or you can query (but not list) all objects globally within your current scope (FILE scope only in GDAA, many scopes in the RESTful API).
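The recursion itself is the same in any language; here is a hypothetical, API-agnostic sketch in C# of the "start at a folder and walk down" idea (DriveItem and the listChildren callback stand in for whatever your SDK provides - GDAA, the REST API or the iOS client - they are not real Google Drive types):

using System;
using System.Collections.Generic;
using System.IO;

class DriveItem
{
    public string ResourceId;
    public string Title;
    public bool IsFolder;
}

class DriveTreeWalker
{
    // Supplied by the caller: "give me the children of this folder id".
    private readonly Func<string, IList<DriveItem>> _listChildren;

    public DriveTreeWalker(Func<string, IList<DriveItem>> listChildren)
    {
        _listChildren = listChildren;
    }

    // Walks the tree depth-first, recreating the folder hierarchy locally
    // (e.g. in the app sandbox) and handling files as it encounters them.
    public void BuildTree(string folderId, string localPath)
    {
        foreach (DriveItem child in _listChildren(folderId))
        {
            if (child.IsFolder)
            {
                string subdir = Path.Combine(localPath, child.Title);
                Directory.CreateDirectory(subdir);
                BuildTree(child.ResourceId, subdir);   // recurse one level down
            }
            else
            {
                // download the file content into localPath here; duplicates
                // (same Title, different ResourceId) must be resolved by ID
            }
        }
    }
}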

Rails newbie - how to structure a calendar generating app

I'm working on a service to provide our students and faculty with one single calendar (ICS subscription) of their academic dates (start and end of term & exam periods, class dates and times, exam dates and times, etc). I'm a Ruby and Rails newbie; we're starting to work more with it here so I figured this would be a good learning opportunity. I'm having trouble figuring out how to structure and model (if at all) certain parts of the app.
The app is conceptually pretty basic:
User logs in and a user record is created for them. A UUID is generated and stored on the user's record; it's used to generate their ICS URL (http://myservice.foo/feeds/johndoe_ce4970706f320130588b109add5c7cb0.ics).
When the user requests their ICS file (through the above URL), I need to query a bunch of different systems in order to get information in order to build a calendar:
The Student Information System (SIS) contains the user's schedule (e.g. johndoe is taking ENGL 100 on MWF from 10:30 - 11:20). I need to parse this data and create events.
Our online learning management system, Canvas, provides a calendar of assignments for courses contained inside it. It's accessible as an ICS file, so I need to pull down that file, parse it and include it in the "master" calendar that my app will generate.
Instructors can specify an additional ICS URL for their course so that they can include arbitrary events not provided by either of the two above sources. Like the Canvas calendar, I need to download and parse that ICS.
I have the first part working; I can log in through our single sign-on system (CAS) and a user record is created with a generated UUID. I'm not sure how to handle the second part, though. I don't need to store much permanent data; basically, I only need to keep around the user record (which contains their username, the generated UUID, and some access tokens for the Canvas LMS). The feed will be generated the first time it's requested, cached for some period (and regenerated on-demand when needed).
Where should I be putting the parsing and generating code? I'd like it to be somewhat modular as I expect that we'd be adding other data sources as they crop up. Should I be creating calendar and event models if I'm not actually persisting that data?
No, there is no need to create an empty model for interaction with 3rd-party services. I had a similar problem, where I needed to receive data from an external service and wanted it to be modular. One of the recommended solutions I found was to create a class (that handles the business logic of the interaction with the external service) in the "lib" folder under the root directory of your Rails project.
It can later be required in your controller and used to receive data from the third-party service. Or, if you want it autoloaded, you can add the path to the lib directory in your application.rb file under the config.autoload_paths setting.

Resources