Fetching drives of a Microsoft Teams group/team after creating a Plan

I am using the Microsoft Graph API to
1) create a team and channels,
2) install an app,
3) add plans, buckets, and tasks to the Planner app, and
4) copy items from another drive to the newly created channel's drive.
My code executes in this order:
1) Create the group and team
2) Create a channel under the team
3) Create a plan in Planner (**/planner/plans**)
4) Install another app
5) Add buckets and tasks to the created plan
6) Get the list of drives using `groups/{group-id}/drive/items/{item-id}/children`
7) Copy items to the drive of the created channel
The problem with this order is that step 7 only returns the drive of the General channel, and not of the newly created channel.
But if I change the execution order to
1 > 2 > 7 > 4 > 3 > 5 > 6 > 7
then the drive-fetching step returns all the drives.
So my finding is that the problem lies in creating a plan before fetching the drives; it works if I fetch the drives before creating the plan in Planner.
Why is it behaving like this?
What is the right order of execution so that everything works as expected?
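For reference, a minimal sketch of the drive-fetching step (step 6) in Python with requests, assuming an already-acquired access token (e.g. via MSAL); the group id is a hypothetical placeholder:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<access-token>"   # assumed: obtained via an OAuth flow (e.g. MSAL)
GROUP_ID = "<group-id>"           # hypothetical placeholder
ITEM_ID = "root"                  # "root" lists the drive's top-level channel folders

def list_drive_children(group_id: str, item_id: str) -> list:
    """Return the child driveItems: one folder per provisioned channel."""
    url = f"{GRAPH}/groups/{group_id}/drive/items/{item_id}/children"
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

for item in list_drive_children(GROUP_ID, ITEM_ID):
    print(item["id"], item["name"])
```

One thing worth keeping in mind: the folder for a newly created channel is provisioned asynchronously, so a listing taken too soon after channel creation may not include it yet.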

save/load thingsboard configuration

Is it possible to somehow serialize the current Thingsboard (let's call it TBoard) configuration, save it, and then later load the saved configuration on TBoard startup?
I am specifically interested in loading device profiles, rule chains, and dashboards.
I want to save the configuration together with my project in a git repository, so that later I could just use docker-compose to start multiple services from the project (let's call them sensors) and a single TBoard instance with the saved configuration, which will be used for collecting telemetry from the sensors and drawing dashboards.
Another reason for saving the configuration: what happens if for some reason the TBoard container crashes or somehow gets corrupted so it can't be started again? Would I have to click through everything again in order to recreate all the device profiles, dashboards, rule chains, etc.?
Regarding this line:

> I am specifically interested in loading device profiles, rule chains, and dashboards. I want to save configuration together with my project in git repository
I have just recently implemented version control for my Thingsboard deployment. The way I am doing it is with the Python REST client.
I have written functions to export all dashboards/data converters/integrations/rule chains/widgets into JSON files, which I save into a GitHub repository.
I have also written the reverse script to push the stored files to a fresh environment, essentially "flashing" it. Surprisingly, this works perfectly.
I have an idea to publish this as a package, but it's something I've never done before, so I'm unsure if I will get to it.
Just letting you know that it is definitely possible to get source control operational via the API.
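For illustration, a minimal sketch of the export side using plain requests against the ThingsBoard REST API; the URL, credentials, and page size here are assumptions, and only dashboards are shown (the other entity types follow the same pattern):

```python
import json
import pathlib
import requests

TB_URL = "http://localhost:8080"        # assumed local ThingsBoard instance
USERNAME = "tenant@thingsboard.org"     # assumed tenant credentials
PASSWORD = "tenant"

# Log in and grab a JWT token.
resp = requests.post(f"{TB_URL}/api/auth/login",
                     json={"username": USERNAME, "password": PASSWORD})
resp.raise_for_status()
headers = {"X-Authorization": f"Bearer {resp.json()['token']}"}

# Page through the tenant's dashboards and save each full definition as JSON.
out_dir = pathlib.Path("tb_export/dashboards")
out_dir.mkdir(parents=True, exist_ok=True)

page = 0
while True:
    listing = requests.get(f"{TB_URL}/api/tenant/dashboards",
                           params={"pageSize": 100, "page": page},
                           headers=headers).json()
    for info in listing["data"]:
        dash_id = info["id"]["id"]
        full = requests.get(f"{TB_URL}/api/dashboard/{dash_id}",
                            headers=headers).json()
        (out_dir / f"{dash_id}.json").write_text(json.dumps(full, indent=2))
    if not listing.get("hasNext"):
        break
    page += 1
```

The reverse ("flashing") direction is essentially the same calls the other way around: POST each saved JSON back to /api/dashboard on the fresh instance, typically after stripping the old id field.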

In TFS Online, How do I share a code branch with our customer

We have an enterprise customer that we have delivered a system for. Part of the agreement is that we supply them with the source code of the latest release. We are using TFVC on TFS Online, and we thought it would be easiest to give them access to our Main branch. But I am having difficulty allowing them to access only the code and nothing else. The user I am testing with can see too much, e.g. the dashboard, current team members, etc.
Is it possible for me to expose only the code from the Main branch, and nothing else, to an external user?
Giving access to the TFS Main branch outside your organization (AD) is not advisable from a security standpoint. Instead, consider delivering the source code as a zip archive; there are plenty of large-file transfer options (e.g. FTP sites) available.
Still, for your request of restricting a user's access, have a look at this:
https://www.visualstudio.com/en-us/docs/setup-admin/restrict-access-tfs
You could also consider replicating the relevant part of the source code into a separate stream and giving the reader read-only access to that stream.
Hope this helps... :)
Refer to these steps to set the permissions:
1) Add the user to your VSTS account (Basic access level)
2) Remove this user from all groups, if you added them to any
3) Go to the Version Control admin page of a team project (Settings > Version Control)
4) Select a folder/branch
5) Click Add > Add User to add that user
6) Select the user that you added
7) Set the Read permission to Allow
8) Go to the Security page (click Security)
9) Click Create group to create a new group
10) Set View project-level information to Allow, and deny the other permissions for this group
11) Click Members of that new group
12) Click Add to add that user to the group
After that, this user can access the code (just the folder/branch the user has Read permission on) through web access (Code > Files).

Assigned To field not showing user with the same name as a deleted user

We had a person leave our company, and their Windows domain account in Active Directory was deleted. They have since come back but have been given a different Windows domain account user name. Now, when we attempt to assign them tasks, the tasks are always associated with the old account. I assume this is because the name is still the same and TFS is doing some kind of duplication check. I've tried clearing the cache and have verified that the Team Foundation Server Periodic Identity Synchronization job is running properly. I can also see the old Active Directory account show up, along with the new one, when attempting to add a Windows user or group via the dialog.
What's strange is that this user is not showing up as a member of any groups in TFS for any of the team project collections. So why are they still showing up in the [Team Project Collection]\Project Collection Valid Users group?
It seems the main issue is that deleted users still appear in the "Assigned To" list. First, try to narrow down the issue.
If you are using the VALIDUSER rule, the list contains all valid users in TFS. Check the collection-level Project Collection Valid Users group; you may need to check every group it contains in order to delete the user. You can use the TFSSecurity /imx command to display information about a group, then delete the user from the right group.
After deleting the old user, force TFS to sync with Active Directory. For detailed steps, you can refer to:
Force TFS to sync with Active Directory
Active Directory Groups not Syncing with Team Foundation Server 2010

Power BI Embedded, updating a dashboard

I am referring to the problem of updating a dashboard that has been published to Azure via Power BI Embedded (Azure tutorial).
When I publish an updated version of a dashboard to Azure (using step 6 of the desktop solution provided in the link above), I am able to publish the updated .pbix file to the same workspace and dataset name. When I retrieve the list of datasets for the workspace, I can see both the old and the new dashboard with the same name and different ids. I find this confusing, as I would have expected that using the same name would overwrite the old version of the .pbix file.
What would be the recommended procedure to update a dashboard? Would it be to use a new name for the dataset each time? That doesn't seem ideal, as it also has implications for the embedding web app.
From my own experience, what I have done is just delete the original and upload it again with the same information. I don't rely on the ids generated by the Power BI workspace; I keep track of everything in a table I control. For example, I have a table that holds the Power BI metadata with an id I assign. If I ever need to upload a new version of the .pbix, I delete the one from the Azure workspace, upload a new one using the same information, and then take the new id and store it against my local id.
Then, in use, I look up the report by my local id in my application to get the information needed to pass to the Power BI API to view the report.
Hope this helps.
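As a rough illustration of that delete-and-reupload flow, a Python sketch against the current group-based Power BI REST API (note the older workspace-collection API from the Azure tutorial used different URLs; the token, workspace id, and paths here are assumptions):

```python
import requests

API = "https://api.powerbi.com/v1.0/myorg"
TOKEN = "<access-token>"          # assumed: an AAD access token for the Power BI API
GROUP_ID = "<workspace-id>"       # hypothetical placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def replace_pbix(old_dataset_id: str, pbix_path: str, display_name: str) -> str:
    """Delete the old dataset, import the new PBIX, return the new import id."""
    # 1) Remove the previous version so the name does not end up duplicated.
    requests.delete(f"{API}/groups/{GROUP_ID}/datasets/{old_dataset_id}",
                    headers=HEADERS).raise_for_status()
    # 2) Import the new PBIX under the same display name.
    with open(pbix_path, "rb") as f:
        resp = requests.post(
            f"{API}/groups/{GROUP_ID}/imports",
            params={"datasetDisplayName": display_name},
            headers=HEADERS,
            files={"file": f},            # multipart/form-data upload
        )
    resp.raise_for_status()
    return resp.json()["id"]              # store this against your own local id
```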

Is there a way to automatically import issues from Jira into Taiga?

Just started using Taiga.io, and was wondering if there is a way to auto-import issues/stories so I don't have to manually rewrite them.
You could make some sort of sync program/script that pulls from a JIRA project into a Taiga project.
For example: keep a file that contains the latest JIRA issue key that exists within Taiga, and have the script run every hour. Upon execution it does a REST call to get any issues above that JIRA key (e.g. TEST-5):
/rest/api/2/search?jql=key%20>%20TEST-5%20order%20by%20key%20desc
It then updates the file with the highest key value and pushes each issue into Taiga, which can be done using the Taiga REST API (see the sketch after this answer).
Additionally, you may be able to do something with the JIRA workflow so that when an issue is created, something in the workflow calls the Taiga REST API and creates the task automatically.
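A minimal Python sketch of that polling approach, under stated assumptions (hostnames, credentials, and the Taiga project id are placeholders; error handling omitted):

```python
import requests

JIRA_URL = "https://jira.example.com"    # assumed Jira instance
TAIGA_URL = "https://api.taiga.io"       # assumed hosted Taiga
STATE_FILE = "last_key.txt"              # stores the last imported key, e.g. TEST-5
TAIGA_PROJECT_ID = 1                     # hypothetical Taiga project id

def new_jira_issues(last_key: str) -> list:
    """Fetch issues with keys above the stored one, oldest first."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": f"key > {last_key} order by key asc"},
        auth=("user", "password"),       # assumed basic auth
    )
    resp.raise_for_status()
    return resp.json()["issues"]

def push_to_taiga(issues: list) -> None:
    # Taiga auth: exchange credentials for a bearer token.
    auth = requests.post(f"{TAIGA_URL}/api/v1/auth",
                         json={"type": "normal",
                               "username": "user", "password": "password"}).json()
    headers = {"Authorization": f"Bearer {auth['auth_token']}"}
    for issue in issues:
        requests.post(f"{TAIGA_URL}/api/v1/issues", headers=headers,
                      json={"project": TAIGA_PROJECT_ID,
                            "subject": issue["fields"]["summary"]})

last_key = open(STATE_FILE).read().strip()
issues = new_jira_issues(last_key)
if issues:
    push_to_taiga(issues)
    # Issues are sorted ascending, so the last one holds the highest key.
    open(STATE_FILE, "w").write(issues[-1]["key"])
```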
Alternatively, just save the issues in a format recognized by both Jira and Taiga; then, each time the file changes, you import it into Taiga (I used CSV via Excel and it worked pretty well). The only downside is that you need to stay logged in 24/7 if you work with worldwide distributed teams, in order for the auto-import to work...
