How do I add a managed metadata (taxonomy) term to a listItem via the Microsoft Graph API?
For simple String types (ProductSegment in the example) I do the following:
PATCH https://graph.microsoft.com/v1.0/sites/{{sharepoint_site_id}}/lists/{{sharepoint_list_id}}/items/{{num}}/fields
{
    "DisplayedName": "asdasfsvsvdvsdbvdfb",
    "DocumentType": "FLYER",
    "ProductSegment": ["SEG1"],
    "TEST_x0020_2_x0020_ProductSegment": [{
        "TermGuid": "c252c37d-1fa3-4860-8d3e-ff2cdde1f673"
    }],
    "Active": true,
    "ProductSegment#odata.type": "Collection(Edm.String)",
    "TEST_x0020_2_x0020_ProductSegment#odata.type": "Collection(Edm.Term)"
}
Obviously it won't work for TEST_x0020_2_x0020_ProductSegment. But I just cannot find any hints in the documentation.
I got one step closer thanks to the duplicate issue linked in the answer below. First I found the name (not the id) of the hidden field TEST 2 ProductSegment_0 (note the _0 suffix). Then I assembled the term value to send: -1;#MyLabel|c352c37d-1fa3-4860-8d3e-ff2cdde1f673.
PATCH https://graph.microsoft.com/v1.0/sites/{{sharepoint_site_id}}/lists/{{sharepoint_list_id}}/items/{{num}}/fields
{
    "DisplayedName": "asdasfsvsvdvsdbvdfb",
    "DocumentType": "FLYER",
    "ProductSegment": ["SEG1"],
    "i9da5ea20ec548bfb2097f0aefe49df8": "-1;#MyLabel|c352c37d-1fa3-4860-8d3e-ff2cdde1f673",
    "Active": true,
    "ProductSegment#odata.type": "Collection(Edm.String)"
}
With that I can add one term. However, I need to add multiple, so I tried putting the values into an array and setting the field type (i9da5ea20ec548bfb2097f0aefe49df8#odata.type) to Collection(Edm.String).
Now I get an error with the code generalException instead of invalidRequest.
As far as I know, the Graph API does not support updating SharePoint taxonomy fields. For now, you can use the classic SharePoint REST API, for example, to accomplish "advanced" things like updating taxonomy terms. Probably a duplicate of: Can't Update Sharepoint Managed Meta Data Field from Microsoft Graph Explorer
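For reference, one commonly used option in the classic REST API is the ValidateUpdateListItem endpoint, which takes managed metadata values as "Label|GUID" strings (multiple values separated by ;). This is only a rough sketch; the list title, item id and the field display name are placeholders, not values verified against this particular list:
POST https://{tenant}.sharepoint.com/sites/{site}/_api/web/lists/getbytitle('{list_title}')/items({item_id})/validateUpdateListItem
{
    "formValues": [
        {
            "FieldName": "TEST 2 ProductSegment",
            "FieldValue": "MyLabel|c352c37d-1fa3-4860-8d3e-ff2cdde1f673;SecondLabel|1ef2af46-1fa3-4860-8d3e-ff2cdde1f673"
        }
    ],
    "bNewDocumentUpdate": false
}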
Finally I got it.
Thanks @Nikolay for the linked issue.
As I also added to the end of the question, first you need the name (not the id!) of the hidden field TEST 2 ProductSegment_0 (note the _0 suffix). Then assemble the term values to send, -1;#MyLabel|c352c37d-1fa3-4860-8d3e-ff2cdde1f673 and -1;#SecondLabel|1ef2af46-1fa3-4860-8d3e-ff2cdde1f673, and separate them with ;# (the content of the label is actually irrelevant, but some string needs to be there).
Looks utterly ridiculous but works.
PATCH https://graph.microsoft.com/v1.0/sites/{{sharepoint_site_id}}/lists/{{sharepoint_list_id}}/items/{{num}}/fields
{
    "DisplayedName": "asdasfsvsvdvsdbvdfb",
    "DocumentType": "FLYER",
    "ProductSegment": ["SEG1"],
    "i9da5ea20ec548bfb2097f0aefe49df8": "-1;#MyLabel|c352c37d-1fa3-4860-8d3e-ff2cdde1f673;#-1;#SecondLabel|1ef2af46-1fa3-4860-8d3e-ff2cdde1f673",
    "Active": true,
    "ProductSegment#odata.type": "Collection(Edm.String)"
}
I am currently working on a Power Apps app, and I have multiple pages with forms connected to different SharePoint lists. But every formula I have tried for submitting the data to SharePoint doesn't seem to work. Does anyone have any ideas?
Patch('5S Fragen_1';varFormData1; Form1.Updates; Form3.Updates; Form3_1.Updates; Form3_2.Updates; Form3_3.Updates; Form3_4.Updates; Form3_5.Updates; Form3_6.Updates; Form3_7.Updates;Form3_8.Updates;Form3_9.Updates;Form3_10.Updates;Form3_11.Updates; Form3_12.Updates;Form3_13.Updates;Form3_14.Updates;Form3_15.Updates;Form3_16.Updates;Form3_17.Updates;Form3_18.Updates;Form3_19.Updates;Form3_20.Updates;Form3_21.Updates;Form3_22.Updates)
5S Fragen_1 is the name of my SharePoint List and varFormData1 is the item name
You will probably need to write the Patch function in a different way:
Patch('5S Fragen_1';varFormData1; {Column1:textbox1.Text, Column2:Textbox2.Text, Column3Number:Value(Textbox3.Text)})
Alternatively, if you are using Forms, you can simply use SubmitForm(FormName1);SubmitForm(Form2); and so on.
Explanation: One way to write a Patch function is:
Patch( DataSource, BaseRecord, ChangeRecord1 [, ChangeRecord2, … ])
DataSource: SharePoint, SQL, Dataverse, etc.
BaseRecord: a "record"-type object in Power Apps, for example:
LookUp(DataSource, ID = Value(Textbox1.Text))
or
{ID:Value(Textbox1.Text)}
or
First(DataSource)
or
Defaults(DataSource) - this creates a brand new record in your datasource
ChangeRecord1: What you want to change in that BaseRecord object
{
    Title: Box1.Text, // (or maybe ; instead of ,)
    Status: dropdown.Selected.Value
    // ...
}
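Putting the pieces together, a complete call might look roughly like this (the column names Title and Status and the control names are made-up placeholders, and in a semicolon-separator locale like the question's the commas would become semicolons):
// Sketch: update the item whose ID is typed into Textbox1
// (column and control names below are placeholders, not from the real list)
Patch(
    '5S Fragen_1',
    LookUp('5S Fragen_1', ID = Value(Textbox1.Text)),
    {
        Title: Box1.Text,
        Status: dropdown.Selected.Value
    }
)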
Starting with a Google Doc that looks like:
* Item
I'm hoping to make a series of API calls to turn the doc into:
* Item
- Subitem
But I can't figure out how to do this with the API. A CreateParagraphBulletsRequest doesn't have an indent level I can specify. The documentation suggests:
The nesting level of each paragraph will be determined by counting leading tabs in front of each paragraph. To avoid excess space between the bullet and the corresponding paragraph, these leading tabs are removed by this request. This may change the indices of parts of the text.
However, prepending tabs to the beginning of an InsertTextRequest will prepend the tab character, rather than changing the indent:
* Item
* Subitem
Does anyone have any ideas for what I may be doing wrong?
I believe your goal is as follows.
You want to create a nested list using the Google Docs API.
Initially, a list with one 1st-level item already exists in the Google Document, like this:
- item1
In this situation, you want to insert a nested item into the existing list at the 2nd level, like this:
- item1
- item2
Points for achieving your goal:
In this case, in my experience, the item could not be inserted into the existing list directly at the 2nd level. As a workaround, I use the following flow.
Insert the text \n\titem2\n for the 2nd level using an insertText request.
This also inserts an extra 1st-level paragraph. It seems that deeper-level items have to be supplied starting from the 1st level and then converted into a bulleted list.
Use createParagraphBullets to give bullets to the inserted text. The leading \t is thereby converted into the nested item.
Remove the bullet from the extra 1st-level paragraph.
Remove the leftover line break.
Sample request body:
When the above flow is reflected in the request body of the batchUpdate method of the Docs API, it becomes the following.
{
    "requests": [
        {
            "insertText": {
                "text": "\n\titem2\n",
                "location": {
                    "index": 7
                }
            }
        },
        {
            "createParagraphBullets": {
                "range": {
                    "startIndex": 1,
                    "endIndex": 15
                },
                "bulletPreset": "BULLET_DISC_CIRCLE_SQUARE"
            }
        },
        {
            "deleteParagraphBullets": {
                "range": {
                    "startIndex": 7,
                    "endIndex": 8
                }
            }
        },
        {
            "deleteContentRange": {
                "range": {
                    "startIndex": 7,
                    "endIndex": 8
                }
            }
        }
    ]
}
Result:
When the above request body is used, the existing one-item list becomes the nested list shown above, with item2 added at the 2nd level.
Note:
Although I looked for other methods to do this with the Docs API without touching the existing list, unfortunately I could not find any. It seems that, at the current stage, deeply nested items have to be supplied starting from the 1st level using \t. I am not sure whether this is the intended specification, so you might consider filing a feature request on the issue tracker. Ref
References:
Working with lists
documents.batchUpdate
I have a list of elements (OData set) and use a binding to show this list.
One field holds a quantity value, and this value sometimes needs decimal places.
The requirement is to show only as many decimal places as the OData service provides.
Annotation techniques can't be used.
I 'hacked' something that misuses a formatter to update the type of a binding. But this is a hack, and it cannot be converted to XML views (the reason is the different handling of the scope in which the formatter is called).
So I am searching for a working solution for XML views.
The following code would not work but shows the issue:
new sap.m.Input({ // looking for an XML solution with bindings
    value: {
        path: "Quantity",
        type: new sap.ui.model.type.Float({
            // formatOptions
            maxFractionDigits: "{QuantityDecimals}",
            // ...
        }, {
            // constraints
            minimum: 0
        }),
        // ...
    }
});
The maxFractionDigits : "{QuantityDecimals}" should be "dynamic" and not a constant value.
Setting formatOptions and constraints dynamically in XML (via bindings or a declared function) is unfortunately not (yet) supported. But IMHO this is a valid enhancement request that app developers would greatly benefit from, since it encourages declarative programming.
I already asked for the same feature some years ago but in a comment at https://github.com/SAP/openui5/issues/2449#issuecomment-474304965 (See my answer to Frank's question "If XMLViews would allow a way to specify the dynamic constraints as well (e.g. enhanced syntax), would that fix the problem?").
Please create a new issue via https://github.com/SAP/openui5/issues/new with a clear description of what kind of problems the feature would resolve and possibly other use cases (You can add a link to my comment). I'll add my 👍 to your GitHub issue, and hopefully others too.
I'll update this answer as soon as the feature is available.
Get your dynamic number from your model and store it in a JS variable.
var nQuantityDecimals = this.getModel().getProperty("/QuantityDecimals");
new sap.m.Input({
    value: {
        path: "Quantity",
        type: new sap.ui.model.type.Float({
            maxFractionDigits: nQuantityDecimals,
            source: {
                groupingSeparator: ",",
                decimalSeparator: ".",
                groupingEnabled: false
            }
        }, {
            minimum: 0
        })
    }
});
The situation is as follows: I have a table in my database that receives about 3 million rows each day. We want to archive this table on a regular basis, so that only the 8 most recent weeks are kept in the table. The rest of the data can be archived to Azure Data Lake.
I already found out how to do this one day at a time. But now I want to run this pipeline each week for the first seven days in the table. I assume I should do this with the "For Each" component. It should iterate over the seven distinct dates that are present in the dataset I want to back up. This dataset is copied from the source table to an archive table beforehand.
It's not difficult to get the distinct dates with a SQL query, but how do I get the result of this query into an array that can be used by the "For Each" component?
The issue is solved thanks to a co-worker.
What we have to do is assign a parameter to the dataset of the sink. It does not matter how you name it, and you do not have to assign a value to it. Let's assume this parameter is called "Date".
After that you can use this parameter in the filename of the sink (also in the dataset) by using "@dataset().Date".
Then you go back to the copy activity, and in the sink you set that dataset parameter to @item().DateSelect (DateSelect is the field name from the array that is passed to the For Each activity).
See also the answer from Bo Xioa below, which covers part of the solution.
This way it works perfectly. It's just a shame that this is not well documented.
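For illustration only, the two places where the parameter shows up might look roughly like the fragments below: first the sink dataset (declaring the parameter and using it in the file name), then the copy activity's sink dataset reference inside the For Each. The dataset name ArchiveDataset, the file-name pattern and the column name DateSelect are assumptions, not taken from the real pipeline.
"parameters": { "Date": { "type": "String" } },
"typeProperties": {
    "folderPath": "archive",
    "fileName": { "value": "@concat('export_', dataset().Date, '.csv')", "type": "Expression" }
}
"outputs": [
    {
        "referenceName": "ArchiveDataset",
        "type": "DatasetReference",
        "parameters": { "Date": "@item().DateSelect" }
    }
]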
You can use a Lookup activity to fetch the column content, and the output will look like this:
{
    "count": "2",
    "value": [
        {
            "Id": "1",
            "TableName": "Table1"
        },
        {
            "Id": "2",
            "TableName": "Table2"
        }
    ]
}
Then you can pass the value array to the For Each activity's items field by using the pattern @activity('MyLookupActivity').output.value.
ref doc: Use the Lookup activity result in a subsequent activity
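For example, the For Each activity's items field is then set to an expression along these lines (use whatever name you gave your Lookup activity):
"items": {
    "value": "@activity('MyLookupActivity').output.value",
    "type": "Expression"
}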
I post this as an answer, because the error does not fit into a comment :D
I have seen another option to accomplish this: executing a pipeline from another pipeline. That way I can define the dates to iterate over as a parameter in the second pipeline (learn.microsoft.com/en-us/azure/data-factory/…). But unfortunately this leads to the same result as when just using the For Each parameter, because in the filename of my Data Lake file I have to use @{item().columname}. I can see in the monitoring view that the right values are passed in the iteration steps, but I keep getting an error:
{
    "errorCode": "2200",
    "message": "Failure happened on 'Sink' side. ErrorCode=UserErrorFailedFileOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The request to 'Unknown' failed and the status code is 'BadRequest', request id is ''. {\"error\":{\"code\":\"BadRequest\",\"message\":\"A potentially dangerous Request.Path value was detected from the client (:). Trace: cf3b4c3f-1681-4073-b225-17e1c07ec76d Time: 2018-08-02T05:16:13.2141897-07:00\"}} ,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (400) Bad Request.,Source=System,'",
    "failureType": "UserError",
    "target": "CopyDancerDatatoADL"
}
I am trying to store a network layout in CouchDB, but my solution produces a rather randomized graph.
I store nodes with a document:
{_id,
nodeName,
group}
and store links in the traditional form:
{_id, source_id, target_id, value}
Following multiple tutorials on handling joins and multiple relationships in CouchDB, I created this view:
function(doc) {
    if (doc.type == 'connection') {
        if (doc.source_id)
            emit("source", {'_id': doc.source_id});
        if (doc.target_id)
            emit("target", {'_id': doc.target_id});
    }
}
which should emit a sequence of source and target ids. I then pass it to a list function with include_docs=true, which assumes that source and target come in pairs and stitches everything back into a structure like this:
{
    "nodes": [
        {"nodeName": "Name 1", "group": "1"},
        {"nodeName": "Name 2", "group": "1"}
    ],
    "links": [
        {"source": 7, "target": 0, "value": 1},
        {"source": 7, "target": 5, "value": 1}
    ]
}
Although my list function produces proper JSON, the view map returns a number of rows of source docs followed by the target docs.
So far I don't have any idea how to make this work properly. I am happy to fetch additional values from the document _id in the list function, but I haven't found any good examples.
Alternative ways of achieving the same goal are welcome; the _id values are standard CouchDB ones so far.
Update: while writing the question I came up with a different view which solved my immediate problem, but I would still like to see other options.
Updated map:
function(doc) {
    if (doc.type == 'connection') {
        if (doc.source_id)
            emit([doc._id, 0, "source"], {'_id': doc.source_id});
        if (doc.target_id)
            emit([doc._id, 1, "target"], {'_id': doc.target_id});
    }
}
Your updated map function makes more sense. However, you don't need the 0 and 1 in your key, since you already have "source" and "target".
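For example, the simplified map could look like this; rows still sort by the connection's _id first, and "source" sorts before "target", so each pair stays together in the same order:
function(doc) {
    if (doc.type == 'connection') {
        if (doc.source_id)
            emit([doc._id, "source"], {'_id': doc.source_id});
        if (doc.target_id)
            emit([doc._id, "target"], {'_id': doc.target_id});
    }
}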