Error: Non-optional output [outputFile.dwg] is missing - autodesk-designautomation

When I executed a WorkItem, I got this error:
[07/18/2019 09:24:00] Error: Non-optional output [outputFile.dwg] is missing .
[07/18/2019 09:24:00] Error: An unexpected error happened during phase Publishing of job.
In the Activity I have the following code:
"outputFile": {
"zip": false,
"ondemand": false,
"verb": "put",
"description": "output file",
"localName": "outputFile.dwg",
"required": "true"
}
And in the WorkItem:
"outputFile": {
"url": "https://developer.api.autodesk.com/oss/v2/buckets/{{ TokenKey}}/objects/outputFile.dwg",
"headers": {
"Authorization": "Bearer {{ oAuthToken }}",
"Content-type": "application/octet-stream"
},
"verb": "put"
},
What should I change?

The error says that "outputFile.dwg" was not generated. Since it is a non-optional (i.e. required) output, that is an error. I suspect there's something wrong with your script. Look higher up in the report to see if you can find something that gives you a clue.

This is Qun Lu from the Forge Design Automation / AutoCAD team. The execution of your activity (with your input arguments as inputs) has to generate the expected result file, in your case "outputFile.dwg", so it can be uploaded to your URL. That should be done by your "Rota" command, or by another AutoCAD built-in command that the script in your activity specifies. It appears that either your command (or your script in general) missed the step of saving the drawing as "outputFile.dwg", or your "PluginPrueba.dll" module did not load properly, so the "Rota" command is not found. Can you give us the full report so we can check further? You can also ping me at qun.lu#autodesk.com. Thanks!
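For reference, a minimal sketch of what the saving step could look like in the Activity's embedded script (assuming your custom command is "Rota" from the PluginPrueba bundle; the exact script syntax and SAVEAS prompts depend on your Activity definition and engine version):

"settings": {
    "script": "ROTA\n(command \"_.SAVEAS\" \"2018\" \"outputFile.dwg\")\n"
}

The key point is that something, either the script or the plugin code itself (e.g. a Database.SaveAs call inside the Rota command), must actually write outputFile.dwg into the job's working folder; otherwise Design Automation has nothing to upload to the signed URL in your WorkItem and reports the output as missing.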

Related

Get an auth token from MS Graph for an application - returning error about missing grant_type

I am creating a Power Automate flow to get MS Bookings information. I'm having trouble getting an authorization token using https://login.microsoftonline.com/***TENANT ID****/oauth2/token. I receive an error that I am missing grant_type although I supply it. I registered the app in Azure, and the HTTP request in Power Automate looks like this:
{
    "uri": "https://login.microsoftonline.com//oauth2/token",
    "method": "POST",
    "headers": {
        "content-type ": "application/x-www-form-urlencoded"
    },
    "body": "client_id=&resource=https://graph.microsoft.com&grant_type=password&client_secret=&username=username&password=password"
}
I receive the error:
{"error":"invalid_request","error_description":"AADSTS900144: The request body must contain the following parameter: 'grant_type'.
Anyone have an idea what I am doing wrong or missing? Thank you in advance.
Just a quick follow-up. Thanks Expiscornpvus, you pointed me in the right direction: there were spaces after the content-type header name. I corrected this and things worked well.
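For anyone hitting the same issue, a minimal sketch of the corrected HTTP action with the trailing space removed from the header name (the tenant, client and credential values are placeholders):

{
    "uri": "https://login.microsoftonline.com/<TENANT_ID>/oauth2/token",
    "method": "POST",
    "headers": {
        "content-type": "application/x-www-form-urlencoded"
    },
    "body": "client_id=<CLIENT_ID>&resource=https://graph.microsoft.com&grant_type=password&client_secret=<CLIENT_SECRET>&username=<USERNAME>&password=<PASSWORD>"
}

With the stray space in "content-type " the request body is presumably not recognized as form data, which is why AAD reports grant_type as missing even though it is present.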

How to update a test execution when importing JUnit multipart results using Xray?

I was looking at the documentation of the Xray plugin for Jenkins: https://docs.getxray.app/display/XRAY/Import+Execution+Results+-+REST#ImportExecutionResultsREST-JUnitXMLresultsMultipart
What I found is a bit confusing, after a few attempts.
If I'm NOT trying to import executions using the multipart, I can update a test execution by specifying a Test Execution Key.
When I do try the multipart, I have this JSON:
"fields": {
"project": {
"key": "${ProjectKey}"
},
"summary": "Temp Test execution",
"issuetype": {
"name": "Test Execution"
},
"labels": [],
"fixVersions": [
{
"name": "testrelease"
}
]
}
}
This always creates a new Test Execution within JIRA.
In their examples I see no way to send the test execution key for it to be updated.
Which is strange, because by importing without multipart, I can set it.
Does anyone have an idea how to achieve this?
Currently, if you use the "multipart" kind of endpoints, a new Test Execution will always be created. To update existing Test Execution issues you need to use the standard endpoints (e.g., JUnit); however, these don't allow you to customize fields on the Test Execution.
There's an improvement in the backlog to enhance the existing behaviour; please vote on it and watch it, so the Xray team becomes aware of your interest in it.
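For example, with Xray for Jira Server/DC the standard (non-multipart) JUnit endpoint takes the target execution as a query parameter, roughly like this (keys are placeholders):

POST /rest/raven/1.0/import/execution/junit?projectKey=PROJ&testExecKey=PROJ-123

with the JUnit XML uploaded as the "file" form field; the results should then be reported against the existing Test Execution PROJ-123 instead of a new issue being created.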

What is the exact difference between "violation" and "deny" in OPA/Rego?

In Open Policy Agent (https://www.openpolicyagent.org/), with regard to Kubernetes and depending on which engine is used:
Gatekeeper: https://github.com/open-policy-agent/gatekeeper
OR
Plain OPA with kube-mgmt: https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/#how-does-it-work-with-plain-opa-and-kube-mgmt
There are different ways to define validation rules:
In Gatekeeper, violation is used. See sample rules here: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/general
In the plain OPA samples, the deny rule is used; see a sample here:
https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/#how-does-it-work-with-plain-opa-and-kube-mgmt
It seems the OPA Constraint Framework defines it as violation:
https://github.com/open-policy-agent/frameworks/tree/master/constraint#rule-schema
So what is the exact "story" behind this? Why is it not consistent between the different engines?
Notes:
This doc reflects on this: https://www.openshift.com/blog/better-kubernetes-security-with-open-policy-agent-opa-part-2
Here it is mentioned how to support interoperability in the script: https://github.com/open-policy-agent/gatekeeper/issues/1168#issuecomment-794759747
In this issue the migration is mentioned: https://github.com/open-policy-agent/gatekeeper/issues/168 - is it just because of "dry run" support?
Plain OPA has no opinion on how you choose to name your rules. Using deny is just a convention in the tutorial. The real Kubernetes admission review response is going to look something like this:
{
    "kind": "AdmissionReview",
    "apiVersion": "admission.k8s.io/v1beta1",
    "response": {
        "allowed": false,
        "status": {
            "reason": "container image refers to illegal registry (must be hooli.com)"
        }
    }
}
So whatever you choose to name your rules, the response will need to be transformed into a response like the above before it's sent back to the Kubernetes API server. If you scroll down a bit in the Detailed Admission Control Flow section of the Kubernetes primer docs, you'll see how this transformation is accomplished in the system.main rule:
package system

import data.kubernetes.admission

main = {
    "apiVersion": "admission.k8s.io/v1beta1",
    "kind": "AdmissionReview",
    "response": response,
}

default response = {"allowed": true}

response = {
    "allowed": false,
    "status": {
        "reason": reason,
    },
} {
    reason = concat(", ", admission.deny)
    reason != ""
}
Note in particular how the "reason" attribute is just built by concatenating all the strings found in admission.deny:
reason = concat(", ", admission.deny)
If you'd rather use violation or some other rule name using plain OPA, this is where you would change it.
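For instance, a minimal sketch of that rename, assuming your policies live in package kubernetes.admission (the image check is just an illustrative rule):

package kubernetes.admission

violation[msg] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "hooli.com/")
    msg := sprintf("container image refers to illegal registry (must be hooli.com): %v", [image])
}

and in system.main the concat line then becomes:

reason = concat(", ", admission.violation)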

IoTAgent-LoRaWAN is apparently not working as expected

I was trying to provision the IoTAgent-LoRaWAN using the TTN credentials. I'm following the official docs and this is my POST request:
{
    "devices": [
        {
            "device_id": "{{node}}",
            "entity_name": "LORA-N-0",
            "entity_type": "LoraDevice",
            "timezone": "Europe/Madrid",
            "attributes": [
                {
                    "object_id": "potVal",
                    "name": "Pot_Value",
                    "type": "Number"
                }
            ],
            "internal_attributes": {
                "lorawan": {
                    "application_server": {
                        "host": "eu.thethings.network",
                        "username": "{{TTN_app_id}}",
                        "password": "{{TTN_app_pw}}",
                        "provider": "TTN"
                    },
                    "dev_eui": "{{TTN_dev_eui}}",
                    "app_eui": "{{TTN_app_eui}}",
                    "application_id": "{{TTN_app_id}}",
                    "application_key": "{{TTN_app_skey}}"
                }
            }
        }
    ]
}
I'm using Postman to manage all those HTTP requests in a collection, and I've set up a few environment variables:
{{node}} -> the device ID, node_0
{{TTN_app_id}} -> my app ID, which I've chosen as dendrometer
{{TTN_app_pw}} -> the application access key shown in the picture (it can be found in the same view as the Application Overview: https://console.thethingsnetwork.org/applications/<application_id>)
{{TTN_dev_eui}} and {{TTN_app_eui}} -> also shown in the following picture (regarding the device; I think these are not sensitive info, because TTN is not hiding them, which is why I'm posting the picture)
{{TTN_app_skey}} -> the Application Session Key, also shown in the following picture (the last one)
The point is... once I've provisioned the IoTAgent using that request, docker-compose logs -f iot-agent shows the following errors:
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.689Z","level":"info","message":"New message in topic"}
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.690Z","level":"info","message":"IOTA provisioned devices:"}
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.691Z","level":"info","message":"Decoding CaynneLPP message:+XQ="}
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.691Z","level":"error","message":"Error decoding CaynneLPP message:Error: Invalid CayennLpp buffer size"}
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.691Z","level":"error","message":"Could not cast message to NGSI"}
So I think something is not working properly. That's my docker-compose.yml, btw: http://ix.io/2pWd
However, I don't think the problem is caused by Docker; all containers are apparently working as expected, because I can request their versions and I don't see error messages in the logs.
Also... I feel the docs are incomplete; I'd like more info about how to subscribe those provisioned devices with Orion CB (?) or delete them (that's not shown in the docs, although it's just a DELETE request to the proper URL).
Anyway... what am I doing wrong? Thank you all.
EDIT: I feel like there is something wrong in the IoTAgent itself; there is a typo in the following error messages:
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.691Z","level":"info","message":"Decoding CaynneLPP message:+XQ="}
fiware-iot-agent | {"timestamp":"2020-06-23T11:45:53.691Z","level":"error","message":"Error decoding CaynneLPP message:Error: Invalid CayennLpp buffer size"}
It isn't CaynneLPP, it's CayenneLPP. I've also opened an issue in its GitHub repo, but I don't expect them to answer any time soon. I actually feel like this project has been abandoned.
It was apparently a problem with encoding: I was using the encoding method suggested by the arduino-lmic library, but FIWARE works with the CayenneLPP data model. So I'm going to replace that encoding method (a sketch of that change is below).
Thank you all anyway, and especially #arjan.
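A rough sketch of that device-side change, assuming the ElectronicCats CayenneLPP library on top of arduino-lmic (the channel number and variable names are just examples):

#include <lmic.h>
#include <CayenneLPP.h>

CayenneLPP lpp(51);  // 51-byte buffer fits comfortably in a LoRaWAN payload

void sendPotValue(float potValue) {
    lpp.reset();                      // clear the buffer before each uplink
    lpp.addAnalogInput(1, potValue);  // channel 1, encoded as a CayenneLPP analog input
    // hand the CayenneLPP buffer to LMIC instead of the raw bytes used before
    LMIC_setTxData2(1, lpp.getBuffer(), lpp.getSize(), 0);
}

With the payload in CayenneLPP format, the IoT Agent's decoder should be able to parse the uplink instead of failing with the "Invalid CayennLpp buffer size" error.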

Two identical metadata requests, the first one returns status 200, the second one 404

Below you will find two metadata requests to the same OData service. Except for the cookie digit stream, they are like two drops of water, completely identical.
The first one is triggered via the manifest.json file and its result is successful. I copied the data source definition to the manifest.json file of a second, different application and put the debugger to work, expecting the same successful result.
"dataSources": { "mainService": { "uri": "/Uni_Sandpit_Virtual/sap/opu/odata/SAP/ZCONTRACTS_SRV/", "type": "OData", "settings": { "odataVersion": "2.0" } } },
To my absolute surprise, the second metadata call returns 404 (not found). What am I missing here?
Best regards,
Greg
Request returning status 200
Request returning status 404
Found the issue; posting it in case someone else runs into the same problem.
The "path" block was missing inside the neo-app.json file:
{ "path": "/Uni_Sandpit_Virtual", "target": { "type": "destination", "name": "Uni_Sandpit_Virtual" },
