Autodesk Forge WorkItem API: Error: Output is missing [result.pdf] - autodesk-designautomation

I'm using the Autodesk Design Automation API to convert DWG to PDF. In the job output file I get this:
[10/09/2017 03:01:35] Command: -export Enter file format [Dwf/dwfX/Pdf] <dwfX>_pdf Enter plot area [Current layout/All layouts]<Current Layout>: _all
[10/09/2017 03:01:35] Enter file name <visualization_condominium_with_skylight-Layout1.pdf>: result.pdf
[10/09/2017 03:01:35] There were no plottable sheets in the current operation.
[10/09/2017 03:01:35] Command: _.quit
[10/09/2017 03:01:35] Process exit code: 0
[10/09/2017 03:01:35] End AutoCAD Core Console output
[10/09/2017 03:01:35] End script phase.
[10/09/2017 03:01:35] Start upload phase.
[10/09/2017 03:01:35] Error: Output is missing [result.pdf].
[10/09/2017 03:01:35] Error: An unexpected error happened during phase Publishing of job.
I've tried two different DWG files (both downloaded from Autodesk samples) to no avail.
Here is a link to the DWG file

There were no plottable sheets in the current operation.
As this line indicates, the drawing does not have a plottable sheet. This is normally because no sheet (paper space layout) was activated in the drawing, while the default PDF export script of Design Automation plots sheets (paper space), not model space. Note that the command lines for exporting to PDF differ between model space and paper space. So you could simply activate a sheet in the drawing and upload it again.
OR, you could create your own Activity that exports a PDF from model space, and invoke that script in the WorkItem; a rough sketch follows.
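Below is a minimal sketch, in Python with requests, of publishing such a custom Activity against the Design Automation v2 (autocad.io) endpoint. The Activity id, the parameter names (mirroring the built-in PlotToPDF activity), the engine version, and especially the -PLOT prompt sequence are assumptions: prompts differ between AutoCAD releases, so verify everything against the current documentation and test the script in accoreconsole before publishing.

import requests

TOKEN = "<2-legged OAuth token>"  # placeholder; obtain via the Forge authentication endpoint

# Indicative model-space plot script; the exact -PLOT prompt order varies
# by AutoCAD release, so validate it locally in accoreconsole first.
plot_script = (
    '_tilemode 1 '
    '-plot _yes Model "DWG To PDF.pc3" '
    '"ANSI A (8.50 x 11.00 Inches)" _inches _landscape _no _extents _fit '
    '_center _yes . _yes result.pdf _no _yes\n'
)

# Hypothetical Activity definition; field names follow the v2 schema as of
# this writing but should be checked against the current docs.
activity = {
    "Id": "PlotModelSpaceToPdf",
    "Instruction": {"Script": plot_script},
    "Parameters": {
        "InputParameters": [{"Name": "HostDwg", "LocalFileName": "$(HostDwg)"}],
        "OutputParameters": [{"Name": "Result", "LocalFileName": "result.pdf"}],
    },
    "RequiredEngineVersion": "22.0",
}

resp = requests.post(
    "https://developer.api.autodesk.com/autocad.io/us-east/v2/Activities",
    json=activity,
    headers={"Authorization": "Bearer " + TOKEN},
)
resp.raise_for_status()

Your WorkItem would then reference this Activity id instead of the default PlotToPDF one.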
If you still have the issue, please provide a small sample drawing (after removing confidential information).

Related

How can we get the results of summary report parameters (throughput, received & sent bytes) from a JMeter script in non-GUI mode?

How can we get the results of summary report parameters (throughput, received & sent bytes) from a JMeter script in non-GUI mode? I need to benchmark the whole script rather than each thread, marking the script pass/fail by comparing the results to a static .csv file which contains the expected parameter values. Kindly let me know which approach to take.
The easiest way is to use the JMeterPluginsCMD Command Line Tool; it can generate various tables and charts out of JMeter's .jtl results file.
So you will need to add the following command as the Post Build Step:
JMeterPluginsCMD --generate-csv SummaryReport.csv --input-jtl result.jtl --plugin-type AggregateReport
You can install the JMeterPluginsCMD Command Line Tool using the JMeter Plugins Manager.
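Once SummaryReport.csv has been generated, the pass/fail benchmarking against the static .csv can be scripted so the build step consumes an exit code. Here is a minimal Python sketch; the file names, the "Label" key, and the "Throughput" column are assumptions, so check the header of the generated CSV and adjust:

import csv
import sys

def load(path):
    # Read a CSV into a dict keyed by its sampler label column.
    with open(path, newline="") as f:
        return {row["Label"]: row for row in csv.DictReader(f)}

actual = load("SummaryReport.csv")   # produced by JMeterPluginsCMD
baseline = load("baseline.csv")      # hypothetical static thresholds file

failed = False
for label, expected in baseline.items():
    got = actual.get(label)
    # Fail when a sampler is missing or its throughput fell below the baseline.
    if got is None or float(got["Throughput"]) < float(expected["Throughput"]):
        print("FAIL: " + label)
        failed = True

sys.exit(1 if failed else 0)  # a non-zero exit code marks the step failed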

How to generate documentation for a robot file using Sphinx?

I have a robot file (calc_check.robot) in which each test case has its own documentation.
*** Settings ***
Documentation
...    The test cases are designed to test the calculator.
Library    ../../Library/AddNumbers

*** Test Cases ***
Calc_check_test Testcase01_a
    [Documentation]
    ...    Verify that two numbers are added or not
    [Tags]    add    calculator
    ${addition}=    Add numbers    10    20

Calc_check_test Testcase01_b
    [Documentation]
    ...    Verify that two numbers are added or not with negative sign
    [Tags]    add    calculator
    ${addition}=    Add numbers    10    -20
When I try to generate documentation for that robot file using the rst file (calc_check.rst), I get the complete test case along with the documentation, but I need only the [Documentation] part.
calc_check
======================================

.. robot-settings::
   :source: /Users/sphinx/calc_check.robot

.. robot-tests::
   :source: /Users/sphinx/calc_check.robot
I want only the documentation (i.e., the [Documentation] part of each test case) from the two test cases, excluding the test case code. Please tell me how to generate only the documentation part.
Robot Framework provides a documentation generation tool called Libdoc:
https://robot-framework.readthedocs.io/en/2.9.2/_modules/robot/libdoc.html
The problem is that it only generates documentation for library and resource files (those without a *** Test Cases *** section).
If you need to generate docs from test suites, I would recommend temporarily changing the test suite into a resource file (change the section to *** Keywords ***) and running Libdoc on that file:
python -m robot.libdoc <path to res/lib> <list/show>
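Alternatively, if a little scripting is acceptable, Robot Framework's parsing API can extract just the [Documentation] text without converting the suite to a resource file. A minimal sketch using robot.api.TestSuiteBuilder (the output formatting here is only an example):

from robot.api import TestSuiteBuilder

# Parse the suite without executing it.
suite = TestSuiteBuilder().build("calc_check.robot")

for test in suite.tests:
    # test.doc holds only the [Documentation] text, not the test body.
    print(test.name)
    print("    " + test.doc)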

Why does my Python script not recognize speech from an audio file?

I have the following piece of code, which successfully recognizes a short (less than 1 min) test audio file but fails to recognize another, longer audio file (1.5 h).
from google.cloud import speech

def run_quickstart():
    speech_client = speech.Client()
    sample = speech_client.sample(
        source_uri="gs://linear-arena-2109/zoom0070.flac",
        encoding=speech.Encoding.FLAC)
    alternatives = sample.recognize('uk-UA')
    for alternative in alternatives:
        print(u'Transcript: {}'.format(alternative.transcript))
    with open("Output.txt", "w") as text_file:
        for alternative in alternatives:
            text_file.write(alternative.transcript.encode('utf8'))

if __name__ == '__main__':
    run_quickstart()
Both files are uploaded to Google Cloud.
The first one:
https://storage.googleapis.com/linear-arena-2109/sample.flac
The second one:
https://storage.googleapis.com/linear-arena-2109/zoom0070.flac
Both were converted from mp3 with the ffmpeg utility:
ffmpeg -i sample.mp3 -ac 1 sample.flac
ffmpeg -i zoom0070.mp3 -ac 1 zoom0070.flac
The first file was successfully recognized, but the second file produces the following error:
google.gax.errors.RetryError: GaxError(Exception occurred in retry method that was not classified as transient, caused by <_Rendezvous of RPC that terminated with (StatusCode.INVALID_ARGUMENT, Sync input too long. For audio longer than 1 min use LongRunningRecognize with a 'uri' parameter.)>)
But I have already used the uri parameter in my Python script. What is wrong?
Update
@NieDzejkob helped me understand the error: the long_running_recognize method should be used instead of recognize. A comprehensive long_running_recognize usage example can be found on the corresponding documentation page.
For any audio file longer than 1 minute, you need to use Asynchronous Speech Recognition, and the file has to be uploaded to Google Cloud Storage so that you can pass in a gcs_uri.
In addition, you will need to use the .long_running_recognize method in your script. An example from the GCP documentation can be found here.
I realize that the OP figured it out, but I thought it would be useful to provide an answer and generalize it a bit.
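For completeness, here is a minimal sketch of the asynchronous call using the current google-cloud-speech client (the question used the older speech.Client interface; the class and method names below follow the newer v1 library, so adjust if you are pinned to the old one):

from google.cloud import speech

client = speech.SpeechClient()

# A long file must live in Cloud Storage; pass its gs:// URI, not raw bytes.
audio = speech.RecognitionAudio(uri="gs://linear-arena-2109/zoom0070.flac")
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="uk-UA",
)

# long_running_recognize returns an operation; block until it completes.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=3600)  # a 1.5 h file can take a while

with open("Output.txt", "w", encoding="utf-8") as text_file:
    for result in response.results:
        text_file.write(result.alternatives[0].transcript + "\n")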

Exit with error code from SPSS syntax

I am using a batch file that calls an SPSS production job, which runs many syntax files.
In the syntax files I want to be able to check some variables, and if certain conditions are not met, stop the production job, exit SPSS, and return an error code to the batch file.
The batch file needs to stop running the next commands based on the error code returned. I know how to do this in the batch file already.
The most basic solution would be: if the error code is not 0, stop, and write the error text to a separate text file from within the syntax. A bonus would be distinct error codes that I could match to the point in the syntax where each code is thrown.
What is the best way to achieve this in the SPSS syntax and or production file?
One way to do this would be to execute Statistics as an external-mode Python job. Then you could interrogate any results, catch exceptions, and set exit codes and messages however you like. Here is an example:
jobs.py:
import sys
sys.path.append(r"""c:/spss23/python/lib/site-packages""")
import spss

try:
    spss.Submit("""INSERT FILE="c:/temp/syntax1.sps".""")
except:
    print "syntax1.sps failed"
    sys.exit(1)

try:
    spss.Submit("""INSERT FILE="c:/temp/syntax2.sps".""")
except:
    print "syntax2.sps failed"
    sys.exit(2)
Then the bat file would do:
python c:/myjobs/jobs.py
echo %ERRORLEVEL%
or similar. The job would need to save its output in an appropriate format using OMS or shell redirection.
In external mode, you could use code like the following, or you could interrogate items in the Viewer.
import sys
import spss, spssdata

curs = spssdata.Spssdata("variable2")
for case in curs:
    if case[0] == 6:
        sys.exit(99)
curs.CClose()

Display link to code in Qt Creator output pane

When running QTest in Qt Creator with the 'run in terminal' setting disabled, the output pane displays links to the failing test line, e.g.:
FAIL! : MidiTest::testMTCWriter() Compared values are not the same
Actual (tcCount): 1
Expected (2) : 2
Loc: [../../Joker/tests/AutoTest/MidiTest.cpp(271)]
Is it possible to create such links? For example, I have configured Doxygen as an external tool and I'd like to produce the same kind of output for documentation errors.
