.tick script:
stream
    |from()
        .measurement('httpjson_example')
    |alert()
        .crit(lambda: "temperature" < 70)
        // Whenever we get an alert write it to a file.
        .message('test')
        .log('/tmp/test.log')
Output test.log:
..."message":"test","CRITICAL","data":{"series":[{"name":"httpjson_example","tags":{"host":"influxdata","server":"http://...:8080/readings"},"columns":["time","dewPoint","heatIndex","humidity","response_time","temperature"],"values":[["2016-06-23T12:38:42Z",12.06,22.15,51.6,2.078549411,22.5]]}]}}
This script writes the alert to the file, but I just want the string 'test' written.
At the moment this isn't possible without a bit of work writing your own UDF.
If you'd like to see this feature in Kapacitor, open a feature request that details your use case.
I have a yaml file in a Bazel monorepo that has constants I'd like to use in several languages. This is kind of like how protobufs are created and used.
How can I parse this yaml file at build time and depend on the outputs?
For instance:
item1: "hello"
item2: "world"
nested:
  nested1: "I'm nested"
  nested2: "I'm also nested"
I then need to parse this yaml file so it can be used in many different languages (e.g., Rust, TypeScript, Python, etc.). For instance, here's the desired output for TypeScript:
export default {
  item1: "hello",
  item2: "world",
  nested: {
    nested1: "I'm nested",
    nested2: "I'm also nested",
  },
};
Notice, I don't want TypeScript code that reads the yaml file and converts it into an object; that conversion should be done in the build process.
For the actual conversion, I'm thinking of writing it in Python, but it doesn't need to be. This would then mean the Python also needs to run at build time.
P.S. I care mostly about the functionality, so I'm flexible about exactly how it's done. I'm even fine using another file format instead of yaml.
Thanks to help from @oakad, I was able to figure it out. Essentially, you can use genrule to create outputs.
So, assuming you have some target, such as a py_binary named parse_config, set up to generate the output, you can just do this:
genrule(
    name = "generated_output",
    srcs = [],
    outs = ["output.txt"],
    cmd = "$(execpath :parse_config) > $@",
    exec_tools = [":parse_config"],
    visibility = ["//visibility:public"],
)
The generated file is output.txt, and you can access it via //lib/config:generated_output.
Note, essentially the cmd is redirecting the tool's stdout into the output file. In Python, that means anything printed will appear in the generated file.
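For reference, here's a minimal sketch of what such a parse_config program might look like in Python. This is hypothetical: it assumes the yaml path is passed as a command-line argument (so it would also need to appear in the genrule's srcs and the cmd) and that PyYAML is available as a dependency of the py_binary.

#!/usr/bin/env python3
"""Hypothetical parse_config: read a yaml config and print it to stdout
as a TypeScript module; the genrule redirects stdout into output.txt."""
import json
import sys

import yaml  # PyYAML, declared as a dependency of the py_binary


def main():
    with open(sys.argv[1]) as f:
        config = yaml.safe_load(f)
    # json.dumps output is valid TypeScript object-literal syntax,
    # including nested objects, so no custom serializer is needed.
    print("export default " + json.dumps(config, indent=2) + ";")


if __name__ == "__main__":
    main()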
I am learning the Lua IO library. I'm having trouble with io.write(). In Programming in Lua, there is a piece of code that iterates through a file line by line and prefixes each line with a serial number.
This is the file I'm working on:
test file: "iotest.txt"
This is my code
io.input("iotest.txt")
-- io.output("iotest.txt")
local count = 0
for line in io.lines() do
count=count+1
io.write(string.format("%6d ",count), line, "\n")
end
This is the result shown in the terminal, but it is not written to the file, whether I add io.output("iotest.txt") or not.
[Screenshot: the numbered lines printed in the terminal]
This is the resulting file; we can see there is no change.
[Screenshot: the unchanged file after running the code]
Just add io.flush() after your write operations to save the data to the file.
io.input("iotest.txt")
io.output("iotestout.txt")
local count = 0
for line in io.lines() do
count=count+1
io.write(string.format("%6d ",count), line, "\n")
end
io.flush()
io.close()
Refer to the Lua 5.4 Reference Manual: 6.8 - Input and Output Facilities.
io.flush() will save any written data to the output file you set with io.output().
See koyaanisqatsi's answer for the optional use of file handles. This becomes especially useful if you're working on multiple files at a time and gives you more control on how to interact with the file.
That said, you should also use different files for input and output. You'll agree that it doesn't make sense to alternately read and write from and to the same file.
For writing to a file you need a file handle.
This handle comes from io.open().
See: https://www.lua.org/manual/5.4/manual.html#6.8
A file handle has methods that act on itself.
Those are the functions called with a : on the file handle.
So io.write() writes to stdout, and file:write() writes to a file.
Example function that can dump a defined function to a file...
fdump = function(func, path)
    assert(type(func) == "function")
    assert(type(path) == "string")
    -- Get the file handle (file)
    local file, err = io.open(path, "wb")
    assert(file, err)
    local chunk = string.dump(func, true)
    file:write(chunk)
    file:flush()
    file:close()
    return 'DONE'
end
Here are the methods, taken from io.stdin
close = function: 0x566032b0
seek = function: 0x566045f0
flush = function: 0x56603d10
setvbuf = function: 0x56604240
write = function: 0x56603e70
lines = function: 0x566040c0
read = function: 0x56603c90
This makes it possible to use a handle directly, like...
( Lua console: lua -i )
> do io.stdout:write('Input: ') local result=io.stdin:read() return result end
Input: d
d
You are trying to open the same file for reading and writing at the same time. You cannot do that.
There are two possible solutions:
1. Read from file X, iterate through it, and write the result to another file Y.
2. Read the complete file X into memory, close and delete file X, then open the same filename for writing and write to it while iterating through the original content in memory.
Otherwise, your approach is correct although file operations in Lua are more often done using io.open() and file handles instead of io.write() and io.read().
I followed this page:
https://cloud.google.com/speech/docs/getting-started
and I could reach the end of it without problems.
In the example though, the file
'uri':'gs://cloud-samples-tests/speech/brooklyn.flac'
is processed.
What if I want to process a local file? In case this is not possible, how can I upload my .flac via command line?
Thanks
You're now able to process a local file by specifying a local path instead of the google storage one:
gcloud ml speech recognize '/Users/xxx/cloud-samples-tests/speech/brooklyn.flac' \
    --language-code='en-US'
You can send this command by using the gcloud tool (https://cloud.google.com/speech-to-text/docs/quickstart-gcloud).
Solution found:
I created my own bucket (my_bucket_test), and I upload the file there via:
gsutil cp speech.flac gs://my_bucket_test
If you don't want to create a bucket (it costs extra time and money), you can stream the local files. The following code is copied directly from the Google Cloud docs:
def transcribe_streaming(stream_file):
    """Streams transcription of the given audio file."""
    import io
    from google.cloud import speech

    client = speech.SpeechClient()

    with io.open(stream_file, "rb") as audio_file:
        content = audio_file.read()

    # In practice, stream should be a generator yielding chunks of audio data.
    stream = [content]

    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in stream
    )

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    streaming_config = speech.StreamingRecognitionConfig(config=config)

    # streaming_recognize returns a generator.
    responses = client.streaming_recognize(
        config=streaming_config,
        requests=requests,
    )

    for response in responses:
        # Once the transcription has settled, the first result will contain the
        # is_final result. The other results will be for subsequent portions of
        # the audio.
        for result in response.results:
            print("Finished: {}".format(result.is_final))
            print("Stability: {}".format(result.stability))
            alternatives = result.alternatives
            # The alternatives are ordered from most likely to least.
            for alternative in alternatives:
                print("Confidence: {}".format(alternative.confidence))
                print(u"Transcript: {}".format(alternative.transcript))
Here is the URL in case the package's function names are edited over time: https://cloud.google.com/speech-to-text/docs/streaming-recognize
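If your local file is short (roughly under a minute of audio), a plain synchronous request is simpler than streaming. A minimal sketch, assuming a recent google-cloud-speech release; for local files you pass the raw bytes as content instead of a gs:// URI:

from google.cloud import speech

client = speech.SpeechClient()

# Read the local file and send its bytes directly; no bucket required.
with open("brooklyn.flac", "rb") as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)
# FLAC headers carry the encoding and sample rate, so only the
# language needs to be specified here.
config = speech.RecognitionConfig(language_code="en-US")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)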
Using Apple's UI Automation, I have been successful in building and executing my test scripts through a bash script.
I'm trying to automate testing for an app that requires comparing data in a sqlite file with data shown in the app.
I've written a Python script which saves the sqlite data as JavaScript variables in a file called settings.js. Using performTaskWithPathArgumentsTimeout, I can execute this script to create the settings.js file:
var target = UIATarget.localTarget();
var host = target.host();
var result = host.performTaskWithPathArgumentsTimeout("/usr/bin/python",["/Users/Matt/Code/automation/DBData/UIAsettingsKVDump.py", "/Users/Matt/Code/automation/DBData/settings.sqlite"],20);
UIALogger.logDebug("exitCode: " + result.exitCode);
UIALogger.logDebug("stdout: " + result.stdout);
UIALogger.logDebug("stderr: " + result.stderr);
#import "./../DBData/settings.js"
This successfully creates the settings.js file. However, when I try to import the settings.js file like above, I get an "Import file not found(null)" error before the three logDebug messages are output to the console; this leads me to believe that the #import is done before the script is executed.
What can I do to make sure my settings.js file is created before the #import is performed?
There's no documentation about this, but from my experience Apple is preprocessing the JavaScript files and performing the imports at script parsing time, before the script is evaluated. I think this happens because the #import statement isn't part of the JS language.
Rather than trying to import the JavaScript file, what if you just had your Python script print the settings JavaScript to standard out? That way you can take the output returned from performTaskWithPathArgumentsTimeout and use eval() to convert it into a JavaScript value. That may be finicky, and you're certainly going to have trouble debugging it, but it might be the quickest way to get what you want.
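A minimal sketch of the Python side of that idea (hypothetical: it assumes your sqlite database has a settings table with key and value columns, which your real UIAsettingsKVDump.py schema would replace):

import json
import sqlite3
import sys

# Open the sqlite file passed as the first argument.
conn = sqlite3.connect(sys.argv[1])
# Assumed schema: a `settings` table with `key` and `value` columns.
settings = dict(conn.execute("SELECT key, value FROM settings"))
conn.close()

# json.dumps output is a valid JavaScript object literal, so on the UIA
# side you could do: var settings = eval("(" + result.stdout + ")");
print(json.dumps(settings))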
Can someone recommend a load testing tool which allows you to either:
a. replay an IIS (7) log(s) to simulate a real live site daily run;
b. import a CSV or equivalent list of URLs, so we can achieve a similar thing as above but at a URL level;
c. offer a .NET API, so I can easily create simple tests from my list of URLs.
I do not really want to record my tests.
I think I can do b) with WAPT but would need to create an XML file manually, which is not too much grief, but I am wondering if any tools cover these scenarios out of the box.
Visual Studio Test Edition is a great load testing solution, though it would require some code to parse the file into a suitable test run.
Our load testing service lets you write a very simple script using JavaScript to pull data out of a CSV file and then fetch those URLs. For example, the following code would pluck 10 random URLs from the CSV file and fetch them as part of a single session:
var c = browserMob.openHttpClient();
var csv = browserMob.getCSV("urls.csv");
browserMob.beginTransaction();
for (var i = 0; i < 10; i++) {
    browserMob.beginStep("Step 1");
    var url = csv.random().get("url");
    c.get(url);
    browserMob.endStep();
}
browserMob.endTransaction();
The CSV file itself needs to be a normal CSV file with the first row containing a header named "url". This script would be run repeatedly for each virtual user participating in a load test.
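For example (with hypothetical URLs), urls.csv could look like this:

url
http://example.com/
http://example.com/products
http://example.com/checkout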
We support the so-called 'uri-format' in our open-source tool, Yandex.Tank. You simply put all your URIs in a file, one URI per line, then specify headers in your load.ini like this:
[phantom]
address=example.org
rps_schedule=line(1, 1600, 2m)
headers = [Host: mts-maps.yandex.ru]
[Connection: close] [Bloody: yes]
ammo_file = ammo.uri
ammo.uri:
/
/index.html
/1/example.html
/2/example.html
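For scenario a., a small script can convert an IIS W3C log into that same uri-format ammo file. A hedged sketch in Python, assuming the log's #Fields directive includes cs-uri-stem and cs-uri-query (the exact field list depends on your IIS logging configuration):

import sys


def iis_log_to_ammo(log_path, ammo_path):
    """Write one URI per line, as expected by a uri-format ammo file."""
    fields = []
    with open(log_path) as log, open(ammo_path, "w") as ammo:
        for line in log:
            line = line.strip()
            if line.startswith("#Fields:"):
                # e.g. "#Fields: date time cs-method cs-uri-stem cs-uri-query ..."
                fields = line.split()[1:]
                continue
            if not line or line.startswith("#"):
                continue  # skip other directives and blank lines
            values = dict(zip(fields, line.split()))
            uri = values.get("cs-uri-stem", "/")
            query = values.get("cs-uri-query", "-")
            if query != "-":  # IIS logs "-" for an empty field
                uri += "?" + query
            ammo.write(uri + "\n")


if __name__ == "__main__":
    iis_log_to_ammo(sys.argv[1], sys.argv[2])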