How to automatically export Instruments data to CSV - ios

I'm looking at a way to automate the gathering of iOS memory usage. So far, I've been using the Instruments CLI to do this:
instruments -w <ID> -t "/Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/Resources/templates/Activity Monitor.tracetemplate" -l 30000
My problem now is exporting the data to be parsed. I noticed in the Instruments GUI there is an option to export to CSV, however there doesn't appear to be anything like this for the CLI.
I unzipped the .trace package that the Instruments CLI outputs and found a lot of binary data, which isn't too useful.
Is it possible to export this data or convert it to a parsable format?
Thanks

Related

Influx v1.8 CLI query gets Killed

I'm looking at options to export data from InfluxDB to MySQL. I'm exploring the option to export the data to flat files for the import (so we don't have to hit our production InfluxDB instance).
When I execute the command influx -database 'mydb' -execute 'SELECT * FROM "1D"' -format csv > my-influx-all.csv it runs for about a minute and then outputs Killed.
My test DB is about 2.1GB in size at the moment, so not large. The production DB is 51GB. Is there a flag I can pass so the Influx CLI doesn't die? Or is there an alternative way to export the data to a flat file?
The query might be hitting the OOM killer; further details should be in the system logs.
If you want to export the data in a readable format, you could try influx_inspect:
sudo influx_inspect export -database yourDatabase -out "influx_backup.db"
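If you need CSV specifically, another option is to split the export into time windows so that no single query result has to fit in memory. A minimal sketch, assuming a start date and window count (the database and measurement names come from the question; requires GNU date):

```shell
#!/bin/sh
# Work around the OOM kill by exporting one day at a time, so no single
# query has to materialize the whole measurement in memory.
db="mydb"           # database name from the question
meas="1D"           # measurement name from the question
start="2023-01-01"  # assumed export range -- adjust to your data
days=3              # number of one-day windows to generate

: > export_commands.txt
i=0
while [ "$i" -lt "$days" ]; do
  from=$(date -u -d "$start + $i day" +%Y-%m-%dT00:00:00Z)
  to=$(date -u -d "$start + $((i + 1)) day" +%Y-%m-%dT00:00:00Z)
  # The commands are printed rather than executed so the sketch runs
  # without a live InfluxDB; pipe the file to sh to actually export.
  echo "influx -database '$db'" \
    "-execute \"SELECT * FROM \\\"$meas\\\" WHERE time >= '$from' AND time < '$to'\"" \
    "-format csv > chunk_$i.csv" >> export_commands.txt
  i=$((i + 1))
done
cat export_commands.txt
```

Each generated query touches only one day of data, so the result set stays small regardless of total database size.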

Apache Jena: riot does not produce output

I recently installed Apache Jena 3.17.0, and have been trying to use it to convert nquads files to ntriples.
As per the instructions here (https://jena.apache.org/documentation/tools/), I first set up my WSL (Ubuntu 20.04) environment
$ export JENA_HOME=apache-jena-3.17.0/
$ export PATH=$PATH:$JENA_HOME/bin
and then attempted to run riot to do the conversion (triail.nq is my nquads file).
$ riot --output=NTRIPLES -v triail.nq
When I ran this, I got no output to the terminal. I'm not sure what is going wrong here, since there is no error message. Does anyone know what could be causing this / what the solution could be?
Thanks in advance!
The command will read the quad (multiple graph) data and output only the default graph. Presumably there is no default graph data in triail.nq.
If "convert" means combine all the quads into a single graph, then remove the graph field on each line of the data file with a text editor.
Otherwise, read the data into an RDF dataset, copy the named graphs into a single graph, and output that.
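The text-editor approach above can also be scripted. A minimal sed sketch, assuming every line is a quad whose graph label is an IRI (blank-node graph labels, or a file mixing triples and quads, would need a real parser such as riot itself):

```shell
#!/bin/sh
# Collapse all quads into a single graph by stripping the graph term.
# The demo file stands in for triail.nq from the question.
cat > triail-demo.nq <<'EOF'
<http://ex/s1> <http://ex/p> <http://ex/o> <http://ex/g1> .
<http://ex/s2> <http://ex/p> "a literal" <http://ex/g2> .
EOF

# Remove the final <IRI> before the terminating dot on each line.
# Only run this on pure N-Quads input: on a line that is already a
# triple with an IRI object, it would strip the object instead.
sed -E 's/ <[^>]+> \.$/ ./' triail-demo.nq > triail-demo.nt
cat triail-demo.nt
```

The result is N-Triples that riot (or any RDF tool) will read as default-graph data.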

Error downloading YouTube-8M dataset with curl in Windows 8.1

I'm trying to download a small chunk of the YouTube-8M dataset. It is just a dataset with video features and labels and you can create your own model to classify them.
The command that they claim will download the dataset is this:
curl storage.googleapis.com/data.yt8m.org/download_fix.py | shard=1,100 partition=2/frame/train mirror=us python
This didn't work at all, and the error produced is:
'shard' is not recognized as an internal or external command, operable program or batch file.
I found a forum post that says to add 'set' before the variables, which seems to partially fix my problem:
curl storage.googleapis.com/data.yt8m.org/download_fix.py | set shard=1,100 partition=2/video/train mirror=us python
The download seemingly started for a split second, then an error popped up. The error now is (23) Failed writing body.
So what is the correct command line for downloading the dataset?
I'd try using the Kaggle API instead. You can install the API using:
pip install kaggle
Then download your credentials (step-by-step guide here). Finally, you can download the dataset like so:
kaggle competitions download -c youtube8m
If you only want part of the dataset, you can first list all the downloadable files:
kaggle competitions files -c youtube8m
And then only download the file(s) you want:
kaggle competitions download -c youtube8m -f name_of_your_file.extension
Hope that helps! :)

Export instruments trace data via command-line for leaks

I am using the following script to run leaks instruments from the command-line.
instruments -t /Applications/Xcode.app/Contents/Applications/Instruments.app/Contents/Resources/templates/Leaks.tracetemplate <app path>
After executing the command, I get an instrumentscli0.trace file. How can I get readable data about leaks from that file?
Is there any way to export the results to a text file via some command? In the Automation template we can specify an output folder using the -e UIARESULTSPATH switch.
You can click "Instrument > Export Data" in the menu to export the most recent result to a .csv file, but I don't know how to do it from the command line.
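For what it's worth, newer Xcode releases (12 and later) deprecate the instruments binary in favor of xcrun xctrace, which can export trace contents as XML from the command line. A sketch, where the trace path and XPath are illustrative placeholders (check xctrace export --help and your trace's table of contents for the actual schema names):

```shell
#!/bin/sh
# Sketch: export a trace's table of contents, then a specific table, as XML.
# Requires Xcode 12+; recording.trace is a placeholder path.
# xcrun xctrace export --input recording.trace --toc
# xcrun xctrace export --input recording.trace \
#   --xpath '/trace-toc/run[@number="1"]/data/table[@schema="time-sample"]'

# The exported XML can then be flattened toward CSV with standard tools.
# Demo on a stand-in snippet (real xctrace output is richer):
cat > sample.xml <<'EOF'
<row><value>12.5</value><value>MyApp</value></row>
<row><value>13.1</value><value>MyApp</value></row>
EOF
sed -E 's/<\/value><value>/,/g; s/<[^>]+>//g' sample.xml
```

The xctrace lines are commented out so the sketch runs anywhere; uncomment them on a machine with Xcode installed.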

Analyze core-dumps created while running wireshark on linux

I am running a Wireshark build on Linux. I get a crash while doing some activities, and a core dump is generated. But when I give the following command
gdb ./wireshark core
it says "file format not recognized". Also, when I do a cat on "./wireshark", it seems to be some kind of script.
So how do I analyze the core dump?
Check the script to see what the actual wireshark binary being run is. gdb is good for core-dump analysis.
when i do a cat on "./wireshark",it seems to be some kind of script.
Probably because you've built Wireshark from source in that directory, in which case it is a script (generated by libtool as a wrapper script).
What you need to do, instead of
gdb ./wireshark core
is
./libtool --mode=execute gdb ./wireshark core
which will do the right magic to run GDB on the actual executable rather than on the script (and to pass it the right magic to find the shared libraries in your build directory).
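A quick self-contained illustration of the diagnosis (the wrapper below is a simplified stand-in for what libtool actually generates):

```shell
#!/bin/sh
# Demonstration of why gdb rejects ./wireshark: an in-tree build produces
# a libtool wrapper *script*, not an ELF binary. We create a simplified
# stand-in wrapper here; real builds generate one automatically.
cat > wireshark-demo <<'EOF'
#!/bin/sh
# libtool-style wrapper (simplified): the real binary lives in .libs/
exec ".libs/wireshark" "$@"
EOF
chmod +x wireshark-demo

# A script starts with "#!", not with the ELF magic bytes -- which is
# exactly why gdb reports "file format not recognized".
head -c 2 wireshark-demo && echo
grep -o '\.libs/wireshark' wireshark-demo   # the actual debug target
```

Running libtool with --mode=execute, as shown above, resolves that .libs/ path for gdb automatically.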
