OSLog: log struct values and write logs to a file - iOS

I have two questions about logging with OSLog.
I used to use log4j and similar logging libraries, and now OSLog confuses me.
1. I have a struct called Coord with x and y variables inside it. I can print it directly using print, but I cannot pass it to the os_log function:
os_log("Step: %{coord}d", log: OSLog.default, type: .info, myCoord)
2. I know OSLog stores log entries in memory and that I must use Console.app to see them, but can I spool all of my app's log entries to a log file? I see some properties mentioned in the Apple docs, but nothing about writing entries to a text file.
2.1. Is it possible to write only one category of log entries to a file, so that I can analyze just certain categories, for example only an algorithm?
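For what it's worth, here is a minimal sketch of both points (my example, not from the original post: the subsystem and category names are hypothetical, and reading entries back requires OSLogStore, which is iOS 15+). os_log has no custom format specifiers such as %{coord}, so you either log the struct's fields individually or convert the struct to a string:

import OSLog

struct Coord { let x: Int; let y: Int }

// Hypothetical subsystem/category names
let algoLog = OSLog(subsystem: "com.example.myapp", category: "algorithm")
let myCoord = Coord(x: 3, y: 7)

// 1. Log the fields individually, or stringify the whole struct
os_log("Step: (%d, %d)", log: algoLog, type: .info, myCoord.x, myCoord.y)
os_log("Step: %{public}@", log: algoLog, type: .info, String(describing: myCoord))

// 2 and 2.1: on iOS 15+ you can read your own process's entries back,
// filter them by category, and write the result to a file yourself
@available(iOS 15.0, *)
func dumpCategory(_ category: String, to url: URL) throws {
    let store = try OSLogStore(scope: .currentProcessIdentifier)
    let lines = try store.getEntries()
        .compactMap { $0 as? OSLogEntryLog }
        .filter { $0.category == category }
        .map { "\($0.date) \($0.composedMessage)" }
    try lines.joined(separator: "\n").write(to: url, atomically: true, encoding: .utf8)
}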

Related

How to save data from a radar device?

I have a problem saving data from a radar device to a .mat file. Why do I want a .mat file? Because I want to run a SAR algorithm on the data afterwards in MATLAB. The radar I am using is the 2 PI Labs SENSE X1155S-E. Additionally, I am pretty new to Python, so my knowledge comes from forums and the Python documentation. I spent days testing things like scipy.io.savemat, pickle.dump, file.write/.read and self.write, but I still cannot save my data as I need it.
Here is an extract of my code:
import scipy.io as io  # io.savemat below comes from SciPy

# Dump a configuration object with all configured settings
config = device.sense.dump()
logger.info(f'Configuration: {config}')

logger.info('+ Running sweep(s)')
acq = device.initiate.immediate_and_receive()

# Read all data in one go
logger.info(' Read data')
data = acq.read()
# print(type(data))  # datatype check

logger.info(' Save data')
# Create a dict for storage
exportDict = {'data': data}
io.savemat("test.mat", exportDict)
With the line data = acq.read() I get the data from my radar. Now I want to use io.savemat() to save the data in a .mat file, but for some reason I always get an error:
"TypeError: Could not convert None (type <class 'NoneType'>) to array"
I could not fix it until today. Maybe someone can help me understand what I am doing wrong. If you need more information, I can post more for sure.
Here are two pictures of my data and the error. I hope they help. Thanks.
[screenshot of the variable "data"]
[screenshot of the error]
Finally, I managed to solve my problem.
As you can see in my first picture ("variable data"), the object has an attribute called array, which holds the actual samples (savemat needs array-like values, not the acquisition wrapper object itself), so I edited my code as follows:
import numpy as np
import scipy.io as io

logger.info(' Save data')
# Create a dict for storage; data.array holds the actual samples
exportDict = {'data': data.array}
io.savemat("test.mat", exportDict)
Now I can save the data exactly as I need it.
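As a quick sanity check (my addition, not part of the original post; it assumes the test.mat written above), the file can be loaded back with scipy.io.loadmat to confirm the array survived the round trip:

import scipy.io as io

loaded = io.loadmat("test.mat")
print(loaded.keys())           # contains 'data' plus MATLAB header entries
print(loaded['data'].shape)    # should match the shape of data.array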

Which csv format is appropriate for influxdb2?

I'm trying to load a CSV file into a bucket using InfluxDB v2.1.
Attempting to insert a simple example file results in the following error:
error in csv.from(): failed to read metadata: failed to read annotations: expected annotation datatype
The CSV file that I was going to write is as follows:
#datatype measurement,tag,double,dateTime:RFC3339
m,host,used_percent,time
mem,host1,64.23,2020-01-01T00:00:00Z
mem,host2,72.01,2020-01-01T00:00:00Z
mem,host1,62.61,2020-01-01T00:00:10Z
mem,host2,72.98,2020-01-01T00:00:10Z
mem,host1,63.40,2020-01-01T00:00:20Z
mem,host2,73.77,2020-01-01T00:00:20Z
This is the example data from the official InfluxData documentation.
If you look at the first line of the example, you can see that the datatypes are annotated, so why does the error occur?
How should I modify it?
This looks like invalid annotated CSV.
In the csv.from function documentation, you can find examples (as string literals) of both annotated and raw CSV that csv.from supports. Note that the #datatype measurement,tag,double,dateTime:RFC3339 line in the question is extended annotated CSV, which the influx write command accepts but csv.from does not; csv.from expects regular annotated CSV with per-column #datatype, #group and #default rows.
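As a rough sketch (hand-adapted from the question's data, so treat the exact column layout as an assumption), the same points in the regular annotated CSV shape that csv.from accepts look like this:

import "csv"

csvData = "
#datatype,string,long,dateTime:RFC3339,string,string,string,double
#group,false,false,false,true,true,true,false
#default,_result,,,,,,
,result,table,_time,_measurement,host,_field,_value
,,0,2020-01-01T00:00:00Z,mem,host1,used_percent,64.23
,,0,2020-01-01T00:00:00Z,mem,host2,used_percent,72.01
"

csv.from(csv: csvData)

Alternatively, if the goal is simply to load the file into a bucket, the extended annotated CSV from the question should work as-is with the CLI, e.g. influx write --bucket example-bucket --file example.csv (bucket and file names assumed).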

Problem SamplingRateCalculatorList (00000283DDC3C0E0) : All classes are empty ! OTB + QGis

I use OTB (Orfeo ToolBox) in QGIS for classification. When I use the TrainImagesClassifier tool in a batch process, I have a problem with some images. Instead of returning a model in an xml/txt file format, it returns several files with these extensions: .xml_rates_1, .xml_samples_1.dbf, .xml_samples_1.prj, .xml_samples_1.shp, .xml_samples_1.shx, .xml_stats_1 (I get the same files with txt instead of xml if I choose the txt file format as output).
During the execution of the algorithm, I get only one warning message:
(WARNING): file ..\Modules\Learning\Sampling\src\otbSamplingRateCalculatorList.cxx, line 99, SamplingRateCalculatorList (00000283DDC3C0E0): All classes are empty !
And after that:
(FATAL) TrainImagesClassifier: No samples found in the inputs!
The problem is that afterwards I want to use ImageClassifier, which takes the model from TrainImagesClassifier as input, and I don't have that model.
Thanks for your help

How can I pass a pointer to a file in helm upgrade command?

I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target environment (dev, qa, staging or prod), so I can only provide it at deployment time. helm upgrade --set-file does not accept a binary file; this seems to be the issue reported here: https://github.com/helm/helm/issues/3276. The truststore files are stored in the Jenkins credential store.
The command itself is described as follows:
--set-file stringArray: set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
It is also important to know the format and limitations of --set.
The error you see, Error: failed parsing --set-file data..., means that the file you are trying to use does not meet those requirements. See the example below:
--set-file key=filepath is another variant of --set. It reads the
file and use its content as a value. An example use case of it is to
inject a multi-line text into values without dealing with indentation
in YAML. Say you want to create a brigade project with certain value
containing 5 lines JavaScript code, you might write a values.yaml
like:
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)
Being embedded in YAML, this makes it harder for you to use IDE features, testing frameworks, and other tooling that supports writing code.
Instead, you can use --set-file defaultScript=brigade.js with brigade.js containing:
const { events, Job } = require("brigadier")
function run(e, project) {
  console.log("hello default script")
}
events.on("run", run)
I hope it helps.
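For the binary truststore itself, a common workaround (my sketch, not from the quoted docs; the file, release and chart names are made up) is to base64-encode the file first: --set-file only handles text, and a Kubernetes Secret's data field expects base64-encoded values anyway:

# Encode the binary truststore as text (GNU base64; -w0 disables line wrapping)
base64 -w0 truststore.jks > truststore.b64
helm upgrade my-release ./my-chart --set-file truststore=truststore.b64

A Secret template in the chart can then use the value directly:

apiVersion: v1
kind: Secret
metadata:
  name: truststore
type: Opaque
data:
  truststore.jks: {{ .Values.truststore }}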

Parsing NYC Transit/MTA historical GTFS data (not realtime)

I've been puzzling over this on and off for months and can't find a solution.
The MTA claims to provide historical data in the form of daily dumps in GTFS format here:
http://web.mta.info/developers/MTA-Subway-Time-historical-data.html
See for yourself by downloading the example they provide, in this case Sep 17th, 2014:
https://datamine-history.s3.amazonaws.com/gtfs-2014-09-17-09-31
My problem? The file is gobbledygook. It does not follow the GTFS specification, has no extension, and when I open it in a text editor it looks like 7,800 lines of this:
n
^C1.0^X �枪�^Eʞ>`
^C1.0^R^K
^A1^R^F^P����^E^R^K
^A2^R^F^P����^E^R^K
^A3^R^F^P����^E^R^K
^A4^R^F^P����^E^R^K
^A5^R^F^P����^E^R^K
^A6^R^F^P����^E^R^K
^AS^R^F^P����^E^R[
^F000001^ZQ
6
^N050400_1..S02R^Z^H20140917*^A1�>^V
^P01 0824 242/SFY^P^A^X^C^R^W^R^F^Pɚ��^E"^D140Sʚ>^F
^AA^R^AA^RR
^F000002"H
6
Per the MTA site (which appears untrue):
All data is formatted in GTFS-realtime
Any idea of the steps necessary to transform this mystery file into usable GTFS data? Is there some encoding I am missing? I have looked for 10+ hours and been unable to come up with a solution.
Also, not to be a stickler, but I am NOT referring to the MTA's realtime data feed, which is correctly formatted and usable. I am specifically referring to the historical data dumps I reference above (I have received many "solutions" referring only to the realtime feed).
The file you link to is in GTFS-realtime format, not GTFS, and the page you linked to does a very bad job of explaining which format its data is actually in (though it is mentioned in your quote).
GTFS is used to store schedule data, like routes and scheduled arrival times.
GTFS-realtime is generally used to transfer actual transit performance data in real time, like vehicle locations and expected or actual arrival times. It is a protobuf (Protocol Buffers), Google's binary serialization format, which means you can't usefully read it in a text editor; you instead have to load it programmatically using the Google protobuf tools. It can be used as a historical data format, in the way the MTA does here, by making daily dumps of the GTFS-rt feed publicly available. It is called GTFS-realtime because various data fields in the realtime feed, like route_id, trip_id, and stop_id, are designed to link to the published GTFS schedules.
I confirmed the validity of the data you linked to by decoding it using the gtfs-realtime.proto specification and the Google protobuf tools for Python. It begins:
header {
  gtfs_realtime_version: "1.0"
  timestamp: 1410960621
}
entity {
  id: "000001"
  trip_update {
    trip {
      trip_id: "050400_1..S02R"
      start_date: "20140917"
      route_id: "1"
    }
    stop_time_update {
      arrival {
        time: 1410960713
      }
      stop_id: "140S"
    }
  }
}
...
and continues in that vein for a total of 55833 lines (in the default string output format).
EDIT: the Python script used to convert the protobuf into its string representation is very simple:
import gtfs_realtime_pb2 as gtfs_rt

# Read the raw binary dump and parse it into a FeedMessage
with open('gtfs-rt.pb', 'rb') as f:
    raw_str = f.read()

msg = gtfs_rt.FeedMessage()
msg.ParseFromString(raw_str)
print(msg)
This requires gtfs-realtime.proto to have been compiled into gtfs_realtime_pb2.py using protoc (following the instructions in the Python protobuf documentation under "Compiling Your Protocol Buffers") and placed in the same directory as the Python script. Furthermore, the binary protobuf downloaded from the MTA needs to be named gtfs-rt.pb and located in the same directory as the Python script.
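If you would rather not run protoc yourself, the prebuilt bindings published on PyPI behave the same way (a sketch; it assumes pip install gtfs-realtime-bindings and the MTA dump saved locally as gtfs-rt.pb):

from google.transit import gtfs_realtime_pb2

feed = gtfs_realtime_pb2.FeedMessage()
with open('gtfs-rt.pb', 'rb') as f:
    feed.ParseFromString(f.read())

# Inspect a few entities instead of printing the whole 55833-line dump
for entity in list(feed.entity)[:5]:
    if entity.HasField('trip_update'):
        print(entity.trip_update.trip.trip_id)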
