JMESPath to extract key where values match - aws-secrets-manager

Given source which looks like:
{
  "Name": "sandbox-config",
  "VersionList": {
    "version-2": [ "STAGING" ],
    "version-1": [ "CURRENT", "NEXT" ],
    "version-0": [ "ANCIENT" ]
  }
}
I'm looking for a JMESPath query which would give me:
{
  "Name": "sandbox-config",
  "Version": "version-1"
}
where version-1 is the first key whose value array contains "CURRENT".
So, a query like:
{ Name: Name, Version: VersionList.*[?@==`CURRENT`] | [] | [0] }
gives me:
{
  "Name": "sandbox-config",
  "Version": "CURRENT"
}
which isn't what I'm after. Similarly:
{ Name: Name, Version: VersionList.keys(@) }
which gives me:
{
  "Name": "sandbox-config",
  "Version": [
    "version-2",
    "version-1",
    "version-0"
  ]
}
Any suggestions? I feel like I'm circling around a solution and not quite getting there.
(Context for this: I'm trying to process the output of aws secretsmanager list-secrets, which has SecretVersionsToStages with ARN values as keys, each with an array containing "AWSCURRENT".)

https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DescribeSecret.html#API_DescribeSecret_ResponseSyntax
If you just want to get the version number for one secret with the stage AWSCURRENT, I recommend you use describe-secret rather than list-secrets.
And one thing I need to call out: SecretVersionsToStages has version numbers as keys, not ARNs, each with an array containing "AWSCURRENT".
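If it is acceptable to do the last step outside JMESPath, a few lines of client code will pair each key with its stage list (as far as I know, plain JMESPath has no function that lets you filter an object's keys by their values). A minimal sketch with boto3, assuming a secret named sandbox-config; describe-secret returns the stage map under VersionIdsToStages:

import boto3

client = boto3.client("secretsmanager")

# DescribeSecret returns VersionIdsToStages:
#   {"<version-id>": ["AWSCURRENT", ...], ...}
resp = client.describe_secret(SecretId="sandbox-config")

# Pick the first version ID whose stage list contains "AWSCURRENT".
version = next(
    (vid for vid, stages in resp["VersionIdsToStages"].items()
     if "AWSCURRENT" in stages),
    None,
)
print({"Name": resp["Name"], "Version": version})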

Related

How to indicate that the response body is a List in Swagger UI docs using FastAPI?

I describe the structure of the outgoing JSON in a model class; however, I cannot get the output rendered as a list.
My model class:
from pydantic import BaseModel, Field

class versions_info(BaseModel):
    """ List of versions """
    version: str = Field(..., title="Version", example="2.1.1")
    url: str = Field(..., title="Url", example="https://ocpi.wedwe.ww/ocpi/2.1.1/")
And in the documentation the example response is shown as a single object. However, I need it to be displayed as:
[
  {
    "version": "2.1.1",
    "url": "https://www.server.com/ocpi/2.1.1/"
  },
  {
    "version": "2.2",
    "url": "https://www.server.com/ocpi/2.2/"
  }
]
What am I doing wrong?
You can indicate that you're returning a list by wrapping the response_model you've defined in List[<model>].
So in your case it'd be:
@app.get('/foo', response_model=List[versions_info])
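Put together, a minimal runnable sketch under those assumptions; the /versions path and the handler body are made up for illustration, and List is imported from typing:

from typing import List

from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class versions_info(BaseModel):
    """ List of versions """
    version: str = Field(..., title="Version", example="2.1.1")
    url: str = Field(..., title="Url", example="https://www.server.com/ocpi/2.1.1/")

# response_model=List[versions_info] is what makes both validation and the
# generated docs treat the response body as a JSON array of versions_info
@app.get("/versions", response_model=List[versions_info])
def get_versions():
    return [
        {"version": "2.1.1", "url": "https://www.server.com/ocpi/2.1.1/"},
        {"version": "2.2", "url": "https://www.server.com/ocpi/2.2/"},
    ]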

Elasticsearch saves document as string of array, not array of strings

I am trying to store an array as a document value.
It works for the "tags" field, as below; this document contains an array of strings.
curl -XGET localhost:9200/MY_INDEX/_doc/132328908
#=> {
  "_index": "MY_INDEX",
  "_type": "_doc",
  "_id": "132328908",
  "found": true,
  "_source": {
    "tags": ["food"]
  }
}
However, when I put items in the same way as above,
the document SOMETIMES comes back like this:
curl -XGET localhost:9200/MY_INDEX/_doc/328098989
#=> {
  "_index": "MY_INDEX",
  "_type": "_doc",
  "_id": "328098989",
  "found": true,
  "_source": {
    "tags": "[\"food\"]"
  }
}
This is a string of an array, not the array of strings I expected:
"tags": "[\"food\"]"
It seems that this happens randomly, and I cannot predict it.
How could it happen?
Note:
・I use the elasticsearch-ruby client to index a document.
This is my actual code:
es_client = Elasticsearch::Client.new url: MY_ENDPOINT
es_client.index(
  index: MY_INDEX,
  id: random_id, # defined elsewhere
  body: {        # NB: with the index API, this whole hash becomes _source
    doc: {
      "tags": ["food"]
    },
  }
)
Thank you in advance.
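One thing worth checking, since this thread records no answer: if any code path JSON-serializes the tags value before it reaches the client, the client will happily index the resulting string verbatim. A hypothetical illustration in Python:

import json

tags = ["food"]

doc_ok = {"tags": tags}               # indexed as an array of strings
doc_bad = {"tags": json.dumps(tags)}  # indexed as the string '["food"]'

print(type(doc_ok["tags"]))   # <class 'list'>
print(type(doc_bad["tags"]))  # <class 'str'>  -> stored as "[\"food\"]"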

Druid - Order data by timestamp column

I've set up a Druid cluster to ingest real-time data from Kafka.
Question
Does Druid support fetching data sorted by timestamp? For example, let's say I need to retrieve the latest 10 entries from a Datasource X. Can I do this using a LimitSpec (in the query JSON) that includes the timestamp field? Or is there a better option supported by Druid?
Thanks in advance.
Get unaggregated rows
To get unaggregated rows, you can do a query with "queryType": "select".
Select queries are also useful when pagination is needed: they let you set a page size and automatically return a paging identifier for use in future queries.
In this example, if we just want the top 10 rows, we can pass in "pagingSpec": { "pagingIdentifiers": {}, "threshold": 10 }.
Order by timestamp
To order these rows by "timestamp", you can pass in "descending": "true".
It looks like most Druid query types support the descending property.
Example Query:
{
  "queryType": "select",
  "dataSource": "my_data_source",
  "granularity": "all",
  "intervals": [ "2017-01-01T00:00:00.000Z/2017-12-30T00:00:00.000Z" ],
  "descending": "true",
  "pagingSpec": { "pagingIdentifiers": {}, "threshold": 10 }
}
Docs on "select" type queries
You can use a groupBy query to do this: group by __time with an extraction function, set granularity to all, and use the limitSpec to sort and limit; that will work. If you want to use a timeseries query instead, it is trickier to get the latest 10. One way is to set the granularity to the desired one, say Hour, then set the interval to 10 hours starting from the most recent point in time. That is easier said than done, so I would go the first way unless you run into a major performance issue.
{
  "queryType": "groupBy",
  "dataSource": "wikiticker",
  "granularity": "all",
  "dimensions": [
    {
      "type": "extraction",
      "dimension": "__time",
      "outputName": "extract_time",
      "extractionFn": {
        "type": "timeFormat"
      }
    }
  ],
  "limitSpec": {
    "type": "default",
    "limit": 10,
    "columns": [
      {
        "dimension": "extract_time",
        "direction": "descending"
      }
    ]
  },
  "aggregations": [
    {
      "type": "count",
      "name": "$f2"
    },
    {
      "type": "longMax",
      "name": "$f3",
      "fieldName": "added"
    }
  ],
  "intervals": [
    "1900-01-01T00:00:00.000/3000-01-01T00:00:00.000"
  ]
}

Mongodb - search without key

I'd like to search documents in a collection only by value. Let's say my collection contains the documents below:
[
  {
    "_id": "57a443c74d854d192afcc451",
    "somekey": "123",
    "otherkey": "zxc"
  },
  {
    "_id": "57a443ca4d854d192afcc452",
    "key": "123",
    "otherkey": "123zxcvbnm"
  }
]
and now I want to get all documents where the value of any key contains 123.
I tried to do something like this (written in Ruby, using Mongoid):
new_search_query = { /.*/ => /#{v}/ }
collection.find(new_search_query)
but it looks like it is not supported, because I get:
BSON::InvalidKey (Regexp instances are not allowed as keys in a BSON document.):
Is there any other way, or some workaround, to do it?
Try the full text search support of Mongoid for your Rails app.
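Another workaround, if your MongoDB is 3.4.4 or newer, is to fold each document into key/value pairs with $objectToArray and match on the values. A sketch in Python with PyMongo; the database and collection names are placeholders:

import re
from pymongo import MongoClient

# placeholder database/collection names
coll = MongoClient()["mydb"]["docs"]

pattern = re.compile(re.escape("123"))
pipeline = [
    # fold each document into an array of {k: ..., v: ...} pairs
    {"$addFields": {"_pairs": {"$objectToArray": "$$ROOT"}}},
    # keep documents where any string value matches the pattern
    {"$match": {"_pairs.v": pattern}},
    # drop the helper field from the output
    {"$project": {"_pairs": 0}},
]
print(list(coll.aggregate(pipeline)))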

JSON-LD normalization - ignore JSON nesting

I'm working on JSON-LD serialization, and ideally I would like to have a @context which I can add to the existing GeoJSON output (together with some @ids and @types), so that both the Turtle output and the JSON-LD output normalize to the same triples.
Data is organized as follows: each object/feature has an ID and a name, and data on one or more layers. Per layer, there is a data field, which contains a JSON object.
Example GeoJSON output:
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "id": "admr.nl.appingedam",
        "name": "Appingedam",
        "layers": {
          "cbs": {
            "data": {
              "name": "Appingedam",
              "population": 1092
            }
          },
          "admr": {
            "data": {
              "name": "Appingedam",
              "gme_code": 4654,
              "admn_level": 3
            }
          }
        }
      },
      "geometry": {…}
    }
  ]
}
Example Turtle output:
<admr.nl.appingedam>
    a :Node ;
    dc:title "Appingedam" ;
    :createdOnLayer <layer/admr> ;
    :layerData <admr.nl.appingedam/admr> ;
    :layerData <admr.nl.appingedam/cbs> .

<admr.nl.appingedam/admr>
    a :LayerData ;
    :definedOnLayer <layer/admr> ;
    <layer/admr/name> "Appingedam" ;
    <layer/admr/gme_code> "4654" ;
    <layer/admr/admn_level> "3" .

<admr.nl.appingedam/cbs>
    a :LayerData ;
    :definedOnLayer <layer/cbs> ;
    <layer/cbs/name> "Appingedam" ;
    <layer/cbs/population> "1092" .
The properties object does not have its own URI. Is there a way to create a JSON-LD context which takes the contents of properties into account, but otherwise 'ignores' its presence?
Answered by Gregg Kellogg on the JSON-LD mailing list:
This is something that keeps coming up: having a transparent layer,
that basically folds properties up a level. This was discussed during
the development of JSON-LD, but ultimately it was rejected.
I don't see any prospects for doing something in the short-term, but
it could be revisited in a possible future WG chartered with revising
the spec. Feedback like this is quite useful.
In the meantime, you can play with different JSON-LD encodings that
match your RDF through tools like http://json-ld.org/playground and my
own http://rdf.greggkellogg.net/distiller.
Gregg
