I see in the Docker Remote API docs that filters can be used to filter on status, but I'm unsure how to form the request:
https://docs.docker.com/reference/api/docker_remote_api_v1.16/#list-containers
GET /containers/json?filters=status[exited] ?????
How should this be formatted to display ONLY exited containers?
jwodder is correct on the filter but I wanted to go through this step by step as I wasn't familiar with the Go data types.
The Docker API documentation refers to using a map[string][]string for a filter, which is a Go map (hash table).
map[string] defines a map with keys of type string.
[]string is the type of the values in the map: a slice ([] denotes an array without a fixed length) whose elements are string values.
So the API requires a hash map of arrays containing strings. This Go Playground demonstrates marshalling the Go filter data:
mapS := map[string][]string{ "status":[]string{"exited"} }
Into JSON:
{ "status": [ "exited" ] }
So adding that JSON to the Docker API request you get:
GET /containers/json?all=1&filters={%22status%22:[%22exited%22]}
all=1 is included to report exited containers (like -a on the command line).
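For illustration, the same request can be built with Python's requests library. This is only a sketch: it assumes the Docker daemon has been exposed over TCP on localhost:2375 (the default is a Unix socket), and the port is a placeholder:

import json
import requests

# "filters" is a JSON-encoded map of string -> list of strings.
filters = json.dumps({"status": ["exited"]})

# all=1 includes stopped containers, like -a on the command line;
# requests URL-encodes the braces and quotes for us.
resp = requests.get("http://localhost:2375/containers/json",
                    params={"all": 1, "filters": filters})

for container in resp.json():
    print(container["Id"], container["Status"])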
It might be easier for non-Go people if they just documented the JSON structure for the API :/
The most elegant way I found to use docker with curl, without doing the encoding by hand, is in this answer. Basically, you tell curl to send the data as query parameters and to encode them for you. To get exited containers, the query may look like:
curl -G "http://localhost:5555/containers/json" \
-d 'all=1' \
--data-urlencode 'filters={"status":["exited"]}' | python -m json.tool
By my reading of the docs, it should be:
GET /containers/json?filters={"status":["exited"]}
Some of that might need to be URL-encoded, though.
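For instance, Python's standard library will produce the encoded form; a one-liner sketch using the filter JSON from above:

from urllib.parse import quote

print(quote('{"status":["exited"]}'))
# prints %7B%22status%22%3A%5B%22exited%22%5D%7D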
I am using Apollo client for GraphQL client integrations. I have added the following run script, which is suggested in the official documentation.
cd "${SRCROOT}/${TARGET_NAME}/GraphQL/Open"
$APOLLO_FRAMEWORK_PATH/check-and-run-apollo-codegen.sh generate $(find . -name '*.graphql') \
    --schema schema.json --output APIClient.swift
But the problem that comes up is that all the custom scalars come through as String.
For example: for a login mutation with email and password, my schema declares the response as JSON, while the generated APIClient declares the response as String (instead of JSON).
Because of this, an error is received which says
Apollo.GraphQLResultError(path: ["login", "response"], underlying: Apollo.JSONDecodingError.couldNotConvert
This is because a String is received instead of JSON, and the string cannot be converted into the required JSON.
Is anyone facing the same issue?
So I figured it out. The solution is to add
--passthrough-custom-scalars
to the run script. This passes custom scalars like JSON through to the generated code instead of mapping them to String. So the complete run script becomes
cd "${SRCROOT}/${TARGET_NAME}/GraphQL/Open"
$APOLLO_FRAMEWORK_PATH/check-and-run-apollo-codegen.sh generate $(find . -name '*.graphql') \
    --schema schema.json --passthrough-custom-scalars --output APIOpen.swift
Now when the code is recompiled with this run script, the JSON scalar comes through as expected. (With passthrough, the generated code references the scalar by name, so you may need to declare a matching Swift type for it yourself, e.g. via a typealias.)
This took me a lot of time to figure out. Hope it helps someone and saves them time. Thanks
I have run into a cumbersome limitation of the bitbucket API 2.0 - I am hoping there is a way to make it more usable.
When one wants to retrieve a list of repositories from the bitbucket API 2.0, this url can be used:
https://api.bitbucket.org/2.0/repositories/{teamname}
This returns the first 10 repos in the list. To access the next 10, one simply needs to add a page parameter:
https://api.bitbucket.org/2.0/repositories/{teamname}?page=2
This returns the next 10. One can also adjust the number of results returned using the pagelen parameter, like so:
https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100
The maximum number can vary per account, but 100 is the maximum any team is able to request with each API call. The cumbersome part is that I cannot find a way to get page 2 with a pagelen of 100. I have tried variations on the following:
https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100&page=2
https://api.bitbucket.org/2.0/repositories/{teamname}?page=2&pagelen=100
I've also tried using parameters such as limit or size to no avail. Is the behavior I seek even possible? Some relevant documentation can be found here.
EDIT: It appears this behavior is possible. Note, though, that the entire url must be quoted, since an unquoted & would be interpreted by the shell rather than passed along to the bitbucket 2.0 API.
Example:
curl "https://api.bitbucket.org/2.0/repositories/{teamname}?pagelen=100&page=2"
ORIGINAL ANSWER: I was able to get around this by creating a bash script that looped through each page of 10 results, adding each new batch of 10 repos to a temporary file and then cloning those repos. The only manual step is updating the upper limit of the for loop to the last page expected.
Here is an example script:
for thisPage in {1..23}
do
  # Quote the url so the shell does not mangle the query string.
  curl "https://api.bitbucket.org/2.0/repositories/[organization]?page=$thisPage" -u [username]:[password] > repoinfo
  # Pull each "slug" value out of the JSON response.
  for repo_name in $(cat repoinfo | sed -r 's/("slug": )/\n\1/g' | sed -r 's/"slug": "(.*)"/\1/' | sed -e 's/{//' | cut -f1 -d\" | tr '\n' ' ')
  do
    echo "Cloning " $repo_name
    git clone "https://[username]@bitbucket.org/[organization]/$repo_name"
    echo "---"
  done
done
Much help was gleaned from:
https://haroldsoh.com/2011/10/07/clone-all-repos-from-a-bitbucket-source/
and http://adomingues.github.io/2015/01/10/clone-all-repositories-from-a-user-bitbucket/ Thanks!
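For what it's worth, each page of results from the 2.0 API also includes a "next" field holding the url of the following page, so the page arithmetic can be skipped entirely. Below is a sketch in Python using the requests library; the team name and credentials are placeholders:

import requests

def list_repo_slugs(team, user, password):
    # Collect every repository slug by following Bitbucket's "next" links.
    url = "https://api.bitbucket.org/2.0/repositories/%s?pagelen=100" % team
    slugs = []
    while url:
        page = requests.get(url, auth=(user, password)).json()
        slugs.extend(repo["slug"] for repo in page.get("values", []))
        url = page.get("next")  # absent on the last page
    return slugs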
Via the HTTP API, we can delete an arbitrary element from a set without fetching the whole content:
curl -X POST http://127.0.0.1:8098/types/sets/buckets/travel/datatypes/cities -H "content-type: application/json" -d '{ "remove" : "Toronto" }'
(to verify:
tcpdump -i any -s 0 -n 'src port 8087 or src port 8098 and host 127.0.0.1')
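For comparison, here is the same HTTP removal expressed with Python's requests library; a minimal sketch, assuming the HTTP interface on localhost:8098 as in the curl command above:

import requests

# Remove one element from the set without fetching the whole set first.
resp = requests.post(
    "http://127.0.0.1:8098/types/sets/buckets/travel/datatypes/cities",
    json={"remove": "Toronto"})
resp.raise_for_status()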
However, via the protocol buffers client, we need to perform the following steps in order to delete an element from a set:
{ok, MySet} = case riakc_pb_socket:fetch_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>) of
                  {error, {notfound, set}} -> {ok, riakc_set:new()};
                  {ok, Set} -> {ok, Set}
              end.
ModSet = riakc_set:del_element(lists:last(ordsets:to_list(riakc_set:value(MySet))), MySet).
riakc_pb_socket:update_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>, riakc_set:to_op(ModSet)).
As its name suggests, riakc_pb_socket:fetch_type retrieves the whole set. I could not find any methods in the Erlang client using protobuf to just send the delete request without retrieving the whole set first.
Is there a way to avoid fetching the whole set object via the protobuf client when deleting an element?
Update: the protocol buffers API for updating data types seems useful:
http://docs.basho.com/riak/latest/dev/references/protocol-buffers/dt-set-store/
riak_pb/src/riak_pb_dt_codec.erl
The last argument to riakc_pb_socket:modify_type (source code) is a set of changes. If you already know which element you want removed, it looks like you could, in theory, create a new empty set and build a remove operation:
Empty = riakc_set:new(Context),
Removal = riakc_set:del_element(<<"Toronto">>,Empty),
Op = riakc_set:to_op(Removal),
riakc_pb_socket:update_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>, Op).
The key here is the Context which is an opaque value generated by the server. You may be able to send the request without one, or with an empty one (<<>>), but that is probably not a Good Thing(tm). The context is how Riak determines causality. It is updated by each actor each time an action is taken and is used to determine the final consistent value. So if you send a set operation with no context it may fail or be processed out of order, especially if any other updates are happening around the same time.
In the case of the HTTP API the entire object is fetched by a coordinator to get the context, then the operation is submitted with that context.
When performing a regular get operation, you can specify head in the options to get back just the metadata, which includes the context but not the data. I haven't tested with fetch_type yet, but there may be similar functionality for convergent types. If there is, you would just need to fetch the head to get the context, and submit your operation with that context.
-EDIT-
According to the docs:
%% You cannot fetch a Data Type's context directly using the Erlang
%% client. This is actually quite all right, as the client automatically
%% manages contexts when making updates.
It would appear that you can pass a fun to riakc_pb_socket:modify_type so that you don't have to explicitly fetch the old value, but that will just fetch it behind the scenes, so you only really save a tiny bit of code.
riakc_pb_socket:modify_type(Pid,
    fun(MySet) ->
        riakc_set:del_element(lists:last(ordsets:to_list(riakc_set:value(MySet))), MySet)
    end,
    {<<"sets">>, <<"travel">>}, <<"cities">>, [create]).
I'm using Riak 2.0.2 and Riak-Erlang-Client 2.0.0. The documentation suggests that "Search is preferred for querying"; here is the full excerpt:
In general, you should consider Search to be the default choice for
nearly all querying needs that go beyond basic CRUD/KV operations. If
your use case demands some sort of querying mechanism and you're in
doubt about what to use, you should assume that Search is the right
tool for you.
There is extensive documentation on how to use Riak data types, set up bucket types, create a search index, and so on. I was hoping to find a Riak client example on http://docs.basho.com/riak/latest/dev/search/search-data-types/ but I found none.
I tried the following path.
Create a bucket type that uses a Riak data type and associate a search index with it:
riak-admin bucket-type create counters '{"props":{"datatype":"counter"}}'
riak-admin bucket-type activate counters
curl -XPUT $RIAK_HOST/search/index/scores \
-H 'Content-Type: application/json' \
-d '{"schema":"_yz_default"}'
riak-admin bucket-type update counters '{"props":{"search_index":"scores"}}'
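As a quick sanity check, you can confirm the index was created before relying on it; a sketch over the HTTP interface, assuming localhost:8098 stands in for $RIAK_HOST:

import requests

# Expect a 200 response carrying the index definition.
resp = requests.get("http://127.0.0.1:8098/search/index/scores")
print(resp.status_code, resp.json())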
Code used in the app:
Counter = riakc_counter:new().
ChristopherHitchensCounter = riakc_counter:increment(5, Counter).
{ok, Pid} = riakc_pb_socket:start("127.0.0.1",8087).
ChristopherHitchens = riakc_obj:new({<<"counters">>, <<"people">>}, <<"christopher_hitchens">>,
                                    ChristopherHitchensCounter, "application/riak_counter").
riakc_pb_socket:put(Pid, ChristopherHitchens).
At this point, I expected I could query the counter using
{ok, Results} = riakc_pb_socket:search(Pid, <<"scores">>, <<"counter:[* TO 15]">>),
io:fwrite("~p~n", [Results]),
Docs = Results#search_results.docs,
io:fwrite("~p~n", [Docs]).
But it doesn't seem to work. Any guidance on this would be appreciated.
Thanks.
UPDATE
In case someone stumbles upon a similar issue (until the Riak documentation includes an example for the Erlang client on http://docs.basho.com/riak/latest/dev/search/search-data-types/), someone on the Riak mailing list provided a link to the Riak test suite, and it turned out that riakc_pb_socket:update_type/4 is the required method to store a Riak data type. I modified the code to:
Counter = riakc_counter:new().
ChristopherHitchensCounter = riakc_counter:increment(5, Counter).
{ok, Pid} = riakc_pb_socket:start("127.0.0.1",8087).
riakc_pb_socket:update_type(Pid, {<<"counters">>, <<"people">>}, <<"christopher_hitchens">>,
    riakc_counter:to_op(ChristopherHitchensCounter)).
And now I can perform search queries on my index :)
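The same query can also be exercised outside the Erlang client, over Riak's Solr-compatible HTTP search endpoint; a sketch assuming the HTTP interface on localhost:8098 and the counter field name from the query above:

import requests

# Run the range query against the "scores" index over HTTP.
resp = requests.get("http://127.0.0.1:8098/search/query/scores",
                    params={"wt": "json", "q": "counter:[* TO 15]"})
for doc in resp.json()["response"]["docs"]:
    print(doc)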
Counters and other data types are not manipulated via riakc_obj. Refer to the documentation page here http://docs.basho.com/riak/latest/dev/using/data-types/ and select the "Erlang" tab in the examples.
Problem
I'm trying to filter a jq result to show only a substring of the original string. For example, if a jq filter grabbed the value
4ffceab674ea8bb5ec421c612536696839bbaccecf64e851dfc270d795ee55d1
I want it to return only the first 10 characters: 4ffceab674.
What I've tried
On the official jq website you can find an example that should give me what I need:
Command: jq '.[2:4]'
Input: "abcdefghi"
Output: "cd"
I've tried to test this out with a simple example in the unix terminal:
# this works fine, => "abcdefghi"
echo '"abcdefghi"' | jq '.'
# this doesn't work => jq: error: Cannot index string with object
echo '"abcdefghi"' | jq '.[2:4]'
So, it turns out most of these filters are not yet in the released version. For reference, see issue #289.
What you could do is download the latest development version and compile from source. See download page > From source on Linux
After that, if indexing still doesn't work for strings, you should at least be able to use an explode, index, implode combination, which seems to have been your plan.
Looking at the jq-1.3 manual, I suspect there isn't a solution using that version, since it offers no primitives for extracting parts of a string.
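If upgrading jq is not an option, one workaround is to do the slicing outside jq entirely. A minimal Python sketch (saved as, say, slice.py, a hypothetical name) that reads a JSON string from stdin and prints its first 10 characters:

import json
import sys

# Mimic jq's .[0:10] string slice for jq versions that lack it.
value = json.load(sys.stdin)
print(value[:10])

For example, echo '"4ffceab674ea8bb5ec421c612536696839bbaccecf64e851dfc270d795ee55d1"' | python slice.py prints 4ffceab674.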