I'm using Riak 2.0.2 and the Riak Erlang client 2.0.0. The documentation suggests that "Search is preferred for querying"; here is the full excerpt:
In general, you should consider Search to be the default choice for
nearly all querying needs that go beyond basic CRUD/KV operations. If
your use case demands some sort of querying mechanism and you're in
doubt about what to use, you should assume that Search is the right
tool for you.
There is extensive documentation on how to use Riak Data Types, set up bucket types, create a search index, and so on. I was hoping to see a Riak Erlang client example on http://docs.basho.com/riak/latest/dev/search/search-data-types/ but I found none.
I tried the following approach.
Creating a bucket type that both uses a Riak data type and carries a search index:
riak-admin bucket-type create counters '{"props":{"datatype":"counter"}}'
riak-admin bucket-type activate counters
curl -XPUT $RIAK_HOST/search/index/scores \
-H 'Content-Type: application/json' \
-d '{"schema":"_yz_default"}'
riak-admin bucket-type update counters '{"props":{"search_index":"scores"}}'
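As an aside, the index-creation step can also be done from the Erlang client instead of curl. A minimal sketch, assuming riakc_pb_socket:create_search_index/2 (which, as I understand it, uses the default schema when none is given):
{ok, Pid0} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% Create the "scores" search index with the default schema:
ok = riakc_pb_socket:create_search_index(Pid0, <<"scores">>).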
Then I used this code in the app:
Counter = riakc_counter:new().
ChristopherHitchensCounter = riakc_counter:increment(5, Counter).
{ok, Pid} = riakc_pb_socket:start("127.0.0.1",8087).
ChristopherHitchens = riakc_obj:new({<<"counters">>, <<"people">>}, <<"christopher_hitchens">>,
                                    ChristopherHitchensCounter,
                                    "application/riak_counter").
riakc_pb_socket:put(Pid, ChristopherHitchens).
At this point, I expected to be able to query the counters using:
{ok, Results} = riakc_pb_socket:search(Pid, <<"scores">>, <<"counter:[* TO 15]">>),
io:fwrite("~p~n", [Results]),
Docs = Results#search_results.docs,
io:fwrite("~p~n", [Docs]).
But it doesn't seem to work. Any guidance on this would be appreciated.
Thanks.
UPDATE
In case someone stumbles upon a similar issue (until the Riak documentation includes an example for the Erlang client on http://docs.basho.com/riak/latest/dev/search/search-data-types/): someone on the Riak mailing list provided a link to the Riak test suite, and it turned out that riakc_pb_socket:update_type/4 is the method required to update a Riak data type. I modified the code above to:
Counter = riakc_counter:new().
ChristopherHitchensCounter = riakc_counter:increment(5, Counter).
{ok, Pid} = riakc_pb_socket:start("127.0.0.1",8087).
riakc_pb_socket:update_type(Pid, {<<"counters">>, <<"people">>},
                            <<"christopher_hitchens">>,
                            riakc_counter:to_op(ChristopherHitchensCounter)).
And now I can perform search queries on my index :)
Counters and other data types are not manipulated via riakc_obj. Refer to the documentation page at http://docs.basho.com/riak/latest/dev/using/data-types/ and select the "Erlang" tab on the examples.
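For reference, a minimal sketch of that data type API, reusing the bucket type and key from the question (the connection details are assumptions):
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% Build the increment locally, then ship it to Riak as an operation:
Counter = riakc_counter:increment(5, riakc_counter:new()),
ok = riakc_pb_socket:update_type(Pid, {<<"counters">>, <<"people">>},
                                 <<"christopher_hitchens">>,
                                 riakc_counter:to_op(Counter)),
%% Read it back with fetch_type rather than a KV get:
{ok, Fetched} = riakc_pb_socket:fetch_type(Pid, {<<"counters">>, <<"people">>},
                                           <<"christopher_hitchens">>),
riakc_counter:value(Fetched).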
Related
In the TinkerPop and Titan documentation, all operations are based on a sample graph. How do I create a new, empty graph to work on?
I am programming in Erlang, connecting to TinkerGraph, and planning to use Titan later in production. There is no Erlang driver for either, so I am connecting via REST. It is easy to read from the graph, but if I want to read from a user's input and then write it into the graph, for example to create a person named teddy:
[screenshot 1: the error output]
I got those errors. What is the correct way?
Thank you.
Update: for the following situation:
23> Newperson=terry.
terry
24> Newperson.
terry
If I want to add this terry, the two attempts below will not work. What's the correct way to do it?
[screenshot 2: the two failing attempts]
1. TitanGraph titanGraph = TitanFactory.open(config); will open a Titan graph without the sample data.
If you have already committed the sample data to your keyspace, you can just change the keyspace defined in your config file.
For example, if you are using a Cassandra backend you would change storage.cassandra.keyspace=xxxxxx.
You can also clear any keyspace using TitanCleanup.clear(graph);
2. As for the error you are seeing: it looks like you are labelling your vertex incorrectly. I posted the following and it worked:
{
  "gremlin" : "g.addV(label, x).property(y,z)",
  "bindings" : {
    "x" : "person",
    "y" : "name",
    "z" : "Teddy"
  }
}
A final note: when you start using Titan 1.0.0, make sure you check out this section of the TinkerPop docs. In particular, make sure to change the channelizer in the gremlin-server.yaml config to:
channelizer: com.tinkerpop.gremlin.server.channel.HttpChannelizer
Answer to my own question: construct a Body with lists:concat() or ++, then post it.
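For example, a minimal sketch of that approach using httpc from OTP's inets; the endpoint URL and the post_person/1 helper are my own assumptions, not from the original post:
%% Build the JSON body with ++ and POST it to the Gremlin Server
%% HTTP endpoint (adjust host/port to your gremlin-server.yaml):
post_person(Name) ->
    inets:start(),
    Body = "{\"gremlin\":\"g.addV(label, x).property(y,z)\","
        ++ "\"bindings\":{\"x\":\"person\",\"y\":\"name\",\"z\":\"" ++ Name ++ "\"}}",
    httpc:request(post,
                  {"http://127.0.0.1:8182", [], "application/json", Body},
                  [], []).
Called as post_person("teddy"), this sends the same request body as the JSON shown above.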
Via the HTTP API, we can delete an arbitrary element from a set without fetching the whole content:
curl -X POST http://127.0.0.1:8098/types/sets/buckets/travel/datatypes/cities -H "content-type: application/json" -d '{ "remove" : "Toronto" }'
(to verify:
tcpdump -i any -s 0 -n '(src port 8087 or src port 8098) and host 127.0.0.1')
However, via the protocol buffers client, we need to perform the following steps to delete an element from a set:
{ok, MySet} = case riakc_pb_socket:fetch_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>) of
                  {error, {notfound, set}} -> {ok, riakc_set:new()};
                  {ok, Set} -> {ok, Set}
              end.
ModSet = riakc_set:del_element(lists:last(ordsets:to_list(riakc_set:value(MySet))), MySet).
riakc_pb_socket:update_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>, riakc_set:to_op(ModSet)).
As its name suggests, riakc_pb_socket:fetch_type retrieves the whole set. I could not find any method in the Erlang protocol buffers client that just sends the delete request without retrieving the whole set first.
Is there a way to avoid fetching the whole set object via the protobuf client when deleting an element?
Update: the protocol buffers API for updating data types looks useful:
http://docs.basho.com/riak/latest/dev/references/protocol-buffers/dt-set-store/
riak_pb/src/riak_pb_dt_codec.erl
The last argument to riakc_pb_socket:update_type (source code) is a set of changes. If you already know which element you want removed, it looks like you could, in theory, create a new empty set and build a remove operation:
Empty = riakc_set:new(Context),
Removal = riakc_set:del_element(<<"Toronto">>,Empty),
Op = riakc_set:to_op(Removal),
riakc_pb_socket:update_type(Pid, {<<"sets">>, <<"travel">>}, <<"cities">>, Op).
The key here is the Context, which is an opaque value generated by the server. You may be able to send the request without one, or with an empty one (<<>>), but that is probably not a Good Thing(tm). The context is how Riak determines causality. It is updated by each actor each time an action is taken, and is used to determine the final consistent value. So if you send a set operation with no context, it may fail or be processed out of order, especially if any other updates are happening around the same time.
In the case of the HTTP API the entire object is fetched by a coordinator to get the context, then the operation is submitted with that context.
When performing a regular get operation, you can specify head in the options to get back just the metadata, which includes the context, but not the data. I haven't tested with fetch_type yet, but there may be similar functionality for convergent types. If there is, you would just need to fetch the head to get the context and submit your operation with that context.
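A minimal sketch of that head request for a regular KV get (the bucket and key here are placeholders):
%% head returns the object's metadata (vclock included) without its value:
{ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>, [head]).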
-EDIT-
According to the docs:
%% You cannot fetch a Data Type's context directly using the Erlang
%% client. This is actually quite all right, as the client automatically
%% manages contexts when making updates.
It would appear that you can pass a fun to riakc_pb_socket:modify_type so that you don't have to explicitly fetch the old value, but that will just fetch it behind the scenes, so you only really save a tiny bit of code.
riakc_pb_socket:modify_type(Pid,
    fun(MySet) ->
        riakc_set:del_element(lists:last(ordsets:to_list(riakc_set:value(MySet))), MySet)
    end,
    {<<"sets">>, <<"travel">>}, <<"cities">>, [create]).
I'm running the following Riak map phase:
-module(delete_map_function).
-export([get_keys/3]).

%% Returns bucket and key pairs from a map phase
get_keys(Value, _Keydata, _Arg) ->
    [[riak_object:bucket(Value), riak_object:key(Value)]].
And the following Riak reduce phase: http://contrib.basho.com/delete_keys.html
I keep getting this error message:
{"phase":0,"error":"function_clause","input":"{{error,notfound},{<<\"my_bucket\">>,<<\"item_key\">>},undefined}","type":"error","stack":"[{riak_object,bucket,[{error,notfound}],[{file,\"src/riak_object.erl\"},{line,251}]},{delete_map_function,get_keys,3,[{file,\"delete_map_function.erl\"},{line,7}]},{riak_kv_mrc_map,map,3,[{file,\"src/riak_kv_mrc_map.erl\"},{line,164}]},{riak_kv_mrc_map,process,3,[{file,\"src/riak_kv_mrc_map.erl\"},{line,140}]},{riak_pipe_vnode_worker,process_input,3,[{file,\"src/riak_pipe_vnode_worker.erl\"},{line,444}]},{riak_pipe_vnode_worker,wait_for_input,2,[{file,\"src/riak_pipe_vnode_worker.erl\"},{line,376}]},{gen_fsm,...},...]"}
I'm running the job via Java:
MapReduceResult mapReduceResult = RiakUtils.getPBClient().mapReduce(iq)
.addMapPhase(new NamedErlangFunction("delete_map_function", "get_keys"))
.addReducePhase(new NamedErlangFunction("delete_reduce_function", "delete"))
.execute();
I've read somewhere that I should use the filter_notfound argument in the Map phase, but I still keep getting the error even after adding it:
MapReduceResult mapReduceResult = RiakUtils.getPBClient().mapReduce(iq)
.addMapPhase(new NamedErlangFunction("delete_map_function", "get_keys"), "filter_notfound")
.addReducePhase(new NamedErlangFunction("delete_reduce_function", "delete"))
.execute();
I'm running Riak 1.3 and using the Riak Java client v1.1.0
First: I don't think deleting keys via map/reduce phases is an efficient approach. If you feed a list of keys to a map phase, Riak will first read all of the objects and only then hand them to your functions. So if you only need to delete objects, it is better to do so without the read, as sketched below.
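A hedged sketch of that direct approach with the Erlang client (the question uses the Java client, but the idea is the same; delete_keys/3 is a name of my own choosing):
%% Delete each key with a plain KV delete instead of a MapReduce job:
delete_keys(Pid, Bucket, Keys) ->
    [ok = riakc_pb_socket:delete(Pid, Bucket, Key) || Key <- Keys].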
Second: all your map/reduce functions should be written with the following cases in mind:
instead of a riak_object as Value, you can get {error, notfound}, because of the eventually consistent nature of Riak;
you can also get a deleted riak_object as Value. You can tell that the object was deleted by the special metadata flag: dict:is_key(<<"X-Riak-Deleted">>, riak_object:get_metadata(RiakObj)). A sketch handling both cases follows.
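For example, a hedged sketch of the map function with that tombstone check added (the notfound clause is explained under "Third" below):
get_keys({error, notfound}, _Keydata, _Arg) ->
    [];
get_keys(Value, _Keydata, _Arg) ->
    %% Skip tombstones left behind by deletes:
    case dict:is_key(<<"X-Riak-Deleted">>, riak_object:get_metadata(Value)) of
        true  -> [];
        false -> [[riak_object:bucket(Value), riak_object:key(Value)]]
    end.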
Third: to fix your error, you should filter notfound keys from the list:
get_keys({error, notfound}, _Keydata, _Arg) ->
    [];
get_keys(Value, _Keydata, _Arg) ->
    [[riak_object:bucket(Value), riak_object:key(Value)]].
I am trying out the Node.js Express framework and am looking for a plugin that allows me to interact with my models via a console, similar to the Rails console. Is there such a thing in the Node.js world?
If not, how can I interact with my Node.js models and data, such as manually adding/removing objects, testing methods on data, etc.?
Create your own REPL by making a JS file (e.g. console.js) with the following components:
Require node's built-in repl: var repl = require("repl");
Load in all your key variables like db, any libraries you swear by, etc.
Load the repl by using var replServer = repl.start({});
Attach the repl to your key variables with replServer.context.<your_variable_names_here> = <your_variable_names_here>. This makes the variable available/usable in the REPL (node console).
For example: If you have the following line in your node app:
var db = require('./models/db')
Add the following lines to your console.js
var db = require('./models/db');
replServer.context.db = db;
Run your console with the command node console.js
Your console.js file should look something like this:
var repl = require("repl");
var epa = require("epa");
var db = require("db");
// connect to database
db.connect(epa.mongo, function(err){
if (err){ throw err; }
// open the repl session
var replServer = repl.start({});
// attach modules to the repl context
replServer.context.epa = epa;
replServer.context.db = db;
});
You can even customize your prompt like this:
var replServer = repl.start({
  prompt: "Node Console > ",
});
For the full setup and more details, check out:
http://derickbailey.com/2014/07/02/build-your-own-app-specific-repl-for-your-nodejs-app/
For the full list of options you can pass the repl like prompt, color, etc: https://nodejs.org/api/repl.html#repl_repl_start_options
Thank you to Derick Bailey for this info.
UPDATE:
GavinBelson has a great recommendation for running with sequelize ORM (or anything that requires promise handling in the repl).
I am now running sequelize as well, and for my node console I'm adding the --experimental-repl-await flag.
It's a lot to type in every time, so I highly suggest adding:
"console": "node --experimental-repl-await ./console.js"
to the scripts section in your package.json so you can just run:
npm run console
and not have to type the whole thing out.
Then you can handle promises without getting errors, like this:
const product = await Product.findOne({ where: { id: 1 } });
I am not very experienced with node, but you can enter node on the command line to get to the node console. I then required the models manually.
Here is the way to do it, with SQL databases:
Install and use Sequelize; it is Node's ORM answer to Active Record in Rails. It even has a CLI for scaffolding models and migrations.
node --experimental-repl-await
> models = require('./models');
> User = models.User; //however you load the model in your actual app this may vary
> await User.findAll(); //use await, then any sequelize calls here
TLDR
This gives you access to all of the models just as you would in Rails active record. Sequelize takes a bit of getting used to, but in many ways it is actually more flexible than Active Record while still having the same features.
Sequelize uses promises, so to run these properly in the REPL you will want to use the --experimental-repl-await flag when running node. Otherwise, you can get bluebird promise errors.
If you don't want to type out the require('./models') step, you can use console.js - a setup file for the REPL at the root of your directory - to preload this. However, I find it easier to just type this one line out in the REPL.
It's simple: add a REPL to your program
This may not fully answer your question, but to clarify: node.js is much lower-level than Rails and, as such, doesn't prescribe tools and data models the way Rails does. It's more of a platform than a framework.
If you are looking for a more Rails-like experience, you may want to look at a more 'full-featured' framework built on top of node.js, such as Meteor, etc.
I have written some extension modules for ejabberd, most of which pass pieces of information to RabbitMQ for various reasons. All was fine until we brought the server up in staging, where we have a Rabbit cluster rather than a single box.
In order to utilize the cluster you need to pass the "x-ha-policy" parameter to Rabbit with either the "all" or "nodes" value. This works fine for the Java and Python producers and consumers, but ejabberd (using the Erlang AMQP client, of course) has me a bit stumped. The x-ha-policy parameter needs to be passed into the "client_properties" parameter, which is just the catch-all for extra parameters.
In Python with pika I can do:
client_params = {"x-ha-policy": "all"}
queue.declare(host, vhost, username, password, arguments=client_params)
and that works. However the doc for the Erlang client says the arguments should be passed in as a list per:
[{binary(), atom(), binary()}]
If it were just [{binary(), binary()}] I could see the key/value relationship, but I am not sure what the atom would be there.
Just to be clear, I am a novice Erlang programmer, so this may be a common construct that I am simply not familiar with; no answer would be too obvious.
I found this in amqp_network_connection.erl, which looks like a wrapper to set some default values:
client_properties(UserProperties) ->
    {ok, Vsn} = application:get_key(amqp_client, vsn),
    Default = [{<<"product">>, longstr, <<"RabbitMQ">>},
               {<<"version">>, longstr, list_to_binary(Vsn)},
               {<<"platform">>, longstr, <<"Erlang">>},
               {<<"copyright">>, longstr,
                <<"Copyright (c) 2007-2012 VMware, Inc.">>},
               {<<"information">>, longstr,
                <<"Licensed under the MPL. "
                  "See http://www.rabbitmq.com/">>},
               {<<"capabilities">>, table, ?CLIENT_CAPABILITIES}],
    lists:foldl(fun({K, _, _} = Tuple, Acc) ->
                        lists:keystore(K, 1, Acc, Tuple)
                end, Default, UserProperties).
Apparently the atom describes the value type. I don't know the available types, but there's a chance that longstr will work in your case.
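If the goal is the mirrored queue, note that your pika example passes x-ha-policy as a queue.declare argument, and the Erlang client's queue.declare arguments field takes that same [{binary(), atom(), binary()}] table shape. A hedged sketch, assuming the records from amqp_client.hrl and a queue name of my own choosing:
-include_lib("amqp_client/include/amqp_client.hrl").

%% Declare a mirrored queue by passing x-ha-policy in the arguments table:
declare_ha_queue(Channel) ->
    Declare = #'queue.declare'{
        queue     = <<"my_queue">>,  %% hypothetical queue name
        arguments = [{<<"x-ha-policy">>, longstr, <<"all">>}]
    },
    #'queue.declare_ok'{} = amqp_channel:call(Channel, Declare).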