Auto-generated keys in Riak via the Java client

Riak supports auto-generated keys when storing an object. From http://wiki.basho.com/Basic-Riak-API-Operations.html:

Store a new object and assign random key

If your application would rather leave key-generation up to Riak, issue a POST request to the bucket URL instead of a PUT to a bucket/key pair: POST /riak/bucket. If you don't pass Riak a "key" name after the bucket, it will know to create one for you.
Is it possible to do the same when using the Java client?
It seems that a key must be provided when storing an object.

Edited to update: This is now available in the Java client. It was added in the 1.0.7 client release. See: https://github.com/basho/riak-java-client/pull/168
Unfortunately ... right now the Java client doesn't support this.
Someone has opened an issue for this: https://github.com/basho/riak-java-client/issues/141
I agree that it needs to be added. We've got a number of things we're working on at the moment for the Riak 1.2 release that are slightly higher priority, but I hope to work on this and get it added in the near future.
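In the meantime (or on an older client), you can exercise the HTTP operation quoted above directly from Java. A minimal sketch using java.net.HttpURLConnection, assuming a local node on Riak's default HTTP port 8098 and a hypothetical bucket named "mybucket":

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RiakAutoKey {
    public static void main(String[] args) throws Exception {
        // POST to the bucket URL (no key) so Riak generates one for us.
        URL url = new URL("http://127.0.0.1:8098/riak/mybucket");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = conn.getOutputStream()) {
            out.write("hello riak".getBytes("UTF-8"));
        }
        // Riak replies 201 Created; the generated key is the last segment
        // of the Location response header, e.g. /riak/mybucket/<generated-key>.
        System.out.println(conn.getResponseCode()
                + " -> " + conn.getHeaderField("Location"));
    }
}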

Related

How to create an upload (large, i.e. ~400MB) bytestream service in Vaadin?

In an earlier post from a few minutes ago, I asked a "general" question regarding creating webservices in Vaadin: How can one create webservices in Vaadin 12?
However, one specific case that I mainly need to support is uploading large (e.g. ~400MB) bytestream objects over HTTPS, presumably sent to Vaadin via an HTTPS "post" command (with the payload provided, I presume, in raw binary format as a bytestream). I saw that Vaadin has built-in support for uploading files (which is essentially a post command of a bytestream, I presume?) and then I saw a reference to StreamReceiver here: https://vaadin.com/docs/v12/flow/advanced/tutorial-stream-resources.html
which sounds like a custom file importer, but I couldn't find any simple, more-or-less complete examples of how to use it. Ideally, a few quick lines of Java showing the "receiving" of the bytestream, plus a few quick lines (ideally in Java) which "post" to the receive stream's URL, would be all that's needed to show how this manual upload of bytes can be accomplished in Vaadin. (In DropWizard & Jersey, I can find such examples reasonably easily, but I'm not sure how to gain that level of control in Vaadin.)
(Very minor bonus: is there a size limit to the post command? E.g., can a bytestream of over, say, ~4GB be sent and received?)
In Vaadin the Upload API is optimised for streaming into a File (unlike handling the stream directly, as in the Servlet and JAX-RS APIs). One way is to first stream to a temp file and then, when the file is fully on the server side, handle the data from the temp file.
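A minimal sketch of that approach, assuming Vaadin Flow (12+); the route name and size cap below are illustrative assumptions:

import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.component.upload.Upload;
import com.vaadin.flow.component.upload.receivers.FileBuffer;
import com.vaadin.flow.router.Route;
import java.io.IOException;
import java.io.InputStream;

@Route("upload")
public class UploadView extends VerticalLayout {
    public UploadView() {
        FileBuffer buffer = new FileBuffer();      // streams the upload into a temp file
        Upload upload = new Upload(buffer);
        upload.setMaxFileSize(500 * 1024 * 1024);  // hypothetical ~500MB cap
        upload.addSucceededListener(event -> {
            // The file is now fully on the server; read it back from the temp file.
            try (InputStream in = buffer.getInputStream()) {
                // ... handle the bytes ...
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        add(upload);
    }
}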
Alternatively you can use the Flow Viritin add-on and its helper class UploadFileHandler, which gives you an API where you read the contents from an InputStream, the same way as with the Servlet API. See a usage example in this test.
This isn't the first time this has been asked, and I actually have a more verbose blog draft about this subject. I'll add a link to that once I get it published.

Stream remote file to client in ruby/rails 4/unicorn/nginx

I am trying to stream a file from a remote storage service (not S3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle of things, both to authenticate the client request and to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this does not work either, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but would also create new headaches, like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb, but none that amounts to a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. Since you did consider using redirect_to, I assume the service also provides a means of per-download authorization, like a unique key in a header that expires and does not expose your secret API keys, or an HMAC-signed URL with an expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails Guides.
10.2.1 Setting Custom Headers

If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying cloud services to do that, after all :)

Using an API key in Rails App

I just got an API key for a database I wish to access and want to start building my Rails app. However, I don't know where to begin with the API key. Specifically, I want to use the brewerydb data, and I am building an app where users can find the closest brewery to their location. Can anyone tell me how to get started? I am new to Rails and have never used an API before. I don't know where to begin. What file should I put the key in, etc.? I know I should probably update the Gemfile; where else?
Thanks!
Check the documentation for the API. That's all I can say. (Well... not really:)
Most APIs rely on REST or SOAP, which is basically making HTTP requests to certain URIs. An example may be
http://api.somewebsite.com/beers.json
which would return, for instance, a JSON array of certain beers and their properties.
Furthermore, more often than not, you can test APIs (those that do not require certain HTTP headers for authentication, which makes it harder) by manually constructing the URIs and opening them in your browser. This way, you can verify that your request is okay before you try it in your Rails application and cannot figure out why it's not working.
You should use the 'figaro' gem https://github.com/laserlemon/figaro
It creates an "application.yml" file in which you can add your API key.
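For example, a minimal sketch (the key name here is a hypothetical placeholder; figaro adds config/application.yml to .gitignore so the key stays out of version control):

# config/application.yml
brewerydb_api_key: "YOUR-KEY-HERE"

figaro then exposes the entry to your app as an environment variable, so you can read it as ENV["brewerydb_api_key"] instead of hard-coding the key anywhere.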

What is the standard way to handle twitter API keys in GPL'd desktop applications?

While developing a desktop application that needs to access the Twitter API, one must somehow pass the API key (the application-specific consumer key and consumer secret) for the application to the user. Twitter's API TOS states that the application's API key cannot be publicly available, and if that happens, they reset it. When the application is under the GPL, meaning the developer needs to provide the source code to the user, how would the user be able to obtain the API key without it being publicly available? Is there a standard way to handle this issue?
Thanks.
Edit:
To clarify the situation: I had been storing the keys in plain text in my code for cree.py as a conscious decision. But yesterday the Twitter support team contacted me to say they had reset my key, and their reasoning was the following:
C. You should not solicit another developer's consumer keys or consumer secrets especially if they will be stored or used for actions outside of that developer's control. Keys and secrets that are compromised will be reset by Twitter. For example, online services that ask for these values in order to provide a "tweet-branding" service are not allowed.
https://dev.twitter.com/terms/api-terms
If an application's keys are posted publicly, it allows for external parties to hijack the application's API access. This presents an enormous abuse risk, and as such we've reset your API keys. Please take care to ensure that these keys are not posted publicly again.
Thanks,
Twitter API Policy
Well, TTYtter evidently uses the honour system:
# yes, this is plaintext. obfuscation would be ludicrously easy to crack,
# and there is no way to hide them effectively or fully in a Perl script.
# so be a good neighbour and leave this the fark alone, okay? stealing
# credentials is mean and inconvenient to users. this is blessed by
# arrangement with Twitter. don't be a d*ck. thanks for your cooperation.
$oauthkey = (!length($oauthkey) || $oauthkey eq 'X') ?
"XXXXXXXXXXXXXXXXXXXXX" : $oauthkey;
$oauthsecret = (!length($oauthsecret) || $oauthsecret eq 'X') ?
"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" : $oauthsecret;
(I have replaced the actual keys with Xs, to make it a little less likely that anyone will go to the trouble to abuse them, but rest assured that they are present in full in the actual source!)
Also, I don't see anything in the Rules of the Road actually requiring you to keep these things secret: the closest thing I see is the statement "Keys and secrets that are compromised will be reset by Twitter."; they never actually say what "compromised" means, though.
I might be dense here, but why don't you store them in a configuration file, the Windows registry, etc., and get them from there? Then distribute the application without that file, and you're done.
Maybe another solution would be to use a server: the server interacts with the Twitter API, and you request information from your server with your desktop application.
That way, the API key is only stored on the server, and not just any user can get it.

Implementing a 2 Legged OAuth Provider

I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials cover.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll backwards in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and began to get a better grasp of how to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide if the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key requirements for this to work is that the client and server generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... and then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth... it was just defining the rules for how to generate the signature, because it needed to.
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI, all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
Alternatively, even if you include EVERY param in the calculation, you still run the risk of "replay attacks" where a malicious middleman or eavesdropper can intercept an API call and just keep resending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret, I think). So if you add a timestamp, make sure you send a timestamp param along with your request.
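Putting those pieces together, here is a minimal sketch in Java of the signing step (the canonical-string format, parameter names and key below are illustrative assumptions, not a standard):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {
    // Compute the HMAC-SHA256 "checksum" over a canonical request string.
    // Client and server must build this string identically (e.g. sorted params).
    static String sign(String canonicalRequest, String secretKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(canonicalRequest.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw);
    }

    public static void main(String[] args) throws Exception {
        // The timestamp is part of the signed string, so it can't be tampered
        // with either; it is also sent as a plain request param.
        String canonical = "GET&/user.json/123"
                + "&showFriends=true&showStats=true&timestamp=1340000000";
        System.out.println(sign(canonical, "my-shared-secret"));
    }
}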
If you want to make the API secure from replay attacks, you can use a nonce value (it's a 1-time use value the server generates, gives to the client, the client uses it in the HMAC, sends back the request, the server confirms and then marks that nonce value as "used" in the DB and never lets another request use it again).
NOTE: nonces are a really exact way to solve the "replay attack" problem -- timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request may be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during that entire window of time.
I think nonce values are great, but should only be needed in APIs where keeping integrity is critical. In my API, I didn't need one, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table in my DB, expose a new API to clients like:
/nonce.json
and then when they sent that back to me in the HMAC calculation, I would check the DB to make sure it had never been used before; once used, I'd mark it as such in the DB so that if a request EVER came in again with that same nonce I would reject it.
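As a toy illustration of that check (an in-memory stand-in for the DB table described above; all names are hypothetical):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class NonceRegistry {
    private final ConcurrentHashMap<String, Boolean> used = new ConcurrentHashMap<>();

    // Handed to the client via an endpoint like /nonce.json.
    public String issue() {
        String nonce = UUID.randomUUID().toString();
        used.put(nonce, Boolean.FALSE);
        return nonce;
    }

    // Returns true exactly once per issued nonce; unknown or already-used
    // nonces are rejected. replace() is atomic, so two concurrent replays
    // can't both succeed.
    public boolean consume(String nonce) {
        return used.replace(nonce, Boolean.FALSE, Boolean.TRUE);
    }
}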
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of flowing to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs it is sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested, I wrote up this entire journey and thought process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx
