Boto3: Presigned URL for S3 Object Lives Too Long

After I generate a presigned URL for an S3 object with boto3, the link stays valid for much longer than the expiry I set:
url = client.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket_id,
        'Key': key
    },
    ExpiresIn=linc_exp_time
)
What am I doing wrong?

Your browser might be caching the results. You can test it in a different browser, or by using wget or curl to download rather than using a browser. -- John Rotenstein
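For example, here is a minimal sketch of such a test in Python with the requests package (the bucket, key, and short expiry are placeholders for illustration):

import time

import boto3
import requests

client = boto3.client('s3')
bucket_id = 'my-bucket'    # placeholder bucket
key = 'path/to/object'     # placeholder key

# Generate a URL with a deliberately short expiry for the test
url = client.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': bucket_id, 'Key': key},
    ExpiresIn=10,
)

time.sleep(15)  # wait until the URL should have expired

# With no browser cache involved, an expired presigned URL returns HTTP 403
response = requests.get(url)
print(response.status_code)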

Related

Crystal-lang: How to Find End URL After a Redirect?

I'm just dipping a toe in the water with Crystal at the moment and, as an exercise, trying to port one of my Python scripts across.
The script in question downloads the "latest" PDF from a URL of the form "http://somesite.com/download/latest/". When visited, that URL automatically redirects to the page for the latest download, e.g. "http://somesite.com/download/4563/".
I'm having difficulty working out how to implement this in Crystal so that I can grab the actual URL that the redirect ends up on.
In Python I do:
currenturl = urllib.request.urlopen(latesturl)
#above will redirect to URL of format http://somesite.com/download/XXXXX/
#where XXXXX is the current d/load
endurl = currenturl.geturl()
...which gives me the end URL in the "endurl" variable.
But, reading the docs for Crystal's "http/client" I can't see any way to return the actual URL that a redirect ends up on. Is it possible?
Crystal's HTTP::Client currently can't automatically follow redirects.
Please note that you're reading an outdated version of the API docs; the current version is at https://crystal-lang.org/api/latest/HTTP/Client.html (I don't think there have been relevant changes between 0.24.1 and 0.26.1, though).
But you can easily access the redirect URL from reading the Location header of an HTTP response:
response = HTTP::Client.get latesturl
endurl = response.headers["Location"]
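If the site ever chains more than one redirect, the same idea becomes a loop: request, read Location, repeat until the status is no longer a 3xx. Here is that loop sketched in Python (http.client, like Crystal's HTTP::Client, does not follow redirects on its own), for comparison with the original script:

import http.client
from urllib.parse import urljoin, urlsplit

url = 'http://somesite.com/download/latest/'  # from the question

for _ in range(10):  # guard against redirect loops
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.netloc)
    conn.request('GET', parts.path or '/')
    response = conn.getresponse()
    if response.status // 100 != 3:
        break  # not a redirect, so this is the final URL
    url = urljoin(url, response.getheader('Location'))

endurl = url
print(endurl)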

How to Authenticate Google Vision/Cloud Using ENV Variable in Ruby on Rails

My app is hosted on Heroku, so I'm trying to figure out how to use the JSON Google Cloud provides (to authenticate) as an environment variable, but so far I can't get authenticated.
I've searched Google and Stack Overflow and the best leads I found were:
Google Vision API authentication on heroku
How to upload a json file with secret keys to Heroku
Both say they were able to get it to work, but neither provides code that I've been able to get to work. Can someone please help me? I know it's probably something stupid.
I'm currently just trying to test the service in my product model leveraging this sample code from Google. Mine looks like this:
def self.google_vision_labels
  # Imports the Google Cloud client library
  require "google/cloud/vision"

  # Your Google Cloud Platform project ID
  project_id = "foo"

  # Instantiates a client
  vision = Google::Cloud::Vision.new project: project_id

  # The name of the image file to annotate
  file_name = "http://images5.fanpop.com/image/photos/27800000/FOOTBALL-god-sport-27863176-2272-1704.jpg"

  # Performs label detection on the image file
  labels = vision.image(file_name).labels

  puts "Labels:"
  labels.each do |label|
    puts label.description
  end
end
I keep receiving this error,
RuntimeError: Could not load the default credentials. Browse to
https://developers.google.com/accounts/docs/application-default-credentials for more information
Based on what I've read, I tried placing the JSON contents in secrets.yml (I'm using the Figaro gem) and then referring to it in a Google.yml file based on the answer in this SO question.
In application.yml, I put (I overwrote some contents in this post for security):
GOOGLE_APPLICATION_CREDENTIALS: {
  "type": "service_account",
  "project_id": "my_project",
  "private_key_id": "2662293c6fca2f0ba784dca1b900acf51c59ee73",
  "private_key": "-----BEGIN PRIVATE KEY-----\n #keycontents \n-----END PRIVATE KEY-----\n",
  "client_email": "foo-labels#foo.iam.gserviceaccount.com",
  "client_id": "100",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/get-product-labels%40foo.iam.gserviceaccount.com"
}
and in config/google.yml, I put:
GOOGLE_APPLICATION_CREDENTIALS = ENV["GOOGLE_APPLICATION_CREDENTIALS"]
also, tried:
GOOGLE_APPLICATION_CREDENTIALS: ENV["GOOGLE_APPLICATION_CREDENTIALS"]
I have also tried changing these variable names in both files instead of GOOGLE_APPLICATION_CREDENTIALS with GOOGLE_CLOUD_KEYFILE_JSON and VISION_KEYFILE_JSON based on this Google page.
Can someone please, please help me understand what I'm doing wrong in referencing/creating the environmental variable with the JSON credentials? Thank you!
It's really annoying that Google decided to buck de facto credential standards by storing secrets in a file instead of a series of environment variables.
That said, my solution to this problem is to create a single .env variable, GOOGLE_API_CREDS.
I paste the raw JSON blob into the .env and remove all newlines. Then in the application code I use JSON.parse(ENV.fetch('GOOGLE_API_CREDS')) to convert the JSON blob into a real hash:
The .env file:
GOOGLE_API_CREDS={"type": "service_account","project_id": "your_app_name", ... }
Then in the application code (Google OCR client as an example):
Google::Cloud::Vision::ImageAnnotator.new(credentials: JSON.parse(ENV.fetch('GOOGLE_API_CREDS')))
Cheers
Building on Dylan's answer, I found that I needed to use an extra line to configure the credentials as follows:
Google::Cloud::Language.configure {|gcl| gcl.credentials = JSON.parse(ENV['GOOGLE_APP_CREDS'])}
because the .new(credentials: ...) method was not working for Google::Cloud::Language.
I had to look in the (sparse) Ruby reference section of Google Cloud Language.
And yeah... storing secrets in a file is quite annoying, indeed.
I had the same problem with Google Cloud Speech, using the "Getting Started" doc from Google.
The above answers helped a great deal, coupled with updating my Google Speech Gem to V1 (https://googleapis.dev/ruby/google-cloud-speech-v1/latest/Google/Cloud/Speech/V1/Speech/Client.html)
I simply use a StringIO object so that Psych thinks that it's an actual file that I read:
google:
  service: GCS
  project: ''
  bucket: ''
  credentials: <%= StringIO.new(ENV['GOOGLE_CREDENTIALS']) %>

Reading File on Google Drive using Dart

I created a configuration file (a simple text file) on my Google Drive and now I would like to read it from my Chrome packaged Dart application. But I'm not able to get more information about the file than its name, size, etc.
For accessing Google Drive I use the google_drive_v2_api.
Any suggestion on how to get the contents of my configuration file would be great! Thanks!
I just did some tests in my own Chrome app, uploading and downloading a simple file:
chrome.identity.getAuthToken(new chrome.TokenDetails(interactive: true))
    .then((token) {
  OAuth2 auth = new SimpleOAuth2(token);
  var drive = new gdrive.Drive(auth)..makeAuthRequests = true;
  drive.files.insert({}, content: window.btoa('hello drive!')).then((sentMeta) {
    print("File sent! Now retrieving...");
    drive.files.get(sentMeta.id).then((repliedMeta) {
      HttpRequest request = new HttpRequest()
        ..open('GET', repliedMeta.downloadUrl)
        ..onLoad.listen((r) => print('here is the result: ' + r.target.responseText));
      auth.authenticate(request).then((oAuthReq) => oAuthReq.send());
    });
  });
});
It works, but the HttpRequest to get the content back seems heavy...
But I really recommend you take a look at chrome.storage.sync if your config file is smaller than 4 KB... If not, you could also use the Chrome SyncFileSystem API... They are both easier to use, and SyncFileSystem uses Drive as its backend.
This page on downloading files talks through the process for getting the contents of a file.
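For reference, the underlying Drive v2 flow is just two HTTP calls: fetch the file's metadata, then fetch its downloadUrl with the same authorization header. A minimal sketch in Python (the OAuth token and file ID are placeholders you would obtain from your own auth flow):

import requests

TOKEN = 'YOUR_OAUTH_TOKEN'       # placeholder OAuth2 bearer token with Drive scope
FILE_ID = 'YOUR_CONFIG_FILE_ID'  # placeholder Drive file ID
HEADERS = {'Authorization': 'Bearer ' + TOKEN}

# 1. In Drive API v2, the file's metadata includes a downloadUrl field
meta = requests.get(
    'https://www.googleapis.com/drive/v2/files/' + FILE_ID,
    headers=HEADERS,
).json()

# 2. Fetching downloadUrl (again with the auth header) returns the contents
contents = requests.get(meta['downloadUrl'], headers=HEADERS).text
print(contents)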

Best way to upload files to Box.com programmatically

I've read the whole Box.com developer API guide and spent hours on the web researching this particular question, but I can't find a definitive answer and I don't want to start building a solution if I'm going down the wrong path.
We have a production environment where, once we are finished working with files, our production software system zips them up and saves them into a local server directory for archival purposes. This local path cannot be changed.
My question is: how can I programmatically upload these files to our Box.com account so we can archive them in the cloud? Everything I've read about this involves using OAuth2 to gain access to our account, which I understand, but it also requires the user to log in. Since this is an internal process that is NOT exposed to outside users, I want to automate it; otherwise it would not be feasible for us. I have no issue creating the programs that trigger every time a new file gets saved; all I need is to streamline the Box.com access.
I just went through the exact same set of questions and found out that currently you CANNOT bypass the OAuth process. However, their refresh token is now valid for 60 days which should make any custom setup a bit more sturdy. I still think, though, that having to use OAuth for an Enterprise setup is a very brittle implementation -- for the exact reason you stated: it's not feasible for some middleware application to have to rely on an OAuth authentication process.
My Solution:
Here's what I came up with. The following are the same steps as outlined in various Box API docs and videos:
1. Use this URL: https://www.box.com/api/oauth2/authorize?response_type=code&client_id=[YOUR_CLIENT_ID]&state=[box-generated_state_security_token]
(go to https://developers.box.com/oauth/ to find the original one)
2. Paste that URL into the browser and go.
3. Authenticate and grant access.
4. Grab the resulting URL, http://0.0.0.0/?state=[box-generated_state_security_token]&code=[SOME_CODE], and note the "code=" value.
5. Open POSTMAN or Fiddler (or some other HTTP sniffer) and enter the following:
URL: https://www.box.com/api/oauth2/token
URL-encoded POST data:
grant_type=authorization_code
client_id=[YOUR CLIENT ID]
client_secret=[YOUR CLIENT SECRET]
code=[the code from step 4]
6. Send the request and retrieve the resulting JSON data:
{
  "access_token": "[YOUR SHINY NEW ACCESS TOKEN]",
  "expires_in": 4255,
  "restricted_to": [],
  "refresh_token": "[YOUR HELPFUL REFRESH TOKEN]",
  "token_type": "bearer"
}
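If you would rather script steps 5 and 6 than use POSTMAN, the token exchange is a single POST. A minimal sketch in Python with the requests package (the endpoint and field names are exactly those above; the credential values are placeholders):

import requests

CLIENT_ID = 'YOUR_CLIENT_ID'
CLIENT_SECRET = 'YOUR_CLIENT_SECRET'
CODE = 'THE_CODE_FROM_STEP_4'  # short-lived, so exchange it right away

response = requests.post(
    'https://www.box.com/api/oauth2/token',
    data={
        'grant_type': 'authorization_code',
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
        'code': CODE,
    },
)
tokens = response.json()
print(tokens['access_token'], tokens['refresh_token'])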
In my application I save both auth token and refresh token in a format where I can easily go and replace them if something goes awry down the road. Then, I check my authentication each time I call into the API. If I get an authorization exception back I refresh my token programmatically, which you can do! Using the BoxApi.V2 .NET SDK this happens like so:
var authenticator = new TokenProvider(_clientId, _clientSecret);
// calling the 'RefreshAccessToken' method in the SDK
var newAuthToken = authenticator.RefreshAccessToken([YOUR EXISTING REFRESH TOKEN]);
// write the new token back to my data store.
Save(newAuthToken);
Hope this helped!
If I understand correctly, you want the entire process to be automated so that it would not require a user login (i.e., run a script and the file is uploaded).
Well, it is possible. I am a rookie developer so excuse me if I'm not using the correct terms.
Anyway, this can be accomplished by using cURL.
First you need to define some variables: your user credentials (username and password), your client ID and client secret given by Box (found in your app), and your redirect URI and state (used for extra safety, if I understand correctly).
OAuth 2.0 is a four-step authentication process, and you're going to need to go through each step individually.
The first step is setting up a curl instance:
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL => "https://app.box.com/api/oauth2/authorize",
    CURLOPT_RETURNTRANSFER => true,
    // the content type belongs in a request header
    // (the original snippet passed this string to CURLOPT_ENCODING)
    CURLOPT_HTTPHEADER => array("Content-Type: application/x-www-form-urlencoded"),
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 30,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST => "POST",
    CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE,
));
$html = curl_exec($curl);  // the HTML text referred to below
This will return HTML text containing a request token. You will need it for the next step, so save the entire output to a variable and grep the tag with the request token (the tag has name = "request_token" and a "value" attribute which is the actual token).
In the next step you will need to send another curl request to the same URL; this time the POST fields should include the request token, username and password, as follows:
CURLOPT_POSTFIELDS => "response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&request_token=".$REQ_TOKEN."&login=".$USER_LOGIN."&password=".$PASSWORD
At this point you should also set a cookie file:
CURLOPT_COOKIEFILE => $COOKIE, (where $COOKIE is the path to the cookie file)
This will return another HTML output; use the same method to grep the token, which has the name "ic".
For the next step you're going to need to send a post request to the same url. It should include the postfields:
response_type=code&client_id=".$CLIENT_ID."&state=".$STATE."&redirect_uri=".$REDIRECT_URI."&doconsent=doconsent&scope=root_readwrite&ic=".$IC
Be sure to set the curl request to use the cookie file you set earlier like this:
CURLOPT_COOKIEFILE => $COOKIE,
and include the header in the request:
CURLOPT_HEADER => true,
At this step (if done in a browser) you would be redirected to a URL of the form described above:
http://0.0.0.0(*redirect uri*)/?state=[box-generated_state_security_token]&code=[SOME_CODE]
Grab the value of "code".
Final step!
Send a new cURL request to https://app.box.com/api/oauth2/token
This should include fields:
CURLOPT_POSTFIELDS => "grant_type=authorization_code&code=".$CODE."&client_id=".$CLIENT_ID."&client_secret=".$CLIENT_SECRET,
This will return a string containing the access token, its expiration, and the refresh token.
These are the tokens needed for the upload; read about using them here:
https://box-content.readme.io/reference#upload-a-file
Hope this is somewhat helpful.
P.S. This is for PHP cURL; it is also possible to do the same using Bash cURL.
For anyone looking into this recently, the best way to do this is to create a Limited Access App in Box.
This will let you create an access token which you can use for server-to-server communication. It's then simple to upload a file (example in NodeJS):
import box from "box-node-sdk";
import fs from "fs";

(async function () {
  const client = box.getBasicClient(YOUR_ACCESS_TOKEN);
  await client.files.uploadFile(BOX_FOLDER_ID, FILE_NAME, fs.createReadStream(LOCAL_FILE_PATH));
})();
Have you thought about creating a Box 'integration' user for this particular purpose? It seems like uploads have to be made with a Box account, and it sounds like you are trying to do an anonymous upload. I think Box, like most services (including Stack Overflow), doesn't want anonymous uploads.
You could create a system user and do the OAuth2 dance once, storing just the refresh token somewhere safe. Then, as the first step of your script waking up, use the refresh token and store the new refresh token it returns. Then upload all your files; a sketch of that flow follows.
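A minimal sketch of that wake-up flow in Python with the requests package, using the token and upload endpoints from the answers above (the token store, folder ID, and file name are hypothetical):

import json
import requests

CLIENT_ID = 'YOUR_CLIENT_ID'          # your Box app credentials
CLIENT_SECRET = 'YOUR_CLIENT_SECRET'
TOKEN_FILE = 'box_refresh_token.txt'  # hypothetical safe store for the refresh token

# Step 1: trade the stored refresh token for a fresh access token
with open(TOKEN_FILE) as f:
    refresh_token = f.read().strip()

tokens = requests.post(
    'https://www.box.com/api/oauth2/token',
    data={
        'grant_type': 'refresh_token',
        'refresh_token': refresh_token,
        'client_id': CLIENT_ID,
        'client_secret': CLIENT_SECRET,
    },
).json()

# Box rotates refresh tokens, so persist the new one immediately
with open(TOKEN_FILE, 'w') as f:
    f.write(tokens['refresh_token'])

# Step 2: upload the archive (multipart POST; parent id '0' is the root folder)
with open('archive.zip', 'rb') as f:
    requests.post(
        'https://upload.box.com/api/2.0/files/content',
        headers={'Authorization': 'Bearer ' + tokens['access_token']},
        data={'attributes': json.dumps({'name': 'archive.zip', 'parent': {'id': '0'}})},
        files={'file': f},
    )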

CouchDB Update XML Attachement

I've read at least 5 articles on this but I can't seem to get it. I have an XML file that is already in memory in the browser and I am attempting to update a document from my db, for which I already have the doc id. What is the best way of doing this? Is there support for this built into jquery.couch.js? I can't seem to find any.
I've attached some code with hard-coded values for the sake of my sanity:
var xmlTemp = this.fullscoreApp.MusicXML.document;

$.couch.db("mydb").saveDoc({
  "_id": "67e98623efefe16d27e2177b44000aee",
  "_rev": "4-830aad7c3dc9e1d5004439ed1c9196d3",
  "type": "score",
  "_attachments": xmlTemp
}, {
  success: function() {
    console.log("PLZ");
  }
});
I get a DOM 18 error...but I'm using a public server. Thoughts?
What protocol are you using to open your JavaScript file? Are you running it via a webserver (such as http://localhost) or just opening the file (which will show as file:// in the browser)?
If the latter, the browser will report DOM 18, because file:// suffers various restrictions not present for pages served by a webserver. More info from this question.
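Separately, on the update itself: CouchDB's inline _attachments field expects base64-encoded data with a content type, not a DOM document, so xmlTemp would need to be serialized and encoded first. Here is a minimal sketch of the underlying HTTP update in Python (the database URL and attachment name are hypothetical):

import base64
import requests

DB_URL = 'http://localhost:5984/mydb'        # hypothetical CouchDB instance
DOC_ID = '67e98623efefe16d27e2177b44000aee'  # doc id from the question

# Fetch the current document so the update carries the latest _rev
doc = requests.get(DB_URL + '/' + DOC_ID).json()

xml_string = '<score/>'  # the MusicXML document serialized to a string

# Inline attachments live under _attachments as base64-encoded data
doc['_attachments'] = {
    'score.xml': {
        'content_type': 'application/xml',
        'data': base64.b64encode(xml_string.encode()).decode(),
    }
}

requests.put(DB_URL + '/' + DOC_ID, json=doc)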
