Stream Audio from AWS to Rails App Using Fog/Carrierwave - ruby-on-rails

I have a Rails app that lets a user upload an mp3 to their Profile model and then stores it with AWS S3. I am trying to figure out how to make a connection via Fog and stream their song when it's clicked, similar to the functionality of Soundcloud or Bandcamp. I have been looking at the Fog docs and similar posts, but am stuck. I think I have to use send_file, but am not sure. Any info would be greatly appreciated. I plan on using jplayer or something similar to actually play the audio, just need to make that connection happen on click.
UPDATE:
I am currently trying to implement something to this effect in my Profile controller, following this doc http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/Bucket.html
bucket = s3.buckets.create('name', :grants => {
  :grant_read => [
    { :uri => "http://acs.amazonaws.com/groups/global/AllUsers" }
  ],
  :grant_full_control => [
    { :id => 'abc...mno' },              # canonical user ID
    { :email_address => 'foo@bar.com' }  # email address
  ]
})
In my view I have the following, just so I have something to look at and work with.
<audio id="song" class="audio_player" controls preload="auto">
  <source src="/audios/ignite.mp3" type="audio/mpeg">
</audio>
So there is a disconnect here, kind of feel like I am taking shots in the dark. I think a little bit of clarity will put me on the right track. I have also unsuccessfully gotten Fog to work in my console, which would help things, but is a separate thread. Thank you for any help.

I suspect you won't actually want to stream the audio through your server. It would probably be better to have the source attribute point directly at S3. You have a couple of options there. First, since you appear to have the file set to public readability, you can just link directly to its URL (I believe CarrierWave has methods to give you this URL). You could also do this without public read by using signed/expiring URLs, but that gets a bit more complicated.
All that said, if it is really important that the audio appear to be coming from your server (instead of S3), you may want to look into setting up a CNAME to S3 so that the file is still served from there but appears to come from a subdomain of your host (or similar). Hope that helps point you in the right direction, but let me know otherwise and I can try to add further specificity. Thanks!
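If you go the signed/expiring-URL route, CarrierWave can generate these for you once fog is configured with `fog_public = false`. Under the hood this boils down to S3's query-string authentication. Here is a minimal stdlib-only sketch of that scheme (the legacy V2 variant that fog used at the time); the bucket, key, and credentials are all placeholders:

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Build a time-limited S3 URL by signing "GET <expiry> <resource>"
# with your AWS secret key (legacy V2 query-string scheme).
def s3_expiring_url(bucket:, key:, access_key_id:, secret_access_key:, expires_at:)
  string_to_sign = "GET\n\n\n#{expires_at}\n/#{bucket}/#{key}"
  digest    = OpenSSL::HMAC.digest('sha1', secret_access_key, string_to_sign)
  signature = CGI.escape(Base64.strict_encode64(digest))
  "https://#{bucket}.s3.amazonaws.com/#{key}" \
    "?AWSAccessKeyId=#{access_key_id}&Expires=#{expires_at}&Signature=#{signature}"
end

url = s3_expiring_url(bucket: 'my-bucket', key: 'songs/ignite.mp3',
                      access_key_id: 'AKIA...', secret_access_key: 'secret',
                      expires_at: Time.now.to_i + 3600)
```

In practice you wouldn't hand-roll this: with CarrierWave mounted on the model, something like `profile.song.url` returns the (optionally signed) URL, which you can drop straight into the `<source src="...">` attribute.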

Related

How can I preserve storage space and load time with Active Storage?

I have a user submission form that includes images. Originally I was using CarrierWave, but with that the image is sent to my server for processing first before being saved to Google Cloud Storage, and if the images are too large, the request times out and the user just gets a server error.
So what I need is a way to upload directly to GCS. Active Storage seemed like the perfect solution, but I'm getting really confused about how hard compression seems to be.
An ideal solution would be to resize the image automatically upon upload, but there doesn't seem to be a way to do that.
A next-best solution would be to create a resized variant upon upload using something like @record.images.first.variant(resize_to_limit: [xxx, xxx]) (using the image_processing gem), but the docs seem to imply that a variant can only be created upon page load, which would obviously be extremely detrimental to load time, especially if there are many images. More evidence for this is that when I create a variant, it's not in my GCS bucket, so it clearly only exists in my server's memory. If I try
@record.images.first.variant(resize_to_limit: [xxx, xxx]).service_url
I get a url back, but it's invalid. I get a failed image when I try to display the image on my site, and when I visit the url, I get these errors from GCS:
The specified key does not exist.
No such object.
so apparently I can't create a permanent url.
A third best solution would be to write a Google Cloud Function that automatically resizes the images inside Google Cloud, but reading through the docs, it appears that I would have to create a new resized file with a new url, and I'm not sure how I could replace the original url with the new one in my database.
To summarize, what I'd like to accomplish is to allow direct upload to GCS, but control the size of the files before they are downloaded by the user. My problems with Active Storage are that (1) I can't control the size of the files on my GCS bucket, leading to arbitrary storage costs, and (2) I apparently have to choose between users having to download arbitrarily large files, or having to process images while their page loads, both of which will be very expensive in server costs and load time.
It seems extremely strange that Active Storage would be set up this way and I can't help but think I'm missing something. Does anyone know of a way to solve either problem?
Here's what I did to fix this:
1- I upload the attachment that the user added directly to my service provider (I use S3).
2- I add an after_commit callback that enqueues a Sidekiq worker to generate the thumbs.
3- My Sidekiq worker (AttachmentWorker) calls my model's generate_thumbs method.
4- generate_thumbs loops through the different sizes that I want to generate for this file.
Now, here's the tricky part:
def generate_thumbs
  [
    { resize: '300x300^', extent: '300x300', gravity: :center },
    { resize: '600>' }
  ].each do |size|
    self.file_url(size, true)
  end
end

def file_url(size, process = false)
  value = self.file # where file is my has_one_attached
  if size.nil?
    url = value
  else
    url = value.variant(size)
    url = url.processed if process
  end
  url.service_url
end
In the file_url method, we only call .processed if we pass process = true. I've experimented a lot with this method to get the best possible performance out of it.
.processed checks with your bucket whether the file already exists; if not, it generates your new file and uploads it.
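For completeness, the after_commit / worker plumbing from steps 2 and 3 might look roughly like this (a sketch only; apart from AttachmentWorker and generate_thumbs, which appear above, the class and callback names are assumptions):

```ruby
class Attachment < ApplicationRecord
  has_one_attached :file

  # Step 2: once the record (and its direct upload) is committed,
  # hand thumbnail generation off to a background job.
  after_commit :enqueue_thumbs, on: :create

  private

  def enqueue_thumbs
    AttachmentWorker.perform_async(id)
  end
end

class AttachmentWorker
  include Sidekiq::Worker

  # Step 3: the worker just loads the record and delegates to the model.
  def perform(attachment_id)
    Attachment.find(attachment_id).generate_thumbs
  end
end
```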
Also, here's another question that I have previously asked concerning ActiveStorage that can also help you: ActiveStorage & S3: Make files public
I don't know Active Storage at all. However, a good pattern for your use case is to resize the image when it comes in. For this:
Let the user store the image in Bucket1
When the file is created in Bucket1, an event is triggered. Plug a function into this event
The Cloud Function resizes the image and stores it in Bucket2
You can delete the image in Bucket1 at the end of the Cloud Function, or keep it a few days or move it to cheaper storage (to preserve the original image in case of issues). For these last two actions, you can use lifecycle rules to delete files or change their storage class.
Note: you can use the same bucket (instead of Bucket1 and Bucket2), but then a resize event is sent every time a file is created in the bucket. You can use Pub/Sub as middleware and add a filter on it so that your function is triggered only when the file is created in the correct folder. I wrote an article on this
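Sketched in Ruby with the Functions Framework, the Bucket1 → resize → Bucket2 flow might look like this (bucket names and the 800x800 limit are placeholders, and it assumes the google-cloud-storage and mini_magick gems; deploy it with a storage "finalize" trigger on the source bucket):

```ruby
require "functions_framework"
require "google/cloud/storage"
require "mini_magick"

FunctionsFramework.cloud_event "resize_image" do |event|
  payload = event.data # storage object metadata: bucket, name, etc.
  storage = Google::Cloud::Storage.new
  source  = storage.bucket(payload["bucket"]).file(payload["name"])

  # Cloud Functions only gives you writable scratch space under /tmp.
  source.download "/tmp/original"

  image = MiniMagick::Image.open("/tmp/original")
  image.resize "800x800>" # shrink only if larger than the limit
  image.write "/tmp/resized"

  # Keep the same object name so the app can derive the resized URL.
  storage.bucket("bucket2").create_file("/tmp/resized", payload["name"])
end
```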

set a video's game using youtube data api

I am looking for a way to programmatically set a video's game. By "game" I mean the video setting that makes these things appear in the video's description:
I can set and read the category using the categoryId field, e.g. "20" for Gaming. But I was unable to find any official way to set some kind of game id.
The Youtube Studio seems to perform this action to achieve the goal (shortened):
POST https://studio.youtube.com/youtubei/v1/video_manager/metadata_update
{
  "encryptedVideoId": "UEUN1xD6BFI",
  "videoReadMask": {...},
  "gameTitle": {
    "newKgEntityId": "/g/11gfhqhs78"
  },
  "context": {...}
}
And it uses something called SAPISIDHASH for authorization, which some people seem to have reverse-engineered, but before I even try to do that I wanted to see if there's an official supported way of doing this.
No, there is no supported way to do this as far as I know. An old friend of mine used the SAPISIDHASH method, and that worked like a charm. Maybe one day they will open this up for everyone.

How Do I Write Text to a Google Doc using Drive API's Ruby Client?

Hello SO Ruby/Rails/Google community!
I'm looking for the correct way of making changes to the body of a Google Doc stored in Google Drive using Drive API Ruby Client.
Let's say I've required 'google/api_client', have an authorized client and drive instance, and have a starting template written in HTML.
I have no trouble creating fresh templates and multipart-uploading them to Drive using
@drive.files.insert
to join file metadata:
@file = @drive.files.insert.request_schema.new({
  'title' => "#{Time.now.strftime('%m_%d_%Y_')}EOD",
  'description' => "The End of Day Log for #{Time.now.strftime('%m_%d_%Y')}",
  'mimeType' => 'text/html',
  'parents' => [{'id' => folder_id}]
})
and the HTML template represented by:
@media = Google::APIClient::UploadIO.new('eod_template.html', 'text/html')
Having successfully uploaded the base template to Drive and capturing its file_id, I would like users to be able to append text entries to the base template using a form on my Rails site.
Given the existence of a micropost model and form, I suspect I'll be writing a method that gets called on save and appends the submitted text to the Google Doc template, but I can't find how to do this anywhere in the Drive API documentation.
I'm looking for functionality identical to the apps-script appendText(text) method found at
https://developers.google.com/apps-script/reference/document/text
Anyone have any clever solutions? Documentation I should read? The Perfect Gem?
Thanks a mil for your time!
-B
I gather from the deafening silence that such a thing is not (easily) possible.
Bummer!
Not to be deterred, I implemented the following workaround - hope it works for you, future readers.
Instead of writing posts to a document dynamically, I let a day's worth of posts pool in my app until the end of day, at which point I write all posts to the Google Doc at once with Nokogiri. It's not great, but with some front-end trickery whereby I simulate the feel of a Google Doc, it's good enough.

Advantages and disadvantages of BLOBS (security)

A year ago, when I did simple PHP sites for people with a simple MySQL database, I was brought up to think that storing an entire image in the database was possible but a terrible idea. Instead you should store the image in the filesystem and simply store an image path in the database. I did agree with that from the start, despite my inexperience. It must keep the database light when you're backing it up to an external service, and makes it faster during actual local use. This latter point, however, is complete speculation, and I'd like someone to clarify my theories:
When you store the images associated with objects in the database as a BLOB, when you request this object, is the whole object and its attributes (including this huge amount of image information) written to memory, even when it's not needed? E.g.
2.0.0p247 :001 > Object.column_names
 => ["id", "name", "blob"]
2.0.0p247 :002 > Object.first.blob
 => # not sure what this will return! I'm guessing a matrix-like wall of image information?
2.0.0p247 :003 > Object.first.name
  User Load (0.8ms)  SELECT "users".* FROM "users" ORDER BY "users"."id" ASC LIMIT 1
 => "Kitty"
I understand that the call to Object.first.blob will take a relatively long amount of time because we're retrieving a large amount of image information. But will Object.first.name take the same amount of time because Object.first writes everything, id, name and blob all to memory? If the answer to this question is yes, that's a pretty good reason to never use BLOBS. However, if the answer is no, and rails is smart enough to only write requested attributes to memory then BLOBS suddenly become very attractive.
To be quite honest with you guys, I'm really crossing my fingers that you'll say storing images in a BLOB is fine and dandy. It'll make things so much easier. Backing up will be simple. It'll feel very nice to back up the dynamic content of the site in one 'modular' upload instead of resorting to some elaborate whenever-scheduled rake task to make sure the paths and their respective images are uploaded to an external location.
What's more, it seems absolutely impossible to make certain images private with Rails. I've searched high, I've searched low, I've asked here on SO. Got a few upvotes, but no solid response. No tutorials online. Nothing. Bags of tutorials on how to store images in the assets folder, but nothing on making images private.
Let's say I have three types of user, typeA, typeB and typeC. And let's say I have three types of images. So database schema would be as follows:
images
=> ["image_path","blob","type"]
users
=> ["name","type"]
What I want is that the users can request only the following:
typeA:
Can only view images with a type of A
Cannot view images with a type of B
Cannot view images with a type of C
typeB:
Can only view images with a type of B
Cannot view images with a type of A
Cannot view images with a type of C
typeC:
Can only view images with a type of C
Cannot view images with a type of A
Cannot view images with a type of B
And yes, I could have given you the example with two types of user and image, but I really want to make sure you understand the problem; the actual system I have in mind will have hundreds of types.
Like I say, pretty simple idea, but I've found it impossible with rails, because all images are stored in the public folder! So a typeB user can just type /assets/typeAImage.jpg and they've got it! Heck, even someone who isn't a user can do it.
The send_file method won't work for me, because I'm not sending them the image as a download per se; I'm showing them the image in the view.*
Now, using BLOBS would very neatly solve this problem if the images were stored in the database. I'm not sure of the actual syntax, but I could do something like this in a user view:
<% if current_user.type == image.type %>
  <%= image_tag image.blob #=> <img src="/assets/typeAImage.jpg" alt="..." class="..."> %>
<% end %>
And yeah, you could do exactly the same thing with a path:
<% if current_user.type == image.type %>
  <%= image_tag image.path #=> <img src="/assets/typeAImage.jpg" alt="..." class="..."> %>
<% end %>
but like I say, someone who isn't even a user could simply request /assets/typeAImage.jpg. Impossible to do this if it's stored in a BLOB.
In conclusion:
What's the problem with BLOBS? I'm running a postgres database on Heroku with dozens of users per second. So yeah, not Facebook, but not Allegory on the Pointless of Life either, so performance matters. They've also got a strong mobile following, so speed is of the essence. Will using BLOBS clash with this?
How do I display an image stored in a BLOB in a view?
And just to confirm, BLOBS will allow me to securely show secret images to certain members over https?
What about database backup speed? That'll take a hit, but I want to backup the images anyway and it's a nightly thing so who cares if it's slow? Right?
The images will be secure so long as the backup is encrypted, right? And just as passwords are stored as hashes within the database, should I store my super-secret BLOBS in an encrypted format as well? I'm thinking yes... Do you reckon bcrypt will be up to the task? I don't see why not.
Are BLOBS considered amateurish and lazy?
...and finally a bonus point (possibly outside the scope of the question):
* As I wrote this I was thinking 'yes, but showing the image in the view is downloading the image to them.' So can the send_file method be used to create private images in the way I describe, while still using the filesystem to store the images?
To answer the first question: yes, it's possible in Rails to lazy-load some attributes, but by default Active Record does not support it. There is a gem for this, though. (DataMapper supports this by default, and there is a plugin for Sequel as well.)
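Even without lazy-loading support, you can keep the blob out of memory by selecting only the columns you need (a sketch against the hypothetical images schema from the question):

```ruby
# Loads every column -- including the BLOB -- into memory:
image = Image.first

# Loads only the named columns; the BLOB never leaves the database:
image = Image.select(:id, :image_path).first
image.image_path # fine
image.blob       # raises ActiveModel::MissingAttributeError -- it was never selected
```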
For the second part: the biggest drawback of your approach is performance. Assets are best served via a fast, static web server that can load the files from the filesystem, and not through a dynamic application. Clogging up the database server to query large blobs is not recommended, especially since it's much harder to scale a database than a file system.
And for the last part: there are various options you can use to hide files from the user and only serve them when needed. One option is X-SendFile (or X-Accel-Redirect for nginx), where you specify a filename inside the returned headers in Rails, and the web server that is proxying the requests (and which also supports this header) will pick that up and serve that file. This is not a redirect, so the URL stays the same, and the file can still be hidden from normal access. Of course, for this you have to proxy your requests to Rails through a web server, which is usually already happening at least at the load-balancing level, or if you are using Passenger.
Also note that you can tell Rails to send X-SendFile headers when serving the ordinary asset files as well.
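Concretely, the X-SendFile setup is a one-line config change plus an ordinary controller action (the paths, model, and authorization check here are sketched from the question, not a drop-in solution):

```ruby
# config/environments/production.rb -- pick the header your front-end server supports:
config.action_dispatch.x_sendfile_header = "X-Sendfile"         # Apache with mod_xsendfile
# config.action_dispatch.x_sendfile_header = "X-Accel-Redirect" # nginx

# app/controllers/images_controller.rb
class ImagesController < ApplicationController
  def show
    image = Image.find(params[:id])
    # Authorize first; Rails then emits the header and the web server
    # streams the file, so it never has to live under /public.
    head :forbidden and return unless current_user.type == image.type
    send_file image.image_path, type: "image/jpeg", disposition: "inline"
  end
end
```

Note that `disposition: "inline"` renders the image in the page (e.g. inside an img tag) rather than forcing a download.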
Also see this answer.

Rails faye realtime notification system

I'm trying to build a simple Faye-based realtime notification system so I can execute certain JavaScript on certain actions. The idea is relatively simple, though I'm having problems implementing it; I'm not sure which road to take after reading the Faye documentation.
My current thoughts are:
One unique faye channel per logged-in user, so you can push a action (popup, set text) to a certain user only
One div in my app layout that I can write text to
One div in my app that holds layout for a popup
Now, I've seen the RailsCasts tutorial about Faye, but Ryan works from a controller create action. I don't want to insert stuff in the DB, just call the JS function from anywhere (building an application helper would be a good idea, I think). I would just want to do something like "execute javascript 'set_text'" and "execute javascript 'show_popup'".
What would be the best way to build such functionality with Faye? Basically I only have to:
Execute a javascript function on a certain Faye channel
To accomplish the popup and text message.
I'm a bit lost on this. Can anyone point me in the right direction, or has anyone maybe built such functionality already? Thanks in advance!
On the server side, you can just do (this needs eventmachine):
require 'faye'

# Faye's Ruby client needs an EventMachine reactor running:
EM.run do
  client = Faye::Client.new('http://localhost:9292/faye')
  client.publish('/notifications/1', 'alert(1);')
end
Or via HTTP:
require 'net/http'
require 'uri'
require 'json'

message = {:channel => '/notifications/1', :data => 'alert(1);'}
uri = URI.parse("http://localhost:9292/faye")
Net::HTTP.post_form(uri, :message => message.to_json)
Then on the client side, you can then do anything with the data.
var client = new Faye.Client('http://localhost:9292/faye');
client.subscribe('/notifications/1', function(data) {
eval(data);
});
Just replace alert(1); with whatever JS you want to execute.
