Inserting record in Firebase DB from Storage - firebase-realtime-database

I need to store some images in Firebase Storage, and I was wondering if there's any way to add a record with the download URL to the Realtime DB whenever an image is added to Storage.
I don't necessarily want to do it from an app; is there a way to add a record to the DB just by adding an image to Storage from the console?
I was trying to retrieve the image URLs directly from Storage, but in some cases it proved tedious, and I thought that putting the URLs in the DB would be easier. It was also a good opportunity to try the Realtime DB.

You can use Cloud Storage Triggers for Cloud Functions, which will add the URL to the database after a file is uploaded, like this:
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();
exports.updateDb = functions.storage.object().onFinalize(async (object) => {
  // read the object data and add the required record to the Firebase RTDB
  await admin.database().ref("images").push({ path: object.name });
});
You cannot get Firebase Storage download URLs with the ?token= parameter using the Admin SDK, so also check out How can I generate access token to file uploaded to firebase storage?.
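If a signed URL (rather than a tokenized download URL) works for you, the Admin SDK can generate one inside the function. A minimal sketch, assuming the function's default credentials can sign and with an arbitrary placeholder expiry:
// Inside the onFinalize handler: generate a signed URL with the Admin SDK
const bucket = admin.storage().bucket(object.bucket);
const [url] = await bucket.file(object.name).getSignedUrl({
  action: "read",
  expires: "2030-01-01", // placeholder expiry; adjust to your needs
});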
If you don't want to use Cloud Functions, then you can simply fetch the download URL and update the database directly from the client SDK.
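For example, with the modular web SDK it might look like this (a minimal sketch; the images path and the saveDownloadUrl name are placeholders):
import { getStorage, ref, getDownloadURL } from "firebase/storage";
import { getDatabase, ref as dbRef, push } from "firebase/database";

// Fetch the tokenized download URL for a file and record it in the RTDB
async function saveDownloadUrl(filePath) {
  const url = await getDownloadURL(ref(getStorage(), filePath));
  await push(dbRef(getDatabase(), "images"), { path: filePath, url });
}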

Related

Uploading image with vuejs frontend

I'd like to implement uploading a profile picture for users. I'm using a VueJS frontend with a Rails API. What I'm trying to do is upload the image using only the frontend. I'd like the file to get uploaded without any API calls. I could then store the location of the file in the picture attribute in the backend and retrieve it. Is that possible? I'm also using the Element library.
<el-upload :http-request="addAttachment">
<el-button size="small" type="primary">Click Upload</el-button>
</el-upload>
What you are looking for is called direct uploads or browser-based uploads.
This needs support from the storage service you are using; for example, it is possible with S3 and GCS.
Upload without any API calls? Not sure about that. I once had to make a small API call to get the signature key and use it with POST params to upload the file to the storage service (GCS).
Once the upload response is returned, you then might want to write the file path to your db.
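A rough sketch of that flow in the browser, assuming a hypothetical /upload-policy endpoint on the Rails API that returns a signed POST policy (a url plus form fields):
// Hypothetical flow: fetch a signed POST policy, then upload straight to storage
async function directUpload(file) {
  const policy = await (await fetch("/upload-policy")).json();
  const form = new FormData();
  Object.entries(policy.fields).forEach(([k, v]) => form.append(k, v));
  form.append("file", file); // the file part must come after the policy fields
  const res = await fetch(policy.url, { method: "POST", body: form });
  if (!res.ok) throw new Error("Upload failed");
  return policy.fields.key; // the file path to report back to the Rails API
}
The returned key is then what you could store in the picture attribute on the backend.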

What approach to take when saving json locally on ios device?

Currently my app pulls data from a custom API, parses it, and saves the data to multiple arrays.
I am using AWS RDS to store all the data that the API serves, and AWS EC2 to host the API itself.
The problem I have run into is that each download of the API data is ~1 MB, and AWS charges $0.09/GB of data transfer. I need to lower costs, so I can't have my app pulling the API data every time the refresh function is called. (My API updates every 4 hours; if users refresh the app before my API has updated, the refresh function will do nothing.)
My current idea to solve this is either:
(1) download the JSON data onto the device, then parse and save the offline data to arrays, or
(2) download and parse it into arrays, then save those arrays locally (from searching, I believe I need to use NSKeyedArchiver or UserDefaults?).
I am not sure which approach is best.

Rails: Best way to allow users to upload images to either a Dropbox linked folder or "our" storage on Amazon S3

I am working on a project where the user joins a "stream". During stream setup, the person who is creating the stream (the stream creator) can choose to either:
Upload all photos added to the stream by members to our hosting solution (S3)
Upload all photos added to the stream by members to the stream creator's own Dropbox authenticated folder
In the future I would like to add more storage providers (such as Drive, Onesky, etc.).
There are a couple of different questions I have regarding how to solve this.
1. What should the structure be in the database for photos? I currently only have photo_url, but that won't be easy to manage from a data perspective with pre-signed URLs and when there are different ways a photo can be uploaded (S3, Dropbox, etc.).
2. How should the access tokens for each storage provider be stored? Remember that only the stream creator's access_token will be stored, and everyone on the stream will share that token when uploading photos.
3. I will add iOS and web clients in the future that will do a direct upload to the storage provider and bypass the server, to avoid a heavy load on the server.
As far as database storage, your application should dictate the structure based on the interface that you present both to the user and to the stream.
If you have users upload a photo and they don't get to choose the URI, and you don't have any hierarchy within a stream, then I'd recommend storing just an ID and a stream_id in your main photo table.
So at a minimum you might have something looking like
create table photos(id integer primary key, stream_id integer references streams(id) not null);
But you probably also want description and other information that is independent of storage.
The streams table would have all the generic information about a stream, but would have a polymorphic association to a class dependent on the type of stream. So you could use that association to get an instance of S3Stream or DropBoxStream based on what actual stream was used.
That instance (also an ActiveRecord resource) could store the access key, and for things like dropbox, the path to the folder etc. In addition, that instance could provide methods to construct a URI given your Photo object.
If a particular technology needs to cache signed URIs, then say the S3Stream object could reference a S3SignedUrl model where the URIs are signed.
If it turns out that the signed URL code is similar between DropBox and S3, then perhaps you have a single SignedUrl model.
When you design the iOS and Android clients, it is critical that they are not given access to the stream owner's access tokens. Instead, you'll need to do all the signing inside your server app. You wouldn't want a compromised device to expose the access token, creating billing problems as well as privacy exposures.
Hope this helps.
We have set up a lot of Rails applications with different kinds of file storage behind them.
1. Yes, just a URL is not manageable in the future. To save a lot of time you could use gems like carrierwave or paperclip; they handle all the thumbnail generation and file validation. One approach is to upload the file from the client directly to S3 or Dropbox into a tmp folder and just tell your Rails app "Hey, here is the URL of a newly uploaded file", and paperclip or carrierwave will take care of the thumbnail generation and storage. (Example for paperclip)
2. I don't know exactly how your stream works, so I cannot give a good answer to this -.-
3. With the setup I mentioned in 1., you should upload from your different clients directly to S3 or Dropbox etc., and after uploading, the client tells the Rails backend that it should import the file from that URL. (And before paperclip or carrierwave finish their processing, you could use the tmp URL of the file to display something directly in your stream.)

How to manually update Relay store without querying server?

Let's say I have some data that I obtained through a non-GraphQL endpoint, for example from a third-party server (Firebase).
How do I put the data into the local relay store?
Is there an easy way to add / edit / overwrite data to relay store directly without going through query or mutation?
A non-public RelayStoreData field is accessible from the Relay.Store instance, and it gives you direct access to the records contained in the store. I haven't done anything with this myself, but you could try modifying the cache directly like this:
RelayStore._storeData._cachedStore._records[recordId][fieldName]=newValue
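Wrapped as a helper, that hack might look like this (illustrative only; it relies on private Relay Classic internals that can change between versions):
// WARNING: touches private Relay Classic internals; may break on upgrade
function writeLocalField(recordId, fieldName, newValue) {
  const records = Relay.Store._storeData._cachedStore._records;
  if (records[recordId]) {
    records[recordId][fieldName] = newValue; // overwrite the cached field value
  }
}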
I would use Relay without a server: define your GraphQL schema locally and make your API requests from that schema, the same way you would query a database from a server-side schema.
https://github.com/relay-tools/relay-local-schema

Get shared buckets from Google Cloud Storage using Rails

I want to download some reports from Google Cloud Storage, and I'm trying the Gcloud gem. I managed to connect successfully, and now I am able to list my buckets, create one, etc.
But I can't find a way to programmatically get files from buckets that are shared with me. I got an address like gs://pubsite... and I need to connect to that bucket to download some files. How can I achieve that?
Do I need to have billing enabled?
In order to list all the objects in a bucket, you can use the Google Cloud Storage Objects list API.
You need to provide the bucket ID, and you must have read access to the bucket. You can try the API before implementing it in your code.
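A quick way to try it is to call the JSON API directly. A sketch, where accessToken is a placeholder for an OAuth 2.0 token with read access to the bucket:
// List the objects in a bucket via the Cloud Storage JSON API
async function listObjects(bucket, accessToken) {
  const res = await fetch(
    `https://storage.googleapis.com/storage/v1/b/${bucket}/o`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const { items = [] } = await res.json();
  items.forEach((obj) => console.log(obj.name)); // object paths in the bucket
}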
I hope that helps.
You do not need billing enabled to download objects from a GCS bucket. Operations on GCS buckets are billed to the project that owns the bucket. You only need to enable billing in order to create a new bucket.
Downloading a single file using the Gcloud gem looks like this:
require "gcloud"
gcloud = Gcloud.new
storage = gcloud.storage
bucket = storage.bucket "pubsite"
file = bucket.file "somefile.png"
file.download "/tmp/somefile.png"
There are some examples at http://googlecloudplatform.github.io/gcloud-ruby/docs/v0.2.0/Gcloud/Storage.html
