I want to restrict publishers' permissions so that they can't stream to every live application on the Wowza server. With the default configuration, publishers can stream to all live applications on the server.
Let's say there are 3 publishers (publisher1, publisher2, publisher3) and 3 live applications (liveapp1, liveapp2, liveapp3). I want each publisher to be able to stream only to its own application, like below:
publisher1 => liveapp1
publisher2 => liveapp2
publisher3 => liveapp3
I tried assigning user names to the clientStreamWriteAccess parameter in the Application.xml of the related live application, but it didn't work. Normally, the value of this parameter is "*".
Is there any way to do this? Thanks.
The solution is to set up a different "publish.password" file for each application.
Each time you set up an application, create the file [install-dir]/conf/[application]/publish.password to store the usernames and passwords for that application.
Check this out for detailed instructions.
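For example, the file for liveapp1 might contain something like the following (the general format is one username/password pair per line, separated by a space; the password below is just a placeholder):
# Contents of [install-dir]/conf/liveapp1/publish.password
# username password
publisher1 secret1
Repeat this for liveapp2 and liveapp3 with their respective publishers, and each publisher can only publish to the application whose file lists them.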
I have an RDF file stored on my server. The file, or at least its content, should be uploaded to a remote GraphDB over an API.
According to the documentation there are two ways to do this. The first one is uploading it to the server files and then loading it into GraphDB. The problem here is that I am not the owner of the server GraphDB is running on, so I can't upload it to the server files. Or is there maybe another API for that?
The other way is providing a public API on my server and then triggering GraphDB to download the file from my server. But my API must be protected with credentials or a JWT, and I don't know how to set the credentials in the API call.
Isn't there a way to upload a simple graph to a repository?
There is a browser-based user interface in GraphDB that allows you to import from local files. If this is allowed on the server you are connecting to, and you only need to do this once, then I think this would be the quickest route to go.
If you want to upload a local file to GraphDB using dotNetRDF, then I would advise you to use the SPARQL 1.1 graph store protocol API via the VDS.RDF.Storage.SparqlHttpProtocolConnector as described here. The base URL you need to use will depend on the configuration of the server and possibly also on the version of GraphDB that it is running, but for the latest version (9.4) the pattern is: <RDF4J_URL>/repositories/<repo_id>/rdf-graphs/service
The connector supports HTTP Basic Authentication (which is one of the options offered by GraphDB), so if you have a user name and password you could try the SetCredentials method on the connector to specify those credentials and, if necessary, force the use of HTTP Basic Authentication by setting the global options property VDS.RDF.Options.ForceHttpBasicAuth to true.
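If it helps to see what the connector is doing on the wire, here is a minimal sketch of the same Graph Store Protocol request using Python's requests library; the server URL, repository ID, graph URI, file name, and credentials are all placeholder assumptions:

import requests

endpoint = "http://example.org:7200/repositories/myrepo/rdf-graphs/service"
graph_uri = "http://example.org/graphs/mygraph"

with open("data.rdf", "rb") as f:
    # POST adds the data to the named graph; PUT would replace it instead
    response = requests.post(
        endpoint,
        params={"graph": graph_uri},
        data=f,
        headers={"Content-Type": "application/rdf+xml"},
        auth=("username", "password"),  # HTTP Basic Authentication
    )
response.raise_for_status()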
New to Twilio. I'm developing an IT alerting function with the Twilio SMS/MMS API in Python. A postfix alias-executed program processes a message and sends essential data via Twilio MMS to designated recipients.
Media such as images are passed via the media_url parameter of Client.messages.create(), as a URL pointing to content that I must store and serve through my HTTP server.
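For context, the sending step looks roughly like this; the phone numbers, URLs, and credentials are placeholders:

from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

# media_url must be reachable by Twilio when the message is sent
message = client.messages.create(
    to="+15551234567",
    from_="+15557654321",
    body="Alert: disk usage critical on host01",
    media_url=["https://example.com/twilio-media/alert-graph.png"],
    status_callback="https://example.com/twilio-status",
)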
I have verified that the content must indeed be served from my own HTTP server, so my question is:
How do I control access to those images so that only Twilio can access them, and only for the duration of the message sending process?
My current solution, which is a kludge, is for the postfix alias-executed program to write a list of media files associated with the message, and then for my own status_callback handler to erase the files in that list when I get a "delivered" status (or when a certain time limit expires).
This is a problem because the media files are publicly accessible for however long it takes for the "delivered" status to arrive or for my timeout to occur.
I've tried various searches but no applicable security mechanism has presented itself.
I use Basic authentication and serve all my Twilio content from a dedicated, password-protected directory. Twilio seems quite happy to accept URLs with inline username:password credentials.
I think Twilio publishes a list of their IP address ranges somewhere too, so if you really want to lock your media directory down you could whitelist those and deny everything else access to that directory in your server config.
To delete them once they are processed, I would probably write a basic script that is triggered by the Twilio status webhook and adds the filename of the image that can be deleted to a database table. I think you can pass some sort of verification token for Twilio to return with the callback for additional security.
Then run another script every few minutes as a cron job (under a different user account with permission to delete files in your media directory) which reads the database, deletes any files listed from the directory, and then clears the database ready for the next run.
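A minimal sketch of the webhook side, assuming Flask and an existing SQLite table named pending_deletes (all names here are hypothetical):

import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/twilio-status", methods=["POST"])
def twilio_status():
    # Twilio posts MessageSid and MessageStatus with each status update
    sid = request.form.get("MessageSid")
    status = request.form.get("MessageStatus")
    if status == "delivered":
        # Queue the media for deletion; the cron job running as a
        # different user does the actual file removal later
        conn = sqlite3.connect("/var/lib/twilio-media/pending.db")
        conn.execute("INSERT INTO pending_deletes (message_sid) VALUES (?)", (sid,))
        conn.commit()
        conn.close()
    return ("", 204)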
Edit
Thinking about it, you can probably delete the files as soon as Twilio has queued your message, as I'm pretty sure they copy your media files to their server upon submission. Those copies are publicly accessible (but with names nobody is likely to guess). You can delete them with an HTTP DELETE request against the media resources.
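A sketch of that cleanup using the Python helper library, which issues the HTTP DELETE for you (the SIDs are placeholders):

from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

# Remove every media item Twilio stored for this message
for media in client.messages("MMxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx").media.list():
    media.delete()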
I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this also does not work, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but would also create new headaches, like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in the given scenario?
I have come across various solutions scattered around the interweb but none inspire a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. Since you did consider using redirect_to, I assume the service also provides a means of per-download authorization, like a unique expiring key in a header that does not expose your secret API keys, or an HMAC-signed URL with the expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; this is discussed in the Rails Guides:
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers
So your action code would end up being something like this:
def download
# do_auth_check
response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
redirect_to url
end
Don't use up server resources unnecessarily by streaming all those downloads through them. We are paying cloud services to do that, after all :)
For a project, I need to know when my user has finished downloading a file, so that I can delete it from my remote server. Is there a way to do that?
There are a couple of ways of doing this, some more efficient than others, but here is what I've come up with.
Download through your application
If your application is downloading/passing the file through to the user, you can trigger a function at the end of the stream to delete the file.
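For instance, a minimal sketch with Flask (the framework, chunk size, and storage path are assumptions, not from the original answer):

import os
from flask import Flask, Response

app = Flask(__name__)

@app.route("/download/<name>")
def download(name):
    # NOTE: validate name against traversal attacks in real code
    path = os.path.join("/srv/files", name)  # hypothetical storage dir

    def stream_and_delete():
        try:
            with open(path, "rb") as f:
                while chunk := f.read(64 * 1024):
                    yield chunk
        finally:
            # Runs once the client has consumed the whole stream
            # (or disconnected), so the file can be removed
            os.remove(path)

    return Response(stream_and_delete(), mimetype="application/octet-stream")

Note that the finally block also fires if the client disconnects mid-download, so in practice you may want to delete only when the full file was actually sent.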
S3 Bucket Access Logging
S3 has server access logs (http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html) that record information for each request. Depending on how your application is structured, you may be able to process these to see what's been accessed.
There may be up to a 30-60 minute delay in log availability
Other Options
There are some other options, though perhaps not ideal (without knowing the specifics of your application I don't know whether these are acceptable).
Use Object Expiration (http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectExpiration.html) to have S3 delete the files for you after a set time; see the sketch after this list
Related SO question (PHP instead of ROR, but the concepts should apply) Resumable downloads when using PHP to send the file?
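A sketch of configuring Object Expiration with boto3 (the bucket name and prefix are placeholders); objects under the prefix are deleted automatically one day after creation:

import boto3

s3 = boto3.client("s3")

# Expire anything under downloads/ one day after it is created
s3.put_bucket_lifecycle_configuration(
    Bucket="my-download-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-downloads",
                "Filter": {"Prefix": "downloads/"},
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)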
I'm a novice web developer with some background in programming (mostly Python).
I'm looking for some basic advice on choosing the right technology.
I need to serve files (MP3s) over the internet, but I need to implement some access control:
1. Files will be accessible only to authorized users.
2. I need to keep track of how many times a file was downloaded, by whom, etc.
What might be the best technology to implement this? That is, should I learn Apache, or maybe Django? Or maybe something else?
I'm looking for a 'pointer' in the right direction.
Thanks!
R
If you need to track/control the downloads, that suggests the MP3 URLs need to be routed through a Rails controller. Very doable. At that point you can run your checks, track your stats, and send the file back.
If it's a lot of MP3s, you would rather not have Rails do the actual sending of the MP3 data, as it's a waste of its time and ties up an instance. Look into X-Sendfile, where Rails can send a response header indicating the path of the file to send, and Apache will intercept it and do the actual sending.
https://tn123.org/mod_xsendfile/
http://rack.rubyforge.org/doc/classes/Rack/Sendfile.html
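Since you mentioned Python, the same offloading idea in a Django view might look like this (the auth check, logging hook, and paths are hypothetical, and Apache needs mod_xsendfile configured to honor the header):

import os
from django.http import HttpResponse, HttpResponseForbidden

MEDIA_ROOT = "/srv/mp3"  # hypothetical directory served via X-Sendfile

def download_mp3(request, filename):
    if not request.user.is_authenticated:
        return HttpResponseForbidden()

    # Record the download here (user, filename, timestamp, ...)

    response = HttpResponse(content_type="audio/mpeg")
    # mod_xsendfile intercepts this header and streams the file
    # itself, so the Python process never touches the file data
    response["X-Sendfile"] = os.path.join(MEDIA_ROOT, filename)
    return response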
You could use Django and Lighttpd as the web server. With Lighttpd you can use mod_secdownload, which enables you to generate time-limited, disposable URLs.
More info can be found here: http://redmine.lighttpd.net/projects/1/wiki/Docs_ModSecDownload
You can check for permissions in your Django (or any other) app and then redirect the user to this disposable URL if they pass the permission check.
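A sketch of generating such a URL in Python, following the classic MD5 token scheme from the linked docs (the secret, URI prefix, and file path are placeholders; newer lighttpd releases also offer HMAC-based algorithms):

import hashlib
import time

SECRET = "my-shared-secret"  # must match secdownload.secret in lighttpd
URI_PREFIX = "/dl/"          # must match secdownload.uri-prefix

def secdownload_url(rel_path):
    # rel_path is relative to secdownload.document-root and starts with "/"
    timestamp = "%08x" % int(time.time())
    token = hashlib.md5((SECRET + rel_path + timestamp).encode()).hexdigest()
    return "%s%s/%s%s" % (URI_PREFIX, token, timestamp, rel_path)

print(secdownload_url("/albums/track01.mp3"))
# e.g. /dl/<md5 token>/<hex timestamp>/albums/track01.mp3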