Cron job to sync comments with Disqus using OAuth

I'm trying to set up a cron job that syncs comments between Disqus and my database.
Everything is OK with the basic API, but I also need to store IP addresses and emails in my local DB. Reading the documentation, I found out that I need to use OAuth and declare a specific scope in order to get that "confidential" data.
So I set up a script that does everything, and it actually works: everything is fine if I access the test page in my browser, trigger the authentication, and ALLOW Disqus to access my account.
The problem is that I can't do this manually every 10 minutes. I need this to work as a cron job on my Linux web server, but it doesn't work: of course my cron job can't click the ALLOW button, etc.
Am I missing something? Is this a dumb question? :-)
Thanks in advance

Your API application includes an administrator access token (it doesn't expire, so keep it secret!) for performing functions like this, so you don't need to authenticate constantly. There are two things you need to do:
1. Get your admin access token from your application's details page here: http://disqus.com/api/applications/ - then use it to authenticate in your server-side script.
2. On the same page, go to the settings page and change the default permissions scope from "Read & Write" to "Read, Write & Manage Forums".
This will make sure you get all the sensitive data you need synced up.
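For illustration, here's a minimal sketch of what the cron-driven script might look like (it assumes the standard posts/list endpoint and parameter names from the Disqus API docs; the forum shortname, environment variables, and the email/IP field names are placeholders - double-check the field names against your actual responses):

// sync.ts - run from cron, e.g.: */10 * * * * node sync.js
const API_SECRET = process.env.DISQUS_API_SECRET!;     // your application's secret key
const ACCESS_TOKEN = process.env.DISQUS_ACCESS_TOKEN!; // admin access token from the app details page
const FORUM = "your-forum-shortname";                  // placeholder forum shortname

async function syncComments(): Promise<void> {
  const url = new URL("https://disqus.com/api/3.0/posts/list.json");
  url.searchParams.set("forum", FORUM);
  url.searchParams.set("api_secret", API_SECRET);
  url.searchParams.set("access_token", ACCESS_TOKEN);
  url.searchParams.set("limit", "100");

  const res = await fetch(url);
  const data = await res.json(); // Disqus wraps results in { code, response }
  for (const post of data.response) {
    // With the elevated scope the payload should include the author email and IP address;
    // adjust these field names to whatever your responses actually contain.
    console.log(post.id, post.author?.email, post.ipAddress);
    // ...upsert the post into your local database here
  }
}

syncComments().catch((err) => { console.error(err); process.exit(1); });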

Related

Bundle context and variable propagation

I'm looking for a way of propagating information between the authentication script I've set up in my Zapier app and the different actions/triggers I have.
For now, it would be a URL defined in a custom Zapier form when authenticating a user, which I could then share across all the action/trigger scripts so they can make their calls properly, via some kind of context variable (which apparently doesn't exist in bundle).
I guess the global environment variable isn't the way to go, since it isn't bound to a specific Zap but applies to all usages of the app.
Thank you for your help!
David here, from the Zapier Platform team. Great question!
If you're looking for data that will be unique to each user, but consistent across each of that user's Zaps, you're looking for auth fields! They're filled out when a user authenticates (enters their password, connects OAuth, etc.) and are accessible to all Zaps that use that auth via bundle.authData. A user might have multiple auths (in the case of multiple accounts with your service), and each one will have its own version of the auth fields.
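For example, a trigger or action's perform function can read those fields straight off bundle.authData - here's a minimal sketch (base_url is just an illustrative auth field name, and the API path is made up):

// Sketch of a Zapier perform function using auth fields (illustrative names)
const perform = async (z: any, bundle: any) => {
  const response = await z.request({
    // base_url was collected on the auth form; access_token came from the OAuth flow
    url: `${bundle.authData.base_url}/api/items`,
    headers: { Authorization: `Bearer ${bundle.authData.access_token}` },
  });
  return response.data; // parsed JSON body (response.json on older platform-core versions)
};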
Hope this helps. Let me know if you've got any other questions!

Box API OAuth2: multiple redirect_uris, long-lasting refresh token

I have two questions about Box's OAuth2 API in a testing environment.
Is it possible to have multiple redirect_uri addresses? I'd like to use one address for production (e.g., https://my_site.com/box_redirects_here), one for ongoing development (http://localhost:8000/box_redirects_here), and one for automatic UI tests (http://localhost:8001/box_redirects_here). As far as I can see, the only way to do that would be to create three different Box applications - is there an easier way? BTW, both Dropbox and Google Drive support multiple redirect URIs.
I have a set of automatic tests that I'd like to run a few times a day. The challenge I'm facing is that every time I run these tests, my refresh_token is invalidated, and I can't use it again - which means I can't run the same set of tests a few hours later without manually getting a new token. One solution would be to save the refresh token, for example in a file, so I could reuse it across testing sessions. But:
It's really cumbersome.
If different developers run these tests from different machines with no common file system, that doesn't really work.
Again, for whatever reason this doesn't seem to be an issue with Google Drive or with Dropbox.
This (having multiple redirect URIs) is not currently possible, and I agree it would be nice.
Your best option is to save the access/refresh token pair to a file or a database (in the event that there's no common filesystem). The OAuth2 spec grants implementers wide latitude on how they issue refresh tokens, if they issue them at all (I don't think Dropbox does). While Box's implementation makes integration testing a bit challenging, I think it ultimately hews most closely to the spec's recommendations.
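A minimal sketch of that approach, persisting the pair to a JSON file between test runs (the token endpoint is Box's standard OAuth2 one; the file path and environment variable names are placeholders - swap the file for a database row if your test runners don't share a filesystem):

import { readFileSync, writeFileSync } from "node:fs";

const TOKEN_FILE = "box_tokens.json"; // placeholder path shared by your test runs
const CLIENT_ID = process.env.BOX_CLIENT_ID!;
const CLIENT_SECRET = process.env.BOX_CLIENT_SECRET!;

// Exchange the stored refresh token for a new access/refresh pair, then persist the
// new pair immediately, since Box invalidates the old refresh token as soon as it's used.
async function refreshBoxTokens(): Promise<string> {
  const { refresh_token } = JSON.parse(readFileSync(TOKEN_FILE, "utf8"));
  const res = await fetch("https://api.box.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
    }),
  });
  const tokens = await res.json(); // contains the new access_token and refresh_token
  writeFileSync(TOKEN_FILE, JSON.stringify(tokens)); // save the new pair for the next run
  return tokens.access_token;
}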
For your first question, you might be able to get close to what you want by using the redirect_uri query parameter. Although you won't be able to supply an arbitrary redirect URI, you can give one that has the same base URL as the redirect URI in your app console.
From the OAuth tutorial:
Wildcard redirect_uri values are also accepted in the request as long as the base url matches the URI registered in the application console. A registered redirect_uri of https://www.myboxapp.com can be dynamically redirected to https://www.myboxapp.com/user1234 if passed into the request redirect_uri parameter.
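In other words, you can build the authorization URL with a per-environment redirect_uri as long as it shares the registered base URL. A quick sketch (the authorize endpoint path may differ slightly between docs versions, and the URIs are the ones from the quote above):

// Build the authorization URL with a dynamic redirect_uri under the registered base URL
const authorizeUrl = new URL("https://app.box.com/api/oauth2/authorize");
authorizeUrl.searchParams.set("response_type", "code");
authorizeUrl.searchParams.set("client_id", process.env.BOX_CLIENT_ID!);
authorizeUrl.searchParams.set("redirect_uri", "https://www.myboxapp.com/user1234"); // must share the registered base URL
authorizeUrl.searchParams.set("state", "some-csrf-token"); // placeholder
console.log(authorizeUrl.toString());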
For your second question, John is right - Box invalidates a refresh token after it has been used. Although this can be annoying, it's also more secure.

How to authenticate Rails application using token from 3rd party API?

I have a Rails application that makes several user-specific calls to a third-party API. Users interact with a lot of data in the course of filling out a survey, and their progress is stored in HTML5 localStorage until they reach the end of the survey, at which point the data is saved in a local database and localStorage is cleared.
The API calls require a token tacked onto the end as an "auth=" parameter. Right now, I have the user log into my app with their username and password to that service, POST those credentials to the "sessions" call of that API, and get a token back in JSON. I store that token in a variable in the controller, and use it to make the successive API calls and present the user's data in my app, etc. etc.
I've learned quite a bit about Rails, but next to nothing about sessions or authentication. Generally speaking, is there anything more I need to do for this to be a secure scenario? I feel like I'm missing something.
Assuming the user's username/password combination for the 3rd-party service doesn't hit your servers, this seems OK to me.
If your servers see the user's credentials, that's not particularly cool. Instead use OAuth to get 3rd party sign in, and use the token to make requests on behalf of the user. You can usually keep the whole session on the client if you want to avoid saving users to the database.
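As a rough sketch of the client-side version of that (the API host, storage key, and the auth= parameter naming are placeholders modelled on your description):

// Client-side sketch: keep the third-party token in sessionStorage after the OAuth
// redirect completes, and append it to each API call as the auth= parameter.
function apiGet(path: string): Promise<unknown> {
  const token = sessionStorage.getItem("api_token"); // placeholder storage key
  if (!token) {
    throw new Error("Not authenticated yet");
  }
  const url = `https://api.example.com${path}?auth=${encodeURIComponent(token)}`; // placeholder host
  return fetch(url).then((res) => res.json());
}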
Storing progress in localStorage sounds fine, by the way. To preserve values, you can make the pages of the form tabs (i.e., hide the previous form rather than loading a new page) and use:
autocomplete="on"
to signify that the values should be restored to what they were. Try that before writing code to save things to localStorage.

XPages Social Business Toolkit

I am trying to implement XPagesSBT on localhost.
I have followed this article http://heidloff.net/home.nsf/dx/12152011034545AMNHECAP.htm and the SBT document by Niklas, and I was trying to implement Dropbox OAuth.
I have also placed http://localhost/XPagesSBT.nsf/ and http://localhost/WebSecurityStore.nsf in the root folder,
but I still get this error:
Error while executing JavaScript action expression
Script interpreter error, line=1, col=26: Error calling method 'isAuthenticated()' on java class 'com.ibm.xsp.extlib.sbt.services.client.endpoints.DropboxEndpoint'
No application is registered with id XPagesSBT and provider Dropbox
if(!#Endpoint("dropbox").isAuthenticated()) {#Endpoint("dropbox").authenticate(true);}
Do I need to make any other configuration/setup to the XPagesSBT db, or won't it work with localhost?
I don't remember exactly anymore, but the blog entry of mine that you linked says you shouldn't use Anonymous:
"Additionally there are a couple of security related settings which are important to understand. First of all you need to assign access to the document with the application keys to the ID with which you signed the two NSFs. In the screenshot above I've entered both OpenNTF servers and my own user ID. When you use the web UI to do this these names are added to the document in an authors field and a readers field.
In the last step you need to configure the ACL of the security store. Anonymous must not have access to this database. All users who you want to be able to use the Social Enabler OAuth functionality need to have author access. This is so that their user keys can be stored in this database so that they only have to do the OAuth dance once."
It should work on localhost. It looks like a configuration issue with SBT not being able to read the security tokens from the websecuritystore.nsf. Did you create the Dropbox application key with an admin ID and sign the websecuritystore with the correct ID?
Padraic

Setting up a private beta for a website

I'm trying to set up a "private beta" for a site that I'm working on. The site uses OpenID. I don't want anyone to even browse the pages if they aren't part of the beta. What's the best way to implement this? Any suggestions?
For example:
When the site goes live, users will go to http://www.mydomain.com which will not require them to log in.
For the beta I want to restrict access. Users that go to http://www.mydomain.com will be redirected to a login page. Anyone attempting to access ANY PART OF THE SITE who is not authenticated will be redirected back to the login page.
I could stick [Authorize] attributes all over my controller actions, but that seems stupid.
If you're using ASP.NET MVC, it comes with authentication/authorization out of the box. You should be able to use that to set up authentication on your site.
Alternatively, you could use app server settings - IIS lets you set up a username/password on a specific site it's serving, regardless of what the actual application may do. If you have access to the app server, this might be the best solution.
If you're using IIS 6, you can set up authorization easily. Right-click on your site > Properties > Directory Security tab > Authentication and Access Control > Edit, and enter a username/password of your choice. Done.
The real question is how are they being invited to the private beta?
You could set up a password which drops a cookie, much like serverfault.com does.
OR
If you know who you are inviting: you could add them to the system beforehand using the email/login information that you already know about them (assuming you are inviting them via email).
A while ago I implemented a feature in a web application that gave us the ability to block access to the full website unless the user was an administrator (which in our case meant that the user account was a member of a specific group in Active Directory).
It was based on two things. First, all pages in the web application inherited not directly from the Page class, but from a custom page class in our web application. Second, we had a value like this in the appSettings section of the web.config file:
<add key="adminaccessonly" value="0" />
The custom page class would check that value when loading. If it was not 0 it would redirect to a page (that did not inherit the same custom page class, though) informing the user that "the site is not available right now". If the value was 0 the page would load as usual.
In that application we used this to be able to take the site "offline" when we deployed a new version, giving us some time to verify that all was good before we let in the users again.
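The same pattern carries over to other stacks; as a purely illustrative, framework-agnostic sketch in plain Node/TypeScript (not the ASP.NET code described above):

import { createServer } from "node:http";

const ADMIN_ACCESS_ONLY = process.env.ADMIN_ACCESS_ONLY === "1"; // the appSettings-style flag

createServer((req, res) => {
  // Every request passes through this shared check, like the custom base page class did.
  if (ADMIN_ACCESS_ONLY && req.url !== "/offline") {
    res.writeHead(302, { Location: "/offline" }); // "the site is not available right now"
    res.end();
    return;
  }
  res.end("normal page content");
}).listen(3000);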
The best ways are an invitation system (based on invitation codes) or manually confirming access after a profile is created in your system, IMHO.
Or you could host the site on a private server, and set up a VPN to use it. Depending on your resources and needs this may be the easiest and most secure way to do what you want without modifying your codebase.
Or, alternatively, you could use Apache or IIS to force authentication on access to the website directory, keeping the authentication info in .htaccess for a while.
Even though you use OpenID authentication, you may still need some form of authorization mechanism. The simplest form would be a user-roles system in your database that assigns different roles to users.
In your case, just assign the private_beta role to your private beta invitees and ensure, via your authorization mechanism, that all users have the private_beta privilege before they may continue.
If you don't want to provide authorization for the public site (where everyone can do everything once authenticated), then you may only need to do some quick-and-dirty post-processing (for the private beta only) on your OpenID-authenticated users to check them against a short list (which you can store in a text file).
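For example, a quick sketch of that post-processing check (the file name and the use of an email address as the identifier are just placeholders):

import { readFileSync } from "node:fs";

// After OpenID authentication succeeds, only let users through if they're on the beta list.
function isBetaTester(email: string): boolean {
  const allowed = readFileSync("beta_whitelist.txt", "utf8") // one address per line
    .split("\n")
    .map((line) => line.trim().toLowerCase())
    .filter(Boolean);
  return allowed.includes(email.trim().toLowerCase());
}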
