I have never worked with the Twitter API, so I have no idea if this is possible. What I want to do is to trigger a URL every time something new happens on a user's timeline. Is this possible, and if so, how do I do it?
Yes, but it takes a bit of work. You need to use the Twitter streaming API, specifically the follow option of the statuses/filter endpoint.
From Twitter's documentation:
Example: Create a file called ‘following’ that contains, exactly and excluding the quotation marks: “follow=12,13,15,16,20,87” then execute:
curl -d @following https://stream.twitter.com/1/statuses/filter.json -uAnyTwitterUser:Password
Basically you pass a list of user ids you want to follow, open a long-lived connection, and Twitter sends back anything those users post publicly. You can monitor this connection and do things whenever a user posts something.
You have another option, called a User Stream, which gives you much more information about everything a user does, but it requires the user's approval and a much more complex authentication process via OAuth. So I would only use that if you need it.
How you're going to keep a persistent connection open to Twitter depends very much on your programming language and software. In Python, I really like tweepy, but even for Python there are several different libraries, or you can just use curl or pycurl and do it yourself like in the example above.
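If you do go the tweepy route, a rough sketch of the follow/filter flow looks like this (written against tweepy 3.x, whose classes differ between versions; the keys, user ids, and callback URL are placeholders you would fill in yourself):
import requests
import tweepy

CALLBACK_URL = "https://example.com/new-tweet"  # the URL you want to trigger

class TimelineListener(tweepy.StreamListener):
    def on_status(self, status):
        # Called every time one of the followed users posts publicly.
        requests.post(CALLBACK_URL, json={
            "user": status.user.screen_name,
            "text": status.text,
        })

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

stream = tweepy.Stream(auth=auth, listener=TimelineListener())
stream.filter(follow=["12", "13", "15", "16", "20", "87"])  # user ids as strings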
What is the best-supported approach for tracking logged-in Usernames/Ids in App Insights telemetry?
A user with username "JonTester1" said some pages he visited 4 hours ago were really slow. How can I see everything JonTester1 did in App Insights to troubleshoot/know which pages he's referring to?
It seems like User Id in App Insights is some Azure-generated anonymized value like u7gbh that Azure ties to its own idea of the same user (through a cookie?). It doesn't know about our app's usernames at all.
I've also seen a separate field in App Insights called Auth Id (or user_AuthenticatedId in some spots), which sometimes has the actual username (e.g. "JonTester1") filled in, but not always... And while I don't see any mention of this field in the docs, it seems promising. How is our app's code/config supposed to set that Auth Id so that every App Insights log/telemetry item has it?
Relevant MS docs:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/usage-send-user-context
This looks to just copy one library Telemetry object's User Id into another... no mention of our custom, helpful Username/Id anywhere... and most in-the-wild examples I see don't actually look like this, including MS docs' own examples in the 3rd link below; they instead new up a TelemetryClient() directly
https://learn.microsoft.com/en-us/azure/azure-monitor/app/website-monitoring No mention of consistently tracking a custom Username/Id
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics#authenticated-users Shows some different helpful pieces, but still no full example. E.g. it says that with only the setAuth... JS function call on the page (still no full example of working client-side JS that tracks the user), you don't need any server-side code for it to track a custom User Id across both client-side and server-side telemetry sent to Azure... yet it then also shows explicit code that news up a TelemetryClient() server-side to track the User Id (in Global.asax.cs, or where?)... so do you need both?
Similar SO questions, but don't connect the dots/show a full solution:
Azure Insights telemetry not showing Auth ID on all transactions
Application Insights - Tracking user and session across schemas
How is Application insight tracking the User_Id?
Display user ID in the metrics of application Insight
I'm hoping this question and its answers can iron this out, and hopefully do a better job than the relevant MS docs...
The first link in your question contains the answer. It shows you how to write a custom telemetry initializer. Such an initializer lets you add or overwrite properties on any telemetry that is being sent to App Insights.
Once you add it to the configuration, either in code or in the config file (see the docs mentioned earlier in the answer), it will do its work without you needing to create special instances of TelemetryClient. That is why this part of your question does not make sense to me:
[…] and most in-the-wild examples I see don't actually look like this, including MS docs' own examples in the 3rd link below; they instead new up a TelemetryClient() directly
You can either overwrite the value of UserId or overwrite AuthenticatedUserId in your initializer. You can modify the code given in the docs like this:
// Inside the Initialize(ITelemetry telemetry) method of your custom ITelemetryInitializer:
var identity = HttpContext.Current?.User?.Identity;
if (identity != null && identity.IsAuthenticated &&
    string.IsNullOrEmpty(telemetry.Context.User.AuthenticatedUserId))
{
    // Set the authenticated user id (your app's username) on the Application Insights telemetry item.
    telemetry.Context.User.AuthenticatedUserId = identity.Name;
}
You can then see the Auth Id and User Id by going to your AI resource -> Search and clicking an item. Make sure to press "Show All" first, otherwise the field is not displayed.
In our example, the Auth Id is set to the user id from the database.
We also access the server from Azure Functions, so we set the user id server-side as well, since there is no client involved in those scenarios.
There is no harm in setting it in both places, JavaScript and server-side via an initializer. That way you cover all scenarios.
You can also manually set the authenticated user id in App Insights from JavaScript by calling:
// Call this after the user signs in, so subsequent client-side telemetry carries the id:
appInsights.setAuthenticatedUserContext(userId);
See App Insights Authenticated users
What I'm looking to do is create a bot that will sit in a private channel only accessible to the admins. All users will have keywords that they have chosen beforehand and will get notified about. Let's say user 'x' has chosen "brown" as a keyword: when a comment containing the keyword "brown" comes into the private channel, I want the bot to send that message, copied and pasted, directly to the user 'x' who chose that keyword. So basically I would like to know how to make a bot that has a keyword feature which copies that exact message and DMs the user who chose that keyword.
Is this possible, and if so, how would I go about it?
Thanks
Most of the bots I have made, I have written in Python.
Selenium WebDriver is fairly easy to use if you intend to scrape data from HTML pages.
In any case, you could use Flask and mysql.connector to create the user interface and login for controlling the bot.
If there are any other things you want it to do, I'm sure there's a library out there for it, or you could just launch some kind of macro or script on the server.
Be very careful about permissions and the like if you intend to allow remote control of other scripts and data on the server, though.
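To make the keyword part concrete, the routing logic itself is tiny and independent of whichever bot library you end up using. A minimal Python sketch (the subscription table and the send_dm callback are placeholders you would back with your own storage and your bot framework's DM call):
# Keyword -> list of user ids who subscribed to it (placeholder data).
KEYWORD_SUBSCRIPTIONS = {
    "brown": ["user_x_id"],
    "blue": ["user_y_id", "user_z_id"],
}

def users_to_notify(message_text):
    """Return the set of user ids whose keywords appear in the message."""
    lowered = message_text.lower()
    notify = set()
    for keyword, user_ids in KEYWORD_SUBSCRIPTIONS.items():
        if keyword in lowered:
            notify.update(user_ids)
    return notify

def on_channel_message(message_text, send_dm):
    # Forward the original message, verbatim, to every subscribed user.
    for user_id in users_to_notify(message_text):
        send_dm(user_id, message_text)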
I would like to allow my website to send web form data to an Asana project; it's for collecting responses from potential clients.
I am unsure of the best way to do this. I do not want the user to be required to log in, sign up, or anything like that; the form submission should be anonymous. It should just take whatever is posted and create a task in Asana with the given text.
From the documentation, it appears that it's always required to log in or connect with Asana, and this obviously isn't going to work, since people are not going to do that just to send me feedback from the website.
So, is there a way to do this, in the way mentioned above?
You're right in that you need to have an Asana account to make API calls as a particular user. However, since you want the submissions to be anonymous anyway, there's a pretty simple way: you can create a bot account and use that to submit the form. For instance, create an Asana user called "forms_bot@yourdomain.com"; make sure it can see the project in which you want to collect the form submissions. Get its credentials from inside Asana, and use these on your server to make the API calls to Asana to submit the information. In this way you will see the tasks created by "forms_bot@yourdomain.com".
We use this idiom very frequently at Asana for these sorts of flows, and as an added plus it makes it very clear where the information came from in the first place (as opposed to seeming as if there were an actual user in your domain that's creating the tasks). Hopefully this makes sense and will allow you to get the workflow you want set up!
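On the server, task creation is a single authenticated POST to Asana's tasks endpoint. A rough Python sketch, assuming the bot account's credentials take the form of a personal access token (the token, project gid, and form field names are placeholders):
import requests

ASANA_TOKEN = "personal-access-token-of-forms_bot"
PROJECT_GID = "1234567890"

def create_feedback_task(form_name, form_message):
    # Create a task in the feedback project on behalf of the bot account.
    response = requests.post(
        "https://app.asana.com/api/1.0/tasks",
        headers={"Authorization": "Bearer " + ASANA_TOKEN},
        json={"data": {
            "name": "Website feedback from " + (form_name or "anonymous"),
            "notes": form_message,
            "projects": [PROJECT_GID],
        }},
    )
    response.raise_for_status()
    return response.json()["data"]["gid"]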
I'm writing an app that makes some calls to my API that have restrictions. If users were to figure out what these URL routes were and the proper parameters and how to specify them, then they could exploit it, right?
For example, if I'm handling voting on something and only want users to be able to cast one vote, a user knowing the route:
get '/castvote/' => 'votemanager#castvote'
could be problematic, could it not? Is it easy to figure out these API routes?
Does anyone know any ways to remove the possibility of this happening?
There is no way to hide AJAX calls: if nothing else, one just needs to open the Developer Tools Network panel and see what was sent. Everything on the client side is an open book, if you just know how to read it.
Instead, do validation on the server side: in your example, record the votes and the users who cast them; if a vote was already recorded by that user, don't let them do it again.
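The shape of that check is the same in any framework. A sketch using Python and SQLite purely for illustration (in Rails the equivalent is a unique index on the user/poll pair plus a uniqueness validation); the table and column names here are made up:
import sqlite3

db = sqlite3.connect("votes.db")
db.execute("""CREATE TABLE IF NOT EXISTS votes (
    user_id INTEGER NOT NULL,
    poll_id INTEGER NOT NULL,
    UNIQUE (user_id, poll_id)
)""")

def cast_vote(user_id, poll_id):
    try:
        with db:  # commits on success, rolls back on error
            db.execute("INSERT INTO votes (user_id, poll_id) VALUES (?, ?)",
                       (user_id, poll_id))
        return True   # vote recorded
    except sqlite3.IntegrityError:
        return False  # this user already voted on this poll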
Your API should have authorization built into it. Only authorized users with specific access scopes should be allowed to consume your API. Check out the Doorkeeper and cancancan gems provided by the Rails community.
As others have said, adding access token/username/password authorisation is a good place to start. Also, if your application should only allow one vote per user, then this should be validated by your application logic on the server.
This is a broader problem. There's no way to stop users from figuring out how voting works and trying to game it, but there are different techniques used to make it harder. I list some solutions from least to most effective here:
Using a nonce or proof of work; in the case of Rails this is implemented through the authenticity token for non-GET requests. This will require the user to at least load the page before voting, therefore limiting scripted replay attacks
Recording the IP address or other identifiable information (e.g. browser fingerprinting). This will limit the number of votes from a single device
Requiring signup. This is what other answers suggest
Requiring third-party login (e.g. Facebook, Twitter)
Requiring payment to cast a vote (like in TV talent shows)
None of those methods is perfect and you can quickly come up with ways to trick any of them.
The real question is what your threat model is and how hard you want to make it for users to cast fake votes. From my practical experience, requiring third-party login will ensure most votes are valid in typical use cases.
I need to develop an application which should help me get all the statuses/messages from different services like Twitter, Facebook, etc. into my application, and also, when I post a message, it should get updated on all of those services. I am using Authlogic for authentication. Can anyone suggest what gems/plug-ins I can use?
I need API help to get all the tweets/messages displayed in my application, and also ways to post messages to the corresponding services by posting from my application. Can anyone help me from a design point of view?
Walk through what you'd want to do in your head. Imagine the working site, imagine your webapp working before you start. So your user logs in (handled by authlogic) and sees a textbox called "What are you doing right now?". The user fills in a status message and clicks "post". The status message appears at the top of their previously posted messages.
Start with the easy part. Create a class that posts to two services. Use the twitter gem and rfacebook to post to two already defined services. In the future, you'll want to let the user associate services to their account and you would iterate through the associated services and post the message to each. Once you have this working, you can refactor or polish the UI a bit to round out this feature. I personally would do the "add a social media account to my profile" feature towards the end.
Harder is reading the data (strangely enough), because you're going to have to figure out how to store it. You could store nothing, but I suspect you'd run into API limits just searching all the time (you could design around this). I would keep a little cache of posts associated with the user's social media account. In this way, the data model would look like this:
A user has many social media accounts.
A social media account has many posts. (cache)
Of course, now you need to schedule the caching of the posts. This could be done manually, based on an event (like when they log in), or time-based. So when the update happens, you load up the posts for that social media account and the user will see the posts the next time they hit the page. For real-time push to the client's browser while they stare at the screen, use faye (non-trivial) and AJAX to pull the new posts to the top of the social media stream view.
The time-based approach is tricky because you'd either have to have a cron job run or have Rails handle it all with a gem like clockwork. But then you have to leave Rails running. I've also solved this by having a class in /lib do all the work and a simple web call that kicks off the update. But it wasn't in a multi-user use case, so that might not work. In any case, you'll want to have some nice reusable code for these problems, since update requests can come from many different sources.
You'll also have to deal with API limits. When pulling down content from Twitter, you won't get everything. That will just have to be known by the user, or you'll have to indicate a "break in time" somehow.
The UI should be pretty easy (functionally anyway), because you know which source the post/content is coming from. It'd be easy to throw a little icon next to the post to display which social media site it's coming from.
Anyway, good luck, sounds like a fun project.