praw: get submission flair - reddit

I'm using PRAW to work with reddit submissions, specifically submissions that have been resolved and have their "flair" attribute set to SOLVED (as described here).
However, I am getting "None" when I check for flair, even for submissions that I can see have been set to SOLVED.
I have the following code, which works with a submission that has definitely been set to SOLVED.
solvedSubmission = reddit.submission(url='https://www.reddit.com/r/PhotoshopRequest/comments/6ctkpj/specific_can_someone_please_remove_kids_12467_i/')
pprint.pprint(vars(solvedSubmission))
This outputs:
{'_comments_by_id': {},
'_fetched': False,
'_flair': None,
'_info_params': {},
'_mod': None,
'_reddit': <praw.reddit.Reddit object at 0x10e3ae1d0>,
'comment_limit': 2048,
'comment_sort': 'best',
'id': '6ctkpj'}
Can anyone offer any insight as to why I'm seeing "None", on this post and other solved posts? Is there another way that reddit keeps track of solved posts that I should look into?
Thank you!

By now (~1y after OP) you might have solved this already, but it came up in a search I did, and since I figured out the answer, I will share.
The reason you never saw any relevant information is because PRAW uses lazy objects so that network requests to Reddit’s API are only issued when information is needed. You need to make it non-lazy in order to retrieve all of the available data. Below is a minimal working example:
import praw
import pprint
reddit = praw.Reddit() # potentially needs configuring, see docs
solved_url = 'https://www.reddit.com/r/PhotoshopRequest/comments/6ctkpj/specific_can_someone_please_remove_kids_12467_i/'
post = reddit.submission(url=solved_url)
print(post.title) # this will fetch the lazy submission object...
pprint.pprint(vars(post)) # ... allowing you to list all available fields
In the pprint-output, you will discover, as of the time of writing this answer (Mar 2018), the following field:
...
'link_flair_text': 'SOLVED',
...
... which is what you will want to use in your code, e.g. like this:
is_solved = 'solved' == (post.link_flair_text or '').strip().lower() # the "or ''" guards against unflaired posts, where the field is None
So, to wrap this up: you need to make PRAW issue a network request to fetch the submission, turning it from a lazy object into a full instance. Accessing any attribute that is not yet loaded (such as post.title) triggers that fetch.
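The lazy-object mechanism can be illustrated with a tiny stand-in class (a sketch only, not PRAW's actual implementation; the 'SOLVED' value stands in for data the API would return):

```python
class LazySubmission:
    def __init__(self):
        self._fetched = False

    def _fetch(self):
        # In real PRAW this issues a network request; 'SOLVED' stands in
        # for the value Reddit's API would return.
        self.link_flair_text = 'SOLVED'
        self._fetched = True

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails: fetch once, retry.
        if not self._fetched:
            self._fetch()
            return getattr(self, name)
        raise AttributeError(name)

post = LazySubmission()
flair = post.link_flair_text  # attribute access triggers the fetch
print(flair)
```

Until an unknown attribute is touched, no "network request" happens; afterwards, all fields are populated on the instance.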


Google sheet Import HTML function

I am trying to pull option-chain data from Moneycontrol (https://www.moneycontrol.com/indices/fno/view-option-chain/BANKNIFTY/2022-07-28). What would the formula be for this, and how can I get data for multiple symbols (NIFTY, BANKNIFTY, and individual stocks) with a variable expiry? I used this: =IMPORTHTML("https://www.moneycontrol.com/indices/fno/view-option-chain/"&AD3&"/"&W253"","table",2) but an error occurred.
I was able to successfully import the table from the mentioned URL using the following formula:
=IMPORTHTML("https://www.moneycontrol.com/indices/fno/view-option-chain/BANKNIFTY/2022-08-04", "table", 2, "en_US")
However, you've mentioned that an error occurred but you haven't shared the actual error message...
Since you are concatenating parameters to the URL being fetched, I'd recommend investigating the end result of the concatenation and try accessing that newly concatenated URL on a browser to see if it works.
Additionally, I’d recommend adhering to the How to Ask guidelines in order for your questions to be properly answered by the community as well as to provide a minimal reproducible example when posting a question.
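For what it's worth, the formula quoted in the question seems to contain a stray "" after W253 (extra quotes with no & before them), which by itself can cause a parse error. Assuming AD3 holds the symbol (e.g. BANKNIFTY) and W253 holds the expiry date, a corrected concatenation might look like this (cell references taken from the question; the TEXT() wrapper is only needed if W253 is a real date value rather than text):

```
=IMPORTHTML("https://www.moneycontrol.com/indices/fno/view-option-chain/"&AD3&"/"&TEXT(W253,"yyyy-mm-dd"), "table", 2)
```

Building the URL in a helper cell first makes it easy to paste into a browser and confirm the page actually resolves.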

Access all fields in Zapier Code Step

Is it possible to access all the fields from a previous step as a collection (like JSON) rather than having to explicitly set each one in the input data?
Hope the screenshot illustrates the idea:
https://www.screencast.com/t/TTSmUqz2auq
The idea is that I have a step that looks up responses in a Google Form, and I wish to parse the result to display all the questions and answers in an email.
Hope this is possible
Thanks
David here, from the Zapier Platform team. Unfortunately, what I believe you're describing isn't possible right now. Usually this works fine, since users only map a few values; the worst case is when you want every value, which it sounds like you're facing. It would be cool to map all of them, and I can pass that along to the team! In the meantime, you'll have to click everything you're going to use in the code step.
If you really don't want to create a bunch of variables, you could map them all into a single input and separate them with a delimiter such as |; as long as the delimiter doesn't show up in the data itself, it's easy to split in the code step.
Hope that helps!
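The delimiter trick above can be sketched like this (a minimal sketch; 'all_values' is a hypothetical input name, and the literal string stands in for the mapped input):

```python
# Hypothetical single input named 'all_values': every previous-step value
# joined with '|'. A literal stands in for input_data['all_values'] here.
raw = 'What is your name?|Ada|What is your quest?|To seek the Grail'
values = raw.split('|')
print(values)
```

From there, pairing up questions and answers (e.g. values[0::2] and values[1::2]) is straightforward in the code step.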
The simplest solution would be to create an additional field in the output object that is a JSON string of the output. In a Python code step, it would look like
import json
output = {'id': 123, 'hello': 'world'}
output['allfields'] = json.dumps(output)
or for returning a list
import json
output = [{'id': 123, 'hello': 'world'}, {'id': 456, 'bye': 'world'}]
for x in output:
    x['allfields'] = json.dumps(x)
Now you have the individual values to use in steps as well as ALL the values to use in a code step (simply convert them back from JSON). The same strategy holds for JavaScript as well (I simply work in Python).
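On the receiving end, a later code step can turn that string back into a dict (a sketch; the literal stands in for the mapped allfields input):

```python
import json

# Stand-in for the 'allfields' value mapped in from the previous step.
allfields = '{"id": 123, "hello": "world"}'
data = json.loads(allfields)
print(data)
```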
Zapier Result
Fields are accessible in an object called input_data by default. So a very simplistic way of grabbing a value (in Python) would be like:
my_variable = input_data['actual_field_name_from_previous_step']
This differs from explicitly naming the field with Input Data (optional), which, as you know, is accessed like so:
my_variable = input['your_label_for_field_from_previous_step']
Here's the process description in Zapier docs.
Hope this helps.

Python Reddit API: Extracting User Comments Between Two Given Timestamps

PRAW allows extracting submissions on a given subreddit between two timestamps using this:
reddit.subreddit('news').submissions(startStamp, endStamp)
However, I haven't been able to find anything similar for extracting a given user's comments between two timestamps. Can this be done? I don't actually care about the 1000-requests limit unless the comments I get belong to the correct time range. I already had a look at their documentation here.
Although there is no argument for it like there is for the .submissions call, you can do this manually with an if statement checking each comment's created_utc against another UTC timestamp. (You can use something like https://www.epochconverter.com/ to get a desired timestamp.)
The following code sample gets all of /u/spez's comments from Christmas 2016 to Christmas 2017.
import praw
oldest = 1482682380.0 # timestamp for 12/25/16
newest = 1514218380.0 # timestamp for 12/25/17
reddit = praw.Reddit('USER-AGENT-HERE')
for comment in reddit.redditor('spez').comments.new(limit=None):
    if oldest < comment.created_utc < newest:
        print("Comment Found! permalink: " + comment.permalink)
Consider referring to Pushshift. You can get the comments by a user (let's say /u/avi8tr) at the following URL: Link.
There's a python wrapper (just like PRAW) for Pushshift as well, but it's under development: GitHub Link. You'll have to add the 'author' parameter in comment_search in psraw/endpoints.py, though.
Note: Both Pushshift and PSRAW are being actively developed, so changes are expected.
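A sketch of building such a Pushshift query in plain Python (the endpoint path and parameter names are assumptions based on Pushshift's comment-search API; verify them against the current docs before relying on them):

```python
from urllib.parse import urlencode

# Hypothetical query: comments by one author inside a timestamp window.
base = 'https://api.pushshift.io/reddit/search/comment/'
params = {'author': 'spez', 'after': 1482682380, 'before': 1514218380, 'size': 100}
url = base + '?' + urlencode(params)
print(url)
```

Fetching that URL (e.g. with requests) should return JSON whose 'data' field is the list of matching comments.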

How to access a reddit post using redditkit in ruby?

I have a reddit post link here:
https://www.reddit.com/r/dankmemes/comments/6m5k0o/teehee/
I wish to access the data of this post through the redditkit API.
http://www.rubydoc.info/gems/redditkit/
I have tried countless times and the docs don't make too much sense to me. Can someone help show how to do this through the ruby console? Or even an implementation of this in rails would be really helpful!
Looking at the #comment method on the gem, it takes a comment_full_name and performs a GET to api/info.json with that parameter as an id (seen by viewing the source for that method). If we look at the reddit API docs for api/info, the id parameter is a fullname for the object, with a link to what a fullname is.
Following that link, a full name for a comment is
Fullnames start with the type prefix for the object's type, followed by the thing's unique ID in base 36.
and
type prefixes
t1_ Comment
So now we know the comment_full_name should be t1_#{comment's unique id} which appears to be 6m5k0o. Here, I'm unsure if that's already base36 or if they want you to convert that into base36 before passing it. Without seeing what all you've tried, I would say
client = RedditKit::Client.new 'username', 'password'
client.comment("t1_6m5k0o")
and if that doesn't work
client.comment("t1_#{'6m5k0o' base36 encoded}")
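For what it's worth, IDs as they appear in reddit URLs (like 6m5k0o) already parse as base36, which suggests no further encoding is needed before attaching the type prefix (a quick check, in Python purely for the arithmetic):

```python
# '6m5k0o' parses cleanly as a base36 integer, suggesting the ID from the
# URL needs no further encoding before the type prefix is attached.
thing_id = int('6m5k0o', 36)
print(thing_id)
```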
For questions like this, it would be nice to see some of your code and what you tried/results they gave. For all I know, you've already tried this and have a reason it didn't work for you.
I would test this out for you, but I don't have a reddit account for the gem to sign in with, this is just my guess glancing at the documentation.

How can I retrieve deleted objects from Active Directory with Ruby?

From the research I've done, it appears I need to send a special OID with my request (1.2.840.113556.1.4.417) in order to access the Deleted Objects container.
I couldn't find a way to send a specific control with a request using the "net-ldap" gem. Does anyone know if this is possible?
There is another gem, ruby-ldap, which appears to be more flexible and it seems I can send controls with my request (e.g. using the search_ext2() method).
However, no matter what I try, I am not getting back any objects, even though I know they haven't been garbage collected yet.
I'm including the filter "isDeleted=TRUE" with my requests as well.
OK, I finally figured it out. You will need to use the ruby-ldap gem. The reason my controls were not being sent was that the LDAP protocol version (LDAP::LDAP_OPT_PROTOCOL_VERSION) had defaulted to v2, and apparently it must be v3.
The following is a snippet that works:
require 'ldap'
conn = LDAP::Conn.new('yourserver.example.com', 389)
conn.set_option(LDAP::LDAP_OPT_PROTOCOL_VERSION, 3)
conn.bind("CN=Administrator,CN=Users,DC=example,DC=com", "sekritpass")
# controlType: 1.2.840.113556.1.4.417 (LDAP_SERVER_SHOW_DELETED_OID)
control = LDAP::Control.new('1.2.840.113556.1.4.417')
conn.search_ext2('CN=Deleted Objects,DC=example,DC=com', LDAP::LDAP_SCOPE_SUBTREE, "(isDeleted=*)", nil, false, [control], nil)
The filter (isDeleted=*) isn't necessarily required, you could also simply use (objectClass=*). You can also use the scope LDAP::LDAP_SCOPE_ONELEVEL if desired.
Have you tried isDeleted=* instead?
https://technet.microsoft.com/en-us/library/cc978013.aspx
