I have a Quip document on which others have made comments. How can I print the document with the comments? Failing that, how can I print the comments alone?
I have the following problem: I'm using Turbo in a forum with a comment tree.
Commentary_1
  id="reply_to_commentary_1"
Commentary_2
  id="reply_to_commentary_2"
  Commentary_2_1
    id="reply_to_commentary_2_1"
When someone posts a reply to, for example, Commentary_2_1, that reply is inserted into the element with id "reply_to_commentary_2_1" via turbo_stream for everyone on that page, and the resulting tree looks like this:
Commentary_1
  id="reply_to_commentary_1"
Commentary_2
  id="reply_to_commentary_2"
  Commentary_2_1
    id="reply_to_commentary_2_1"
    Commentary_2_1_1
      id="reply_to_commentary_2_1_1"
But if the page is not reloaded and someone then clicks reply on Commentary_2_1_1 and answers there, the impression is that turbo_stream does not see the id "reply_to_commentary_2_1_1" and cannot insert the new reply into it.
My goal is to create a set of PDF documents using Sphinx in an automated process, but I've found I'm too limited in what document-specific information I can pass using the latex_documents tuple in conf.py. I'll try to explain what I've tried so far, how this limits what I would like to do, and an example "hypothetical" solution.
What I've tried so far:
As stated, my goal is to create a set of documents, and my configuration uses a single conf.py. To create a set of documents from a single conf.py, the latex_documents keyword references document-specific information. It does so using a list of tuples, one tuple per document. Each tuple, for example, contains a reference to that document's index file, author and title (the index file then references the relevant reStructuredText etc.).
This is great because I can update each document's specific author and title on its title page, and the body of the text is picked up from the index.
Here is an example of a single tuple, representing one document, associated with the latex_documents keyword:
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [('path/to/index', 'document_name', 'my document title', 'JSt', 'howto')]
In this example, Sphinx can make JSt, the author's name, appear on the title page.
What is the limitation?
But what if we also wanted to pass more information to each document's title page automatically: an executive summary, reviewer name, approver name, document number and more? This information, like the author, is document-specific and would be placed on the generic title page and/or headers and footers (and therefore cannot be put in the reStructuredText or Markdown input either).
Hypothetical solution
An example solution, if the latex_documents input weren't as prescriptive, could be as follows.
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, reviewer, approver, document number, documentclass [howto, manual, or own class]).
latex_documents = [('path/to/index', 'document_name', 'my document title', 'JSt', 'ABc', 'ZYx', 'DOC007', 'howto')]
These could then be referenced on the title page as something like:
#author
#reviewer
#approver
#docnum
which would create something like the following in the generated TeX:
\author{JSt}
\reviewer{ABc}
\approver{ZYx}
\docnum{DOC007}
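For context, the closest real mechanism I'm aware of is latex_elements, which can inject arbitrary LaTeX into the preamble; a minimal sketch of that idea is below, but it applies to every document built from the same conf.py, so it doesn't give per-document values:
# Sketch only: define the extra commands via the LaTeX preamble.
# These values would be shared by all documents built from this conf.py,
# which is exactly the limitation described above.
latex_elements = {
    'preamble': r'''
\newcommand{\reviewer}{ABc}
\newcommand{\approver}{ZYx}
\newcommand{\docnum}{DOC007}
''',
}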
Does anyone have advice on passing additional document-specific information to the title page, headers and footers of a document in the same way the author is passed?
Thanks in advance.
Is there a way to construct a URL that will lead directly to the comment left by someone on a blog article?
For example, is there a way to construct a URL that will lead directly to the comment left by Alexander in the following blog article?
http://zeroseconde.blogspot.com/2008/09/fin-du-papier.html
Similarly, is there a way to construct a URL that will lead directly to the comment left by jokeefe on the following blog article?
https://www.metafilter.com/74067/The-Image-Mill
You mean like this? -> http://zeroseconde.blogspot.com/2008/09/fin-du-papier.html?showComment=1223573700000#c4070187594579786156
If so, simply right-click on the comment date and copy the URL.
The same works to get the URL for jokeefe's comment in the second question.
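For illustration, the pattern behind that Blogger link appears to be the post URL plus a showComment query parameter (an epoch timestamp in milliseconds) and a #c<comment id> fragment; a small hypothetical sketch:
# Hypothetical sketch of the URL pattern visible in the example above.
post_url = "http://zeroseconde.blogspot.com/2008/09/fin-du-papier.html"
show_comment_ms = 1223573700000     # comment timestamp, epoch milliseconds
comment_id = "4070187594579786156"  # numeric Blogger comment ID

comment_url = f"{post_url}?showComment={show_comment_ms}#c{comment_id}"
print(comment_url)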
PRAW allows extracting submissions on a given subreddit between two timestamps using this:
reddit.subreddit('news').submissions(startStamp, endStamp)
However, I haven't been able to find anything similar for extracting a given user's comments between two timestamps. Can this be done? I don't actually care about the 1000-request limit, as long as the comments I get belong to the correct time range. I already had a look at their documentation here.
Although there is no argument for it like there is for the .submissions call, you can do this manually with an if statement checking created_utc against another UTC timestamp. (You can use something like https://www.epochconverter.com/ to get a desired timestamp.)
The following code sample gets all of /u/spez's comments from last Christmas to this Christmas.
import praw

oldest = 1482682380.0  # timestamp for 12/25/16
newest = 1514218380.0  # timestamp for 12/25/17

reddit = praw.Reddit('USER-AGENT-HERE')

# Walk the user's newest comments and keep only those inside the time range.
for comment in reddit.redditor('spez').comments.new(limit=None):
    if oldest < comment.created_utc < newest:
        print("Comment found! permalink: " + comment.permalink)
Consider referring to Pushshift. You can get the comments by a user (let's say /u/avi8tr) at the following URL: Link.
There's a Python wrapper (just like PRAW) for Pushshift as well, but it's under development: GitHub Link. You'll have to add the 'author' parameter to comment_search in psraw/endpoints.py, though.
Note: both Pushshift and PSRAW are being actively developed, so changes are expected.
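For illustration, a minimal sketch that queries Pushshift's comment search endpoint directly with requests; the endpoint and the author/after/before parameter names are assumptions based on the public Pushshift API, not part of PSRAW:
import requests

# Assumed Pushshift comment-search endpoint and parameters (not PSRAW).
url = "https://api.pushshift.io/reddit/search/comment/"
params = {
    "author": "avi8tr",     # user whose comments we want
    "after": 1482682380,    # start of the time range (UTC epoch seconds)
    "before": 1514218380,   # end of the time range (UTC epoch seconds)
    "size": 500,            # maximum number of results per request
}

response = requests.get(url, params=params)
for comment in response.json()["data"]:
    print(comment["created_utc"], comment["id"], comment["body"][:60])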
Has the format of the UID for comments on video posts changed? We are noticing examples of comments that previously had one ID via the API and are now coming back with a different ID. This is causing us to save duplicate data, because we can't programmatically determine that they are the same comment.
The issue appears to have started approximately Dec 5, 2017.
Example: these two comments appear to be the same comment, just returned twice with different IDs. The "external id" below is the YouTube UID:
title:         Comment from nah28
link:          https://www.youtube.com/watch?v=n1pRzwFf1lo&lc=z233st5hoofjvbl5f04t1aokglljav4mscz3jhkng02qrk0h00410
publishedDate: 2017-11-16 20:14:31
dateFound:     2017-11-16 20:16:38
externalId:    z233st5hoofjvbl5f04t1aokglljav4mscz3jhkng02qrk0h00410

title:         Comment from nah28
link:          https://www.youtube.com/watch?v=n1pRzwFf1lo&lc=UgyMXm2SWEfG9sJsAK14AaABAg
publishedDate: 2017-11-16 20:14:31
dateFound:     2017-12-06 12:17:58
externalId:    UgyMXm2SWEfG9sJsAK14AaABAg
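Illustrative only: the kind of fallback matching this forces on us, keying on the fields that stayed stable across the two records above instead of the external ID:
# Illustrative sketch: treat two records as the same comment when the stable
# fields match, even though their external IDs differ.
def dedup_key(record):
    # Drop the lc= parameter so both link variants reduce to the same video URL.
    video_url = record["link"].split("&lc=")[0]
    return (record["title"], record["publishedDate"], video_url)

first = {
    "title": "Comment from nah28",
    "link": "https://www.youtube.com/watch?v=n1pRzwFf1lo&lc=z233st5hoofjvbl5f04t1aokglljav4mscz3jhkng02qrk0h00410",
    "publishedDate": "2017-11-16 20:14:31",
    "externalId": "z233st5hoofjvbl5f04t1aokglljav4mscz3jhkng02qrk0h00410",
}
second = {
    "title": "Comment from nah28",
    "link": "https://www.youtube.com/watch?v=n1pRzwFf1lo&lc=UgyMXm2SWEfG9sJsAK14AaABAg",
    "publishedDate": "2017-11-16 20:14:31",
    "externalId": "UgyMXm2SWEfG9sJsAK14AaABAg",
}

print(dedup_key(first) == dedup_key(second))  # True: same comment, different IDs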
I can confirm this!
I didn't find any related changelog entries, but we're using comment IDs to uniquely identify comments, and now we're getting duplicates of comments on tracked videos.
I wonder if there's somehow a way to obtain the old comment ID.