Get URL and parameters in Voila

I am trying to create a simple web link in Jupyter/Voila to display this:
display(HTML("""<a href="?month=current">Current Month</a> |"""))
As you can see, I just hardcode the parameter:
?month=current
I am not able to retrieve these values in my notebook.
I have tried:
display(os.environ.get("SCRIPT_NAME"))
which gives: None
and
sURL = os.getenv('HTTP_REFERER')
which gives None
also this:
import os
envs = {k: v for k, v in os.environ.items()}
display(envs)
doesn't give the URL or the parameters either.
Does anyone have any idea how to get those?
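In case it helps: recent versions of Voila (0.2 and later) forward the query string of the request URL to the kernel through the QUERY_STRING environment variable, which is why it never shows up under names like SCRIPT_NAME or HTTP_REFERER. A minimal sketch, assuming such a Voila version (this only works while the notebook is actually served by Voila, not in plain Jupyter):
import os
from urllib.parse import parse_qs

# Voila puts the request's query string (e.g. 'month=current')
# into the QUERY_STRING environment variable of the kernel
query_string = os.environ.get('QUERY_STRING', '')

# parse_qs maps each parameter to a list of values,
# e.g. 'month=current' -> {'month': ['current']}
params = parse_qs(query_string)
month = params.get('month', ['current'])[0]
display(month)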

How to get Twitter mentions id using academictwitteR package?

I am trying to create several network analyses from Twitter data. To get the data, I used the academictwitteR package and its get_all_tweets() function.
get_all_tweets(
users = c("LegaSalvini"),
start_tweets = "2007-01-01T00:00:00Z",
end_tweets = "2022-07-01T00:00:00Z",
file = "tweets_lega",
data_path = "tweetslega/",
bind_tweets = FALSE
)
## Binding JSON files into data.frame objects
tweets_bind_lega <- bind_tweets(data_path = "tweetslega/")
## Tidying
tweets_bind_lega_tidy <- bind_tweets(data_path = "tweetslega/", output_format = "tidy")
With this, I can easily access the IDs for the creation of a retweet and reply network. However, the tidy format does not provide a tidy column for the mentions; instead, it drops them.
They are still in my untidy df tweets_bind_lega, but stored as a list in tweets_bind_afd$entities$mentions. Now I would like to somehow unnest this list and create a tidy df with a column that contains the mentioned Twitter user IDs.
Has anyone created a mention network with academictwitteR before and can help me out?
Thanks!

Lua - gsub XML characters to make XML responses visible in the iOS Safari browser

For some reason the iOS Safari browser does not display raw XML content returned by a server.
So to try and get around this, I thought I'd replace the distinctive XML characters '<' and '>' with something else that is unlikely to be challenged, e.g. '~'.
I've tried a number of different ways, and while I can use the following to find/replace letters, when I try it with special characters I can't seem to get it to work.
Can anyone help?
local xmltest = "<XML Test>"
local t = {< = "~", > = "~"}
local result = string.gsub(xmltest, "<>", t)
print(result)
Many thanks
Here is the answer, thanks @lhf. Two fixes were needed: in a Lua table constructor, keys that are not plain identifiers (like < and >) must be written in brackets, and [<>] is a character class matching either character, whereas "<>" only matches the literal two-character sequence:
local xmltest = "<XML Test>"
local t = {["<"] = "~", [">"] = "~"}
local result = string.gsub(xmltest, "[<>]", t)
print(result)

What is the best way to extract text from Amazon listings into Google Sheets?

PURPOSE:
I am trying to extract product features (bullet points) from Amazon.com listings into a Google Spreadsheet.
PROBLEM:
I have tried 4 different methods but none has worked.
IMPORTXML:
IMPORTXML("https://www.amazon.com/dp/B07JD2GDKN","//ul/li/showHiddenFeatureBullets")
IMPORTHTML:
IMPORTHTML("https://www.amazon.com/dp/B07JD2GDKN","list",1)
REGEXREPLACE(IMPORTXML):
REGEXREPLACE(IMPORTXML("https://www.amazon.com/dp/B07JD2GDKN","//feature-bullets"),"Amazon.com: ","")
custom function: productFeatures("https://www.amazon.com/dp/B07JD2GDKN")
function productFeatures(url) {
  var content = UrlFetchApp.fetch(url).getContentText();
  var match = content.match(/<span class="a-list-item">/);
  return match && match[1] ? match[1] : 'Title not found';
}
// via https://screencast.com/t/pkxiFcg6my
These are the responses I get:
custom function: (https://screencast.com/t/WL9Ay6UQemK)
Response from running "IMPORTHTML": "Import content is empty"
Response for running "IMPORTXML": "Imported Xml content can not be parsed."
GOAL:
I'd appreciate any help resolving this.
I'm no expert with regex, but after some research I was able to get this to work.
I used the custom function below to get the first bullet point:
function BP1(url) {
  var content = UrlFetchApp.fetch(url).getContentText();
  // with the /g flag, match is an array of all <li><span>...</span></li> hits;
  // index 7 happens to be the first feature bullet on this page
  var match = content.match(/<li><span.*>([^<]*)<\/span><\/li>/g);
  return match && match[7] ? match[7] : 'BP not found';
}
For each subsequent product feature I just created a corresponding function that raises the match index by 1. For example, feature five has the following function:
function BP5(url) {
  var content = UrlFetchApp.fetch(url).getContentText();
  var match = content.match(/<li><span.*>([^<]*)<\/span><\/li>/g);
  return match && match[12] ? match[12] : 'BP not found';
}
The only problem with this is that it pulls the surrounding HTML tags along at the front and back of the text. I'm guessing someone with more understanding of all this could fix that. I just use the LEFT(), RIGHT(), and LEN() functions to clean up the results:
=LEFT(RIGHT(bp1(url), LEN(bp1(url))-39),LEN(bp1(url))-66)
Hope this helps, I know it's not a perfect solution but it gets the job done for me.

Traversing the linked list using pretty printers in GDB

I have a linked-list pretty printer which takes its input from the GDB command prompt, e.g.:
print xyz
My code is something like below:
class Randomcalss:
    def __init__(self, val):
        # val is the gdb.Value handed over by GDB for the printed object
        self.val = val

    def to_string(self):
        return "printing linked list:"

    def children(self):
        for field in self.val.type.fields():
            key = field.name
            val = self.val[key]
            yield key, val.dereference()
It does work as expected, and prints:
printing linked list:= {head = {next = 0x625590, prev = 0x623c70}}
But if I want to traverse the linked list and proceed further, what do I do?
Every time I try to access head['next'], it says head is a string and string indices must be integers.
Can't I do something like self.val[key] to access the next node of head too?
You can do val.dereference()['next'], which will give you the address of the next member of the list. You can cast the value obtained (if required) and traverse further.
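To make that concrete, here is a minimal sketch of a children() that walks the whole chain, assuming a C-style list whose nodes are linked through a next pointer and terminated by a null pointer (the member names head and next are taken from the output above; adjust them to the real struct):
def children(self):
    # start at the first real node; 'head' and 'next' are the member
    # names visible in the output above
    node = self.val['head']['next']
    i = 0
    while int(node) != 0:  # a null 'next' pointer marks the end of the list
        yield '[%d]' % i, node.dereference()
        node = node.dereference()['next']
        i += 1
If the list is circular (the head above looks like a sentinel holding prev/next pointers), compare node against the sentinel's address instead of null to decide when to stop, otherwise this loops forever.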

Using python list as node properties in py2neo

I have a list of urls:
urls = ['http://url1', 'http://url2', 'http://url3']
Mind you, the list can have any number of entries, including 0 (none). I want to create a new node property for each URL (list entry).
An example of how the node should look:
(label{name='something', url1='http://url1', url2='http://url2'}, etc...)
It is possible to expand a dictionary with ** to get the effect I need, but is there any way to do this with a list?
You can put your list in a dictionary and use this to create a node:
from py2neo import Graph, Node

# connect to the local Neo4j instance (adjust URI/credentials as needed)
graph = Graph()

urls = ['http://1', 'http://2']
props = {}
for i, url in enumerate(urls, 1):
    # build a key like 'url1', 'url2', ...
    prop_key = 'url' + str(i)
    props[prop_key] = url

my_node = Node('Person', **props)
graph.create(my_node)
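The loop can also be collapsed into a dict comprehension; same idea, just shorter (enumerate starts at 1 so the keys match the url1, url2 naming from the question, and name='something' is a placeholder for your real properties):
props = {'url' + str(i): url for i, url in enumerate(urls, 1)}
my_node = Node('Person', name='something', **props)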
