I am trying to create several network analyses from Twitter data. To get the data, I used the academictwitteR package and its get_all_tweets() function.
get_all_tweets(
  users = c("LegaSalvini"),
  start_tweets = "2007-01-01T00:00:00Z",
  end_tweets = "2022-07-01T00:00:00Z",
  file = "tweets_lega",
  data_path = "tweetslega/",
  bind_tweets = FALSE
)
## Binding JSON files into data.frame objects
tweets_bind_lega <- bind_tweets(data_path = "tweetslega/")
## Tidying
tweets_bind_lega_tidy <- bind_tweets(data_path = "tweetslega/", output_format = "tidy")
With this, I can easily access the IDs needed to build a retweet and reply network. However, the tidy format does not provide a tidy column for the mentions; instead it drops them.
They are present in my untidy df tweets_bind_lega, but stored as a list in tweets_bind_lega$entities$mentions. Now I would like to somehow unnest this list and create a tidy df with a column that contains the mentioned Twitter user IDs.
Has anyone created a mention network with academictwitteR before and can help me out?
Thanks!
I have some code that applies a map function to a Dask bag. I need a lookup dictionary to apply that function, and it doesn't work with client.scatter.
I don't know if I am doing the right thing, because the workers start but they don't do anything. I have tried different configurations based on different examples, but I can't get it to work. Any support will be appreciated.
I know that in Spark you define a broadcast variable and access its content with variable.value inside the function you want to apply. I don't see the same thing with Dask.
# Function to map
def transform_contacts_add_to_historic_sin(data, historic_dict):
    raw_buffer = ''
    line = json.loads(data)
    if line['timestamp'] > historic_dict['timestamp']:
        raw_buffer = raw_buffer + line['vid']
    return raw_buffer
# main program
# historic_dict is a dictionary filled earlier; it is the lookup variable for the map function
# file_records will be a list of JSON strings read from an S3 file
from distributed import Client
client = Client()
historic_dict_scattered = client.scatter(historic_dict, broadcast=True)
file_records = []
raw_data = s3_procedure.read_raw_file(... S3 file.......)
data = TextIOWrapper(raw_data)
for line in data:
    file_records.append(line)
bag_chunk = db.from_sequence(file_records, npartitions=16)
bag_transform = bag_chunk.map(lambda x: transform_contacts_add_to_historic(x), args=[historic_dict_scattered])
bag_transform.compute()
If your dictionary is small, you can just include it directly:
def func(partition, d):
    return ...
my_dict = {...}
b = b.map(func, d=my_dict)
If it's large, then you might want to wrap it in dask.delayed first:
my_dict = dask.delayed(my_dict)
b = b.map(func, d=my_dict)
If it's very large then yes, you might want to scatter it first (though I would avoid this if things work out with either of the approaches above).
[my_dict] = client.scatter([my_dict])
b = b.map(func, d=my_dict)
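To tie those options together, here is a minimal end-to-end sketch of the scatter approach with the distributed scheduler; the function and the dictionary contents are made up for illustration, and the dictionary is passed as a keyword argument to map rather than being captured in a lambda:
import dask.bag as db
from distributed import Client

def func(record, d):
    # look the record up in the shared dictionary
    return d.get(record)

client = Client()

my_dict = {"a": 1, "b": 2}                 # the lookup table
[my_dict] = client.scatter([my_dict])      # ship it to the workers once

b = db.from_sequence(["a", "b", "c"], npartitions=2)
b = b.map(func, d=my_dict)                 # the scattered future is resolved on the workers
print(b.compute())                         # [1, 2, None]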
I am new to ProcessMaker. While exploring it I came across an employee onboarding process. I understood the dynaforms, but I ran into problems with the trigger forms. The triggers reference tables to get data, but I didn't find those tables in my workspace. Please let me know if anyone knows the answer.
Thanks.
@@hrUser = @@USER_LOGGED;
//Get image
$sqlSdocument = "SELECT *
FROM APP_DOCUMENT D, CONTENT C
WHERE APP_UID = '".@@APPLICATION."'
AND D.APP_DOC_UID = C.CON_ID
AND C.CON_CATEGORY = 'APP_DOC_FILENAME'
ORDER BY APP_DOC_CREATE_DATE DESC";
$resSdocument = executeQuery($sqlSdocument);
$dirDocument = $resSdocument[1]['APP_DOC_UID'];
$httpServer = (isset($_SERVER['HTTPS'])) ? 'https://' : 'http://';
@@linkImage = $httpServer.$_SERVER['HTTP_HOST']."/sys".SYS_SYS."/".SYS_LANG."/".SYS_SKIN."/cases/cases_ShowDocument?a=".$dirDocument;
Judging by your code you want to query the uploaded file, but based on the comment you want to get the image?
Well, if employee onboarding is what you want, I am guessing that you uploaded the picture on the first dynaform and you want it to be shown. You're in luck, because the ProcessMaker documentation already covers that.
If you want code that lets you see the image immediately after upload, you might want to sign up for an enterprise trial and test their Employee Onboarding process.
I'm working with an application that has 3 ini files in a somewhat irritating custom format. I'm trying to compile these into a 'standard' ini file.
I'm hoping for some inspiration in the form of pseudocode to help me code some sort of 'compiler'.
Here's an example of one of these ini files. The less-than/greater-than brackets indicate a redirect to another section in the file. These redirects can be recursive, i.e. one redirect can then redirect to another. A redirect can also point to an external file (three values are present in that case). Comments start with a # symbol.
[PrimaryServer]
name = DEMO1
baseUrl = http://demo1.awesome.com
[SecondaryServer]
name = DEMO2
baseUrl = http://demo2.awesome.com
[LoginUrl]
# This is a standard redirect
baseLoginUrl = <PrimaryServer:baseUrl>
# This is a redirect appended with extra information
fullLoginUrl = <PrimaryServer:baseUrl>/login.php
# Here's a redirect that points to another redirect
enableSSL = <SSLConfiguration:enableSSL>
# This is a key that has multiple comma-separated values, some of which are redirects.
serverNames = <PrimaryServer:name>,<SecondaryServer:name>,AdditionalRandomServerName
# This one is particularly nasty. It's a redirect to another file...
authenticationMechanism = <Authenication.ini:Mechanisms:PrimaryMechanism>
[SSLConfiguration]
enableSSL = <SSLCertificates:isCertificateInstalled>
[SSLCertificates]
isCertificateInstalled = true
Here's an example of what I'm trying to achieve. I've removed the comments for readability.
[PrimaryServer]
name = DEMO1
baseUrl = http://demo1.awesome.com
[SecondaryServer]
name = DEMO2
baseUrl = http://demo2.awesome.com
[LoginUrl]
baseLoginUrl = http://demo1.awesome.com
fullLoginUrl = http://demo1.awesome.com/login.php
enableSSL = true
serverNames = DEMO1,DEMO2,AdditionalRandomServerName
authenticationMechanism = valueFromExternalFile
[SSLConfiguration]
enableSSL = <SSLCertificates:isCertificateInstalled>
[SSLCertificates]
isCertificateInstalled = true
I'm looking at using ini4j (Java) to achieve this, but am by no means fixed on using that language.
My main questions are:
1) How can I handle the recursive redirects?
2) How can I best handle the redirects that have an additional string appended, e.g. serverNames?
3) Bonus points for any suggestions about how to handle the external redirects. No big deal if that part isn't working just yet.
So far, I'm able to parse and tidy up the file, but I'm struggling with these redirects.
Once again, I'm only hoping for pseudocode. Perhaps I need more coffee, but I'm really puzzled by this one.
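To make the question more concrete, here's a rough sketch (in Python, since I'm not fixed on any language) of the kind of recursive resolution I have in mind. It assumes the file has already been parsed into a plain dict of dicts called sections; the names are made up and external-file redirects are only stubbed out:
import re

# matches <Section:key> or <File.ini:Section:key>
REDIRECT = re.compile(r'<([^<>]+)>')

def resolve_value(raw, sections, seen=frozenset()):
    # Replace every <...> redirect inside a raw value, recursing until none remain.
    def expand(match):
        parts = match.group(1).split(':')
        if len(parts) == 3:
            # external file redirect -- stubbed out for now
            return lookup_external(*parts)
        section, key = parts
        if (section, key) in seen:
            raise ValueError('circular redirect via ' + match.group(0))
        # recurse, because the target value may itself contain redirects
        return resolve_value(sections[section][key], sections, seen | {(section, key)})
    return REDIRECT.sub(expand, raw)

def lookup_external(filename, section, key):
    # TODO: parse and resolve the other ini file the same way
    return 'valueFromExternalFile'
Run over every key of every section, this should turn fullLoginUrl into http://demo1.awesome.com/login.php and serverNames into DEMO1,DEMO2,AdditionalRandomServerName, as in the expected output above; the external-file part is where I'm still stuck.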
Thanks in advance for any suggestions.
I'm new to this Python/Biopython stuff, so I'm struggling to work out why the following code, pretty much lifted straight out of the Biopython Cookbook, isn't doing what I'd expect.
I'd have thought it would end up with the interpreter displaying two lists containing the same numbers, but all I get is one list and then a message saying TypeError: 'generator' object is not subscriptable.
I'm guessing something is going wrong with the Medline.parse step and the result of the efetch isn't being processed in a way that allows subsequent iteration to extract the PMID values. Or the efetch isn't returning anything.
Any pointers as to what I'm doing wrong?
Thanks
from Bio import Medline
from Bio import Entrez
Entrez.email = 'A.N.Other@example.com'
handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
print(record['IdList'])
items = record['IdList']
handle2 = Entrez.efetch(db="pubmed", id=items, rettype="medline", retmode="text")
records = Medline.parse(handle2)
for r in records:
    print(records['PMID'])
You're trying to print records['PMID'], but records is a generator, not a record. I think you meant to do print(r['PMID']), which will print the 'PMID' entry of the current record dictionary on each iteration. This is confirmed by the example given in the Bio.Medline.parse() documentation.
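For reference, a minimal corrected version of the snippet from the question; only the last line changes (the email address is just the placeholder from the question):
from Bio import Medline
from Bio import Entrez

Entrez.email = 'A.N.Other@example.com'

handle = Entrez.esearch(db="pubmed", term="biopython")
record = Entrez.read(handle)
print(record['IdList'])

items = record['IdList']
handle2 = Entrez.efetch(db="pubmed", id=items, rettype="medline", retmode="text")
records = Medline.parse(handle2)
for r in records:
    print(r['PMID'])  # index the current record r, not the records generator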
My question is just an extension of this thread: http://stackoverflow.com/questions/851636/default-filter-in-django-admin
from myproject.myapp.mymodels import fieldC
class Poll(models.Model):
    fieldA = models.CharField(max_length=80, choices=CHOICES.MyCHOICES)
    fieldB = models.ForeignKey(fieldC)
admin.py
list_display = ('fieldB__fieldc1')
Now my list filter shows four criteria: All, A, B, C.
What I want is: if the superuser is logged in, the filter should show all four criteria (All, A, B, C); if the user is not a superuser, the filter should only show All, A, B.
How can I achieve this?
Here is my actual piece of admin.py
def changelist_view(self, request, extra_context=None):
    referer = request.META.get('HTTP_REFERER', '')
    test = referer.split(request.META['PATH_INFO'])
    if test[-1] and not test[-1].startswith('?'):
        if not request.GET.has_key('patient__patient_type__exact'):
            q = request.GET.copy()
            q['patient__patient_type__exact'] = 'Real'
            request.GET = q
            request.META['QUERY_STRING'] = request.GET.urlencode()
    if not request.user.is_superuser:
        q['patient__patient_type__exact'] = 'Real'
    return super(VisitAdmin, self).changelist_view(request, extra_context)
Thanks in advance
I think the new FilterSpec API in Django 1.4 gives you exactly what you need here. Check out the docs on list_filter. In 1.4 you can now make custom list filters that subclass django.contrib.admin.SimpleListFilter and give you the power to write custom lookup and queryset code, and since the request is passed in you can do a simple conditional with is_superuser.
if request.user.is_superuser:
    # pass one set of lookups
else:
    # pass a different set
Read the example code in the docs carefully and I think it will all be clear.
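For example, a rough sketch for the setup in the question; the patient__patient_type lookup comes from the question's changelist_view code, while the A/B/C values and the filter title are placeholders:
from django.contrib import admin

class PatientTypeListFilter(admin.SimpleListFilter):
    title = 'patient type'
    parameter_name = 'patient_type'

    def lookups(self, request, model_admin):
        # The admin adds the "All" choice automatically.
        if request.user.is_superuser:
            return [('A', 'A'), ('B', 'B'), ('C', 'C')]
        return [('A', 'A'), ('B', 'B')]

    def queryset(self, request, queryset):
        if self.value() is not None:
            return queryset.filter(patient__patient_type__exact=self.value())
        return queryset

class VisitAdmin(admin.ModelAdmin):
    list_filter = (PatientTypeListFilter,)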