Is there a way to make Facebook re-parse a page to get the updated open graph object?
I know about the linter debug tool but I was wondering if there's an API or something to do it programmatically.
According to the Facebook documentation at https://developers.facebook.com/docs/technical-guides/opengraph/defining-an-object/, under "Updating Objects":
curl -X POST \
-F "id={object-url OR object-id}" \
-F "scrape=true" \
"https://graph.facebook.com"
all you have to do is hit http://developers.facebook.com/tools/debug/og/object?q={escaped URL} with an authorized session.
I had to re-lint a couple thousand URLs, so I printed them out as basic links and then used a browser plugin to download them all; it only ended up taking 15 minutes.
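For a large batch, the scrape call from the docs can also be looped from the shell. A minimal sketch; urls.txt, the example URLs, and the dry-run wrapper are my own assumptions, and real calls will typically also need an access_token field:

```shell
# Build a sample URL list (placeholder contents).
printf '%s\n' "https://example.com/page1" "https://example.com/page2" > urls.txt

# POST each URL back to the Graph API with scrape=true.
# RUN=echo keeps this a dry run that prints the commands; clear RUN to execute.
RUN="echo"
while read -r url; do
  $RUN curl -s -X POST \
    -F "id=${url}" \
    -F "scrape=true" \
    "https://graph.facebook.com" >> scrape.log
done < urls.txt
```

With the dry run enabled, scrape.log collects one curl command per URL, which is handy for checking the requests before firing them off for real.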
I'm trying to automatically upload videos to YouTube from my Synology NAS (DS220+).
I've found this link to tokland's youtube-upload.
I've followed all the steps on that GitHub page correctly (I think, but obviously not :)
I think there is an auth problem with Google, and I'm not sure what I'm doing wrong, or maybe there is a better way.
Steps I've taken:
- via SSH, installed google-api-python-client and of course youtube-upload-master.
- created a channel and API credentials at YouTube for client_secrets.json
(Here I think I'm going wrong: I'm not sure what to put in the "Authorized redirect URIs", a.k.a. redirect_uris.)
Below you'll find my client_secrets.json (of course without the real client/project IDs), but the redirect_uris and javascript_origins are as shown (I think this is possibly the problem, but I really don't know how to handle it):
{
    "web":{
        "client_id":"client-id",
        "project_id":"project-id",
        "auth_uri":"https://accounts.google.com/o/oauth2/auth",
        "token_uri":"https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs",
        "client_secret":"client-secret",
        "redirect_uris":["http://localhost"],
        "javascript_origins":["http://localhost"]
    }
}
In the end I was hoping (maybe it's not even possible, but I'm new at this and needed a goal during this corona time) to make a simple batch script that calls this script when a file appears in a folder and uploads it to a YouTube channel.
batch script:
youtube-upload \
--title="test title" \
--description="test description" \
--category="Music" \
--tags="mutter, beethoven" \
--recording-date="2011-03-10T15:32:17.0Z" \
--default-language="en" \
--default-audio-language="en" \
--client-secrets="/volume1/some/folder/client_secrets.json" \
test.mp4
When I run the above code via SSH on the Synology, I'm asked to enter a verification code,
which I must get from the link above the question to "read in" the access token (I think; again, I'm new and trying to understand all this).
When I follow the link, I get this site from Google instead of a code:
I'm really stuck at this moment and open to some new insight.
All I want to do is upload a video to YouTube in a scheduled, automatic way from a Synology NAS.
It didn't sound very complicated when I started, but I couldn't find any good examples to build on.
Does anybody know what I'm doing wrong, or know a better way to do this?
EDIT (for future reference):
After some playing around and help from #stvar, I installed a new secrets file:
{
    "installed":{
        "client_id":"someclientid",
        "project_id":"somecprjid",
        "auth_uri":"https://accounts.google.com/o/oauth2/auth",
        "token_uri":"https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs",
        "client_secret":"somesecret",
        "redirect_uris":["urn:ietf:wg:oauth:2.0:oob","http://localhost"]
    }
}
After this I got another error, which I think I caused myself:
IOError: [Errno 2] No such file or directory: '/var/services/homes/usrname/.youtube-upload-credentials.json'
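That IOError means youtube-upload could not find (or create) its cached OAuth token file under the home directory. One hedged workaround sketch, assuming the tokland build in use supports the --credentials-file option (check youtube-upload --help first); the paths below are placeholders:

```shell
# Cache the OAuth token at an explicit, writable path instead of the default
# location under the (possibly missing) home directory.
CREDS="${HOME}/.youtube-upload-credentials.json"
mkdir -p "$(dirname "$CREDS")"   # make sure the parent directory exists

# RUN=echo keeps this a dry run that prints the command; clear RUN to execute.
RUN="echo"
$RUN youtube-upload \
  --client-secrets="/volume1/some/folder/client_secrets.json" \
  --credentials-file="$CREDS" \
  --title="test title" \
  test.mp4 > upload.cmd
```

On the first real run the tool should walk through the interactive authorization once, then reuse the cached token from that file on later runs.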
Give ytb_up a shot; it is based on Selenium. All you need is to log into your Google account and get a cookie JSON.
https://github.com/wanghaisheng/ytb-up
Features you may need:
1. proxy support, with auto-detection of whether a proxy is needed
2. cookie support, for multiple channels under the same Google account
3. scheduled publishing: you can explicitly specify a date and time for each video, or set a publish policy and a daily publish count. For example, with a daily count of 4 and 5 videos, the first 4 will be published 1 day after the upload date and the remaining 1 two days after.
4. fixes for Google account verification
I am running JIRA 6.1 and I'm trying to use the API to create a new project. Since JIRA 6.1's Rest API doesn't support creating a project I'm using the Soap API.
Note I'm doing this using the Atlassian .net SDK, but I imagine the solution is irrelevant to that.
I have managed to create the project no problem, but I am now trying to set the following schemes in the project:
Issue Type
Workflow
Screens
As far as I can tell the 6.1 Soap API (and the 7 Rest API) doesn't actually allow you to modify these schemes, only allowing you to set the Permission, Security and Notification schemes - https://docs.atlassian.com/jira/REST/latest/#api/2/project-createProject
Is that the case or am I missing something?
If it is possible to set the schemes I want, does anyone have any examples I could base my work off?
Thanks
Got an answer from Atlassian support, and as I suspected this isn't possible.
No, you're correct, the SOAP and REST APIs do not have those
functions.
You're going to need to write type-2 add-ons to provide the functions
you need if you're going to do this remotely, but with the caveat that
if you're willing to do that, you will probably find it a lot easier
to simply write add-ons that do all the work instead of just providing
the external hooks. (Let's put it this way - I was able to code
post-functions to create an entire customised project for JIRA 4 in a
couple of days. Versus a week to add a single SOAP call for feeding
back some simple user data)
I won't mutter too much about using SOAP - I'm assuming you know it's
dead, gone and mostly pointless to code for
Of course, there is the CLI plug-in, which I think it would be silly to ignore:
JIRA Command Line Interface (CLI) support this for 6.1 through 7.0
including setting schemes that are not supported by SOAP or REST
except for screens. See the createProject action for details of what
is supported.
Starting from JIRA 7.0.0, we can use the Create project REST API [POST /rest/api/2/project],
which also allows setting the following schemes while creating the project:
issueSecurityScheme
permissionScheme
notificationScheme
workflowSchemeId
Sample Request Payload:
{
    "key": "EX",
    "name": "Example",
    "projectTypeKey": "business",
    "projectTemplateKey": "com.atlassian.jira-core-project-templates:jira-core-project-management",
    "description": "Example Project description",
    "lead": "Charlie",
    "url": "http://atlassian.com",
    "assigneeType": "PROJECT_LEAD",
    "avatarId": 10200,
    "issueSecurityScheme": 10001,
    "permissionScheme": 10011,
    "notificationScheme": 10021,
    "workflowSchemeId": 10031,
    "categoryId": 10120
}
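A hedged sketch of how a payload like that might be posted with curl; the host, the admin:admin credentials, and the scheme IDs are placeholders, and the IDs must already exist in your instance:

```shell
# Write a trimmed version of the sample payload to a file...
cat > project.json <<'EOF'
{
  "key": "EX",
  "name": "Example",
  "projectTypeKey": "business",
  "lead": "Charlie",
  "permissionScheme": 10011,
  "notificationScheme": 10021,
  "workflowSchemeId": 10031
}
EOF

# ...then POST it. RUN=echo keeps this a dry run that prints the command;
# clear RUN to send the request to a real JIRA instance.
RUN="echo"
$RUN curl -u admin:admin -X POST \
  -H "Content-Type: application/json" \
  --data @project.json \
  "https://jira.example.com/rest/api/2/project" > create.cmd
```

Keeping the payload in a file makes it easy to tweak the scheme IDs without touching the command itself.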
For issue type and screen schemes, there is no such parameter that can be set through the above create-project REST API.
You can also try the following REST endpoint to create a JIRA project from a shared configuration, which allows you to reuse all the schemes present in the template project:
/rest/project-templates/1.0/createshared/{{projectid}}
More information on the Jira rest API can be found at https://docs.atlassian.com/software/jira/docs/api/REST/8.9.0/#api/2/project-createProject
You can try the following curl request for creating a JIRA project:
curl -D- \
-u admin:sphere \
-X POST \
-H "X-Atlassian-Token: nocheck" \
-H "Content-Type: application/x-www-form-urlencoded" \
"http://localhost:port/rest/project-templates/1.0/templates?projectTemplateWebItemKey=com.atlassian.jira-legacy-project-templates%3Ajira-blank-item&projectTemplateModuleKey=com.atlassian.jira-legacy-project-templates%3Ajira-blank-item&name=SECOND+Create+from+REST+API&key=CFRAPI&lead=admin&keyEdited=false"
I found something like this code to send to Telegram-CLI, but I have no idea what it means or how to do it, so if someone could explain it to me step by step I'd be so happy.
https://github.com/psamim/telegram-cli-backup
I couldn't install sqlite3 for some reason with the given code there.
I'm using Windows; do I need to boot into Ubuntu to do it?
Anyway, explain it to me like I'm 3 years old in case I don't know something.
Thanks so much.
If you are using Windows, check the instructions here. I've only tried it in Linux, and the link in your question (using Lua) works.
The following scripts also do the job well.
A Python script to back up everything:
https://github.com/tvdstaaij/telegram-json-backup
Here is a Ruby version of the same: https://github.com/tvdstaaij/telegram-history-dump
The mentioned script has been updated; it now saves the conversations into a CSV file and no longer needs the sqlite3 library. It only needs Lua.
Maybe I can help you a little.
I am using Ubuntu, and I wrote this Bash script:
#!/bin/bash
TOKEN='YourBot:Token'
URL="https://api.telegram.org/bot${TOKEN}"
UPD_URL="${URL}/getUpdates?offset="
OFFSET=0

function get_offset {
    # Poll for updates, then advance the offset past the first update_id found
    res=$(curl -s "${UPD_URL}${OFFSET}")
    OFFSET=$(echo "$res" | grep "update_id" | cut -f 4 -d ':' | cut -f 1 -d ',' | head -1)
    OFFSET=$((OFFSET+1))
}

while :
do
    get_offset
    if echo "$res" | grep -q "message"
    then
        echo "$res" >> BackupChat.txt
    fi
    sleep 1   # be gentle with the API
done
It is a very simple Bash script.
Obviously, you must create your own bot and add the bot to the chat you want to back up.
The bad thing about this script is that it produces a log file that is pretty difficult to read, full of clutter like "username", "date", "::", etc. But it could be improved to produce normal output resembling a proper database.
I hope you have enough Linux skills to do that yourself.
I think the situation has improved since this question was asked, so here is an answer from a 2020s point of view that does not require any programming skills or command-line tools.
To back up (a.k.a. "export") your Telegram chats, download the desktop client available here:
https://desktop.telegram.org/
On Linux, for example, unpack the downloaded file into any subdirectory like ~/tmp/, and start the client from there, like
$ cd ~/tmp/Telegram
$ ./Telegram
You will need to register first with your phone number, like on any other Telegram device, via a confirmation code sent to your already logged-in Telegram account.
The user interface looks similar to the web interface.
Go into a chat you are interested in, then in the upper right menu choose "Export chat history". Click all checkboxes you are interested in, like media files, GIFs, stickers, etc. and click export.
By default it generates a complete HTML file and subdirectory structure under ~/Downloads/Telegram Desktop/, which you can open, for instance, like this:
firefox ~/Downloads/Telegram\ Desktop/ChatExport_01_02_2020/messages.html
If you need a more complete backup of all chats, you can go to the central menu (3 small bars) on the top left, then "Settings" -> "Advanced" -> under "Data and Storage" choose "Export Telegram data". There you can also tick checkboxes for what you are interested in. Near the bottom is a selection between HTML for humans and machine-readable JSON.
On the very first export request, you are required to confirm the request from another Telegram instance, to prevent misuse. Once you have confirmed, e.g. from your mobile phone, you can go to export again and proceed as described above without any further confirmation.
I have a Rails app that has a Picture model using the carrier-wave gem to handle image upload/saving.
Eventually, I plan to have an iOS app POST an image to the Picture model's controller / create action.
Before that, I'd like to test some things locally and simulate the POST event.
Can I do this by encoding/POSTing via the OSX Terminal? I imagine I need to encode the image file (into binary?) and POST it to the controller/action.
The easiest way to simulate this is using the command line utility curl. You can do something like:
curl -X POST -F field1=value1 -F file=@path/to/file.jpg http://example.org/pictures
The -F options allow you to set form field values. For example, your controller might expect a couple of form fields to be submitted along with the file upload; you can pass multiple -F name=value options. If the value starts with an @, curl reads the value from a file and attaches it as a file upload (such as the image you want to upload).
The -X POST makes curl send a POST request to the server. It is probably not strictly necessary, because curl switches to POST automatically when you submit form data with -F, but it won't hurt anything either.
curl is a very powerful tool. You can get additional information by typing man curl in your OSX Terminal window. It has a lot of options and can handle just about any situation you throw at it.
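Applied to the Rails/CarrierWave setup in the question, the request could look like the sketch below. Rails conventionally nests attributes under the model name, so the picture[image] parameter, the port, and the /pictures route are all assumptions here, not confirmed details of the app:

```shell
# Create a stand-in file so the form has something to attach.
printf 'fake-jpeg-bytes' > photo.jpg

# RUN=echo keeps this a dry run that prints the command; clear RUN to
# actually hit a running Rails server.
RUN="echo"
$RUN curl -X POST \
  -F "picture[title]=Test upload" \
  -F "picture[image]=@photo.jpg" \
  "http://localhost:3000/pictures" > request.cmd
```

In the controller, the uploaded file should then show up under params[:picture][:image] and can be assigned to the CarrierWave-mounted attribute.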
I was going through Twitter web pages for my project and found this problem.
E.g.
Web page: https://twitter.com/SrBachchan
Page source (when viewed in the browser by right-clicking):
view-source:https://twitter.com/SrBachchan
I downloaded the source code with the curl command. The downloaded source code (through curl) is different from the original source code.
I also tried downloading the source code using Python (urllib2.urlopen); it is the same as the one obtained by curl.
Can anyone throw some light on this?
I found the solution myself.
One needs to add the header "Accept-Language: en" to get the source code in the expected language.
E.g. curl --header "Accept-Language: en" https://twitter.com/SrBachchan would do the job.