Is it possible to send GET and POST requests using curl from a Linux terminal?
Essentially, I'm trying to understand what the developers are doing in this tutorial, but I don't understand how to add additional GET parameters to their example.
For instance, in their tutorial (http://valeriobasile.github.io/candcapi/), they use the example of:
curl -d 'Every man loves a woman' 'http://gingerbeard.alwaysdata.net/candcapi/proxy.php/raw/pipeline?semantics=fol'
It works, but I want a graphical representation of the output. They also mention this in their documentation:
"An entry point to generate a PNG image...
$CANDCAPI/drg
"The URL accepts the same GET parameter as pipeline."
So I tried sending this, but it doesn't return a PNG file:
curl -d 'Every man loves a woman&pipeline' 'http://gingerbeard.alwaysdata.net/candcapi/proxy.php/raw/pipeline?semantics=drs&roles=verbnet'
This one actually breaks their system:
curl -d 'Every man loves a woman' 'http://gingerbeard.alwaysdata.net/candcapi/proxy.php/raw/pipeline?semantics=drs&roles=verbnet%20%#d%2dbox'
So my question is: I'm not sure what the developers mean when they say the URL accepts the same GET parameters, and how would I add them in? Please let me know if you have any ideas, thanks.
EDIT 1 -
I tried using drg instead of pipeline, but it returns a message saying "not found":
curl -d 'Every man loves a woman' 'http://gingerbeard.alwaysdata.net/candcapi/proxy.php/drg?semantics=fol'
I read through the GitHub documentation, and at the bottom it describes how to obtain graphical output:
The C&C/Boxer API provides an entry point to generate a PNG image of the DRG of a given text:
$CANDCAPI/drg
The URL accepts the same GET parameter as pipeline and returns a raw PNG file.
Based on this, your GET URL should look something like this:
http://gingerbeard.alwaysdata.net/candcapi/proxy.php/drg?semantics=fol
Using the "same" GET parameters means that whatever you passed to the pipeline after the question mark can also be passed to the drg web service.
Modified answer:
Here's the correct command from the same example:
curl -d 'Every man loves a woman' 'http://gingerbeard.alwaysdata.net/candcapi/drg?semantics=fol'
And for people who are beginners like me, here's how to save the PNG file (use > rather than >>, since appending to an existing file would corrupt the image):
curl -d 'Every man loves a woman' 'http://gingerbeard.alwaysdata.net/candcapi/drg?semantics=fol' > image.png
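Alternatively, curl's -o option writes the response body straight to a file, so no shell redirection is needed:
curl -d 'Every man loves a woman' -o image.png 'http://gingerbeard.alwaysdata.net/candcapi/drg?semantics=fol'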
This is a YouTube channel URL that includes Cyrillic characters in the username:
https://www.youtube.com/c/%D0%9B%D1%83%D1%87%D1%88%D0%B8%D0%B5%D0%B4%D0%BE%D0%BA%D1%83%D0%BC%D0%B5%D0%BD%D1%82%D0%B0%D0%BB%D1%8C%D0%BD%D1%8B%D0%B5%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B/videos
I am trying to obtain the channel's ID from the URL by calling the YouTube Data API v3:
https://www.googleapis.com/youtube/v3/channels?key=[YouTubeAPIkey]&forUsername=%D0%9B%D1%83%D1%87%D1%88%D0%B8%D0%B5%D0%B4%D0%BE%D0%BA%D1%83%D0%BC%D0%B5%D0%BD%D1%82%D0%B0%D0%BB%D1%8C%D0%BD%D1%8B%D0%B5%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B&part=id
But the call returns no data.
For reference, "https://www.youtube.com/c/besogontv/videos" returns a valid result:
https://www.googleapis.com/youtube/v3/channels?key=[YouTubeAPIkey]&forUsername=besogontv
Just to see if it would work, I tried decoding the URL encoding and then re-encoding it as UTF-8, but it made no difference.
Is there some character encoding issue I'm missing?
If you issue the following command (at any GNU/Linux bash prompt):
$ wget \
--quiet \
--output-document=- \
--content-on-error \
"https://www.googleapis.com/youtube/v3/channels?key=$APP_KEY&id=UCk8LWzqGcHz21FWysiXuCHw&part=brandingSettings,contentDetails,id,snippet,statistics,status,topicDetails&maxResults=1"
you'll see that лучшиедокументальныефильмы is not the channel's user name, but its customUrl!
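If you have the jq JSON processor installed (an extra assumption on my part), you can pull exactly those two fields out of the channels.list response:
# print the channel's id and its customUrl (requires jq)
wget --quiet --output-document=- \
  "https://www.googleapis.com/youtube/v3/channels?key=$APP_KEY&id=UCk8LWzqGcHz21FWysiXuCHw&part=snippet" |
  jq -r '.items[0].id, .items[0].snippet.customUrl'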
The forUsername parameter does not work with a given channel's custom URL, since these URLs are not guaranteed to uniquely identify any given channel.
You can convince yourself by searching Google's issue tracker for either of the phrases "channels forusername" or "vanity URL" and reading the terse official responses users got from Google's staff.
Indeed, the official docs and staff responses at times lack clear-cut specifications and formulations. (I have run into this myself!)
As a final note, you could scrape the channel ID you're after out of the HTML page obtained from https://www.youtube.com/c/лучшиедокументальныефильмы, but please bear in mind that this activity is forbidden by Google, as per its DTOS docs:
Scraping
You and your API Clients must not, and must not encourage, enable, or require others to, directly or indirectly, scrape YouTube Applications or Google Applications, or obtain scraped YouTube data or content. Public search engines may scrape data only in accordance with YouTube's robots.txt file or with YouTube's prior written permission.
Instead of scraping, I'd recommend using the Search.list API endpoint, invoked with the q parameter set to лучшиедокументальныефильмы and the type parameter set to channel (if you can cope with the fuzziness a search implies).
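For instance, a minimal curl sketch of that Search.list call, reusing the percent-encoded name from above (substitute your own application key for $APP_KEY):
# each matching item carries the channel ID under items[].id.channelId
curl --silent \
  "https://www.googleapis.com/youtube/v3/search?key=$APP_KEY&part=snippet&type=channel&maxResults=5&q=%D0%9B%D1%83%D1%87%D1%88%D0%B8%D0%B5%D0%B4%D0%BE%D0%BA%D1%83%D0%BC%D0%B5%D0%BD%D1%82%D0%B0%D0%BB%D1%8C%D0%BD%D1%8B%D0%B5%D1%84%D0%B8%D0%BB%D1%8C%D0%BC%D1%8B"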
Update, upon answering a related SO question:
Here is a simple Python3 script implementing the functionality that you're looking for. Applying your custom URL to this script produces the expected result:
$ python3 youtube-search.py \
--custom-url Лучшиедокументальныефильмы \
--app-key ...
UCk8LWzqGcHz21FWysiXuCHw
$ python3 youtube-search.py \
--user-name Лучшиедокументальныефильмы \
--app-key ...
youtube-search.py: error: user name "Лучшиедокументальныефильмы": no associated channel found
Note that you have to pass your application key to this script as the argument to the command-line option --app-key (use --help for brief help info).
I see a lot of useful methods in the API, but I can't find any method to list all my posts, or all the posts within a publication. Is this intentional?
I thought it would be an obvious thing for the API to have. Or am I missing something?
Got it, just use the RSS feed instead.
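For the record, Medium serves RSS feeds at predictable URLs, so plain curl is enough (here @nicolaerusan, taken from the example further down, stands in for any username, and publication-slug is a placeholder):
# a user's feed
curl -s "https://medium.com/feed/@nicolaerusan"
# a publication's feed
curl -s "https://medium.com/feed/publication-slug"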
I wrapped a GitHub package by mark-fasel into a Clay microservice that enables you to do exactly this:
Simplified Return Format:
https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
What Medium actually returns at the endpoint:
https://clay.run/services/nicoslepicos/medium-get-users-posts
I put together a little fiddle, since a user was asking how to use the endpoint in HTML to get the titles of their last 3 posts:
https://jsfiddle.net/h405m3ma/1/
You can call the API as:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"nicolaerusan"}' https://clay.run/services/nicoslepicos/medium-get-users-posts-simple
You can also use it easily in your Node code via the clay-client npm package and just write:
const Clay = require('clay-client'); // the clay-client npm package mentioned above
Clay.run('nicoslepicos/medium-get-users-posts-simple', { username: 'usernameValue' })
  .then((result) => {
    // Do what you want with the returned result
    console.log(result);
  });
If you need to generally pull down an RSS feed, here's a microservice for that:
https://clay.run/services/nicoslepicos/rss-to-json
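Presumably it's called the same way as the service above; a sketch (note that the rssUrl parameter name is my guess, not something documented here):
# rssUrl is a hypothetical parameter name
curl -i -H "Content-Type: application/json" -X POST \
  -d '{"rssUrl":"https://medium.com/feed/@nicolaerusan"}' \
  https://clay.run/services/nicoslepicos/rss-to-json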
So I just understood that the Slack Web API does not support JSON data over POST, which means I have to encode my complex and nested JSON object to fit in query parameters over GET. Problem is, the attachments don't seem to work. Does anyone have a solution?
So I just understood that the Slack Web API does not support JSON data over POST, which means I have to encode my complex and nested JSON object to fit in query parameters over GET.
I'm not sure I follow what you mean. You can certainly use POST. The body of a Slack API call should be form-encoded, but parameter values are sometimes JSON (as is the case for attachments).
Here's a working curl command that uses HTTP POST to post a message with a simple attachment.
$ curl -d token=<REDACTED> -d channel=<REDACTED> \
-d text="This is the main text." \
-d attachments='[{"text": "This is an attachment."}]' \
https://slack.com/api/chat.postMessage
I'd recommend using POST, but GET also works fine. If you fill in the values in https://api.slack.com/methods/chat.postMessage/test, the tool will give you a URL at the bottom that you can use with HTTP GET.
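Such a GET call looks like this; it's the same message as above, with the attachments JSON percent-encoded into the query string:
curl "https://slack.com/api/chat.postMessage?token=<REDACTED>&channel=<REDACTED>&text=This%20is%20the%20main%20text.&attachments=%5B%7B%22text%22%3A%20%22This%20is%20an%20attachment.%22%7D%5D"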
I have a Ruby on Rails based API which accepts a GET request.
Example:
http://localhost:3000/api/search?query=whatis&access_token=324nbkjh3g32423
When I do curl from the Mac terminal like
curl http://localhost:3000/api/search?query=whatis&access_token=324nbkjh3g32423
I checked on the server with request.fullpath; it returns only "/api/search?query=whatis", so the second parameter is missing.
However, if I do curl like
curl --data "query=whatis&access_token=324nbkjh3g32423" http://localhost:3000/api/search
it takes all the parameters.
I understand there is a problem with encoding, but I want to know what the difference is between the two requests.
Thanks in advance
The problem is probably that the bash shell sees & as the end of the command: everything before the & is run in the background, so curl only receives the URL up to the first &, and access_token=... is executed as a separate shell command.
Try quoting the entire query string, like this:
curl "http://localhost:3000/api/search?query=whatis&access_token=324nbkjh3g32423"
I want to translate large numbers of short URLs coming streamed from Twitter. Rather than resolving each one individually, I want to use APIs that accept a list of short or tiny URLs and return the original URLs. Are such APIs available?
99% of all URL shorteners have an API.
For example, there's a PEAR package (PHP) called Services_ShortURL that supports:
bit.ly
digg
is.gd
short.ie
tinyurl.com
Not really an API, but this will give you the URL really fast:
curl -sI "$SHORT_URL" | grep -i '^location' | awk '{print $2}'
There are a few websites around that are dedicated to converting shortened URLs back to their originals.
Two I know of that have APIs are LongURL and Untiny.me. I'm in the middle of writing a Java library to use both of these.
I wrote a small script to turn short URLs into their original links. It's based on the HTTP headers returned by the short URLs.
Untiny.me's online service was useful for this:
http://untiny.me/api/1.0/extract/?format=text&url=bit.ly/GFscreener12
So conceivably a simple Bash script reading each line as a short URL would work:
#!/bin/bash
# urlexpander.sh by MarcosK
while read -r URLline; do
  curl -s "untiny.me/api/1.0/extract/?format=text&url=$URLline"
done
To test, feed it a single URL with echo "bit.ly/GFscreener12" | ./urlexpander.sh
or send it your whole input file, one short URL per line, with:
cat urllist.txt | ./urlexpander.sh
Have a look at the bit.ly API or the budurl.com API.
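For example, bit.ly's v3 expand endpoint takes a shortUrl parameter and can answer in plain text (a sketch against the v3 API; ACCESS_TOKEN is a placeholder, and the endpoint may have changed since):
curl -s "https://api-ssl.bitly.com/v3/expand?access_token=ACCESS_TOKEN&format=txt&shortUrl=http%3A%2F%2Fbit.ly%2FGFscreener12"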