storing httpie output with colors in document - httpie

I would like to use httpie to generate documentation for a REST web service. The idea would be to have a text file containing sample requests with comments:
'ping the server','http -v get :8080/ping'
'submit document','http -v post :8080/document name=asdf'
A script would then execute the requests and capture the nicely formatted output in a document.
Is there a way to do that?

You could also use the Pygments CLI (pip install pygments). That should produce cleaner HTML, and it also gives you the option to choose from the many Pygments styles.
{
# Stylesheet:
echo '<style>'
pygmentize -S default -f html
echo '</style>'
# Request HTTP headers as HTML:
http --print=H httpbin.org/post hello=world | pygmentize -f html -l http /dev/stdin
# JSON request body as HTML:
http --print=B httpbin.org/post hello=world | pygmentize -f html -l json /dev/stdin
} > request.html
Output:
<style>
…
</style>
<div class="highlight"><pre><span class="nf">POST</span> <span class="nn">/post</span> <span class="kr">HTTP</span><span class="o">/</span><span class="m">1.1</span>
<span class="na">Content-Length</span><span class="o">:</span> <span class="l">18</span>
<span class="na">Accept-Encoding</span><span class="o">:</span> <span class="l">gzip, deflate</span>
<span class="na">Accept</span><span class="o">:</span> <span class="l">application/json</span>
<span class="na">User-Agent</span><span class="o">:</span> <span class="l">HTTPie/0.8.0</span>
<span class="na">Host</span><span class="o">:</span> <span class="l">httpbin.org</span>
<span class="na">Content-Type</span><span class="o">:</span> <span class="l">application/json; charset=utf-8</span>
</pre></div>
<div class="highlight"><pre><span class="p">{</span><span class="nt">"hello"</span><span class="p">:</span> <span class="s2">"world"</span><span class="p">}</span>
</pre></div>
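To tie this back to the original comment/command list, a small driver script could loop over the pairs and emit one highlighted section per request. A rough sketch, assuming a requests.txt in the comment,command format from the question (the http/pygmentize calls are commented out here, since they need a running server and Pygments installed):

```shell
#!/bin/sh
# Rough sketch of a doc generator. Each line of requests.txt holds
# "comment,command". The actual http | pygmentize calls (as in the
# answer above) are left commented out: they need a reachable server
# and Pygments installed.
cat > requests.txt <<'EOF'
ping the server,http -v get :8080/ping
submit document,http -v post :8080/document name=asdf
EOF

{
  echo '<style>'
  # pygmentize -S default -f html
  echo '</style>'
  while IFS=, read -r comment cmd; do
    printf '<h2>%s</h2>\n' "$comment"
    # eval "$cmd" | pygmentize -f html -l http   # run request, highlight output
  done < requests.txt
} > request-docs.html
```

With the commented lines enabled, each request's highlighted output lands under its own heading in request-docs.html.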

I don't know a way to do it via httpie itself, but there is a way to capture formatted terminal output from bash as HTML: see this question, or use the HTML::FromANSI Perl module, or the aha tool. There are many similar tools; choose the one that suits you best.

Related

Unable to trigger Jenkins job using API Token command-line curl

I'm new to this and not sure if I'm doing it right.
My requirement is to trigger (build) a Jenkins job from the command line, which I will invoke using Ansible.
I'm following the instructions in this Stack Overflow link.
I followed the steps below, but unfortunately the build does not get triggered.
Step 1:
I logged in to the Jenkins portal https://myjenkins.com:9043 with my user ID user114, which is also a Jenkins administrator.
I then created an API token for my user ID: 118f32aa48601c136d29y11f3dd0e107f5.
Step 2:
I then selected the Trigger builds remotely option for job5 and gave the token name as 118f32aa48601c136d29y11f3dd0e107f5.
Step 3: I then created Jenkins-Crumb using the below command:
`curl -s -k 'https://user114:118f32aa48601c136d29y11f3dd0e107f5@myjenkins.com:9043/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)'
Jenkins-Crumb:5196c183ad95cf3c0482873e34236f3b78ab628ecba968086cd19s7430016a4e`
Then I tried the commands below, intending to trigger the build for job5, but none of them triggered my Jenkins build.
Attempt 1:
$ curl -I -k -X POST https://user114:118f32aa48601c136d29y11f3dd0e107f5@myjenkins.com:9043/job/job5/build -H "Jenkins-Crumb:5196c183ad95cf3c0482873e34236f3b78ab628ecba968086cd19s7430016a4e"
HTTP/1.1 400 Bad Request
X-Content-Type-Options: nosniff
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 481
Server: Jetty(NOTHING)
Attempt 2:
$ curl -k -X POST https://user114:118f32aa48601c136d29y11f3dd0e107f5@myjenkins.com:9043/job/job5/build -H "Jenkins-Crumb:5196c183ad95cf3c0482873e34236f3b78ab628ecba968086cd19s7430016a4e"
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 Nothing is submitted</title>
</head>
<body><h2>HTTP ERROR 400 Nothing is submitted</h2>
<table>
<tr><th>URI:</th><td>/job/job5/build</td></tr>
<tr><th>STATUS:</th><td>400</td></tr>
<tr><th>MESSAGE:</th><td>Nothing is submitted</td></tr>
<tr><th>SERVLET:</th><td>Stapler</td></tr>
</table>
<hr>Powered by Jetty:// NOTHING<hr/>
</body>
Attempt 3:
$ curl -k -X POST https://user114:118f32aa48601c136d29y11f3dd0e107f5@myjenkins.com:9043/job/job5/build?token=118f32aa48601c136d29y11f3dd0e107f5 -H "Jenkins-Crumb:5196c183ad95cf3c0482873e34236f3b78ab628ecba968086cd19s7430016a4e"
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 This page expects a form submission</title>
</head>
<body><h2>HTTP ERROR 400 This page expects a form submission</h2>
<table>
<tr><th>URI:</th><td>/job/job5/build</td></tr>
<tr><th>STATUS:</th><td>400</td></tr>
<tr><th>MESSAGE:</th><td>This page expects a form submission</td></tr>
<tr><th>SERVLET:</th><td>Stapler</td></tr>
</table>
<hr>Powered by Jetty:// NOTHING<hr/>
</body>
</html>
Further, I also intend to pass parameters to this Jenkins job, which I don't know how to do.
Can you please suggest?
I started using the Generic Webhook Trigger plugin for this. It takes away a lot of the hassle and lets you parse any JSON/XML payload as you please.
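For the original token-based approach: the 400 "Nothing is submitted" / "This page expects a form submission" responses typically mean the job is parameterized, and parameterized jobs are triggered through the buildWithParameters endpoint rather than /build. A rough sketch using the question's placeholder host, job, and tokens (the live curl calls are commented out since they need a reachable Jenkins; recent Jenkins versions don't require a crumb when you authenticate with an API token):

```shell
# Placeholder values from the question; substitute your own.
JENKINS='https://myjenkins.com:9043'
AUTH='user114:118f32aa48601c136d29y11f3dd0e107f5'   # user:API-token, passed to curl --user

# Plain (non-parameterized) trigger; token= is the job's
# "Trigger builds remotely" token name:
#   curl -k -X POST --user "$AUTH" "$JENKINS/job/job5/build?token=TOKEN_NAME"

# Parameterized trigger; the job must define PARAM1:
URL="$JENKINS/job/job5/buildWithParameters?token=TOKEN_NAME&PARAM1=value1"
#   curl -k -X POST --user "$AUTH" "$URL"
echo "$URL"
```

Quoting the URL matters: unquoted, the shell would treat everything after & as a background command.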

curl part in URL with awk or sed

I have part of the output of a curl command, like:
<span class="h2">Newest Version 2.1.4</span>
(The result is longer, but this excerpt should do.)
I want to have a sed or awk statement that leaves only
2.1.4
What is the most "dynamic" way to do that? Is it possible to filter only the numbers after the word "Version", up to the closing tag?
Try this using xmllint:
curl ...... |
xmllint --html --xpath '//span[@class="h2"]/text()' - |
grep -oP 'Newest Version \K.*'
the most "dynamic" way is not the case for that. You need the most robust and flexible way.
xmlstarlet solution:
xmlstarlet sel -t -v 'substring(//span[@class="h2"]/text(), 16)' -n input.html
The output:
2.1.4
Always use XML/HTML parsers when dealing with XML/HTML data.
You can use the following command, piped from whatever fetches your HTML file (curl or something else):
xmllint --html --xpath 'substring-after(//span[@class="h2"],"Newest Version ")' -
Explanations:
--html to activate the HTML parser mode
--xpath to evaluate an xpath expression, here the xpath expression is:
'substring-after(//span[@class="h2"],"Newest Version ")' selects, from all the span elements, the ones with the attribute class="h2", and then takes the substring of that node after Newest Version
Last but not least, the - at the end is important: it makes xmllint read from stdin instead of from a file.
Test:
$ echo '<span class="h2">Newest Version 2.1.4</span>' | xmllint --html --xpath 'substring-after(//span[@class="h2"],"Newest Version ")' -
2.1.4
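That said, if you only need a quick-and-dirty one-liner and accept that it breaks as soon as the markup changes, a plain sed substitution also works on this sample:

```shell
# Fragile compared to an HTML parser: assumes "Newest Version X.Y.Z"
# appears literally in the line.
echo '<span class="h2">Newest Version 2.1.4</span>' |
  sed -n 's/.*Newest Version \([0-9.]*\).*/\1/p'
```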

simple filtering with `grep`, `awk`, `sed` or whatever else is capable

I have a file, each line of which can be described by this grammar:
<text> <colon> <fullpath> <comma> <"by"> <text> <colon> <text> <colon> <text> <colon> <text>
Eg.,
needs fixing (Sunday): src/foo/io.c, by Smith : in progress : <... random comment ...>
How do I get the <fullpath> portion, which lies between the first <colon> and the first <comma>?
(I'm not very inclined to write a program to parse this, though this looks like it could be done easily with javacc. Hoping to use some built-in tools like sed, awk, ...)
Or with a regex substitution:
sed -n 's/^[^:]*:\([^:,]*\),.*/\1/p' file
This is GNU sed dialect; on another platform you may need an -E option and/or to drop the backslashes before the round parentheses, or just go with Perl instead:
perl -nle 'print $1 if m/:(.*?),/' file
Assuming the input will be similar to what you have above:
awk '{print $4}' | tr -d ,
This relies on the path being the fourth whitespace-separated field. To process the entire file, just add the file name after the awk program: awk '{print $4}' file | tr -d ,
If you're parsing this in a bash script, you don't even need tools like awk or sed:
$ text="needs fixing (Sunday): src/foo/io.c, by Smith : in progress : <... comment ...>"
$ text=${text%%,*}
$ text=${text#*: }
$ echo "$text"
src/foo/io.c
Read about this in the bash man page under Parameter Expansion.
With GNU grep:
grep -oP '(?<=: ).*?(?=,)'
This may find more than one substring if there are subsequent commas in the line.
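For a quick sanity check, the sed approach can be exercised directly on the sample line (with a small tweak, `: *`, that also strips the space after the colon):

```shell
line='needs fixing (Sunday): src/foo/io.c, by Smith : in progress : <... random comment ...>'
# Capture everything between the first colon and the first comma,
# trimming the spaces that follow the colon.
printf '%s\n' "$line" | sed -n 's/^[^:]*: *\([^:,]*\),.*/\1/p'
```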

Script to rename '../foo' to asset_path('foo')

This is not limited to Rails developers, but I would assume it's pretty common to have to do this since Rails 3.1.
I'm looking for a script/some method of replacing everything of the form
'../foo/BLAHBLAH'
to <%= asset_path 'BLAHBLAH' %>
where foo is the name of the asset type, so it can be either images or fonts.
Anyone have experience with this?
You can do this with a global search and replace.
In TextMate you can hit Command-Shift-F to enter a project-wide search. Then search for \.\.\/images\/(.*?)[\)'"] and replace it with <%= asset_path('$1') %>
With find and sed it's a simple one-liner:
find PROJECT_DIR -type f -name "*.html" -exec sed -i -e 's/\.\.\/images\/\([^)'\''"]*\)/<%= asset_path("\1") %>/g' {} \;
And in Vim you can do (using Vim's \{-} for a non-greedy match and keeping the closing delimiter via \2):
:args ./**
:argdo %s/\.\.\/images\/\(.\{-}\)\([)'"]\)/<%= asset_path('\1') %>\2/g
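As a sanity check before running anything project-wide, a variant of the sed substitution that matches the surrounding quotes (so the ERB tag replaces them cleanly) can be tried on a single line first:

```shell
# Match the quoted '../images/...' path, quotes included, and replace
# the whole thing with an asset_path ERB tag.
echo "background: url('../images/logo.png');" |
  sed "s/'\.\.\/images\/\([^']*\)'/<%= asset_path('\1') %>/g"
```

The same pattern with images swapped for fonts covers the other asset type from the question.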

How do I get the data stream link for any YouTube video?

I'm not sure if this is the right place to post this question. I googled a lot about this, but nothing turned up. For a link of the form
http://www.youtube.com/watch?v=[video_id]
How do I get the link for the data stream?
The following bash script will retrieve YouTube streaming URLs. I know this is outdated, but maybe it will help someone.
#!/bin/bash
[ -z "$1" ] && printf "usage: `basename $0` <youtube_url>\n" && exit 1
ID="$(echo "$1" | grep -o "v=[^\&]*" | sed 's|v=||')"
URL="$(curl -s "http://www.youtube.com/get_video_info?&video_id=$ID" | sed -e 's|%253A|:|g' -e 's|%252F|/|g' -e 's|%253F|?|g' -e 's|%253D|=|g' -e 's|%2525|%|g' -e 's|%2526|\&|g' -e 's|\%26|\&|g' -e 's|%3D|=|g' -e 's|type=video/[^/]*&sig|signature|g' | grep -o "http://o-o---preferred[^:]*signature=[^\&]*" | head -1)"
[ -z "$URL" ] && printf "Nothing was found...\n" && exit 2
echo "$URL"
Here's a quick lesson in reverse-engineering the YouTube page to extract the stream data.
In the HTML you'll find a <script> tag which defines a variable "swfHTML" - it looks like this: "var swfHTML = (isIE) ? "...
The text in the quotes that follows that snippet is the HTML that displays the Flash object. Note, this text is a set of broken-up strings that get concatenated, so you'll need to clean it up (i.e. strip the instances of '" + "' and the escaping backslashes) in order to get the HTML string.
Once it's clean, you'll need to find the <param> tag with name="flashvars"; the value of this tag is an &-delimited URL. Do a split on the & and you'll get your key-value pairs for all the data relating to this video.
The main key you're looking for is "fmt_url_map"; it's a URL-encoded string of comma-separated values starting with "35|" or "34|" or similar. (These are defined in another key, "fmt_list", to be files of resolution 854x480 for 35, 640x360 for 34, etc.)
Each channel provides RSS data, which is not updated immediately.
Here is a generator for YouTube RSS files. You should be able to deduce the location of the video files based on the RSS information. The FLV files should be streamable, but other formats are also provided.
EDIT:
http://www.referd.info/ is no longer available. It was basically a service where you provided the YouTube link and it dereferenced it, finding all possible download sources for that video. I am sure such services are still out there; this one isn't anymore.
You need to open a link like this:
http://www.youtube.com/get_video_info?&video_id=OjEG808MfF4
and find your stream URL in the returned data.
