I am currently writing a report and implemented a glossary using the
\usepackage{glossaries}
Everything was working great a few days ago:
each word that required a definition in my text was declared in my glossary and linked with the \acrshort{} command.
For instance:
\usepackage{glossaries}
\makeglossaries
\newacronym{dms}{DMS}{DMS meaning} % acronyms must be defined before their first use
\section{Example}
bla bla bla \acrshort{dms} bla bla bla
which led to the expected result: the acronym appeared in the text and the entry appeared in the glossary, with clickable links (the page numbers and the word point back to the word and its definition).
However, now the same output appears without any links.
I genuinely do not understand why it suddenly stopped working.
I tried removing the hyperref and url packages, but nothing changed.
Solution found:
For some very very VERY obscure reason, loading \usepackage{hyperref} before \usepackage{glossaries} fixed it.
Guess from now on I'll stop ordering my packages alphabetically .. x)
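For reference, a minimal compilable sketch with the working load order; this matches the glossaries manual, which asks for hyperref to be loaded first (compile with pdflatex, then makeglossaries, then pdflatex again):
\documentclass{article}
\usepackage{hyperref} % load hyperref BEFORE glossaries
\usepackage{glossaries}
\makeglossaries
\newacronym{dms}{DMS}{DMS meaning} % define acronyms before their first use
\begin{document}
\section{Example}
bla bla bla \acrshort{dms} bla bla bla
\printglossaries
\end{document}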
In the Jenkins classic interface it is possible to create a link in the console output, but I can't see how to make something clickable in the Blue Ocean interface.
You can get the solution here:
How to display a hyperlink in hudson/jenkins build output console
You can also use a workaround, which is more customisable than the method mentioned above.
After your build finishes, you can have it send a mail with the report link.
You can do it through the Jenkinsfile or through the job configuration.
The Jenkinsfile Groovy script is as follows:
script {
    emailext subject: ' REPORT-Version/' + "${G_BRANCH}" + ' $DEFAULT_SUBJECT',
        body: "Click the link below to show Mocha Testing Results for your current build:<br><br>BAE 4.5 Release Report",
        attachLog: true,
        replyTo: '$DEFAULT_REPLYTO',
        to: '$DEFAULT_RECIPIENTS'
}
Through Job configuration you can go to Job --> Configure --> Post Build Actions --> Editable Email Notifications.
Well, it works almost out of the box. Blue Ocean automatically recognizes URL patterns and transforms them into clickable links. For example:
echo "CUCUMBER: ${BUILD_URL}cucumber-html-reports/overview-features.html"
Unfortunately, it looks like it isn't possible to create a clickable link with arbitrary text.
I want to add an image as a header in the upper right corner of every page of my Word document, using R Markdown with RStudio.
I found a solution accepted by the community here, but only the relevant LaTeX command is described; there is no working example. The offered solution did not work for me.
Therefore I provide my sample code:
---
title: "2014 Report"
author: "My Name"
date: "Monday, October 06, 2014"
output:
  word_document:
fontsize: 10pt
header-includes:
- \usepackage{wallpaper}
abstract: This is my abstract
---
Test
\ThisURCornerWallPaper{0.1}{header.png}
Here is some text.
That code produced a document without the picture included. Can somebody provide me with an actually working example (other ideas that do the job are also welcome)?
In my application I am sending mail to various users. The mail has an .ics file attached. But when a user tries to open the file in Office 365, an error pops up which says
'The .ICS attachment can't be viewed because the format is not supported'.
Please see the .ics file I have used below:
BEGIN:VEVENT
DTSTAMP:20170322T064351Z
DTSTART;TZID=America/Denver:20170323T110000
DTEND;TZID=America/Denver:20170323T113000
SUMMARY:WAND: Test Summary
TZID:America/Denver
LOCATION:
UID:20170322T064351Z-1#fe80:0:0:0:0:100:7f:fffe%12
DESCRIPTION:Candidate Name: Test User\nContact Phone Number: 1256355
END:VEVENT
END:VCALENDAR
The issue started when I added the Timezone parameter recently. It works if I remove the Timezone parameter.
That is, if I replace
DTSTART;TZID=America/Denver:20170323T110000
DTEND;TZID=America/Denver:20170323T113000
with the following:
DTSTART:20170323T110000
DTEND:20170323T113000
the issue does not occur. But I need the timezone.
Do I need to add any additional elements for the timezone parameter?
Please suggest.
The ics stream shown in your example seems to be truncated (it is missing at least the BEGIN:VCALENDAR), but assuming that is present in your actual ics, you are also supposed to include a VTIMEZONE component (before the BEGIN:VEVENT) that corresponds to the TZID=America/Denver used in your DTSTART/DTEND.
See for example the second example at https://www.rfc-editor.org/rfc/rfc5545#section-4
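For illustration, a VTIMEZONE for America/Denver following the current US daylight-saving rules might look like this sketch (offsets and recurrence rules taken from the IANA tz database; verify them for your requirements). It goes between BEGIN:VCALENDAR and BEGIN:VEVENT:
BEGIN:VTIMEZONE
TZID:America/Denver
BEGIN:DAYLIGHT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
TZOFFSETFROM:-0700
TZOFFSETTO:-0600
TZNAME:MDT
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
TZOFFSETFROM:-0600
TZOFFSETTO:-0700
TZNAME:MST
END:STANDARD
END:VTIMEZONE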
Is it possible to extract the closed caption transcript from YouTube videos?
We have over 200 webcasts on YouTube, each at least one hour long. YouTube has closed captions for all of the videos, but it seems users have no way to get them.
I tried the URL in this blog but it does not work with our videos.
http://googlesystem.blogspot.com/2010/10/download-youtube-captions.html
Here's how to get the transcript of a YouTube video (when available):
Go to YouTube and open the video of your choice.
Click on the "More actions" button (3 horizontal dots) located next to the Share button.
Click "Open transcript"
Although the process may be a little goofy, this is a pretty good solution.
Source: http://ccm.net/faq/40644-youtube-how-to-get-the-transcript-of-a-video
Get timedtext file directly from YouTube
curl -s "$video_url" | # download the video page
grep -o '"baseUrl":"https://www.youtube.com/api/timedtext[^"]*lang=en' | # find the timedtext URL for English
cut -d \" -f4 | sed 's/\\u0026/\&/g' | # extract and unescape the URL
xargs curl -Ls | # download the captions
grep -o '<text[^<]*</text>' | # one <text> element per line
sed -E 's/<text start="([^"]*)".*>(.*)<.*/\1 \2/' | # keep the start time and the text
sed 's/\xc2\xa0/ /g;s/&amp;/\&/g' | recode xml | # decode non-breaking spaces and XML entities
awk '{$1=sprintf("%02d:%02d:%02d",$1/3600,$1%3600/60,$1%60)}1' | # seconds to HH:MM:SS
awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 | # join every two lines, keeping the first timestamp
awk 1 # ensure a trailing newline
yt-dlp
yt-dlp supports saving the automatically generated closed captions in a JSON format:
cap()(
  printf %s\\n "${@-$(cat)}" | # video URLs/IDs from the arguments, or stdin if there are none
  parallel -j10 -q yt-dlp -i --skip-download --write-auto-sub --sub-format json3 -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --
  for f in *.json3; do # convert each downloaded JSON file to timestamped text
    jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|(.tStartMs|tostring)+" "+([.segs[]?.utf8]|join(""))' "$f" |
    awk '{x=$1/1e3;$1=sprintf("%02d:%02d:%02d",x/3600,x%3600/60,x%60)}1' | # milliseconds to HH:MM:SS
    awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 | # join every two lines
    awk 1 >"${f%.json3}" # write the result without the .json3 extension
    rm "$f"
  done
)
You can also use the function above to download the captions for all videos on a channel or playlist if you give the ID or URL of the channel or playlist as an argument. When there is an error downloading a single video, the -i (--ignore-errors) option skips the video instead of exiting with an error.
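For example (the video IDs here are the two mentioned later in this answer):
# pass video IDs or URLs as arguments ...
cap p9M3shEU-QM aE05_REXnBc
# ... or pipe them in on standard input
printf '%s\n' p9M3shEU-QM aE05_REXnBc | cap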
Or this just gets the text without the timestamps:
yt-dlp --skip-download --write-auto-sub --sub-format json3 "$youtube_url_or_id"
jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|[.segs[].utf8]|join("")' *.json3 | paste -sd\ - | fold -sw60
youtube-dl
As of 2022, the format of the VTT and TTML files downloaded by youtube-dl --write-auto-sub is messed up: all subtitle texts are placed on a few long lines, so the timestamps of the individual subtitles are not visible. If you don't need the timestamps, it shouldn't matter; otherwise, you can fix it by substituting yt-dlp for youtube-dl in the following commands. With yt-dlp you can also use the more convenient JSON format, so you don't need the following approach for dealing with the VTT subtitle format.
This downloads the subtitles as VTT:
youtube-dl --skip-download --write-auto-sub "$youtube_url"
The other available formats are ttml, srv3, srv2, and srv1 (shown by --list-subs):
--write-sub
Write subtitle file
--write-auto-sub
Write automatically generated subtitle file (YouTube only)
--all-subs
Download all the available subtitles of the video
--list-subs
List all available subtitles for the video
--sub-format FORMAT
Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best"
--sub-lang LANGS
Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags
You can use ffmpeg to convert the subtitle file to another format:
ffmpeg -i input.vtt output.srt
In the VTT subtitles, each subtitle text is repeated three times, and there is typically a new subtitle text every eighth line (but under some mysterious circumstances it's every 12th line instead):
WEBVTT
Kind: captions
Language: en
00:00:01.429 --> 00:00:04.249 align:start position:0%
ladies<00:00:02.429><c> and</c><00:00:02.580><c> gentlemen</c><c.colorE5E5E5><00:00:02.879><c> I'd</c></c><c.colorCCCCCC><00:00:03.870><c> like</c></c><c.colorE5E5E5><00:00:04.020><c> to</c><00:00:04.110><c> thank</c></c>
00:00:04.249 --> 00:00:04.259 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
</c>
00:00:04.259 --> 00:00:05.930 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
you<00:00:04.440><c> for</c><00:00:04.620><c> coming</c><00:00:05.069><c> tonight</c><00:00:05.190><c> especially</c></c><c.colorCCCCCC><00:00:05.609><c> at</c></c>
00:00:05.930 --> 00:00:05.940 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
</c>
00:00:05.940 --> 00:00:07.730 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
such<00:00:06.180><c> short</c><00:00:06.690><c> notice</c></c>
00:00:07.730 --> 00:00:07.740 align:start position:0%
such short notice
00:00:07.740 --> 00:00:09.620 align:start position:0%
such short notice
I'm<00:00:08.370><c> sure</c><c.colorE5E5E5><00:00:08.580><c> mr.</c><00:00:08.820><c> Irving</c><00:00:09.000><c> will</c><00:00:09.120><c> fill</c><00:00:09.300><c> you</c><00:00:09.389><c> in</c><00:00:09.420><c> on</c></c>
00:00:09.620 --> 00:00:09.630 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
</c>
00:00:09.630 --> 00:00:11.030 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
the<00:00:09.750><c> circumstances</c><00:00:10.440><c> that's</c><00:00:10.620><c> brought</c><00:00:10.920><c> us</c></c>
00:00:11.030 --> 00:00:11.040 align:start position:0%
<c.colorE5E5E5>the circumstances that's brought us
</c>
This converts the VTT subtitles to a simpler format:
sed '1,/^$/d' *.vtt| # remove the lines at the top of the file
sed 's/<[^>]*>//g'| # remove tags
awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3' | # print each new subtitle text and its start time without milliseconds
awk NF\>1 # remove lines with only one field
Output:
00:00:01 ladies and gentlemen I'd like to thank
00:00:04 you for coming tonight especially at
00:00:05 such short notice
00:00:07 I'm sure mr. Irving will fill you in on
00:00:09 the circumstances that's brought us
In maybe around 10% of the videos that I tested (for example p9M3shEU-QM and aE05_REXnBc), one or more subtitle texts came 12 rather than 8 lines after the previous subtitle text. But a workaround is to print every fourth line and then remove the lines that contain only a timestamp.
Function form:
cap()(
  printf %s\\n "${@-$(cat)}" | # video URLs/IDs from the arguments, or stdin if there are none
  parallel -j10 -q youtube-dl -i --skip-download --write-auto-sub -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --
  for f in *.vtt; do # convert each downloaded VTT file to timestamped text
    sed '1,/^$/d' -- "$f" | # remove the lines at the top of the file
    sed 's/<[^>]*>//g' | # remove tags
    awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3' | # print each new subtitle text and its start time
    awk 'NF>1' | # remove lines with only one field
    awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 | # join every two lines
    awk 1 >"${f%.vtt}" # write the result without the .vtt extension
    rm "$f"
  done
)
The following document says that only the owner of the channel can do this via the standard YouTube interface:
https://developers.google.com/youtube/2.0/developers_guide_protocol_captions?hl=en
Cheap fix:
You can click on the "interactive transcript" button and copy the content that way.
Of course you lose the milliseconds this way.
Extremely cheap fix:
A shared YouTube account, so that multiple people can edit and upload caption files.
Challenging solution:
The YouTube API allows downloading and uploading of caption files via HTTP.
You may write a YouTube API application to provide a browser user interface for uploading or downloading, for ANY user or for particular users.
Here is an example project for this in Java:
http://apiblog.youtube.com/2011/01/youtube-captions-uploader-web-app.html
Here is a very simple example of a working upload for everybody:
http://yt-captions-uploader.appspot.com/
You can view/copy/download a time-coded XML file of a YouTube video's closed captions by accessing
http://video.google.com/timedtext?lang=[LANGUAGE]&v=[YOUTUBE VIDEO IDENTIFIER]
For example http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w
NOTE: this method does not download autogenerated closed captions, even if you get the language right (maybe there's a special code for autogenerated languages).
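For example, to save those captions to a file with curl (same language code and video ID as the example above):
curl -o captions.xml 'http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w'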
You can download the subtitles from YouTube with KeepSubs, DownSub and SaveSubs.
You can choose between the automatic transcript and author-supplied closed captions. They also offer the possibility of automatically translating the English subtitles into other languages using Google Translate.
(Obligatory 'this is probably an internal youtube.com interface and might break at any time')
Instead of linking to another tool that does this, here's an answer to the question of "how to do this".
Use Fiddler or your browser devtools (e.g. Chrome) to inspect the youtube.com HTTP traffic; there's a response from /api/timedtext that contains the closed caption info as XML.
It seems that a response like this:
<p t="0" d="5430" w="1">
<s p="2" ac="136">we've</s>
<s t="780" ac="252"> got</s>
</p>
<p t="2280" d="7170" w="1">
<s ac="243">we're</s>
<s t="810" ac="233"> going</s>
</p>
means that at time 0 is the word "we've", at time 0+780 is the word "got", at time 2280+810 is the word "going", and so on. These times are in milliseconds, so for time 3090 you'd want to append &t=3 to the URL.
You can use any tool to stitch together the XML into something readable, but here's my Power BI Desktop script to find words like "privilege":
let
Source = Xml.Tables(File.Contents("C:\Download\body.xml")),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:format", Int64.Type}}),
body = #"Changed Type"{0}[body],
p = body{0}[p],
#"Changed Type1" = Table.TransformColumnTypes(p,{{"Attribute:t", Int64.Type}, {"Attribute:d", Int64.Type}, {"Attribute:w", Int64.Type}, {"Attribute:a", Int64.Type}, {"Attribute:p", Int64.Type}}),
#"Expanded s" = Table.ExpandTableColumn(#"Changed Type1", "s", {"Attribute:ac", "Attribute:p", "Attribute:t", "Element:Text"}, {"s.Attribute:ac", "s.Attribute:p", "s.Attribute:t", "s.Element:Text"}),
#"Changed Type2" = Table.TransformColumnTypes(#"Expanded s",{{"s.Attribute:t", Int64.Type}}),
#"Removed Other Columns" = Table.SelectColumns(#"Changed Type2",{"s.Attribute:t", "s.Element:Text", "Attribute:t"}),
#"Replaced Value" = Table.ReplaceValue(#"Removed Other Columns",null,0,Replacer.ReplaceValue,{"s.Attribute:t"}),
#"Filtered Rows" = Table.SelectRows(#"Replaced Value", each [#"s.Element:Text"] <> null),
#"Added Custom" = Table.AddColumn(#"Filtered Rows", "Time", each [#"Attribute:t"] + [#"s.Attribute:t"]),
#"Filtered Rows1" = Table.SelectRows(#"Added Custom", each ([#"s.Element:Text"] = " privilege" or [#"s.Element:Text"] = " privileged" or [#"s.Element:Text"] = " privileges" or [#"s.Element:Text"] = "privilege" or [#"s.Element:Text"] = "privileges"))
in
#"Filtered Rows1"
There is a free Python tool called YouTube Transcript API.
You can use it in scripts or as a command-line tool:
pip install youtube_transcript_api
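A minimal usage sketch (the get_transcript call reflects the API as documented at the time of writing; check the project's README for your installed version):
from youtube_transcript_api import YouTubeTranscriptApi

# returns a list of {"text", "start", "duration"} entries
for entry in YouTubeTranscriptApi.get_transcript("p9M3shEU-QM"):
    print(round(entry["start"]), entry["text"])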
With the YouTube interface as updated in June 2020, it's very straightforward:
select the 3 dots next to the like/dislike buttons to open further menu options
select "add translations"
select a language
click autogenerate if needed
click Actions > Download
You will get an .sbv file.
Choose Open Transcript from the "..." dropdown to the right of the vote up/down and Share links.
This will open a scrolling transcript div on the right side.
You can then use Copy. Note that you cannot use Select All; you need to click the top line, scroll to the bottom using the scroll thumb, and then shift-click on the last line.
Note that you can also search within this text using the normal web page search.
I just got this done easily by hand: I opened the transcript at the beginning of the video, then, holding the shift key, left-clicked and dragged from the 00:00 time marker over a few lines at the beginning.
I then advanced the video to near the end. When the video stopped, I clicked at the end of the last sentence while holding down the shift key once more. With CTRL-C I copied the text to the clipboard and pasted it into an editor.
Done!
Caveat: Make sure that no RDP windows sharing the clipboard, and no software such as TeamViewer, are running at the same time, as this procedure can overflow their buffers when a large amount of text is copied.