youtube-dl -citw downloads only one video instead of all - youtube

I used
user@ubuntu:~/Arbeitsfläche$ sudo youtube-dl -citw ytuser: raz\ malca
to download all videos from this channel. But it downloads only one:
user@ubuntu:~/Arbeitsfläche$ sudo youtube-dl -citw ytuser: raz\ malca
[generic] ytuser:: Requesting header
ERROR: Invalid URL protocol; please report this issue on https://yt-dl.org/bug . Be sure to call youtube-dl with the --verbose flag and include its complete output. Make sure you are using the latest version; type youtube-dl -U to update.
WARNING: Falling back to youtube search for raz malca . Set --default-search to "auto" to suppress this warning.
[youtube:search] query "raz malca": Downloading page 1
[download] Downloading playlist: raz malca
[youtube:search] playlist raz malca: Collected 1 video ids (downloading 1 of them)
[download] Downloading video #1 of 1
[youtube] Setting language
[youtube] R72vpqqNSDw: Downloading webpage
[youtube] R72vpqqNSDw: Downloading video info webpage
[youtube] R72vpqqNSDw: Extracting video information
I use the newest version of this program.
My system is Ubuntu 13.10 :)
Does anyone have an idea?

First of all, don't run youtube-dl with sudo - normal user rights suffice. There is also absolutely no need to pass in -ctw. But the main problem is that your command line contains a superfluous space after ytuser:. In any case, even if the space weren't there, raz malca is not a valid YouTube user ID. There is, however, a channel by that name. Simply pass that channel's URL to youtube-dl:
youtube-dl -i https://www.youtube.com/channel/UCnmQSqOPhkawAdndZgjfanA
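If you would rather script this than use the CLI, youtube-dl can also be embedded from Python. A minimal sketch, assuming the youtube-dl Python package is installed (the ignoreerrors option mirrors the -i flag):
import youtube_dl

# ignoreerrors mirrors -i: skip videos that fail instead of aborting
ydl_opts = {'ignoreerrors': True}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/channel/UCnmQSqOPhkawAdndZgjfanA'])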

To find the channel's URL, inspect the element for the user that created the channel. E.g. for the channel mix - o grande amor (Tom Jobim) - Solo Guitar by Chiba Kosei, right-click (in Firefox) on Chiba Kosei and select Inspect Element. Notice that the channel ID is in the href attribute. Copy that, and your final channel URL is https://youtube.com/channel/UC5q9SrhlCJ4YciPTLLNTR5w:
<a href="/channel/UC5q9SrhlCJ4YciPTLLNTR5w" class="yt-uix-sessionlink g-hovercard spf-link " data-ytid="UC5q9SrhlCJ4YciPTLLNTR5w" data-sessionlink="itct=CDIQ4TkiEwjBvtSO7v3KAhWPnr4KHUYMDbko-B0">Chiba Kosei</a>
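If you want to automate that lookup, a rough sketch is to fetch the page and search for the first /channel/UC... link. The user URL below is hypothetical, and YouTube's markup changes often, so treat this as best-effort:
import re
import urllib.request

# Hypothetical user page URL; adjust to the page you are inspecting
html = urllib.request.urlopen('https://www.youtube.com/user/SomeUser').read().decode('utf-8', 'replace')
match = re.search(r'/channel/(UC[0-9A-Za-z_-]{22})', html)
if match:
    print('https://www.youtube.com/channel/' + match.group(1))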

Related

How can I get video source from movie website

I want to extract the video source, for example for this movie:
https://www.cda.pl/video/53956781/vfilm
The video sources are hidden, and there seem to be security measures in place to prevent extracting them.
I opened it in firefox with Video Download Helper and I got:
headers: [...]
id: network-probe:5f8a4b2c
isPrivate: false
length: 831760944
pageUrl: https://www.cda.pl/video/53956781/vfilm
referrer: https://www.cda.pl/video/53956781/vfilm
running: 0
status: active
tabId: 36
thumbnailUrl: https://icdn.2cda.pl/vid/premium/539567/frames/620x365/e86f6548543225cf0f1b61f1e9d730be.jpg
title: Istnienie (2013) Lektor PL 720p - CDA
topUrl: https://www.cda.pl/video/53956781/vfilm
type: video
url: https://vwaw607.cda.pl/539567/v_lq_lq5250559bd547af6bad2eb78dacaf5df4.mp4
urlFilename: v_lq_lq5250559bd547af6bad2eb78dacaf5df4
So there is a video source, but if I go to https://vwaw607.cda.pl/539567/v_lq_lq5250559bd547af6bad2eb78dacaf5df4.mp4
it says the video cannot be loaded: the file is damaged.
Could you give me an idea how to extract the video URL? Thanks.

Error opening video stream or file?

I try to read the following video, downloaded from http://www.sample-videos.com/
which is http://www.sample-videos.com/video/mp4/720/big_buck_bunny_720p_5mb.mp4
Here is my code :
import cv2

cap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4')
if cap.isOpened() == False:
    print("Error opening video stream or file")
count = 0
while cap.isOpened():
    # capture frame by frame:
    ret, frame = cap.read()
    if ret == True:
        # Display the resulting frame
        cv2.imshow('Frame', frame)
        cv2.imwrite("frame%d.jpg" % count, frame)
        count += 1
        print(count)
However, I get "Error opening video stream or file" at cap = cv2.VideoCapture('big_buck_bunny_720p_5mb.mp4'),
and ret always equals False.
My OpenCV version is 3.1.0
There may be one of the following issues on your machine:
- the video path is not configured correctly
- missing permission to access the file
- an additional codec needs to be installed
You might have installed OpenCV, but there are prerequisites that need to be installed before reading an .mp4 video file with OpenCV.
You can verify this by simply reading an .avi file and an .mp4 file
[it could read the .avi file but not the .mp4 file].
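A quick way to run that check from Python, as a minimal sketch (substitute the paths of any local .avi and .mp4 files):
import cv2

# If the .avi opens but the .mp4 does not, OpenCV was likely built
# without FFmpeg/H.264 support.
for path in ('sample.avi', 'sample.mp4'):
    cap = cv2.VideoCapture(path)
    print(path, 'opened:', cap.isOpened())
    cap.release()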
To read an .mp4 file, install an FFmpeg package compiled with the H.264 codec:
H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding) is a standard for video compression, and is currently one of the most commonly used formats for the recording, compression, and distribution of high definition video.
Ref : https://www.debiantutorials.com/how-to-install-ffmpeg-with-h-264mpeg-4-avc/
A few suggestions to make sure all prerequisites are available:
1. Check whether an ffmpeg package compiled with H.264 is already installed on the machine, using the command below.
ffmpeg -version
2. Installing OpenCV through Anaconda reduces the stress of installing an ffmpeg package compiled with H.264.
3. Make sure that the user created on the machine has enough privileges to read and write in the specific application-related directories.
a. Check the read and write permissions using the commands below:
ls -ld <folder-path>
or
namei -mo <folder-path>
b. Alter the access rights based on the user privileges required (sudo access is needed; otherwise engage an admin to alter the permissions), e.g.:
sudo chmod -R 740 <folder-path>  [recursive; rwx for user, r for group]

Downloading a YouTube video through Wget

I am trying to download YouTube videos through Wget. The first thing necessary is to capture the URL of the actual video resource. Suppose I want to download this video: video. Opening up the page in the Firebug console reveals something like this:
The link which I have encircled looks like the link to the resource, since there we see only the video: http://www.youtube.com/v/r-KBncrOggI?version=3&autohide=1. However, when I try to download this resource with Wget, a 4 KB file named r-KBncrOggI#version=3&autohide=1 gets stored on my hard drive, nothing else. What should I do to get the actual video?
And secondly, is there a way to capture different resources for videos of different resolutions, like 360px, 480px, etc.?
Here is one VERY simplified, yet functional version of the youtube-download utility I cited in another answer:
#!/usr/bin/env perl
use strict;
use warnings;

# CPAN modules we depend on
use JSON::XS;
use LWP::UserAgent;
use URI::Escape;

# Initialize the User Agent
# YouTube servers are weird, so *don't* parse headers!
my $ua = LWP::UserAgent->new(parse_head => 0);

# fetch video page or abort
my $res = $ua->get($ARGV[0]);
die "bad HTTP response" unless $res->is_success;

# scrape video metadata
if ($res->content =~ /\byt\.playerConfig\s*=\s*({.+?});/sx) {
    # parse as JSON or abort
    my $json = eval { decode_json $1 };
    die "bad JSON: $1" if $@;

    # inside the JSON 'args' property, there's an encoded
    # url_encoded_fmt_stream_map property which points
    # to stream URLs and signatures
    while ($json->{args}{url_encoded_fmt_stream_map} =~ /\burl=(http.+?)&sig=([0-9A-F\.]+)/gx) {
        # decode URL and attach signature
        my $url = uri_unescape($1) . "&signature=$2";
        print $url, "\n";
    }
}
Usage example (it returns several URLs to streams with different encoding/quality):
$ perl youtube.pl http://www.youtube.com/watch?v=r-KBncrOggI | head -n 1
http://r19---sn-bg07sner.c.youtube.com/videoplayback?fexp=923014%2C916623%2C920704%2C912806%2C922403%2C922405%2C929901%2C913605%2C925710%2C929104%2C929110%2C908493%2C920201%2C913302%2C919009%2C911116%2C926403%2C910221%2C901451&ms=au&mv=m&mt=1357996514&cp=U0hUTVBNUF9FUUNONF9IR1RCOk01RjRyaG4wTHdQ&id=afe2819dcace8202&ratebypass=yes&key=yt1&newshard=yes&expire=1358022107&ip=201.52.68.216&ipbits=8&upn=m-kyX9-4Tgc&sparams=cp%2Cid%2Cip%2Cipbits%2Citag%2Cratebypass%2Csource%2Cupn%2Cexpire&itag=44&sver=3&source=youtube,quality=large&signature=A1E7E91DD087067ED59101EF2AE421A3503C7FED.87CBE6AE7FB8D9E2B67FEFA9449D0FA769AEA739
I'm afraid it's not that easy to get the right link for the video resource.
The link you got, http://www.youtube.com/v/r-KBncrOggI?version=3&autohide=1, points to the player rather than the video itself. There is one Perl utility, youtube-download, which is well maintained and does the trick. This is how to get the HQ version (magic fmt=18) of that video:
stas@Stanislaws-MacBook-Pro:~$ youtube-download -o "{title}.{suffix}" --fmt 18 r-KBncrOggI
--> Working on r-KBncrOggI
Downloading `Sourav Ganguly in Farhan Akhtar's Show - Oye! It's Friday!.mp4`
75161060/75161060 (100.00%)
Download successful!
stas@Stanislaws-MacBook-Pro:~$
There might be better command-line YouTube Downloaders around. But sorry, one doesn't simply download a video using Firebug and wget any more :(
The only way I know to capture that URL manually is by watching the browser's active downloads:
The largest data chunks are video data, so you can copy their URL:
http://s.youtube.com/s?lact=111116&uga=m30&volume=4.513679238953965&sd=BBE62AA4AHH1357937949850490&rendering=accelerated&fs=0&decoding=software&nsivbblmax=679542.000&hcbt=105.345&sendtmp=1&fmt=35&w=640&vtmp=1&referrer=None&hl=en_US&nsivbblmin=486355.000&nsivbblmean=603805.166&md=1&plid=AATTCZEEeM825vCx&ns=yt&ptk=youtube_none&csipt=watch7&rt=110.904&tsphab=1&nsiabblmax=129097.000&tspne=0&tpmt=110&nsiabblmin=123113.000&tspfdt=436&hbd=30900552&et=110.146&hbt=30.770&st=70.213&cfps=25&cr=BR&h=480&screenw=1440&nsiabblmean=125949.872&cpn=JlqV9j_oE1jzk7Zc&nsivbblc=343&nsiabblc=343&docid=r-KBncrOggI&len=1302.676&screenh=900&abd=1&pixel_ratio=1&bc=26131333&playerw=854&idpj=0&hcbd=25408143&playerh=510&ldpj=0&fexp=920704,919009,922403,916709,912806,929110,928008,920201,901451,909708,913605,925710,916623,929104,913302,910221,911116,914093,922405,929901&scoville=1&el=detailpage&bd=6676317&nsidf=1&vid=Yfg8gnutZoTD4G5SVKCxpsPvirbqG7pvR&bt=40.333&mos=0&vq=auto
However, for a large video, this will only return a part of the stream unless you figure out the URL query parameter responsible for the stream range to be downloaded and adjust it.
A bonus: everything changes periodically as YouTube is constantly evolving. So, don't do that manually unless you crave pain.

How to generate thumbnail for PDF file without downloading it fully?

I have to work with an external REST API which allows browsing a document library: listing docs, getting metadata for individual docs, and downloading documents fully or as a given range.
Currently we show standard icons for all documents (PDF files on server).
We want to improve and show thumbnails.
Is there a way of extracting thumbnail of cover page from PDF without reading whole file? Something similar to EXIF maybe? Client is running on iOS.
Not sure if I fully understand your environment and your limitations.
However, if you can retrieve a 'given range' of a remote document, then it's easy to just retrieve page 1. (You can only retrieve parts of PDF documents which will successfully render if they are "web optimized" a.k.a. "linearized".)
However, nowadays most PDFs no longer contain embedded thumbnails which could be retrieved. Adobe software (as well as other PDF viewers) creates the page previews on the fly.
So you must retrieve the first page first.
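For illustration, fetching only a leading byte range over HTTP could look like the sketch below, using the requests library (the URL is hypothetical; for a linearized PDF, the objects for page 1 sit at the front of the file, so a leading range may be enough to render the first page):
import requests

# Hypothetical document URL; the Range header asks for the first 1 MiB
url = 'https://example.com/library/document.pdf'
resp = requests.get(url, headers={'Range': 'bytes=0-1048575'})
with open('firstpage.pdf', 'wb') as f:
    f.write(resp.content)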
Then Ghostscript can generate a "thumbnail" from this page. Command for Linux/Unix/MacOSX:
gs \
-o thumb.jpg \
-sDEVICE=jpeg \
-g80x120 \
-dPDFFitPage \
firstpage.pdf
Command for Windows:
gswin32c.exe ^
-o thumb.jpg ^
-sDEVICE=jpeg ^
-g80x120 ^
-dPDFFitPage ^
firstpage.pdf
For this example...
...the thumbnail filetype will be JPEG. You can change this to PNG (-sDEVICE=pngalpha, or =png256 or =png16m).
...the thumbnail size will be 80x120 pixel; change it however you need.

How to extract closed caption transcript from YouTube video?

Is it possible to extract the closed caption transcript from YouTube videos?
We have over 200 webcasts on YouTube and each is at least one hour long. YouTube has closed captions for all videos, but it seems users have no way to get them.
I tried the URL in this blog post, but it does not work with our videos:
http://googlesystem.blogspot.com/2010/10/download-youtube-captions.html
Here's how to get the transcript of a YouTube video (when available):
Go to YouTube and open the video of your choice.
Click on the "More actions" button (3 horizontal dots) located next to the Share button.
Click "Open transcript"
Although the syntax may be a little goofy, this is a pretty good solution.
Source: http://ccm.net/faq/40644-youtube-how-to-get-the-transcript-of-a-video
Get timedtext file directly from YouTube
curl -s "$video_url" |
grep -o '"baseUrl":"https://www.youtube.com/api/timedtext[^"]*lang=en' |
cut -d \" -f4 |
sed 's/\\u0026/\&/g' |
xargs curl -Ls |
grep -o '<text[^<]*</text>' |
sed -E 's/<text start="([^"]*)".*>(.*)<.*/\1 \2/' |
sed 's/\xc2\xa0/ /g;s/&amp;/\&/g' |
recode xml |
awk '{$1=sprintf("%02d:%02d:%02d",$1/3600,$1%3600/60,$1%60)}1' |
awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 |
awk 1
yt-dlp
yt-dlp supports saving the automatically generated closed captions in a JSON format:
cap()(
  printf %s\\n "${@-$(cat)}" |
    parallel -j10 -q yt-dlp -i --skip-download --write-auto-sub --sub-format json3 -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --
  for f in *.json3; do
    jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|(.tStartMs|tostring)+" "+([.segs[]?.utf8]|join(""))' "$f" |
      awk '{x=$1/1e3;$1=sprintf("%02d:%02d:%02d",x/3600,x%3600/60,x%60)}1' |
      awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 |
      awk 1 >"${f%.json3}"
    rm "$f"
  done
)
You can also use the function above to download the captions for all videos on a channel or playlist if you give the ID or URL of the channel or playlist as an argument. When there is an error downloading a single video, the -i (--ignore-errors) option skips the video instead of exiting with an error.
Or this just gets the text without the timestamps:
yt-dlp --skip-download --write-auto-sub --sub-format json3 $youtube_url_or_id
jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|[.segs[].utf8]|join("")' *json3 | paste -sd\ - | fold -sw60
youtube-dl
As of 2022, the format of the VTT and TTML files downloaded by youtube-dl --write-auto-sub is messed up: all subtitle text is placed on a few long lines, so the timestamps of the individual subtitles are not visible. If you don't need the timestamps, this shouldn't matter; otherwise you can fix it by substituting yt-dlp for youtube-dl in the following commands. With yt-dlp you can also use the more convenient JSON format, so you don't need the following approach for dealing with the VTT subtitle format.
This downloads the subtitles as VTT:
youtube-dl --skip-download --write-auto-sub $youtube_url
The other available formats are ttml, srv3, srv2, and srv1 (shown by --list-subs):
--write-sub
Write subtitle file
--write-auto-sub
Write automatically generated subtitle file (YouTube only)
--all-subs
Download all the available subtitles of the video
--list-subs
List all available subtitles for the video
--sub-format FORMAT
Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best"
--sub-lang LANGS
Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags
You can use ffmpeg to convert the subtitle file to another format:
ffmpeg -i input.vtt output.srt
In the VTT subtitles, each subtitle text is repeated three times, and there is typically a new subtitle text every eighth line (but under some mysterious circumstances it's every 12th line instead):
WEBVTT
Kind: captions
Language: en
00:00:01.429 --> 00:00:04.249 align:start position:0%
ladies<00:00:02.429><c> and</c><00:00:02.580><c> gentlemen</c><c.colorE5E5E5><00:00:02.879><c> I'd</c></c><c.colorCCCCCC><00:00:03.870><c> like</c></c><c.colorE5E5E5><00:00:04.020><c> to</c><00:00:04.110><c> thank</c></c>
00:00:04.249 --> 00:00:04.259 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
</c>
00:00:04.259 --> 00:00:05.930 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
you<00:00:04.440><c> for</c><00:00:04.620><c> coming</c><00:00:05.069><c> tonight</c><00:00:05.190><c> especially</c></c><c.colorCCCCCC><00:00:05.609><c> at</c></c>
00:00:05.930 --> 00:00:05.940 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
</c>
00:00:05.940 --> 00:00:07.730 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
such<00:00:06.180><c> short</c><00:00:06.690><c> notice</c></c>
00:00:07.730 --> 00:00:07.740 align:start position:0%
such short notice
00:00:07.740 --> 00:00:09.620 align:start position:0%
such short notice
I'm<00:00:08.370><c> sure</c><c.colorE5E5E5><00:00:08.580><c> mr.</c><00:00:08.820><c> Irving</c><00:00:09.000><c> will</c><00:00:09.120><c> fill</c><00:00:09.300><c> you</c><00:00:09.389><c> in</c><00:00:09.420><c> on</c></c>
00:00:09.620 --> 00:00:09.630 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
</c>
00:00:09.630 --> 00:00:11.030 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
the<00:00:09.750><c> circumstances</c><00:00:10.440><c> that's</c><00:00:10.620><c> brought</c><00:00:10.920><c> us</c></c>
00:00:11.030 --> 00:00:11.040 align:start position:0%
<c.colorE5E5E5>the circumstances that's brought us
</c>
This converts the VTT subtitles to a simpler format:
sed '1,/^$/d' *.vtt| # remove the lines at the top of the file
sed 's/<[^>]*>//g'| # remove tags
awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3' | # print each new subtitle text and its start time without milliseconds
awk NF\>1 # remove lines with only one field
Output:
00:00:01 ladies and gentlemen I'd like to thank
00:00:04 you for coming tonight especially at
00:00:05 such short notice
00:00:07 I'm sure mr. Irving will fill you in on
00:00:09 the circumstances that's brought us
In maybe around 10% of the videos I tested (for example p9M3shEU-QM and aE05_REXnBc), one or more subtitle texts came 12 rather than 8 lines after the previous subtitle text. A workaround is to print every fourth line anyway and then remove the lines with only one field.
Function form:
cap()(
  printf %s\\n "${@-$(cat)}" |
    parallel -j10 -q youtube-dl -i --skip-download --write-auto-sub -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --
  for f in *.vtt; do
    sed '1,/^$/d' -- "$f" |
      sed 's/<[^>]*>//g' |
      awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3' |
      awk 'NF>1' |
      awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 |
      awk 1 >"${f%.vtt}"
    rm "$f"
  done
)
The following document says only the owner of the channel can do this via the standard YouTube interface:
https://developers.google.com/youtube/2.0/developers_guide_protocol_captions?hl=en
Cheap fix:
You can click on the "Interactive Transcript" button and copy the content that way.
Of course you lose the milliseconds this way.
Extremely cheap fix:
A shared YouTube account,
so that multiple people can edit and upload caption files.
Challenging solution:
The YouTube API allows downloading and uploading of caption files via HTTP.
You could write a YouTube API application to provide a browser user interface for uploading or downloading for ANY user or particular users.
Here is an example project for this in Java:
http://apiblog.youtube.com/2011/01/youtube-captions-uploader-web-app.html
Here is a very simple example of a working upload for everybody:
http://yt-captions-uploader.appspot.com/
You can view/copy/download a time-coded XML file of a YouTube video's closed captions by accessing
http://video.google.com/timedtext?lang=[LANGUAGE]&v=[YOUTUBE VIDEO IDENTIFIER]
For example http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w
NOTE: this method does not download autogenerated closed captions, even if you get the language right (maybe there's a special code for autogenerated languages).
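For example, fetching and saving that file from a script might look like this sketch against the same endpoint (in line with the note above, expect an empty body when no manually supplied track exists):
import urllib.request

# Same endpoint as above, with lang and v from the example URL
url = 'http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w'
xml = urllib.request.urlopen(url).read().decode('utf-8')
with open('captions.xml', 'w', encoding='utf-8') as f:
    f.write(xml)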
You can download the streaming subtitles from YouTube with KeepSubs, DownSub and SaveSubs.
You can choose between the automatic transcript and author-supplied closed captions. They also offer the possibility to automatically translate the English subtitles into other languages using Google Translate.
(Obligatory "this is probably an internal youtube.com interface and might break at any time".)
Instead of linking to another tool that does this, here's an answer to the question of how to do this:
use Fiddler or your browser's devtools (e.g. in Chrome) to inspect the youtube.com HTTP traffic; there's a response from /api/timedtext that contains the closed caption info as XML.
It seems that a response like this:
<p t="0" d="5430" w="1">
<s p="2" ac="136">we've</s>
<s t="780" ac="252"> got</s>
</p>
<p t="2280" d="7170" w="1">
<s ac="243">we're</s>
<s t="810" ac="233"> going</s>
</p>
means that at time 0 is the word we've, at time 0+780 is the word got, at time 2280+810 is the word going, etc. These times are in milliseconds, so for time 3090 you'd want to append &t=3 to the URL.
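To illustrate the timing arithmetic, here is a small sketch that parses such a response with Python's standard library and prints each word with its absolute time in seconds (it assumes the XML was saved as body.xml):
import xml.etree.ElementTree as ET

# body.xml: a saved /api/timedtext response
root = ET.parse('body.xml').getroot()
for p in root.iter('p'):
    base = int(p.get('t', 0))        # paragraph start in milliseconds
    for s in p.iter('s'):
        offset = int(s.get('t', 0))  # word offset within the paragraph
        word = (s.text or '').strip()
        print((base + offset) / 1000.0, word)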
You can use any tool to stitch together the XML into something readable, but here's my Power BI Desktop script to find words like "privilege":
let
Source = Xml.Tables(File.Contents("C:\Download\body.xml")),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:format", Int64.Type}}),
body = #"Changed Type"{0}[body],
p = body{0}[p],
#"Changed Type1" = Table.TransformColumnTypes(p,{{"Attribute:t", Int64.Type}, {"Attribute:d", Int64.Type}, {"Attribute:w", Int64.Type}, {"Attribute:a", Int64.Type}, {"Attribute:p", Int64.Type}}),
#"Expanded s" = Table.ExpandTableColumn(#"Changed Type1", "s", {"Attribute:ac", "Attribute:p", "Attribute:t", "Element:Text"}, {"s.Attribute:ac", "s.Attribute:p", "s.Attribute:t", "s.Element:Text"}),
#"Changed Type2" = Table.TransformColumnTypes(#"Expanded s",{{"s.Attribute:t", Int64.Type}}),
#"Removed Other Columns" = Table.SelectColumns(#"Changed Type2",{"s.Attribute:t", "s.Element:Text", "Attribute:t"}),
#"Replaced Value" = Table.ReplaceValue(#"Removed Other Columns",null,0,Replacer.ReplaceValue,{"s.Attribute:t"}),
#"Filtered Rows" = Table.SelectRows(#"Replaced Value", each [#"s.Element:Text"] <> null),
#"Added Custom" = Table.AddColumn(#"Filtered Rows", "Time", each [#"Attribute:t"] + [#"s.Attribute:t"]),
#"Filtered Rows1" = Table.SelectRows(#"Added Custom", each ([#"s.Element:Text"] = " privilege" or [#"s.Element:Text"] = " privileged" or [#"s.Element:Text"] = " privileges" or [#"s.Element:Text"] = "privilege" or [#"s.Element:Text"] = "privileges"))
in
#"Filtered Rows1"
There is a free Python tool called YouTube Transcript API.
You can use it in scripts or as a command-line tool:
pip install youtube_transcript_api
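A minimal sketch of using it from a script (get_transcript reflects the interface of older releases; check the project's README for the current API):
from youtube_transcript_api import YouTubeTranscriptApi

# Fetch the (possibly auto-generated) transcript for a video ID
transcript = YouTubeTranscriptApi.get_transcript('r-KBncrOggI')
for entry in transcript:
    # each entry has 'text', 'start' (seconds) and 'duration' keys
    print('%8.2f  %s' % (entry['start'], entry['text']))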
With the YouTube interface as of June 2020, it's very straightforward:
select the 3 dots next to the like/dislike buttons to open further menu options
select "add translations"
select the language
click "autogenerate" if needed
click Actions > Download
You will get an .sbv file.
Choose Open Transcript from the ... dropdown to the right of the vote up/down and share links.
This will open a Transcript scrolling div on the right side.
You can then use Copy. Note that you cannot use Select All; you need to click the top line, scroll to the bottom using the scroll thumb, and then shift-click on the last line.
Note that you can also search within this text using the normal web page search.
I just got this done manually by opening the transcript at the beginning of the video, then left-clicking and dragging from the 00:00 time marker over a few lines at the beginning with the shift key pressed.
I then advanced the video to near the end. When the video stopped, I clicked the end of the last sentence while holding down the shift key once more. With Ctrl-C I copied the text to the clipboard and pasted it into an editor.
Done!
Caveat: be sure that no RDP windows are sharing the clipboard and that no software such as TeamViewer is running at the same time, as this procedure will overflow their buffers when a large amount of text is copied.
