COBOL: display variable just after MOVE

Can someone explain to me what this code is doing:
ZAZAZA MOVE UT-S40-ZONES (UT-INDS40-R) TO W-FH00-S40-ZONES. PGM
ZAZAZA MOVE UT-S40-PERIOD (UT-INDS40-R) TO W-FH00-S40-PERIOD. PGM
-----> DISPLAY "S40-ZONES-PGM::" UT-S40-ZONES (UT-INDS40-R)
-----> DISPLAY "W-FH00-S40-ZONES::" W-FH00-S40-ZONES
-----> DISPLAY "UT-S40-PERIOD::" UT-S40-PERIOD (UT-INDS40-R)
-----> DISPLAY "W-FH00-S40-PERIOD::" W-FH00-S40-PERIOD
ZAZAZA* PGM
The displays are:
S40-ZONES-PGM::******************************************
W-FH00-S40-ZONES:: empty !!!!!
UT-S40-PERIOD::***************************************
W-FH00-S40-PERIOD::***************************************
I cannot find the value of W-FH00-S40-ZONES anywhere.
I have noticed that UT-S40-ZONES and W-FH00-S40-ZONES have the same definition (PICTURE X) and the same size.
Does anyone have an idea?

I have found the answer.
In fact, just before those lines there is an IF whose condition was not fulfilled:
zzzzzz IF UT-S40-REGMAL (UT-INDS40-R) = "200" zzz
zzzzzz* zzz
zzzzzz* zzz
zzzzzz MOVE UT-S40-ZONES (UT-INDS40-R) TO W-FH00-S40-ZONES. zzz
Since the period on the MOVE line is what terminates the IF sentence, the MOVE only executes when UT-S40-REGMAL equals "200"; otherwise W-FH00-S40-ZONES keeps its previous value (spaces), which is why the DISPLAY shows it as empty.

Related

Action Text displaying image error rails 6 on windows 10

I am using Windows 10 to program Ruby on Rails 6, installed with RubyInstaller. Everything worked fine so far until... I used Action Text for a rich text editor.
I followed this guide, https://edgeguides.rubyonrails.org/action_text_overview.html, and everything works. But when I attach an image to the text_area, after saving, the image cannot be displayed.
When I open the image's url, I see this:
MiniMagick::Error in ActiveStorage::RepresentationsController#show
`magick mogrify -resize-to-limit [1024, 768] C:/Users/.../AppData/Local/Temp/ActiveStorage-5-20200716-3792-28u8ot.jpg` failed with error: mogrify: unrecognized option `-resize-to-limit' # error/mogrify.c/MogrifyImageCommand/6009.
if status != 0 && options.fetch(:whiny, MiniMagick.whiny)
  fail MiniMagick::Error, "`#{command.join(" ")}` failed with error:\n#{stderr}"
end
Model: Product
title
description (Action Text - attach image in text_area)
Display in view: product.description (attached image cannot be displayed)
Can anyone help me solve this problem?
Add gem 'image_processing' to your Gemfile and
run bundle install
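For context, a hedged note on why this works: the bogus -resize-to-limit option in the error is Active Storage's resize_to_limit: [1024, 768] variant being passed straight through to mogrify because the image_processing gem isn't there to translate it. With the gem installed, the variant becomes an ordinary ImageMagick geometry operation, roughly like this sketch (the file name is a placeholder):
# Shrink example.jpg to fit within 1024x768; the '>' flag means
# "only shrink if larger", which is what resize_to_limit does.
magick mogrify -resize '1024x768>' example.jpg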

Why doesn't grep work on some file, but works on another (same content)

I wrote this grep command:
grep -- "^[0-9a-zA-Z\.-]\+$" file.txt
To get all lines containing only numbers, letters, dots and dashes (legal domains).
This is the result of diff on both files (my hand-written test file and the downloaded one):
1,3c1,3
< test.xcom
< hi-th6ere.co.k
< 54
---
> test.xcom
> hi-th6ere.co.k
> 54
I wrote a file with some domains to test and it works great!
But, when I download a file (with the same content!) from the web, and then run this command, grep doesn't return anything.
I've tried to set full permissions on this file, but it still doesn't work.
Any ideas?
Thanks,
What makes you think the file content is the same as the one you've tested?
You can run 'diff filename1 filename2' to see if there are any differences between the two files.
It could be that the file you're downloading is in a Unicode format, so in a web browser it looks to have the same content as the file you've tested, but the binary content of the file itself is different.
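A common culprit with downloaded files is Windows-style CRLF line endings or a byte-order mark, both invisible in a browser but enough to stop the $ anchor from matching. A quick way to check and fix, assuming GNU tools and a hypothetical file name downloaded.txt:
# Report encoding and line-terminator style:
file downloaded.txt            # e.g. "ASCII text, with CRLF line terminators"
cat -A downloaded.txt | head   # a CRLF shows up as ^M right before the $
# Strip carriage returns so grep's $ anchor can match again:
tr -d '\r' < downloaded.txt > clean.txt
grep -- "^[0-9a-zA-Z.-]\+$" clean.txt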

How to make TextEdit links clickable

This script is supposed to open a series of web pages in a new browser window, and then open TextEdit with predetermined text and links.
Safari does what it is supposed to.
TextEdit opens and pastes the text I want, but the links are not clickable.
I know I could just right click and choose Substitutions > Add Links myself, but I am trying to automate the entire process.
I appreciate your time and efforts on my behalf! Thank you!
OpenWebPages()
OpenTextEditPage()

to OpenTextEditPage()
    -- Create a variable for text
    set docText to ""
    tell application "TextEdit"
        activate
        make new document
        -- Define the text to be pasted into TextEdit
        set docText to docText & "Some text to show in TextEdit." & linefeed & "
My favorite site about coding is http://stackoverflow.com/
My favorite site for paper modeling is http://www.ss42.com/toys.html
My favorite site for inventing is http://www.instructables.com/howto/bubble+machine/
" & linefeed & "Click the links above to improve your mind!" as string
        -- Paste the above text and links into TextEdit
        set the text of the front document to docText & "" as string
        tell application "System Events"
            tell process "TextEdit"
                -- highlight all text
                keystroke "a" using command down
                -- Think of a clever way to right click and choose Substitutions > Add Links
                -- Or think of another clever way to turn all URLs into links please.
            end tell
        end tell
    end tell
end OpenTextEditPage

to OpenWebPages()
    -- Start new Safari window
    tell application "Safari"
        -- activate Safari and open the StackOverflow AppleScript page
        make new document with properties {URL:"http://stackoverflow.com/search?q=applescript"}
        -- Yoda is watching you
        open location "http://www.ss42.com/pt/Yoda/YodaGallery/yoda-gallery.html"
        -- Indoor boomerang
        open location "http://www.ss42.com/pt/paperang/paperang.html"
        -- Are you a Human ?
        open location "http://stackoverflow.com/nocaptcha?s=f5c92674-b080-4cea-9ff2-4fdf1d6d19de"
    end tell
end OpenWebPages
According to this question I built this little handler. It takes a path to your RTF file, like makeURLsHyper((path to desktop folder as string) & "TestDoc.rtf"), and worked quite well in my little tests. But it doesn't care about text formatting at this point.
on makeURLsHyper(pathOfRtfFile)
    -- converting the given path to a posix path (quoted for later use)
    set myRtfPosixPath to quoted form of (POSIX path of pathOfRtfFile)
    -- RTF Hyperlink start
    set rtfLinkStart to "{\\\\field{\\\\*\\\\fldinst HYPERLINK \""
    -- RTF Hyperlink middle
    set rtfMiddlePart to "\"}{\\\\fldrslt "
    -- RTF Hyperlink end
    set rtfLinkEnd to "}}"
    -- use sed to convert http-strings to rtf hyperlinks
    set newFileContent to (do shell script "sed -i bak -e 's#\\(http[^[:space:]]*\\)#" & rtfLinkStart & "\\1" & rtfMiddlePart & "\\1" & rtfLinkEnd & "#g' " & myRtfPosixPath)
end makeURLsHyper
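For reference, here's roughly the shell command the handler assembles, with TestDoc.rtf standing in for the real file (the doubled backslashes collapse once more in sed's replacement, so the file ends up containing {\field ...} groups):
# Wrap every http... token in an RTF HYPERLINK field, editing in place
# and keeping a backup with the suffix "bak" (BSD/macOS sed syntax):
sed -i bak -e 's#\(http[^[:space:]]*\)#{\\field{\\*\\fldinst HYPERLINK "\1"}{\\fldrslt \1}}#g' TestDoc.rtf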
Have a nice day, Michael / Hamburg

RMagick, Tempfile, Paperclip: how to save an image file with large dimensions and small kbs as a thumbnail?

I have a Rails rake task that is processing a batch of images. It strips out the white background (using RMagick), replaces it with a transparent layer, writes it to a tempfile and then saves it as a PNG on Amazon S3 (using Paperclip).
It works for the bulk of the images. However, it runs into an error for at least 1 image. Can someone help me figure out why and how to fix it?
Code sample:
require 'RMagick'
require 'tempfile'
include Magick

task :task_name => :environment do
  x = Item.find(128) # image 128 is the one giving me trouble
  sourceImage = Image.read(x.image_link_hires)
  processedImage = sourceImage[0].transparent("white")
  tempImageFile = Tempfile.new(["processed_image", ".png"])
  processedImage.write("png:" + tempImageFile.path)
  x.image_transparent = tempImageFile
  x.save!
end
The error message:
rake aborted! Validation failed: Image transparent C:/Users/Roger/AppData/Local/Temp/processed_image20130107-8640-1ck71i820130107-8640-i6p91w.png is not recognized by the 'identify' command., Image transparent C:/Users/Roger/AppData/Local/Temp/processed_image20130107-8640-1ck71i820130107-8640-i6p91w.png is not recognized by the 'identify' command.
This message appears upon running the last line (the save operation).
Tempfile problem with small files?
I think the error has something to do with Tempfile not actually writing a file to the temp path. Could this error have to do with a small file size? The specific image it's having trouble with has an unusually large amount of white space, so the resulting file size after processing is about 30 KB for an 800x800 pixel image.
How can I verify if this is the case? And if it is, how can I work around it?
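One quick check: run ImageMagick's identify by hand against the temp file named in the error, since identify is exactly what Paperclip's validation calls (the path below is copied from the error message above):
# If this errors out, the temp file really is missing or truncated;
# if it prints image info, the problem is on Paperclip's side.
identify -verbose C:/Users/Roger/AppData/Local/Temp/processed_image20130107-8640-1ck71i820130107-8640-i6p91w.png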
Other observations:
When I write the trouble image to a normal file (rather than Tempfile), it saves successfully locally.
The task works fine for other images, which tend to be much bigger (~1-2MB)
After processedImage.write, I've checked tempImageFile.size. It says it's 30 KB, as expected.
When I watch the temp file directory while the rake task runs, I can see the temp files being created as the task processes other images successfully. The files seem to show up when processedImage.write runs. However, for the trouble image, I never see temp files being created.
Thanks for any advice.
Update 7 Jan 2013
I've investigated this more. I reran #1 above, but attempted to save onto S3 with Paperclip. This generated the same error message.
So now I believe the issue is that this is a small file in terms of bytes (32kb), but with a decent height and width (800x800). Paperclip is trying to save a thumbnail version of it, which is 90x90. Typically this generates a filesize that is <1% the original, which I assume is the source of the errors.
If anyone has an elegant workaround / fix for this, I'd appreciate hearing about it.

How to extract closed caption transcript from YouTube video?

Is it possible to extract the closed caption transcript from YouTube videos?
We have over 200 webcasts on YouTube and each is at least one hour long. YouTube has closed captions for all videos, but it seems users have no way to get them.
I tried the URL in this blog but it does not work with our videos.
http://googlesystem.blogspot.com/2010/10/download-youtube-captions.html
Here's how to get the transcript of a YouTube video (when available):
Go to YouTube and open the video of your choice.
Click on the "More actions" button (3 horizontal dots) located next to the Share button.
Click "Open transcript"
Although the syntax may be a little goofy, this is a pretty good solution.
Source: http://ccm.net/faq/40644-youtube-how-to-get-the-transcript-of-a-video
Get timedtext file directly from YouTube
curl -s "$video_url" |                      # $video_url is the watch-page URL
  grep -o '"baseUrl":"https://www.youtube.com/api/timedtext[^"]*lang=en' |
  cut -d \" -f4 |                           # extract the timedtext URL
  sed 's/\\u0026/\&/g' |                    # unescape \u0026 to &
  xargs curl -Ls |                          # fetch the captions XML
  grep -o '<text[^<]*</text>' |             # one <text> element per line
  sed -E 's/<text start="([^"]*)".*>(.*)<.*/\1 \2/' |  # keep start time and text
  sed 's/\xc2\xa0/ /g;s/&amp;/\&/g' |       # fix non-breaking spaces and double-escaped entities
  recode xml |                              # decode the remaining XML entities
  awk '{$1=sprintf("%02d:%02d:%02d",$1/3600,$1%3600/60,$1%60)}1' |  # seconds to HH:MM:SS
  awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2 |  # join every 2 lines, keeping the first timestamp
  awk 1                                     # ensure a trailing newline
yt-dlp
yt-dlp supports saving the automatically generated closed captions in a JSON format:
cap()(printf %s\\n "${@-$(cat)}"|parallel -j10 -q yt-dlp -i --skip-download --write-auto-sub --sub-format json3 -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --;for f in *.json3;do jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|(.tStartMs|tostring)+" "+([.segs[]?.utf8]|join(""))' "$f"|awk '{x=$1/1e3;$1=sprintf("%02d:%02d:%02d",x/3600,x%3600/60,x%60)}1'|awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2|awk 1 >"${f%.json3}";rm "$f";done)
You can also use the function above to download the captions for all videos on a channel or playlist if you give the ID or URL of the channel or playlist as an argument. When there is an error downloading a single video, the -i (--ignore-errors) option skips the video instead of exiting with an error.
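A usage sketch (reusing the two video IDs mentioned further below as stand-ins for your own):
# one or more video/channel/playlist URLs or IDs as arguments:
cap p9M3shEU-QM aE05_REXnBc
# or a list of IDs on stdin:
printf '%s\n' p9M3shEU-QM aE05_REXnBc | cap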
Or this just gets the text without the timestamps:
yt-dlp --skip-download --write-auto-sub --sub-format json3 $youtube_url_or_id
jq -r '.events[]|select(.segs and .segs[0].utf8!="\n")|[.segs[].utf8]|join("")' *.json3|paste -sd\ -|fold -sw60
youtube-dl
As of 2022, the VTT and TTML files downloaded by youtube-dl --write-auto-sub are messed up: all subtitle texts end up on a few long lines, so the timestamps of the individual subtitles are not visible. If you don't need the timestamps, this shouldn't matter, but otherwise you can fix it by substituting yt-dlp for youtube-dl in the following commands. With yt-dlp you can also use the more convenient JSON format, so you don't need the following approach for dealing with the VTT subtitle format.
This downloads the subtitles as VTT:
youtube-dl --skip-download --write-auto-sub $youtube_url
The other available formats are ttml, srv3, srv2, and srv1 (shown by --list-subs):
--write-sub          Write subtitle file
--write-auto-sub     Write automatically generated subtitle file (YouTube only)
--all-subs           Download all the available subtitles of the video
--list-subs          List all available subtitles for the video
--sub-format FORMAT  Subtitle format, accepts formats preference, for example: "srt" or "ass/srt/best"
--sub-lang LANGS     Languages of the subtitles to download (optional) separated by commas, use --list-subs for available language tags
You can use ffmpeg to convert the subtitle file to another format:
ffmpeg -i input.vtt output.srt
In the VTT subtitles, each subtitle text is repeated three times, and there is typically a new subtitle text every eighth line (but under some mysterious circumstances it's every 12th line instead):
WEBVTT
Kind: captions
Language: en
00:00:01.429 --> 00:00:04.249 align:start position:0%
ladies<00:00:02.429><c> and</c><00:00:02.580><c> gentlemen</c><c.colorE5E5E5><00:00:02.879><c> I'd</c></c><c.colorCCCCCC><00:00:03.870><c> like</c></c><c.colorE5E5E5><00:00:04.020><c> to</c><00:00:04.110><c> thank</c></c>
00:00:04.249 --> 00:00:04.259 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
</c>
00:00:04.259 --> 00:00:05.930 align:start position:0%
ladies and gentlemen<c.colorE5E5E5> I'd</c><c.colorCCCCCC> like</c><c.colorE5E5E5> to thank
you<00:00:04.440><c> for</c><00:00:04.620><c> coming</c><00:00:05.069><c> tonight</c><00:00:05.190><c> especially</c></c><c.colorCCCCCC><00:00:05.609><c> at</c></c>
00:00:05.930 --> 00:00:05.940 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
</c>
00:00:05.940 --> 00:00:07.730 align:start position:0%
you<c.colorE5E5E5> for coming tonight especially</c><c.colorCCCCCC> at
such<00:00:06.180><c> short</c><00:00:06.690><c> notice</c></c>
00:00:07.730 --> 00:00:07.740 align:start position:0%
such short notice
00:00:07.740 --> 00:00:09.620 align:start position:0%
such short notice
I'm<00:00:08.370><c> sure</c><c.colorE5E5E5><00:00:08.580><c> mr.</c><00:00:08.820><c> Irving</c><00:00:09.000><c> will</c><00:00:09.120><c> fill</c><00:00:09.300><c> you</c><00:00:09.389><c> in</c><00:00:09.420><c> on</c></c>
00:00:09.620 --> 00:00:09.630 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
</c>
00:00:09.630 --> 00:00:11.030 align:start position:0%
I'm sure<c.colorE5E5E5> mr. Irving will fill you in on
the<00:00:09.750><c> circumstances</c><00:00:10.440><c> that's</c><00:00:10.620><c> brought</c><00:00:10.920><c> us</c></c>
00:00:11.030 --> 00:00:11.040 align:start position:0%
<c.colorE5E5E5>the circumstances that's brought us
</c>
This converts the VTT subtitles to a simpler format:
sed '1,/^$/d' *.vtt| # remove the lines at the top of the file
sed 's/<[^>]*>//g'| # remove tags
awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3' | # print each new subtitle text and its start time without milliseconds
awk 'NF>1' # remove lines with only one field
Output:
00:00:01 ladies and gentlemen I'd like to thank
00:00:04 you for coming tonight especially at
00:00:05 such short notice
00:00:07 I'm sure mr. Irving will fill you in on
00:00:09 the circumstances that's brought us
In maybe around 10% of the videos I tested (for example p9M3shEU-QM and aE05_REXnBc), one or more subtitle texts came 12 rather than 8 lines after the previous subtitle text. But a workaround is to print every fourth line and then remove the lines that contain only a timestamp.
Function form:
cap()(printf %s\\n "${@-$(cat)}"|parallel -j10 -q youtube-dl -i --skip-download --write-auto-sub -o '%(upload_date)s.%(title)s.%(uploader)s.%(id)s.%(ext)s' --;for f in *.vtt;do sed '1,/^$/d' -- "$f"|sed 's/<[^>]*>//g'|awk -F. 'NR%4==1{printf"%s ",$1}NR%4==3'|awk 'NF>1'|awk 'NR%n==1{printf"%s ",$1}{sub(/^[^ ]* /,"");printf"%s"(NR%n?FS:RS),$0}' n=2|awk 1 >"${f%.vtt}";rm "$f";done)
The following document says only the owner of the channel can do this via the standard YouTube interface:
https://developers.google.com/youtube/2.0/developers_guide_protocol_captions?hl=en
Cheap fix:
You can click on the "interactive transcript" button and copy the content this way.
Of course you lose the milliseconds this way.
Extremely cheap fix:
A shared YouTube account,
so that multiple people can edit and upload caption files.
Challenging solution:
The YouTube API allows downloading and uploading of caption files via HTTP...
You could write a YouTube API application that provides a browser user interface for uploading or downloading for ANY user or particular users.
Here is an example project for this in Java:
http://apiblog.youtube.com/2011/01/youtube-captions-uploader-web-app.html
Here is a very simple example of a working upload for everybody:
http://yt-captions-uploader.appspot.com/
You can view/copy/download a time-coded XML file of a YouTube video's closed captions by accessing
http://video.google.com/timedtext?lang=[LANGUAGE]&v=[YOUTUBE VIDEO IDENTIFIER]
For example http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w
NOTE: this method does not download autogenerated closed captions, even if you get the language right (maybe there's a special code for autogenerated languages).
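From the command line, the same file can be saved with curl (a sketch using the example URL above):
# quote the URL so the shell doesn't treat & as a background operator:
curl -s 'http://video.google.com/timedtext?lang=pt&v=WSVKbw7LC2w' > captions.xml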
You can download the streaming subtitles from YouTube with KeepSubs, DownSub and SaveSubs.
You can choose between the automatic transcript and author-supplied closed captions. They also offer the possibility to automatically translate the English subtitles into other languages using Google Translate.
(Obligatory 'this is probably an internal youtube.com interface and might break at any time')
Instead of linking to another tool that does this, here's an answer to the question of "how to do this":
Use Fiddler or your browser devtools (e.g. Chrome) to inspect the youtube.com HTTP traffic; there's a response from /api/timedtext that contains the closed caption info as XML.
It seems that a response like this:
<p t="0" d="5430" w="1">
<s p="2" ac="136">we've</s>
<s t="780" ac="252"> got</s>
</p>
<p t="2280" d="7170" w="1">
<s ac="243">we're</s>
<s t="810" ac="233"> going</s>
</p>
means that at time 0 is the word we've, at time 0+780 the word got, and at time 2280+810 the word going, etc. These times are in milliseconds, so for time 3090 you'd want to append &t=3 to the URL.
You can use any tool to stitch together the XML into something readable, but here's my Power BI Desktop script to find words like "privilege":
let
    Source = Xml.Tables(File.Contents("C:\Download\body.xml")),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Attribute:format", Int64.Type}}),
    body = #"Changed Type"{0}[body],
    p = body{0}[p],
    #"Changed Type1" = Table.TransformColumnTypes(p,{{"Attribute:t", Int64.Type}, {"Attribute:d", Int64.Type}, {"Attribute:w", Int64.Type}, {"Attribute:a", Int64.Type}, {"Attribute:p", Int64.Type}}),
    #"Expanded s" = Table.ExpandTableColumn(#"Changed Type1", "s", {"Attribute:ac", "Attribute:p", "Attribute:t", "Element:Text"}, {"s.Attribute:ac", "s.Attribute:p", "s.Attribute:t", "s.Element:Text"}),
    #"Changed Type2" = Table.TransformColumnTypes(#"Expanded s",{{"s.Attribute:t", Int64.Type}}),
    #"Removed Other Columns" = Table.SelectColumns(#"Changed Type2",{"s.Attribute:t", "s.Element:Text", "Attribute:t"}),
    #"Replaced Value" = Table.ReplaceValue(#"Removed Other Columns",null,0,Replacer.ReplaceValue,{"s.Attribute:t"}),
    #"Filtered Rows" = Table.SelectRows(#"Replaced Value", each [#"s.Element:Text"] <> null),
    #"Added Custom" = Table.AddColumn(#"Filtered Rows", "Time", each [#"Attribute:t"] + [#"s.Attribute:t"]),
    #"Filtered Rows1" = Table.SelectRows(#"Added Custom", each ([#"s.Element:Text"] = " privilege" or [#"s.Element:Text"] = " privileged" or [#"s.Element:Text"] = " privileges" or [#"s.Element:Text"] = "privilege" or [#"s.Element:Text"] = "privileges"))
in
    #"Filtered Rows1"
There is a free Python tool called YouTube Transcript API.
You can use it in scripts or as a command-line tool:
pip install youtube_transcript_api
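The package also installs a command-line entry point; a usage sketch, with one of the video IDs from elsewhere on this page standing in for your own:
youtube_transcript_api p9M3shEU-QM --languages en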
With the YouTube UI as of June 2020, it's very straightforward:
click on the 3 dots next to the like/dislike buttons to open further menu options
select "add translations"
select the language
click autogenerate if needed
click Actions > Download
You will get an .sbv file
Choose Open Transcript from the ... dropdown to the right of the vote up/down and share links.
This will open a Transcript scrolling div on the right side.
You can then use Copy. Note that you cannot use Select All; you need to click the top line, scroll to the bottom using the scroll thumb, and then shift-click on the last line.
Note that you can also search within this text using the normal web page search.
I just got this done easily by hand: I opened the transcript at the beginning of the video and, with the shift key pressed, left-clicked and dragged over a few lines starting at the 00:00 time marker.
I then advanced the video to near the end. When the video stopped, I clicked the end of the last sentence whilst holding down the shift key once more. With CTRL-C I copied the text to the clipboard and pasted it into an editor.
Done!
Caveat: Make sure no RDP windows are sharing the clipboard and that software such as TeamViewer is not running at the same time, as copying this much text can overflow their clipboard buffers.
