How to extract hyperlinks from Office documents using Tika

I'm using Apache Tika to extract raw text from various document formats, including Office documents.
When extracting text from Word documents that contain hyperlinks, only the link text is extracted and the information about the hyperlink target is lost.
Is there a way to configure the parser so that the underlying link is also extracted?
ParseContext context = new ParseContext();
Detector detector = new DefaultDetector();
Parser parser = new AutoDetectParser(detector);
context.set(Parser.class, parser);
Metadata metadata = new Metadata();
try (TikaInputStream input = TikaInputStream.get(new File(fileName))) {
    BodyContentHandler handler = new BodyContentHandler();
    parser.parse(input, handler, metadata, context);
    String rawText = handler.toString(); // plain text only; hyperlink targets are lost here
}

I'm using tika-app to extract hyperlinks from office documents in bash. I use the --html option to output the HTML content of the files, then use sed and grep to filter that HTML down to just the contents of the href attributes. The result is the content of each href, one per line.
java -jar /root/tika-app-1.20.jar --html TEST.docx 2>/dev/null | sed 's/href/\nhref/g' | grep '^href' | sed 's/href="//' | sed 's/".*//'
I know that the OP is not using tika-app, but the same general approach can be applied with Tika from Java too.
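For reference, a minimal Java sketch of the same idea (the file name TEST.docx is just a placeholder): have Tika emit its XHTML view of the document with a ToXMLContentHandler, then pull the href values out with a regex, just like the sed/grep pipeline does. Tika's LinkContentHandler is an alternative that collects links directly if you prefer to avoid the regex.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.ToXMLContentHandler;

public class HrefExtractor {
    public static void main(String[] args) throws Exception {
        // Produce XHTML instead of plain text so <a href="..."> elements survive.
        ToXMLContentHandler handler = new ToXMLContentHandler();
        AutoDetectParser parser = new AutoDetectParser();
        Metadata metadata = new Metadata();
        try (InputStream input = Files.newInputStream(Paths.get("TEST.docx"))) {
            parser.parse(input, handler, metadata, new ParseContext());
        }
        // Same filtering idea as the sed/grep pipeline, done with a regex over the XHTML.
        Matcher m = Pattern.compile("href=\"([^\"]*)\"").matcher(handler.toString());
        while (m.find()) {
            System.out.println(m.group(1));
        }
    }
}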

Related

Is there a script that can extract particular links from a txt file and write them to another txt file?

I'm looking for a script (or if there isn't one, I guess I'll have to write my own).
I wanted to ask if anyone here knows of a script that can take a txt file with n links (let's say 200). I need to extract only the links that contain particular characters, let's say only links that contain "/r/learnprogramming". I need the script to get those links and write them to another txt file.
Edit: Here is what helped me: grep -i "/r/learnprogramming" 1.txt >2.txt
You can use AJAX to read the .txt file using jQuery:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
jQuery(function($) {
    console.log("start");
    $.get("https://ayulayol.imfast.io/ajaxads/ajaxads.txt", function(wholeTextFile) {
        var lines = wholeTextFile.split(/\n/),
            randomIndex = Math.floor(Math.random() * lines.length),
            randomLine = lines[randomIndex];
        console.log(randomIndex, randomLine);
        $("#ajax").html(randomLine.replace(/#/g, "<br>"));
    });
});
</script>
<div id="ajax"></div>
If you are using Linux or macOS, you could use cat and grep to output the links.
cat in.txt | grep "/r/learnprogramming" > out.txt
Solution provided by OP:
grep -i "/r/learnprogramming" 1.txt >2.txt
Since you did not provide the exact format of the document, I assume the links are separated by newline characters. In that case the code is pretty straightforward in Python (or awk): iterate over file.readlines() and print only the lines that match your pattern (either with a simple substring check like pattern in line, or with a regex if the pattern is more complex). To store the links in a new file, simply redirect stdout to a new file like this:
python script.py > links.txt
The same approach works even if the links are separated by an arbitrary symbol s: first read the file into a single string and then split it on s. I hope this helps.
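For completeness, a minimal sketch of what such a script.py could look like (the file name 1.txt and the pattern are taken from the question; adjust both as needed):

import sys

PATTERN = "/r/learnprogramming"

# Print every line of the input file that contains the pattern;
# redirect stdout to collect the matches: python script.py > links.txt
with open("1.txt") as f:
    for line in f.readlines():
        if PATTERN in line:
            sys.stdout.write(line)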

How do I convert rich-text pasteboard content to plain text with Hammerspoon?

I am looking for a way to automatically convert rich text copied to the clipboard (pasteboard) into plain text in Hammerspoon (Lua code).
I know how to access the pasteboard in Lua, but I have no idea how to bind this action to the copy or paste event in order to automate it (nor how to convert the content to plain text).
local pasteboard = require("hs.pasteboard")
The easiest method would be to use the answer described here to fetch the RTF data from the pasteboard and pipe it to the already available
textutil command, which converts it to plain text on stdout:
osascript -e 'the clipboard as «class RTF »' | \
perl -ne 'print chr foreach unpack("C*",pack("H*",substr($_,11,-3)))' | \
textutil -stdin -stdout -convert txt
We can then use hs.execute in the Hammerspoon environment to run the shell command and return the converted value, so in your Lua code it's as simple as:
local text = hs.execute([[
osascript -e 'the clipboard as «class RTF »' | \
perl -ne 'print chr foreach unpack("C*",pack("H*",substr($_,11,-3)))' | \
textutil -stdin -stdout -convert txt
]])
FYI, the Hammerspoon API does allow you to retrieve RTF data from the pasteboard via hs.pasteboard.readDataForUTI with the "public.rtf" UTI, so technically you could do all of this in Lua, but you would have to convert the RTF data yourself.

Converting text output to objects using PowerShell

I am trying to schedule a list of URLs for maintenance mode in SCOM 2007 using PowerShell. I am reading the URLs' display names from a text file and passing them as input to the command below, but it's not working. Can somebody explain how to pass the display names from the text file as input?
$URLStuff = Get-Content C:\Display.txt
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$_.DisplayName -eq $URLStuff}
Get-Content returns an array of strings, one per line in the file. You need to turn your Where-Object filter around so that it searches that array for the DisplayName of each object returned from SCOM.
$URLWatcher = (Get-MonitoringClass -name Microsoft.SystemCenter.WebApplication.Perspective) |
Get-MonitoringObject | where {$URLStuff -contains $_.DisplayName}
I'm assuming that you've already verified that DisplayName does contain the data you're looking for and will match the contents of Display.txt.

How do I save the original HTML files with Apache Nutch

I'm new to search engines and web crawlers. I want to store all the original pages of a particular web site as HTML files, but with Apache Nutch I can only get the binary database files. How do I get the original HTML files with Nutch?
Does Nutch support this? If not, what other tools can I use to achieve my goal? (Tools that support distributed crawling are preferable.)
Well, Nutch writes the crawled data in binary form, so if you want it saved in HTML format you will have to modify the code (this will be painful if you are new to Nutch).
If you want a quick and easy solution for getting the HTML pages:
If the list of pages/URLs that you intend to crawl is fairly small, you are better off doing it with a script that invokes wget for each URL.
Or use the HTTrack tool.
EDIT:
Writing your own Nutch plugin would be great. Your problem will get solved, plus you can contribute to Nutch by submitting your work! If you are new to Nutch (in terms of code and design) you will have to invest a lot of time building a new plugin; otherwise it's easy to do.
A few pointers to help you get started:
Here is a page which talks about writing your own Nutch plugin.
Start with Fetcher.java. See lines 647-648; that is the place where you can get the fetched content on a per-URL basis (for those pages which were fetched successfully).
pstatus = output(fit.url, fit.datum, content, status, CrawlDatum.STATUS_FETCH_SUCCESS);
updateStatus(content.getContent().length);
You should add code right after this to invoke your plugin and pass the content object to it. By now you will have guessed that content.getContent() is the content of the URL you want. Inside the plugin code, write it to a file; the file name should be based on the URL (obtained from fit.url), otherwise it will be difficult to work with.
You must first make the modifications needed to run Nutch in Eclipse.
When you are able to run it, open Fetcher.java and add the lines between the "content saver" comment lines.
case ProtocolStatus.SUCCESS:        // got a page
    pstatus = output(fit.url, fit.datum, content, status, CrawlDatum.STATUS_FETCH_SUCCESS, fit.outlinkDepth);
    updateStatus(content.getContent().length);
    //------------------------------------------- content saver ---------------------------------------------\\
    // File name is derived from the URL so each page gets its own file.
    String filename = "savedsites//" + content.getUrl().replace('/', '-');
    File file = new File(filename);
    file.getParentFile().mkdirs();
    boolean created = file.createNewFile();
    if (!created) {
        System.out.println("File exists.");
    } else {
        FileWriter fstream = new FileWriter(file);
        BufferedWriter out = new BufferedWriter(fstream);
        // Keep everything from the <!DOCTYPE html ...> marker onwards.
        out.write(content.toString().substring(content.toString().indexOf("<!DOCTYPE html")));
        out.close();
        System.out.println("File created successfully.");
    }
    //------------------------------------------- content saver ---------------------------------------------\\
To update this answer:
It is possible to post-process the data in your crawl segment folder and read the HTML (along with the other data Nutch has stored) directly.
Configuration conf = NutchConfiguration.create();
FileSystem fs = FileSystem.get(conf);
// "segment" points at one segment directory of your crawl.
Path file = new Path(segment, Content.DIR_NAME + "/part-00000/data");
SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
try {
    Text key = new Text();
    Content content = new Content();
    while (reader.next(key, content)) {
        System.out.println(new String(content.getContent()));
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    reader.close();
}
The answers here are obsolete. It is now possible to get the plain HTML files simply with nutch dump. Please see this answer.
In Apache Nutch 2.3.1:
You can save the raw HTML by editing the Nutch code. First run Nutch in Eclipse by following https://wiki.apache.org/nutch/RunNutchInEclipse
Once Nutch is running in Eclipse, edit the file FetcherReducer.java, add the code below to the output method, and run ant eclipse again to rebuild the class.
The raw HTML will then be added to the reprUrl column in your database:
if (content != null) {
    ByteBuffer raw = fit.page.getContent();
    if (raw != null) {
        ByteArrayInputStream arrayInputStream = new ByteArrayInputStream(raw.array(), raw.arrayOffset() + raw.position(), raw.remaining());
        Scanner scanner = new Scanner(arrayInputStream);
        scanner.useDelimiter("\\Z"); // read the whole content into one String
        String data = "";
        if (scanner.hasNext()) {
            data = scanner.next();
        }
        fit.page.setReprUrl(StringUtil.cleanField(data));
        scanner.close();
    }
}

Tika returning incorrect lines of text for a PDF with lots of tables

I am using Tika to extract text from a PDF file that has a lot of tables.
java -jar tika-app-0.9.jar -t https://s3.amazonaws.com/centraldoc/alg1.pdf
It is returning some invalid text, and sometimes it trims the white space between two words; for example it returns
"qu inakli fmyathematical ideas to the real world" instead of "Link mathematical ideas to the real world".
Is there a way to minimize this kind of error? Or is there another library that I can use? Does it make sense to use OCR to process this kind of PDF?
Try to control the order when using the PDFBox parser: PDFTextStripper has a flag that controls the order of lines in the document. By default (in PDFBox) it is set to false for performance reasons (no order preserved), but Tika has changed this behaviour between releases, switching the flag on and off.
More details on exactly this problem are in my blog post Extracting text from PDF files with Apache Tika 0.9 (and PDFBox under the hood).
To get text from PDF to display in the right order, I had to set the SortByPosition flag to true... (tika-app-1.19.jar)
BodyContentHandler handler = new BodyContentHandler();
Metadata metadata = new Metadata();
ParseContext context = new ParseContext();
PDFParser pdfParser = new PDFParser();
PDFParserConfig config = pdfParser.getPDFParserConfig();
config.setSortByPosition(true); // needed for text in correct order
pdfParser.setPDFParserConfig(config);
pdfParser.parse(is, handler, metadata, context); // "is" is the InputStream for the PDF
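If you prefer to keep AutoDetectParser (so other formats still work), the same flag can also be passed through the ParseContext instead of instantiating PDFParser directly. A minimal sketch, with alg1.pdf as a placeholder path and the usual Tika and java.nio imports assumed:

AutoDetectParser parser = new AutoDetectParser();
BodyContentHandler handler = new BodyContentHandler();
Metadata metadata = new Metadata();
ParseContext context = new ParseContext();

PDFParserConfig config = new PDFParserConfig();
config.setSortByPosition(true);             // sort extracted text by its position on the page
context.set(PDFParserConfig.class, config); // the PDF parser picks this up from the context

try (InputStream is = Files.newInputStream(Paths.get("alg1.pdf"))) {
    parser.parse(is, handler, metadata, context);
}
String text = handler.toString();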
