smush.it vs OptiPNG / pngcrush [closed] - image-processing

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 3 years ago.
I'd like to see some online vs. offline image optimizers comparison numbers, namely Yahoo! Smush.it vs. OptiPNG or pngcrush.
How do these tools differ in speed and resulting image size, and which is the best choice?

Very detailed and comprehensive comparison — with lots of tools and results on many different types of PNGs and optimizations:
http://css-ig.net/png-tools-overview
I think it's a much better source than PunyPNG's small comparison, which shows their own tool coming out on top (partly by converting image formats rather than optimizing the existing format) :)

I really don't know how reliable the information on this site is, since they run their own compression service, but take a look at the comparison at this URL: http://punypng.com/about/comparison

I copied the following image:
And installed two of the tools you mentioned offline:
brew install optipng pngcrush
And compared the resulting image sizes (all with default settings), adding an online tool called reSmush.it to the mix:
879K feat-social-awareness.original.png
712K feat-social-awareness.optipng.png
700K feat-social-awareness.pngcrush.png
205K feat-social-awareness.resmushit.png
Speed of each tool was not measured for the above test. Subjectively they all felt about the same.
Comparing the images visually I was unable to see the difference between the original and the optimized versions created using the offline tools. In the case of reSmush.it, however, there was a noticeable loss in image fidelity which can be easily reproduced using their API (see example).
As a result, the above sizes are not an apples-to-apples comparison. More like apples-to-gorillas. So I went back and increased the reSmush.it quality to 100 by setting qlty=100 as specified in their API docs and got back the same lossy PNG as with the default settings.
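Here is roughly what that qlty=100 retry looks like in Python. This is a sketch only: the qlty parameter is the one from their docs mentioned above, but the endpoint URL and the response field names are from memory of their public API and should be checked against the current documentation before relying on them.

import requests

def resmush(path, quality=100):
    with open(path, "rb") as f:
        resp = requests.post(
            "http://api.resmush.it/ws.php",   # endpoint from memory; verify against the docs
            params={"qlty": quality},
            files={"files": f},
        )
    resp.raise_for_status()
    info = resp.json()
    if "dest" not in info:                    # on failure the JSON carries an error instead
        raise RuntimeError(info)
    optimized = requests.get(info["dest"]).content
    with open(path.replace(".original.", ".resmushit."), "wb") as out:
        out.write(optimized)

resmush("feat-social-awareness.original.png", quality=100)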
So what's the best choice? Well, it depends…
If compute resources are a major constraint consider using reSmush.it.
If image fidelity is a concern don't use reSmush.it.
If you use OptiPNG you're likely going to lose your original files (it overwrites by default).
If you use pngcrush you get better compression than with OptiPNG, without a noticeable loss in image fidelity.
If you want lossy optimization similar to reSmush.it in an offline tool, try pngquant (a quick usage sketch follows this list).
And if serving images over the wire under heavy bandwidth constraints consider a different image format altogether, such as Fabrice Bellard's BPG Image format.
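A minimal pngquant sketch, driven from Python with subprocess. The quality range and file names are just examples; check pngquant --help for the flags supported by your installed version.

import subprocess

subprocess.run(
    [
        "pngquant",
        "--quality=65-80",    # acceptable quality range for the lossy palette conversion
        "--force",            # overwrite the output file if it already exists
        "--output", "feat-social-awareness.pngquant.png",
        "feat-social-awareness.original.png",
    ],
    check=True,               # raise if pngquant exits with an error
)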

Related

Human annotation tool for corpora in NLP [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
I am trying to build my own training corpus for Named Entity Recognition, but I don't know if there is already an existing tool for this or if I have to implement one myself.
Basically, what I need to do is take a corpus and manually tag it word by word, which is pretty tedious, but it has to be done.
Can anyone tell me if there is already an existing one and where to get it?
I had a good experience working with BRAT.
GATE is also a very complex annotation tool, with a steeper learning curve.
We had a nice experience using DataTurks. They provide a nice, intuitive UI that lets you add collaborators, get insights into your data, see a leaderboard for annotators, and use some other funky features.
https://dataturks.com
For online annotation of text or HTML corpus of relatively short documents I also recommend BRAT. You will have to go under the hood of the python web application if you want to do anything custom. It also failed to work for me on large HTML documents (100 or so pages).
I have also used stand-alone apps:
Protege + Knowtator: a bit cumbersome to set up / use, but it works;
Gate: also cumbersome, and it somewhat works. Back up your annotations at regular intervals as you might get surprised by a stacktrace that also wipes or corrupts your annotated corpus (which is just serialized Java objects).
If you are dealing with PDF documents, we built a web-based PDF Annotation Tool: NOTA. It accepts anything printed to PDF, including scans. We do commercial OCR on our end to recover text from images. There is a REST API to create color-coded annotation schemas and pre-populate documents with annotations, as well as a REST API for exporting formatted text and annotation offsets. There is also a JS API you can use to customize any annotation workflows, add metadata to annotations, etc. Relationships are not supported out of the box. Large documents (200+ pages) are supported. Email us and we can give you an API key to try it out. Details and documentation links can be found here. It is free for small research projects.
Here is a screenshot of what the annotations look like:
I myself co-develop the web-based text annotation tool tagtog.net.
There is nothing to install, and you can define the types of entities you want to annotate. Additionally you can annotate relationships, document labels, and much more. You can upload your documents in many different formats, including PDF or markdown. You can annotate together with your team collaboratively. We have put great care into making the interface easy and beautiful. It looks like this:
You can start right away with a free account. Also I would be happy to help you with any doubt or issue you may have; just ping me or write us an email to the address shown on the website, tagtog.net.
Our annotation tool Prodigy is very scriptable, and is designed for active learning. It integrates especially well with our NLP library spaCy.
We've paid particular attention to the Named Entity Recognition (NER) annotation workflows, as entity recognition can otherwise be very slow. I have a tutorial video on this:
https://www.youtube.com/watch?v=l4scwf8KeIA
There is this tool called DataTurks; it's a super simple to use, fully online NLP annotation tool, so I can even easily push my teammates to complete datasets for our projects.
Try TagEditor. It is a desktop application designed to annotate text for training with the spaCy library. You can tag named entities, dependencies, parts of speech, and text categories, and print a JSON file.
Example
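For context, annotations exported by tools like this usually end up in something like spaCy's classic (v2-style) training format, with character offsets per entity. The sentences, offsets and labels below are made-up examples; TagEditor's actual JSON layout may differ.

# Each item is (text, {"entities": [(start_char, end_char, label), ...]})
TRAIN_DATA = [
    (
        "Apple is looking at buying U.K. startup for $1 billion",
        {"entities": [(0, 5, "ORG"), (27, 31, "GPE"), (44, 54, "MONEY")]},
    ),
    (
        "San Francisco considers banning sidewalk delivery robots",
        {"entities": [(0, 13, "GPE")]},
    ),
]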

Open Source way to real-time image processing OCR application? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have an application in mind that I want to produce. We have wall-mounted schedule boards that are divided into small rectangles using black lines on a white background. Magnetic name tags are placed in a particular partition to indicate that this person is to work in that cell. This system works very well for communication among people, but I would like a way of automatically saving this schedule information into a database.
I am envisioning a system where a camera is set in a fixed position focusing on the schedule board. Periodically the camera will take a picture of the board. I want to write some code to decipher which name tags are in which area. This would require some OCR or symbol recognition. There are big numbers on each name tag that I will use to identify the person whose name tag it is.
I naturally go to Python when tackling a new programming problem. I found this post on python image recognition, which looks like a good place to start (with PIL and numpy).
Do you know a good way to do this?
Update: I have tried SimpleCV and it seems good for now.
This is actually a pretty hard problem, even though it looks quite simple. But you can make it a lot easier by doing some stuff to your image to make this manageable. I have the following suggestions:
Try to make it so that your camera is looking straight at the board with a reasonable lens so that there is minimal distortion of the image on the edges, and no perspective distortion.
Given that you'll be shooting the occasional image for analysis I think performance is in no way an issue, so shoot high-resolution images, with a flash or with a long exposure time (because everything you're shooting is stationary) to get the best possible picture quality.
If the number of different tags you expect is not too large you might find it easier to just try to match reference images of these tags in your image through template matching rather than going for full OCR of numbers. This is a lot easier to get working if your image is good enough. The python opencv interface is very complete.
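A minimal template-matching sketch using the Python OpenCV interface, assuming a straight-on photo of the board and one cropped reference image per tag; the file names and the match threshold are placeholders to tune.

import cv2

board = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("tag_17.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation gives a score map; the peak is the best match.
scores = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.8:                     # threshold is a guess; tune it on real photos
    h, w = template.shape
    x, y = max_loc
    print("tag 17 found, centre at", (x + w // 2, y + h // 2), "score", max_val)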
High Performance Mark has a good comment on your question about including barcodes on the tags. I would add the option of QR codes, but that is much the same thing. Both are easy to detect and there are good libraries to help you read them.
If you decide you do need OCR, you should look into available OCR packages and not try to roll your own. Try pytesser for the tesseract engine or the OCRopus python interface.
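If you go the OCR route, a minimal sketch might look like the following. It uses pytesseract (a maintained Python wrapper around the Tesseract engine, in the same spirit as the pytesser package mentioned above), assumes Tesseract itself is installed, and assumes the tag region has already been cropped out of the photo.

from PIL import Image
import pytesseract

tag = Image.open("tag_crop.png")
# Treat the crop as a single text line and restrict recognition to digits,
# since the tags carry big numbers.
text = pytesseract.image_to_string(
    tag, config="--psm 7 -c tessedit_char_whitelist=0123456789"
)
print(text.strip())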
Since you mentioned that you would like to use Python for this problem, perhaps you could take a look at SimpleCV. It provides an easy way to grab an image from the camera and do basic image processing.
I strongly agree with jilles de witt that OCR would be an extremely hard image analysis task to develop from scratch. Code reading would be a better option, but that also will be difficult to program and will require sophisticated or somewhat challenging imaging as others have noted. However, for this app you really do not need to implement OCR or formal bar codes, QR or other 2d codes.
Since your application is constrained to a limited number of targets, perhaps you could make your own simple code. For example, you could place 0 to 4 big dots in a 2x2 array after each person's name. This simple example code distinguishes 16 unique tags, and the features will be much easier to image, extract and decode than formal codes. Add a locator line if the code position is not consistent.
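A rough sketch of how decoding such a 2x2 dot code could look, assuming you already have a tight grayscale crop of the code area next to the name; the darkness threshold is a placeholder to tune.

import cv2

def decode_dot_code(code_region_gray, threshold=128):
    h, w = code_region_gray.shape
    quadrants = [
        code_region_gray[: h // 2, : w // 2],   # top-left     -> bit 0
        code_region_gray[: h // 2, w // 2 :],   # top-right    -> bit 1
        code_region_gray[h // 2 :, : w // 2],   # bottom-left  -> bit 2
        code_region_gray[h // 2 :, w // 2 :],   # bottom-right -> bit 3
    ]
    bits = 0
    for i, quadrant in enumerate(quadrants):
        if quadrant.mean() < threshold:         # mostly dark => a dot is present
            bits |= 1 << i
    return bits                                 # 0..15, one ID per tag

region = cv2.imread("tag_code_crop.png", cv2.IMREAD_GRAYSCALE)
print("tag id:", decode_dot_code(region))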

Practices for processing and storing user contributed images to a website [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Like many people I am handling uploaded user images for a website. Right now we're just saving everything as a static file and letting the browser resize it with HTML (suboptimal, I know, hence my question). I want to move to a presumably better process where the image being served up is already the right size.
Clearly I want to store an original, but I'm wondering what the right approach is for handling the other sizes. Most general photos need to be viewable in 2 thumbnail formats as well as a larger format, say less than 800x600 px. Some photos might need to be viewable in some other formats, but each photo will "know" which formats it needs at the time it is uploaded. So my question is: should I store all versions (3-4, probably) of the files and let my static file server remain truly static, or should I build a request handler that resizes the image from the original on demand? (or some other option)
I'm leaning towards generating the images at save time and writing backfill scripts if I ever need other sizes.
You have two options: the one you proposed (use server-side code to create your images at various sizes and store them statically), or use a server-side script to dynamically serve scaled-down versions from the original. With the dynamic approach you can set response headers telling the browser to cache aggressively, so your server isn't constantly serving up dynamically scaled-down images.
Option 1 Advantage: The right image, every time.
Option 1 Disadvantage: More space, more files
Option 2 Advantage: Save space and can create more sizes on the fly
Option 2 Disadvantage: More processing power required and more dependence on the browser (less optimal in terms of bandwidth)
Personally, I would go the static route, you can optimize file size using good compression/crush techniques and you know explicitly your user is receiving the right file.
Take your cue from how Facebook, Google, Apple, CNN, and Wikipedia store their images. None of them dynamically outputs a manipulated image on every request.
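For what it's worth, a minimal Pillow sketch of the save-time approach the question leans toward: generate the fixed set of sizes once when the upload is accepted and keep the original untouched. The sizes and paths are placeholders.

from PIL import Image

SIZES = {"thumb_small": (80, 80), "thumb_large": (160, 160), "display": (800, 600)}

def generate_versions(original_path, derived_stem):
    for name, size in SIZES.items():
        img = Image.open(original_path).convert("RGB")  # flatten alpha so JPEG save works
        img.thumbnail(size)                             # preserves aspect ratio, never upscales
        img.save(f"{derived_stem}_{name}.jpg", quality=85)

generate_versions("uploads/original/1234.jpg", "uploads/derived/1234")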

PDF Parsing with Text and Coordinates [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 1 year ago.
I am currently using PDFBox to parse a PDF and I am trying to figure out how to retrieve data about the text, such as the font (bold, size, etc.) and the position of the text on the page.
Any suggestions?
After poking around the (hard to find) PDFBox docs, I found this little gem.
Apparently one of the examples shows exactly how to do everything you asked. Basically, you subclass PDFTextStripper and override the processTextPosition method. There, you query the TextPosition for whatever information you need.
For future reference, you can find the javaDoc here: http://pdfbox.apache.org/apidocs/index.html
Edit 2018-04-02: original link is dead, but example can be found in the SVN repo here.
One of the best things for text extraction from PDFs is TET, the text extraction toolkit. TET is part of the PDFlib.com family of products.
PDFlib.com is Thomas Merz's (the author of the "PostScript and PDF Bible") company.
TET's first incarnation is a library. That one can probably do everything you want, including giving you positional information about each text element on the page. Oh, and it can also extract images. It recombines and merges images which are fragmented into pieces.
pdflib.com also offers another incarnation of this technology, the TET plugin for Acrobat. Obviously you'd need Acrobat as well to make use of this.
And the third incarnation is the PDFlib TET iFilter. This is a standalone tool for user workstations. This one is free (as in beer) to use for private, non-commercial purposes.
Lastly, TET also comes with a commandline interface.
TET is really powerful. Way better than Adobe's own text extraction. It extracted text for me where other tools (including Adobe's) spat out only garbage.
A few months ago I tested their desktop standalone tool, and what they say on their web page is true. It has a very good commandline. The tool handled some of my "problematic" PDF test files to my full satisfaction.
This thing is my recommendation for any sophisticated and challenging PDF text extraction requirement.
TET is simply awesome. It detects tables. Inside tables, it identifies cells spanning multiple columns. It identifies table rows and contents of each table cell separately. It deals very well with hyphenations: it removes hyphens and restores complete words. It supports non-ASCII languages (including CJK, Arabic and Hebrew). When encountering ligatures, it restores the original characters...
Give it a try.
The GetPageText function with extract option 3 or 4 in Quick PDF Library returns a CSV string for the selected page which includes the text (either individual words or a piece of text) and the related font name, text color, text size and co-ordinates on the page.
Note: it is a commercial library and I work for the company that sells it.
PDF files can be parsed with tabula-py or tabula-java.
I made a full tutorial on how to use tabula-py in this article. You can also use tabula in a web browser, as long as you have Java installed.
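Worth noting that tabula is aimed at table extraction rather than the per-character font and coordinate details asked about. A minimal sketch, assuming a Java runtime and a local example.pdf:

import tabula

# Each detected table comes back as a pandas DataFrame.
tables = tabula.read_pdf("example.pdf", pages="all")
for i, df in enumerate(tables):
    print(f"table {i}: {df.shape[0]} rows x {df.shape[1]} columns")

# Or convert everything straight to CSV in one call.
tabula.convert_into("example.pdf", "example.csv", output_format="csv", pages="all")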

free visual editor for graph (dot) files [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
Is there a free (as in "cheers"), linux-compatible, interactive visual editor for graphviz or other graphs? aptitude seems to be drawing a blank.
edit: "cheers" means both "beer" and "speech". meta-edit: I guess it should be "free as in beach".
edit 2: Maybe a suitable svg editor would be a more realistic goal. I basically want something that can be used to conveniently create a collection of labeled shapes and lines which connect them. Actually it would probably make more theoretical sense to extract the graph from this data, since it includes both semantic data (the graph) and presentation data (the way it's arranged on the screen, the colours used, etc). Is there a way to lay out labeled shapes conveniently with inkscape or some other free vector graphics editor? I really need rearranging of the nodes, and (re)flowing of the text in them, to happen with maximum convenience.
I've also realized that this is really a superuser question. I was going to repost it over there when I found an existing question that seems likely to provide me with an answer: dia.
edit 3: dia seems useful except that it doesn't seem to be possible to get the textual contents of node objects to wrap in any useful manner (ie any way other than by inserting manual line breaks). This is kind of a dealbreaker, since it screws most of the convenience factor that's my incentive to do things this way rather than with a text editor or a pen and paper. But it supports some sort of event model and Python-based scripting, so I'm going to dig around a bit and see if I can use python to wrap the text in response to content changes. Unless one of you lovely people has a better idea..? Basically I want to have the option to explicitly set the node size via GUI interaction, and have the contents wrap and rescale (within a certain range of font sizes) to fit it. Rich text would be pretty useful.
In other words, this is actually a valid SO question at this point, since it appears to require coding.
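The wrapping itself is easy enough in plain Python; it's hooking it into dia's event model that I'd still need to work out. A rough sketch of just the wrapping step, where the characters-per-line estimate is a crude guess based on an average glyph width:

import textwrap

def wrap_label(text, node_width_px, font_size_px=12, avg_char_width=0.55):
    chars_per_line = max(1, int(node_width_px / (font_size_px * avg_char_width)))
    return textwrap.fill(" ".join(text.split()), width=chars_per_line)

print(wrap_label("a longish label that should reflow when the node is resized", 120))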
To save time for those eager to try existing programs that handle DOT graphs:
dotty can display DOT graphs and, with a little luck, you can move its nodes with the mouse, nothing more, and you can easily get a segfault as a bonus (I tried the latest stable graphviz)
lefty is only a special-purpose language interpreter used by dotty, nothing to look at
KGraphEditor is an empty wishful project (a Qt window and a few buttons)
gvedit is not really a graph editor: it provides a simple text editor and you hit F5 to run a layout tool and open a picture; you can actually get more functionality from configuring your own favourite text editor
grappa is an abandoned Java applet, which I failed to run
interestingly, dia can export to DOT ("PyDia DOT Export"), but due to its buggy printing, you have to post-process the files to use them
graphedit can read in a DOT graph, and you can move its nodes around and change their colors
Eclipse people started working on DOT support in GEF4, so it can display DOT graphs
GraphUI has a very interesting demonstration video, but beware: although it might seem that the graph is being created by clicking and dragging, in reality all editing happens through the keyboard, using shortcuts. On the plus side, contextual instructions are always available showing which shortcuts do what.
DotEditor claims to offer a tree editor, with node attributes/color/shape modifiable with the mouse.
The graph editors mentioned in other answers, yEd (a Java application) and JointJS/Rappid (a JavaScript thing), apparently have nothing to do with DOT (I tried both).
I believe there is no working DOT-handling graph editor out there at all.
JointJS is a JavaScript graph editing library based on Backbone: http://www.jointjs.com/
The author also provides Rappid, an online graph editor which might suit your needs; I don't know about DOT file import, though.
