Optimization methods for iOS PDF render slowness

I have a PDF reader app that renders PDF files. It works fine for normal PDFs, but for some big magazine files it is really slow to render a page. I then tried opening the same file in GoodReader; it is slightly better than my app, but still very slow. That suggests this kind of PDF really needs to be optimized before it is used on an iOS device.
I've tried Adobe Acrobat 10 to reduce the file size, but the result is not very noticeable. I also have another, similar magazine PDF that renders pretty fast in my reader, yet I can't tell what the difference is. I think there must be some key factors that affect PDF rendering, but unfortunately I have no idea what they are.
Can anybody advise how to optimize a PDF file? Is there any good software for that? Thanks.

If you have control over the generation of your files, I would suggest avoiding complex compression algorithms such as JBIG2 and reducing the resolution (not the compression quality) of your raster images. JBIG2 is only used for black-and-white images, so maybe this is why you are getting slow performance with some files and not with others.
Text should not be a problem in general; it is usually straightforward to render, but you could try avoiding fully embedded fonts, if possible, to keep the file size small.
If you will be using these files in a web scenario, I would also recommend using Linearized PDF files.
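If you cannot regenerate the files, a common workaround is to re-distill them and downsample the raster images. This is not from the answer above, just a rough sketch: it assumes Ghostscript (the `gs` binary) is installed, and the 150 DPI target and file names are only example values.

```python
# Rough sketch: re-distill a PDF with Ghostscript, downsampling raster images
# so pages decode faster on the device. Assumes `gs` is on PATH; the file
# names and resolutions are placeholders.
import subprocess

def downsample_pdf(src: str, dst: str, image_dpi: int = 150) -> None:
    subprocess.run(
        [
            "gs",
            "-sDEVICE=pdfwrite",                  # rewrite as a new PDF
            "-dCompatibilityLevel=1.4",
            "-dPDFSETTINGS=/ebook",               # medium-quality preset
            "-dDownsampleColorImages=true",
            f"-dColorImageResolution={image_dpi}",
            "-dDownsampleGrayImages=true",
            f"-dGrayImageResolution={image_dpi}",
            "-dNOPAUSE", "-dBATCH", "-dQUIET",
            f"-sOutputFile={dst}",
            src,
        ],
        check=True,
    )

downsample_pdf("magazine.pdf", "magazine-optimized.pdf")
```

For the web scenario, linearization can be added as a separate pass, for example with `qpdf --linearize in.pdf out.pdf`.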

Related

Powerful GeoJSON editor to edit 10 MB worth of .geojson data

Alright, I have a GeoJSON file that is about 10 MB. Normal browser-based editors fail for obvious reasons, so is there any GeoJSON editor powerful enough to edit a 10 MB file? And I am not talking about just a JSON editor.
In the past I have had good experience with https://vector.rocks/.
It is an online GeoJSON editor that can work with large files. The disadvantage is the tool's limited functionality.
I would also check if you really need a 10MB file.
Depending on your application it might be worth compressing the file to make it easier to work with.
At https://mapshaper.org/ you can load a GeoJSON file (even very large files) and download it with reduced size but lower precision.
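If the reduced-precision route works for you and you would rather script it than use mapshaper, here is a rough standard-library sketch of the same idea; the 5-decimal rounding (roughly metre-level) and the file names are assumptions.

```python
# Rough sketch of the precision-reduction idea: round every coordinate to a
# fixed number of decimals and drop pretty-printing whitespace. Assumes a
# FeatureCollection of simple geometries (no GeometryCollection).
import json

def round_coords(value, decimals=5):
    if isinstance(value, float):
        return round(value, decimals)
    if isinstance(value, list):
        return [round_coords(v, decimals) for v in value]
    return value

with open("input.geojson") as f:
    data = json.load(f)

for feature in data.get("features", []):
    geometry = feature.get("geometry") or {}
    if "coordinates" in geometry:
        geometry["coordinates"] = round_coords(geometry["coordinates"])

with open("smaller.geojson", "w") as f:
    json.dump(data, f, separators=(",", ":"))  # compact output, no whitespace
```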
If neither solution is suitable for you, you will probably have to get used to professional GIS software.
Some of the most popular free options are:
QGIS (https://qgis.org)
gvSIG (http://www.gvsig.com/en/products/gvsig-desktop)
GRASS (https://grass.osgeo.org/)
They can all easily deal with 10 MB files, but it will take you some time to get used to their editing functions.
While designed for editing OSM, I find JOSM to also be a powerful desktop editor for large GeoJSON files:
https://josm.openstreetmap.de/
You'll just want to be careful working with polygons, because JOSM doesn't directly support GeoJSON polygons; it instead uses a "closed way" linestring, but there are patches:
https://josm.openstreetmap.de/ticket/17453
https://josm.openstreetmap.de/ticket/18902
Beyond polygons, JOSM is well suited for working with GeoJSON linestrings and points.

What are some options for handling image uploading/compression in ASP?

Please bear with me, as I'm not trying to frustrate anyone with inane questions; I did search Google for this but couldn't really find anything recent or helpful.
I am a novice programmer working on a classic ASP web application. I just enabled users to upload and download images, but I'm quickly regretting it, as it's eating up all of the router bandwidth. I'm finding my solution inadequate, so I want to start over.
My desire is threefold with this functionality:
Compression. I understand that this is impossible to do BEFORE uploading without some kind of Java/Silverlight/Flash portion of the application to handle uploads, correct? What is the common way most places go about this? Just allow regular file uploads and compress once they are on the server?
Resizing. I want to resize all images to a reasonable size before they are uploaded, instead of just telling users who try to upload huge camera images that they can't. I figure I just want to let them upload and have the images resized for them. Does this functionality exist already?
Changing filetype. I want to allow users to upload all image file types but convert them to .jpg on the server after the upload.
With these three requirements, how hard is it to implement something like this in just pure code and libraries? Would it be better to just use a 3rd party plugin, such as ASPjpeg or ASPupload? Have you encountered something similar, and what was your solution?
Thanks.
Take a look at ASPJpeg and ASPUpload from Persits. We use these components to upload a full-size image (it can be a PNG even though the library is called "ASPJpeg"), resize it to the several different sizes we need on our site, then store the resized images on the server in a variety of folders. The ASPUpload component is a little tricky, but if you follow their sample code you'll be fine.
I never found a good component for decompressing uploaded zip files and had to write my own, which I've since abandoned. In the end, with upload speeds increasing and storage getting so cheap, it started to matter less and less that the files were compressed before being uploaded.
EDIT: Just noticed you mentioned these components in your question. Consider this an endorsement of your idea to use them. :-)
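Not the classic ASP components described above, but as a language-neutral illustration of the server-side resize-and-convert step (here with Python's Pillow), the flow is roughly: accept the upload as-is, then shrink it and re-save it as a JPEG. The file names, size cap, and quality setting are placeholders.

```python
# Illustrative sketch only: the answer's actual setup uses the ASPJpeg/ASPUpload
# COM components in classic ASP. This just shows the resize + convert-to-JPEG
# step after the file is already on the server.
from PIL import Image

MAX_SIZE = (1600, 1600)  # assumed upper bound for stored images

def store_as_jpeg(uploaded_path: str, output_path: str) -> None:
    with Image.open(uploaded_path) as img:
        img = img.convert("RGB")      # JPEG has no alpha channel
        img.thumbnail(MAX_SIZE)       # shrink in place, preserving aspect ratio
        img.save(output_path, "JPEG", quality=85, optimize=True)

store_as_jpeg("uploads/photo.png", "images/photo.jpg")
```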

Can Mathematica be instructed to print-to-file smaller PDF files?

Mathematica 9.0.1.0, Linux.
Create a notebook cell with only the word "Section" and apply the format "Section" to it. Then create a variable x and evaluate it. Then print the two-cell notebook to a PDF file. (We often have to pass these back and forth via email to mobile users.) The resulting PDF file is just under 1 MB. A few more modest additions, and Mma's print-to-file yields 2-3 MB files from about one page of notebook. For comparison, my 800-page, dense, LaTeX-generated book with R graphics consumes about 4 MB.
Can Mma be instructed to produce more compact PDF files? I know it can rasterize graphics, but this isn't really a graphics problem.
This comes from the folks at Wolfram support:
The pdf files are large because they contain embedded fonts for faithful reproduction.
One way to reduce the file size would be to set certain options below to False.
This can be done from the Mathematica menu: go to Format -> Options Inspector, select 'Global Preferences' under 'Show option values', then go to Notebook Options -> Printing Options and set EmbedExternalFonts to False.
Do the same for Notebook Options -> Printing Options -> EmbedStandardPostScriptFonts, setting it to False.
However, the PDF that is generated may not look exactly like you want, especially if you send it to someone else. If you just want to keep the PDF on your machine, where the fonts exist anyway, this may be a good default option.
Apparently, their developers are working on the problem, too.

Convert HTML to TIFF or printable poster

I want to make a website where people come and type a sentence, and I make a poster out of it, print it, and send it to them.
I know I can make a box with HTML divs, color it, and use a web font, but my questions are:
How do I go from HTML to TIFF? (I've read TIFF is the best format for poster printing.)
Given that DPI on the web is a lot lower than for posters, how do I increase the DPI when generating the poster? Can I use some sort of library on the server?
What are the drawbacks of using web fonts, if the font size is large enough?
Also, how do companies like Zazzle, Mixbook, and Shutterfly go about putting text on the image and printing it large?
My original plan was to use Rails; are there any useful gems that can help me?
I see in "Convert Html to a Printable Image" that people advise converting to PDF; wouldn't that destroy the quality of the poster?
Any other advice would be appreciated.
I think this will be useful for you: https://github.com/csquared/IMGKit
Also look at http://www.imagemagick.org/ for image processing. It has a Ruby wrapper: http://rmagick.rubyforge.org/
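IMGKit is a thin wrapper around wkhtmltoimage, so the overall pipeline looks roughly like the sketch below (shown here driven from Python via subprocess rather than the Ruby gems; it assumes wkhtmltoimage and ImageMagick are installed, and the pixel width and DPI are placeholder values).

```python
# Rough sketch of the render-then-tag-with-print-DPI pipeline. Both tools must
# be installed; 3600 px at 300 DPI corresponds to a 12-inch-wide print.
import subprocess

# 1. Render the HTML at a large pixel width so there are enough pixels for print.
subprocess.run(
    ["wkhtmltoimage", "--format", "png", "--width", "3600",
     "poster.html", "poster.png"],
    check=True,
)

# 2. Convert to TIFF and stamp the print resolution metadata.
subprocess.run(
    ["convert", "poster.png", "-units", "PixelsPerInch", "-density", "300",
     "poster.tiff"],
    check=True,
)
```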

Derive Text, Images, and LaTeX Equations from Websites

Would it be possible to derive the text, images, and LaTeX equations from a particular website so that you can directly customize your own PDF without the objects being blurry? Only the images would have a fixed resolution.
I realize that there are a couple of ways of generating a PDF indirectly. Attempting to render a PDF from Wolfram MathWorld on the Riemann Zeta Function, for instance, would be possible by printing and saving it as a PDF via Chrome, but as you zoom in more closely, the LaTeX equations and text naturally become blurry. I tried downloading Wolfram's CDF Player, but it contains only the syntax for Mathematica's libraries, not the helpful explanations that Wolfram MathWorld provides. What would be required for me to extract the text, images, and LaTeX equations into a PDF file without having them blurry?
Unless you have access to the LaTeX source that was used to produce the images in a way that isn't apparent from your question, the answer is "you cannot." Casual inspection of the website linked implies that the LaTeX that is used to produce the equations is not readily available (it's probably on a backend system somewhere that produces the images that get put on the web server).
To a browser, it's just an image. The method by which the image was produced is irrelevant to how it appears on the web page, and how it would appear in a PDF (i.e., more pixelated than desired).
Note that if a website uses a vector-graphics format like SVG instead of a pixel based format like PNG or JPEG, then those will translate to PDF cleanly, and will zoom nicely. That's a choice that would be made by the webmaster of the site in question.
Inspecting the source reveals that the GIFs depicting each equation have alt-text that approximates the LaTeX that would render them (it might be Mathematica code; I'm not familiar with Wolfram's tools). Extracting a reasonable source wouldn't be impossible, but it would be hard. The site is laid out with tables, so even with something like Beautiful Soup, parsing the HTML could be tricky. Some equations are broken up into different GIFs, so parsing them would be even trickier. You'd also have to convert from whatever the alt-text is to LaTeX.
All in all, if you don't need to do a zillion pages, I'd suggest copy-pasting the text, saving the images, grabbing the alt-text of each image, and doing the conversion yourself.
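A minimal sketch of the "grab the alt-text of each image" step, using requests and Beautiful Soup; the URL is just the example page from the question, and the filtering is a guess that will need tuning for MathWorld's actual markup.

```python
# Minimal sketch: list every image on the page together with its alt-text,
# which for the equation GIFs approximates the source markup.
import requests
from bs4 import BeautifulSoup

url = "https://mathworld.wolfram.com/RiemannZetaFunction.html"
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

for img in soup.find_all("img"):
    alt = (img.get("alt") or "").strip()
    if alt:  # skip purely decorative images with no alt-text
        print(img.get("src", ""), "->", alt)
```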
For the given example, you could download the Mathematica notebook for that page. Maybe it is possible to parse something from that.
