How to add DRM to ePub files programmatically?

I'm searching for a method to add DRM to ePub files programmatically. Does anyone know how to do that? Maybe 3rd-party software?

I added DRM using the following approach:
If the ePub is coming from a server, make the zip file password-protected; the HTML pages inside can be encrypted via AES-128. You can also encrypt the images, but you need to add more code on the reader side.
If you are encrypting images, then all images must be decrypted before you load the HTML page in the web view or browser.
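Here is a minimal sketch of that HTML-encryption step in Python, assuming the `cryptography` package and an already-unpacked ePub; the `unpacked_epub/OEBPS` path and the key handling are placeholders, and your reader app must ship the matching AES-128 decryption and key-delivery logic:

```python
# Sketch: encrypt the HTML files inside an unpacked ePub with AES-128-CBC.
# Key delivery (e.g. from your server) and the reader-side decryption are
# out of scope here.
import os
from pathlib import Path

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)  # 128-bit key; in practice, delivered by your server

def encrypt_file(path: Path, key: bytes) -> None:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    data = padder.update(path.read_bytes()) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    # Prepend the IV to the ciphertext so the reader can decrypt.
    path.write_bytes(iv + enc.update(data) + enc.finalize())

for html in Path("unpacked_epub/OEBPS").rglob("*.xhtml"):
    encrypt_file(html, KEY)
```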

If you just want to protect your books from copying, then publish them through Amazon or Apple. As long as you don't select otherwise, those bookstores will wrap the books in DRM that allows them to be read on only a limited number of devices belonging to the purchaser.
If you have some reason for wanting to worry about DRM yourself, then after carefully considering why on earth you'd want to, you'll need to find a vendor who can provide both the DRM technology and the reader (perhaps white-labeled for you) which knows how to read those DRM'd books. You see, DRM is useless unless there's a reader that can read the DRM'd books. And what's more, you need a back-end infrastructure to keep track of which devices belong to which person. There are vendors who provide such solutions. However, you'll end up paying them some of the money you were trying to save by avoiding Amazon or Apple.
The pricey Adobe solution mentioned by one commenter has the advantage that it is used by multiple bookstores/reading systems, including Kobo and Sony, so if you use it, then people buying your books can read them on any of these devices--albeit with an annoying step involving some software called ADE.
If for some strange reason you are thinking of building this entire infrastructure yourself, all I can say is, good luck.
More generally, even if you work through Amazon or Apple, it's well worth stepping back and thinking about whether you really want DRM or not. It's a natural human instinct to think, "By golly, I'm not going to let anyone steal MY book!!", but many of the people who might pirate a non-DRM'd book would not have paid money in the first place, so it's hard to say you're actually "losing" money. And someone who pirated the book might then tweet or tell their friends about it, and you'll end up selling more books than otherwise. Finally, someone who really wants the book without paying will crack the DRM anyway, as another commenter noted.

Related

iOS obfuscation of supporting files

I have an SQLite table and some audio in my iOS application that I have put a lot of work and effort into, but looking through iFile or any other browser-based application I can easily find these files and do whatever I want with them. If I can do this, then someone else, more malicious than myself, would be able to do the same.
How can I obfuscate my files while keeping them usable?
What you need to do depends on who you are protecting them from.
Using NSData "Data Protection" will protect the file only when the iDevice is locked, which is a partial measure at best, but it is a step up.
Another method is to encrypt them with a key that you save in the keychain. An iPhone 6s can encrypt 1 MB in 6 ms, an iPhone 4s in 30 ms (using Common Crypto), so there is really no noticeable speed degradation. A good candidate for this is a 3rd-party library, RNCryptor, which handles many of the details needed to do this right. The attacker will then have to be more than a curious user, so this may meet your needs.
You need to define the attacker you are protecting against, ranging from a curious kid to a well-funded government.
Depending on how hard you want to make it, just hash all the filenames so people can't see what they are. If that's too easy, encrypt them ... I have an answer here on SO that details how to do this.
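As a rough illustration of the filename-hashing idea (not iOS code; the directory and filenames are hypothetical, and on-device you'd compute the same hash with CommonCrypto or CryptoKit):

```python
# Sketch: obfuscate bundled asset names by hashing them, so a casual file
# browser reveals nothing about what each file is. The app never stores a
# name map; it recomputes the hash at runtime to locate a file.
import hashlib
from pathlib import Path

assets = Path("app_bundle/assets")  # hypothetical asset directory

def obfuscated_name(original: str) -> str:
    return hashlib.sha256(original.encode("utf-8")).hexdigest()

# Build step: rename every asset to its hash.
for f in list(assets.iterdir()):
    f.rename(assets / obfuscated_name(f.name))

# Runtime lookup: recompute the hash for the file you want.
audio_path = assets / obfuscated_name("interview_part1.caf")
```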

Creating PDFs from iOS text fields

I'm working on the requirements & specifications for a new iOS app intended for use by certain professionals working "in the field". All day long for weeks on end, these folks have a sizable reporting burden to their superiors using standardized forms that track all different kinds of information. Traditionally, those forms are in PDF, and are simply printed and filled out in ink and then shared with the dozens to hundreds of others working the same operation. Sometimes they'll use a PDF with form fields so the data can be typed and then printed as part of the form. Either way, given their workflow, time and stress pressures, and other factors, it's not a very productive way to get the standardized reporting forms done.
The app we're spec'ing would offer an iOS (and Android, if possible -- but secondary or even tertiary requirement at this point) user interface for tracking the data they enter in the field, organizing it in a logical manner for each individual user, and with the press of a button, take all that data and automatically create a PDF file of it using the standardized form.
Of course, the forms are STRICTLY and rigidly standardized in this industry, and any deviation in format, structure, or presentation is simply not tolerable.
So I was approaching the project by thinking the app would maintain an internal repository of the original standardized forms from the accrediting organization, with each possible data area defined as a field. The app would:
1. open the necessary PDF form for the task at hand;
2. parse its dictionary to identify the specific data fields;
3. for every single field, identify the relevant data from the iOS app's own user interface and data tables, and assign that data to the corresponding field from the PDF/dictionary;
4. export the PDF to a NEW PDF file, which the app would either email or store through iCloud, Dropbox, or some other form of file sharing.
The catch with #4 is that the PDF file must remain editable by standard PDF applications on Windows and Mac (Acrobat, Preview, etc.), so all the fields need to remain. And the PDF should be viewable just the same on either Windows or Mac.
Now, at NO time will the PDF (neither the original nor the exported final document) EVER need to be displayed inside the iOS app, nor would it make much sense to be able to do so.
I don't know if any of this is possible. This is our first iOS project, and we've been leaning towards building the app using Moai or Corona or some other framework to save development time and make porting across platforms easier. That said, if it cannot be done using Lua and one of these frameworks (I remain skeptical...they seem HIGHLY geared towards games), we're not opposed to doing it directly in Objective C and building an Android version some time down the road.
But either way, I'm at a loss in assessing whether this is even a practical undertaking. Our requirements are clear, and frankly if this can't be done, the project won't be pursued any further. But I could definitely use some help from you folks in identifying what my options are, whether I can do it in Lua, and what SDK(s) would be most useful in accomplishing this.
Based on what you've said, it seems that there is little reason to do the PDF-based part of the work on the mobile device itself since:
you don't need to display it on the iPad
you plan to email it or store it in the cloud
if you write this for iOS, you will have to write it again for Android, as you've mentioned
Can you simplify the mobile part of your requirement by focusing on data collection and validation, then firing the data off to a server to do the document production? That will give you a lot more flexibility in the tools you can use to merge the data into PDF docs. If so, you could look at creating PDFs or populating the fields from code using something like iText (C# or Java). If you don't want to build your own back-end server, you could try something like Docmosis Cloud, but that might not allow you to get your precise layouts.
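As a sketch of what that server-side fill step could look like, here is the same flow in Python with pypdf instead of iText; the form path and field names are made up, and in practice the field names come from parsing the form's own dictionary:

```python
# Sketch: fill an AcroForm PDF's fields while keeping them editable.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("standard_form.pdf")
writer = PdfWriter()
writer.append(reader)  # clone pages, keeping the form fields intact

# Map the collected data onto the form's field names (hypothetical names).
field_data = {"incident_date": "2024-03-07", "reporting_officer": "J. Smith"}
for page in writer.pages:
    if page.get("/Annots"):  # only pages that actually carry form widgets
        writer.update_page_form_field_values(page, field_data)

with open("filled_form.pdf", "wb") as f:
    writer.write(f)  # the fields remain editable in Acrobat/Preview
```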
Certainly the catch you mentioned, needing to keep the PDFs editable with their fields, is a significant gotcha in all cases. If you could convince the stakeholders that it is better to generate the final documents from your system (generate draft, review, update data, generate again, etc.), rather than generating editable documents that you then lose control and traceability over, then you will be miles ahead.
Hope that helps.
Did you consider just generating a new PDF using an image of the form as the background and writing the user's data into the required areas over the form image? That would reduce the complexity of trying to parse the original form PDFs.
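A minimal sketch of that approach with Python's reportlab, assuming a page-sized scan of the form; every filename and coordinate here is a placeholder you would calibrate against the real form:

```python
# Sketch: draw the scanned form as a full-page background image, then
# write the collected values on top at calibrated positions.
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

W, H = letter  # 612 x 792 points
c = canvas.Canvas("report.pdf", pagesize=letter)
c.drawImage("form_background.png", 0, 0, width=W, height=H)
c.setFont("Helvetica", 10)
c.drawString(140, 700, "J. Smith")    # e.g. over the "Name" box
c.drawString(140, 676, "2024-03-07")  # e.g. over the "Date" box
c.showPage()
c.save()
```

Note that while this sidesteps the field-parsing work, the output has no form fields at all, which conflicts with the stated requirement that the exported PDF stay editable in Acrobat or Preview.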
That's a point of worthwhile discussion, but one we don't have an ideal answer on. I tend to think of that as the almost perfect scenario -- it'd be considerably easier to develop. There are two key issues with this approach that have made us table it except as a very last resort:
The users of this product would be working in the field. That field could be quite literally anywhere--the streets of Manhattan, a disaster-stricken area with infrastructure that's been severely damaged or even destroyed, or the most war-ravaged third world country. If it were the streets of, say, Manhattan, there's no problem--their iOS or Android device will have 3G or Wi-Fi access just about anywhere they go. In the latter two scenarios (which are arguably more common in this industry), that connectivity may be very limited. The concern is whether the end user's ability to be productive or to see and share data with their colleagues will be too greatly restricted if they don't have a decent signal. To be fair though, even today they often aren't even using mobile devices, forcing them to go back to a headquarters type location or use radios to share information, effectively negating my point here. But if we're not going to significantly increase their productivity in the field, it just gives us pause to think through whether or not we have enough of a value proposition to ask them to fairly significantly change their methods of doing things.
To your latter point, no, there's no convincing the stakeholders that this new system is the better approach. Even if there were, it would take years to do so. These forms are part of a well-defined, decades-old standard used by literally thousands of organizations.

Any good (free) text-to-speech engines out there?

I've been scouring the SO board and Google and can't find any really good recommendations for this. I'm building a Twilio application and the text-to-speech (TTS) engine is way bad. Plus, it's a pain in the ass to test since I have to deploy every time. Is there a significantly better resource out there that could render to a WAV or MP3 file so I can save and use that instead? Maybe there's a great API for this somewhere. I just want to avoid recording 200 MP3 files myself, would rather have this generated programmatically...
Things I've seen and rejected:
http://www.yakitome.com/ (I couldn't force myself to give them my email)
http://www2.research.att.com/~ttsweb/tts/demo.php
http://www.naturalreaders.com/index.htm
http://www.panopreter.com/index.php (on the basis of a crappy website)
Thinking of paying for this, but not sure yet: https://ondemand.neospeech.com/
Obviously I'm new to this, if I'm missing something obvious, please point it out...
I am not sure if you have access to a Mac or not. The Mac has pretty advanced TTS built into the operating system; Apple spent a lot of money on top engineers to research it. It can easily be controlled and even automated from the command line, and it has quite a few built-in voices to choose from. That is what I used on a recent phone system I put up. But I realize this is not an option if you don't have a Mac.
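For example, the built-in `say` command renders text straight to an audio file, so batch generation is a short script; the voice name and phrases below are just examples, and `say` writes AIFF, which you can convert to WAV/MP3 afterwards with afconvert or ffmpeg:

```python
# Sketch: batch-generate audio prompts with the macOS `say` command.
import subprocess

prompts = {
    "greeting": "Thank you for calling. Please hold.",
    "goodbye": "Goodbye.",
}

for name, text in prompts.items():
    # -v selects the voice, -o writes the synthesized audio to a file.
    subprocess.run(["say", "-v", "Alex", "-o", f"{name}.aiff", text],
                   check=True)
```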
Another one you might want to check into is http://cepstral.com/; they have very realistic voices. I think they used to be open source, but they are no longer, and now you need to pay licensing fees. They are very commonly used for high-end commercial applications, and are not so much geared towards the home user who wants an article read aloud.
I like the YAKiToMe! website the best. It's free and the voices are top quality. In case you're still worried about giving them your email, they've never spammed me in many years of use and I never got onto any spam lists after signing up with them, so I doubt they sold my email. Anyway, the service is great and has lots of features for turning electronic text into audio files in different languages.
As for the API you're looking for, YAKiToMe! has a well-documented API and it's free to use. You have to register with the site to use it, but that's because it lets you customize pronunciation and voice selection, so it needs to differentiate you from other users.

Using ATOM, RSS or another syndication feed for paid content

I work for a publishing house and we're discussing different ways to sell our content over digital channels.
Besides the web, we're closely watching the development of content publishing on tablets (e.g. iPad) and smartphones (e.g. iPhone). Right now, it looks like there are four different approaches:
Conventional publishing houses release apps like The Daily, Wired, or Time Magazine. Personally, I call them Print-Content-Meets-Offline-Website magazines. Very nice to look at, but slow, very heavy in data size, and often inconsistent on the usability side. Besides that, these magazines don't coexist well in a world where Facebook and Twitter are where users spend most of their time and share content.
Plain and stupid PDF. More or less lightweight, but as interactive and shareable as a granite block. A model mostly used by conventional publishers and apps like Zinio.
Websites with customized views for different devices (like Die Zeit's tablet-enhanced website). Lightweight, but (at least until now) not able to really exploit a hardware platform as a native app can.
Apps like Flipboard, Reeder, or Zite go a different way: relying on Twitter, Facebook, and/or syndication feeds like RSS and Atom, they give the user a very personalized way to consume news and media. Besides that, the data behind them is as lightweight as possible, and the architecture for distributing it is fast and has proven reliable for years.
Personally, I think #4 is the way to go. Unfortunately, the apps mentioned only distribute free content, and as a publishing house we're also interested in distributing paid content.
I did some research, googled around, and came to the conclusion that there is no standardized way to protect and sell individual articles in a syndication feed.
My question:
Do you have any hints or ideas how this could be implemented in a platform-agnostic way? Or is there an existing solution I just haven't found yet?
Update:
This article explains exactly what we're looking for:
"What publishers and developers need is
a standard API that enables
distribution of content for authorized
purposes, monitors its use, offers
standard advertising units and
subscription requirements, and
provides a way to share revenues."
Just brainstorming, so take it for what it's worth:
Feed readers can't handle buying, but most of them at least let you authenticate to feeds, right? If your feed required authentication, you would be able to tie the retrieval of Atom entries to a given user account. The retrieval could check the user account against purchased articles and make sure they were populated with fully paid content.
For unpurchased content, the feed gets populated with a link that takes you to a Buy The Article page. You adjust that user account, and the next time the feed is updated, it shows the full content. You could even offer "article tracks" or something like that, where someone can buy everything written by a given author or everything matching some search criteria. You could adjust rates accordingly.
You also want to be able to allow people to refer articles to others via social media sites, blogs, and so forth. To facilitate this, the article URLs (and the Atom entry IDs) would need to be the same whether they are purchased or not. Only the content of the feed changes depending on the status of the account accessing the feed.
The trick, it seems to me, is providing enough enticement to get people to create an account. Presumably, you'd need interesting things to read and probably some percentage of it free so that it leaves people wanting more.
Another problem is preventing redistribution of paid content to free channels. I don't know that there is a way to completely prevent this. You'd need to monitor the usage of your feeds by account to look for access anomalies, but it's a hard problem.
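To make the brainstorm concrete, here is a minimal sketch of generating such an account-aware Atom feed in Python; the article data model and the purchase check are hypothetical placeholders:

```python
# Sketch: same entry IDs and metadata for every user, but the content
# element is only filled in for articles the account has purchased.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

def build_feed(articles, purchased_ids):
    feed = ET.Element(f"{{{ATOM}}}feed")
    ET.SubElement(feed, f"{{{ATOM}}}title").text = "Paid Articles"
    for art in articles:
        entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
        ET.SubElement(entry, f"{{{ATOM}}}id").text = art["id"]  # stable for all users
        ET.SubElement(entry, f"{{{ATOM}}}title").text = art["title"]
        ET.SubElement(entry, f"{{{ATOM}}}summary").text = art["summary"]
        if art["id"] in purchased_ids:
            content = ET.SubElement(entry, f"{{{ATOM}}}content", type="html")
            content.text = art["body"]
        else:
            # Unpurchased: link out to a "Buy this article" page instead.
            ET.SubElement(entry, f"{{{ATOM}}}link",
                          rel="alternate", href=art["buy_url"])
    return ET.tostring(feed, encoding="unicode")
```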
Solution we're currently following:
We'll use the same Atom feed for paid and free content. A paid content entry in the feed will have no content (besides title, summary, etc.). If a user chooses to buy that content, the missing content is fetched from a web service and inserted into the feed.
Downside: the buying process is not implemented in any existing feed reader.
Anyone got a better idea?
I was looking for something else, but I came across the Flattr RSS plugin for WordPress.
I haven't had time to look through it, but maybe you can find some useful ideas in it.

Is there a way to read a browser's history, using Adobe AIR or any other tool?

First of all, I'm not a hacker :)
We're doing a project where we'll award points to users for visiting certain groups of sites.
Obviously there are major privacy concerns, but we have no interest in actually knowing where they've been, just as long as the program we create can check the history and, through an algorithm, rank the sites/user.
This would be a downloadable application and we'd tell the user how it worked, since transparency is vital.
Now, with that in mind, is there a way for a local program to access the Cache/History of a browser and make a list out of it?
I've read that Firefox uses SQLite to store its history, which could potentially be parsed using Adobe AIR.
At the same time, Adobe AIR has access to the filesystem, so it could probably check whether the usual IE temporary folders have any files stored and, if so, try to read the URLs they were downloaded from.
I know all of this sounds very dodgy, but try to keep an open mind :)
Thank you all for your help.
Not a full answer to your question, but you might be interested in the CSS History hack. If you already KNOW the sites you want to rank, you will be able to find out which sites the users visited.
Good thing you said something about a LOCAL program, because there are certainly ways to read out Firefox's SQLite history database and IE's history, and you can find plenty of implementations using your favorite search engine.
Particularly easy to use are NirSoft's utilities MozillaHistoryView and IEHistoryView, which you could script to output CSV and parse that file afterwards.
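If you end up scripting it yourself rather than using the NirSoft tools, Firefox's history is plain SQLite and can be read in a few lines; the profile path below is a placeholder, and note that IE's history is not SQLite, so it needs one of the routes above:

```python
# Sketch: read Firefox history from places.sqlite. Copy the file first,
# since Firefox locks it while running.
import shutil
import sqlite3

shutil.copy("/path/to/profile/places.sqlite", "places_copy.sqlite")

con = sqlite3.connect("places_copy.sqlite")
rows = con.execute(
    "SELECT url, title, visit_count FROM moz_places "
    "WHERE visit_count > 0 ORDER BY visit_count DESC"
)
for url, title, visits in rows:
    print(visits, url, title)
con.close()
```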
