One of the mobile apps I’ve had the most fun with lately is Google’s PhotoScan. Basically, it lets you use your phone’s camera to scan printed pictures rather than take a picture of a picture. Here’s how it’s described on the app page:
Don’t just take a picture of a picture. Create enhanced digital scans, wherever your photos are.
– Get glare-free scans with an easy step-by-step capture flow
– Automatic cropping based on edge detection
– Straight, rectangular scans with perspective correction
– Smart rotation, so your photos stay right-side-up no matter which way you scan them
Scan in seconds
Capture your favorite printed photos quickly and easily, so you can spend less time editing and more time looking at your bad childhood haircut.
The way it works is that you fire up the app, line up the picture you’re scanning inside the “frame” on your screen, and tap the shutter button. Four dots then appear on the picture, and the app instructs you to move a circle from dot to dot in a certain order. I wondered why the app requires this extra step, but not enough to actually research it. I think I might have stumbled on the answer, though, in this article about Google’s massive book scanning project:
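My guess at why the four-dot dance matters: moving the phone shifts any glare hotspot across the print, so the app can combine the overlapping captures and keep the clean pixels from each one. Here’s a rough sketch of that idea in Python with OpenCV. To be clear, the file names and the per-pixel median merge are my own assumptions for illustration; Google hasn’t published how PhotoScan actually does it.

```python
import cv2
import numpy as np

def merge_glare_free(frame_paths):
    """Combine several captures of the same print into one glare-reduced image.

    Glare hotspots move as the camera angle changes, so a per-pixel median
    across roughly aligned frames tends to suppress them.
    """
    frames = [cv2.imread(path) for path in frame_paths]
    stack = np.stack(frames).astype(np.float32)   # shape: (num_frames, H, W, 3)
    merged = np.median(stack, axis=0)             # glare rarely hits the same pixel in every frame
    return merged.astype(np.uint8)

# One hypothetical capture per dot (file names made up for the example).
scan = merge_glare_free(["dot1.jpg", "dot2.jpg", "dot3.jpg", "dot4.jpg"])
cv2.imwrite("scan.jpg", scan)
```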
The stations—which didn’t so much scan as photograph books—had been custom-built by Google from the sheet metal up. Each one could digitize books at a rate of 1,000 pages per hour. The book would lie in a specially designed motorized cradle that would adjust to the spine, locking it in place. Above, there was an array of lights and at least $1,000 worth of optics, including four cameras, two pointed at each half of the book, and a range-finding LIDAR that overlaid a three-dimensional laser grid on the book’s surface to capture the curvature of the paper. The human operator would turn pages by hand—no machine could be as quick and gentle—and fire the cameras by pressing a foot pedal, as though playing at a strange piano.
What made the system so efficient is that it left so much of the work to software. Rather than make sure that each page was aligned perfectly, and flattened, before taking a photo, which was a major source of delays in traditional book-scanning systems, cruder images of curved pages were fed to de-warping algorithms, which used the LIDAR data along with some clever mathematics to artificially bend the text back into straight lines.
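That de-warping step is the part that seems closest to what PhotoScan does with perspective correction, so here’s a toy sketch of the general idea: measure (or guess) how the surface bulges, then remap the pixels to undo it. The sinusoidal curvature profile and the flatten_page helper below are invented for illustration; the real system derives the surface from the LIDAR grid and presumably uses far cleverer math.

```python
import cv2
import numpy as np

def flatten_page(image, bulge):
    """Undo a simple vertical page curl using a per-column displacement profile.

    `bulge` holds one value per image column saying how far that column's text
    has been pushed down by the paper's curvature; a real scanner would derive
    this from measured surface geometry rather than a guess.
    """
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    ys_corrected = ys + bulge[np.newaxis, :]      # shift each column back into line
    return cv2.remap(image, xs, ys_corrected, interpolation=cv2.INTER_LINEAR)

page = cv2.imread("curved_page.jpg")              # hypothetical raw capture
# Pretend the page bows in the middle, displacing text by up to 20 pixels.
bulge = 20 * np.sin(np.linspace(0, np.pi, page.shape[1])).astype(np.float32)
cv2.imwrite("flattened_page.jpg", flatten_page(page, bulge))
```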
I don’t know for sure, but it certainly sounds like the technology developed for the book scanning project translated nicely to an app that lets average people armed with smartphones scan gazillions of old photos into the great Googlesphere in the sky. Amazing.
Oh, and you should read that article on the book scanning project. It’s a fascinating exploration of a copyright conflict that has resulted in Google having a database of 25 million scanned books that no one is allowed to read.