TensorFlow Image Classifiers on Android, Android Things, and iOS

The TensorFlow repository contains a selection of examples, including sample mobile applications, for Android and iOS. This article compares the TensorFlow image classifier on Android, Android Things, and iOS.

1 — Android

As you’d probably expect from an open source project developed by Google, TensorFlow currently has more sample apps available for Android than iOS. The README explains them all, but today we’re just looking at the image classifier (pictured below).

[Image: screenshots of the current TensorFlow sample apps, including TF Classify identifying a coffee mug]

Current TensorFlow samples: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/README.md

Here’s a demo video of it in action:

https://www.youtube.com/watch?v=4oU4N6bAjR4

It can only classify items that it has been trained on (as explained here), and it generally does a good job.
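
Under the hood, the model outputs one score per label it was trained on, and the sample maps those scores onto a plain-text label list shipped in the APK’s assets. Here is a minimal sketch of loading that list in Java (the asset file name matches the one in the TF Classify sample, but treat it as an assumption if you adapt this):

    import android.content.res.AssetManager;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    public class LabelLoader {
        // Reads the label list the model was trained with, one label per line.
        // File name assumed from the TF Classify sample's assets.
        public static List<String> loadLabels(AssetManager assets) throws IOException {
            List<String> labels = new ArrayList<>();
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                    assets.open("imagenet_comp_graph_label_strings.txt")))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    labels.add(line);
                }
            }
            return labels;
        }
    }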

Pro tip: you can press the hardware volume-down button to see diagnostic information on screen, like so:

[Image: TF Classify identifying a clear water bottle, with diagnostic text overlaid and the cropped preview repeated at smaller scale in the bottom-right corner]

In the above image you can see:

  • The inference time (bottom left).
  • Preview of the square cropped image used for inference (bottom right).

If you want to know more about how this app works behind the scenes, see Using a Pre-Trained TensorFlow Model on Android.
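
In the meantime, here is a minimal sketch of the inference path in Java, using the TensorFlowInferenceInterface class from TensorFlow’s Android contrib library: center-crop the frame, scale it to the model’s input size, normalize the pixels, and run the graph. The constants below (input size, mean, tensor names, model path, output size) follow the TF Classify sample’s Inception setup as I understand it; treat them as assumptions if you adapt this:

    import android.content.res.AssetManager;
    import android.graphics.Bitmap;

    import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

    public class ClassifierSketch {
        // Values that follow the TF Classify sample's Inception setup;
        // treat them as assumptions if you swap in a different model.
        private static final int INPUT_SIZE = 224;
        private static final float IMAGE_MEAN = 117f;
        private static final float IMAGE_STD = 1f;
        private static final String INPUT_NAME = "input";
        private static final String OUTPUT_NAME = "output";
        private static final String MODEL_FILE =
                "file:///android_asset/tensorflow_inception_graph.pb";
        private static final int NUM_CLASSES = 1008; // output size of the sample's graph

        private final TensorFlowInferenceInterface inferenceInterface;

        public ClassifierSketch(AssetManager assets) {
            inferenceInterface = new TensorFlowInferenceInterface(assets, MODEL_FILE);
        }

        public float[] classify(Bitmap frame) {
            // Center-crop the frame to a square (this is the preview shown in the
            // bottom right of the diagnostics screen), then scale to the input size.
            int size = Math.min(frame.getWidth(), frame.getHeight());
            Bitmap square = Bitmap.createBitmap(frame,
                    (frame.getWidth() - size) / 2, (frame.getHeight() - size) / 2,
                    size, size);
            Bitmap input = Bitmap.createScaledBitmap(square, INPUT_SIZE, INPUT_SIZE, true);

            // Convert ARGB pixels to normalized RGB floats.
            int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
            input.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
            float[] floatValues = new float[INPUT_SIZE * INPUT_SIZE * 3];
            for (int i = 0; i < pixels.length; i++) {
                int p = pixels[i];
                floatValues[i * 3] = (((p >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
                floatValues[i * 3 + 1] = (((p >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
                floatValues[i * 3 + 2] = ((p & 0xFF) - IMAGE_MEAN) / IMAGE_STD;
            }

            // Feed the image in, run the graph, and fetch one score per label.
            inferenceInterface.feed(INPUT_NAME, floatValues, 1, INPUT_SIZE, INPUT_SIZE, 3);
            inferenceInterface.run(new String[] {OUTPUT_NAME});
            float[] outputs = new float[NUM_CLASSES];
            inferenceInterface.fetch(OUTPUT_NAME, outputs);
            return outputs;
        }
    }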

Performance

~200–300ms per inference on a 2015 Nexus 5.

~100–400ms per inference on a Samsung S8+.
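
If you want to reproduce these measurements yourself, here is a minimal sketch that wraps a single inference in wall-clock timing (using the ClassifierSketch above; the diagnostics overlay presumably measures it in a similar way):

    import android.graphics.Bitmap;
    import android.os.SystemClock;
    import android.util.Log;

    public class InferenceTimer {
        // Times one inference in wall-clock milliseconds.
        public static float[] timedClassify(ClassifierSketch classifier, Bitmap frame) {
            long start = SystemClock.uptimeMillis();
            float[] results = classifier.classify(frame);
            long elapsedMs = SystemClock.uptimeMillis() - start;
            Log.d("InferenceTimer", "Inference took " + elapsedMs + "ms");
            return results;
        }
    }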

Size

The app is 99MB (includes all four sample apps):

  • Inception model = 53MB.
  • Libraries = 11MB to 17MB per architecture.

Prerequisites

  • Android Studio.

For testing on a real device:

  • Enable Developer Options and USB debugging on the device by following these instructions.

Building

  • Open Android Studio.
  • Press play. 

If you don’t want to build the sample app, you have two options:

2 — Android Things

[Image: an Android Things developer kit with Rainbow HAT and camera module classifying a TV remote, with the captured photo shown on an attached screen]

If you have a hardware screen, you’ll be able to see the photos and classifications there (as above). If you don’t have a hardware screen, you can view the logs in Logcat. You’ll see lots of noisy logs, but the ImageClassifierActivity log is the one to look for:

    ...
    01-01 00:01:12.596 714-756/com.example.androidthings.imageclassifier D/ImageClassifierActivity: Got the following results from Tensorflow: [[578] remote control (91.3%)]
    ...

I tried taking photos of a few different objects, with varying success:

  • A remote control, laptop screen and water bottle all worked very well.
  • The developer board (i.e. doing a hardware selfie) was classified as holster or switch.
  • Taking photos of people didn’t work. Apparently this is because early versions of the Inception image classifier model were not trained on pictures of people, so I guess this is “working as designed” for the moment. :-)
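
By the way, a results line like the Logcat one above comes from an ordinary Log.d call over the classification results. Here is a hedged sketch of code that would produce the same format (this Recognition class is hypothetical, standing in for the sample’s own result type):

    import android.util.Log;

    import java.util.List;
    import java.util.Locale;

    public class ResultLogger {
        private static final String TAG = "ImageClassifierActivity";

        // Hypothetical result holder; the real sample's class has more fields.
        public static class Recognition {
            public final int id;
            public final String label;
            public final float confidence; // 0.0 to 1.0

            public Recognition(int id, String label, float confidence) {
                this.id = id;
                this.label = label;
                this.confidence = confidence;
            }

            @Override
            public String toString() {
                // Produces entries like "[578] remote control (91.3%)".
                return String.format(Locale.US, "[%d] %s (%.1f%%)",
                        id, label, confidence * 100f);
            }
        }

        public static void logResults(List<Recognition> results) {
            Log.d(TAG, "Got the following results from Tensorflow: " + results);
        }
    }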

Performance

~2000–5000ms per inference on a Pico board.

Size

The app is 69MB:

  • Inception model = 53MB.
  • Libraries = 11MB per architecture (armv7 and arm64).

Prerequisites

Detailed instructions for the hardware you need, and the steps to set it up, are all in the Android Things Image Classifier codelab. I’ve just listed the highlights here.

  • Hardware (developer board, camera, Rainbow HAT, USB C cable). A screen is optional.
  • The OS image for the hardware.
  • Android Studio 3.0+.

[Image: Rainbow HAT, developer board, USB C cable, and camera module laid out on a wooden table]

In my case I was using a Pico Pro developer board.

Building

  • Connect everything up:
[Image: the developer board, Rainbow HAT, camera module, and USB C cable connected together]
  • Flash the OS image.
  • Press play in Android Studio.
  • Reboot board (needed to grant camera permission).
  • Press play in Android Studio again.

3 — iOS

There are three sample iOS apps in the TensorFlow repository.

If you don’t have access to a real iOS device, then you’ll only be able to build and run the simple and benchmark projects.

The simple project loads a single image of Grace Hopper, and classifies it, resulting in 51% confidence that it sees “military uniform”, and 10% confidence it sees a “mortarboard”.

[Image: console output from the simple project classifying a photo of Grace Hopper in military uniform]
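
Conceptually, all three sample apps do the same post-processing to turn raw model outputs into results like “military uniform (51%)”: keep the k most confident labels. Here is a sketch in Java (not the samples’ exact code):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Locale;
    import java.util.PriorityQueue;

    public class TopK {
        // Keeps the k most confident labels, given one score per label
        // from the model and the matching label list.
        public static List<String> topK(float[] scores, List<String> labels, int k) {
            // Min-heap ordered by confidence, so the weakest candidate is evicted first.
            PriorityQueue<Integer> queue = new PriorityQueue<>(k,
                    Comparator.comparingDouble(i -> scores[i]));
            for (int i = 0; i < scores.length && i < labels.size(); i++) {
                queue.add(i);
                if (queue.size() > k) {
                    queue.poll();
                }
            }
            // Drain the heap (weakest first), prepending so the strongest ends up first.
            List<String> results = new ArrayList<>();
            while (!queue.isEmpty()) {
                int i = queue.poll();
                results.add(0, String.format(Locale.US, "%s (%.0f%%)",
                        labels.get(i), scores[i] * 100f));
            }
            return results;
        }
    }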

The benchmark project is the same, except it also prints out profiling information.

The camera project is basically the same as the Android TF Classify app. It provides super fast, real-time image classification, with an additional option to freeze the frame:

[Image: the camera app classifying an orange coffee mug in real time, with results overlaid]

Performance

~50ms per inference on an iPhone 7.

Size

The app is 98MB:

  • Inception model = 53MB.
  • Libraries = 11MB per architecture (armv7 and arm64).

Prerequisites

The camera sample needs to run on a physical iOS device. If you aren’t familiar with iOS development practices, you might find some of these steps tricky.

  • Xcode 7.3+.
  • Install CocoaPods (pod).

For testing on a real device:

  • Apple Developer Account — $99 / year.
  • Set up some signing certificates and provisioning profiles.
  • Provision your test devices with Apple.

Note — If you’re wondering why you can’t just download a demo app, it’s because the Apple App Store currently does not allow demo/sample apps.

Building

The README provides all the detailed steps, but in summary:

  • Download the model and run pod install (downloads ~800MB).
  • Open the .xcworkspace file (opening the .xcodeproj directly gives linker errors).
  • Press play.

For testing on a real device:

  • Select your signing identity in Info.plist.

Conclusion

This has been a quick walkthrough of some of the TensorFlow image classifiers available on Android, Android Things, and iOS. For a full listing and explanation of their respective offerings, check out the README files in the TensorFlow repository (the Android one is linked above).

If you liked this article you might enjoy the video of my talk on Applied TensorFlow in Android Apps.


Dan Jarvis, Sr. Mgr, Mobile Software Engineering, Capital One

Machine Learning & Android — https://stackoverflow.com/cv/dj
